Analysis of neighborhood dynamics of forest ecosystems using likelihood methods and modeling.
Canham, Charles D; Uriarte, María
2006-02-01
Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate
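The four-step likelihood workflow listed in the abstract above can be made concrete with a minimal sketch. The following Python fragment is purely illustrative and not from the paper: the seed-count data and the Poisson model are invented, step 2 is the ML fit, and step 3 compares the fitted model against a fixed-rate alternative using AIC.

```python
import math

def poisson_loglik(lam, data):
    # log-likelihood of i.i.d. Poisson counts: sum of log pmf terms
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in data)

def aic(loglik, n_params):
    # Akaike information criterion: lower is better
    return 2 * n_params - 2 * loglik

# step 1 (model specification): hypothetical seed counts per quadrat,
# modeled as Poisson -- these numbers are invented for illustration
data = [3, 5, 4, 6, 2, 4, 5, 3]

# step 2 (ML estimation): the Poisson ML estimate is the sample mean
lam_hat = sum(data) / len(data)

# step 3 (model comparison): fitted model vs. a fixed-rate null model
ll_hat = poisson_loglik(lam_hat, data)
ll_null = poisson_loglik(3.0, data)
fitted_wins = aic(ll_hat, 1) < aic(ll_null, 0)
```

Step 4 (model evaluation) would then check the fitted model against held-out or simulated data, which is beyond this sketch.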
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM.

Kubo, Taichi
2008-02-01
We have measured the top quark mass with the dynamical likelihood method. The data, corresponding to an integrated luminosity of 1.7 fb^{-1}, were collected in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV with the CDF detector at the Fermilab Tevatron during the period March 2002-March 2007. We select top-antitop pair production candidates by requiring one high-energy lepton and four jets, of which at least one must be tagged as a b-jet. To reconstruct the top quark mass, we use the dynamical likelihood method, based on the maximum likelihood method, where a likelihood is defined as the differential cross section multiplied by the transfer function from observed quantities to parton quantities, as a function of the top quark mass and the jet energy scale (JES). With this method, we measure the top quark mass to be 171.6 ± 2.0 (stat. + JES) ± 1.3 (syst.) = 171.6 ± 2.4 GeV/c^{2}.
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
Likelihood methods for point processes with refractoriness.
Citi, Luca; Ba, Demba; Brown, Emery N; Barbieri, Riccardo
2014-02-01
Likelihood-based encoding models founded on point processes have received significant attention in the literature because of their ability to reveal the information encoded by spiking neural populations. We propose an approximation to the likelihood of a point-process model of neurons that holds under assumptions about the continuous time process that are physiologically reasonable for neural spike trains: the presence of a refractory period, the predictability of the conditional intensity function, and its integrability. These are properties that apply to a large class of point processes arising in applications other than neuroscience. The proposed approach has several advantages over conventional ones. In particular, one can use standard fitting procedures for generalized linear models based on iteratively reweighted least squares while improving the accuracy of the approximation to the likelihood and reducing bias in the estimation of the parameters of the underlying continuous-time model. As a result, the proposed approach can use a larger bin size to achieve the same accuracy as conventional approaches would with a smaller bin size. This is particularly important when analyzing neural data with high mean and instantaneous firing rates. We demonstrate these claims on simulated and real neural spiking activity. By allowing a substantive increase in the required bin size, our algorithm has the potential to lower the barrier to the use of point-process methods in an increasing number of applications.
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.
Abulencia, A.; Budd, S.; Chu, P.H.; Ciobanu, C.I.; Errede, D.; Errede, S.; Gerberich, H.; Grundler, U.; Junk, T.R.; Kraus, J.; Liss, T.M.; Marino, C.; Pitts, K.; Rogers, E.; Taffard, A.; Veramendi, G.; Vickey, T.; Zhang, X.; Acosta, D.; Cruz, A.
2006-05-01
This paper describes a measurement of the top quark mass, M_top, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/antitop (tt̄) pairs in pp̄ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, which corresponds to an integrated luminosity of 318 pb^-1. We use the tt̄ candidates in the "lepton+jets" decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M_top per unit phase space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the likelihoods are multiplied event by event to derive the top quark mass by the maximum likelihood method. Using 63 tt̄ candidates observed in the data, with 9.2 events expected from background, we measure the top quark mass to be 173.2 +2.6/-2.4 (stat.) ± 3.2 (syst.) GeV/c^2, or 173.2 +4.1/-4.0 GeV/c^2.
Abulencia, A.; Acosta, D.; Adelman, Jahred A.; Affolder, Anthony A.; Akimoto, T.; Albrow, M.G.; Ambrose, D.; Amerio, S.; Amidei, D.; Anastassov, A.; Anikeev, K.
2005-12-01
This report describes a measurement of the top quark mass, M_top, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/antitop (tt̄) pairs in pp̄ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, which corresponds to an integrated luminosity of 318 pb^-1. They use the tt̄ candidates in the "lepton+jets" decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M_top per unit phase space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the likelihoods are multiplied event by event to derive the top quark mass by the maximum likelihood method. Using 63 tt̄ candidates observed in the data, with 9.2 events expected from background, they measure the top quark mass to be 173.2 +2.6/-2.4 (stat.) ± 3.2 (syst.) GeV/c^2, or 173.2 +4.1/-4.0 GeV/c^2.
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
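The centered likelihood-ratio (score-function) idea can be illustrated on a one-parameter toy model rather than the paper's reaction-network examples. In this sketch (the exponential observable and all numbers are invented for illustration), the sensitivity d/dθ E[X] is estimated as the sample covariance between the observable and the score ∂_θ log p(x; θ); centering the observable by its sample mean reduces variance without changing the expectation, since the score has mean zero.

```python
import random

def sensitivity_exponential(theta, n=200000, seed=1):
    # Centered likelihood-ratio (score-function) estimate of
    # d/dtheta E[X] for X ~ Exponential(rate=theta).
    rng = random.Random(seed)
    xs = [rng.expovariate(theta) for _ in range(n)]
    # score of the exponential density p(x) = theta * exp(-theta * x):
    # d/dtheta log p(x; theta) = 1/theta - x
    scores = [1.0 / theta - x for x in xs]
    fbar = sum(xs) / n
    # centering lowers the estimator variance; the expectation is unchanged
    return sum((x - fbar) * s for x, s in zip(xs, scores)) / n

est = sensitivity_exponential(2.0)  # true sensitivity: d/dtheta (1/theta) = -1/theta**2 = -0.25
```

The same covariance structure, applied over a vector of parameters, yields the Fisher-information submatrix the abstract refers to.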
Constrained maximum likelihood modal parameter identification applied to structural dynamics
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped; normal (real) modes are therefore needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish modal models satisfying such constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars, a type of data that is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
“Assignment tests” are designed to determine population membership for individuals. One particular application based on a likelihood estimate (LE) was introduced by Paetkau et al. (1995; see also Vásquez-Domínguez et al. 2001) to assign an individual to its population of origin on the basis of its multilocus genotype and the expectations of observing this genotype in each potential source population. The LE approach can be implemented statistically in a Bayesian framework as a convenient way to evaluate hypotheses of plausible genealogical relationships (e.g., that an individual possesses an ancestor in another population) (Dawson and Belkhir 2001; Pritchard et al. 2000; Rannala and Mountain 1997). Other studies have evaluated the confidence of the assignment (Almudevar 2000) and the characteristics of genotypic data (e.g., degree of population divergence, number of loci, number of individuals, number of alleles) that lead to greater population assignment success (Bernatchez and Duchesne 2000; Cornuet et al. 1999; Haig et al. 1997; Shriver et al. 1997; Smouse and Chevillon 1998). The main statistical and conceptual differences between methods leading to the use of an assignment test are given in, for example, Cornuet et al. (1999) and Rosenberg et al. (2001). However…
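The Paetkau-style likelihood assignment described above can be sketched in a few lines. The following is a minimal illustration only: the allele frequencies, the population labels, and the 1e-6 floor for alleles unseen in a source population are all invented for the example, and Hardy-Weinberg proportions are assumed within each population.

```python
import math

def assign_population(genotype, pop_freqs):
    # genotype: list of (allele1, allele2) tuples, one per locus
    # pop_freqs: {population: [ {allele: frequency} per locus ]}
    # Assign the individual to the population maximizing the
    # log-likelihood of its multilocus genotype under Hardy-Weinberg.
    best, best_ll = None, -math.inf
    for pop, freqs in pop_freqs.items():
        ll = 0.0
        for (a1, a2), f in zip(genotype, freqs):
            # small floor keeps unseen alleles from zeroing the likelihood
            p, q = f.get(a1, 1e-6), f.get(a2, 1e-6)
            factor = 2.0 if a1 != a2 else 1.0  # 2pq for heterozygotes, p^2 for homozygotes
            ll += math.log(factor * p * q)
        if ll > best_ll:
            best, best_ll = pop, ll
    return best, best_ll

pop, ll = assign_population(
    [("A", "A")],
    {"p1": [{"A": 0.9, "B": 0.1}], "p2": [{"A": 0.1, "B": 0.9}]},
)
```

Loci contribute multiplicatively (additively in log space), which is why the number of loci and the degree of population divergence drive assignment success.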
NASA Astrophysics Data System (ADS)
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-01
Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
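The path-sampling identity behind thermodynamic integration, namely that the log marginal likelihood equals the integral over β in [0, 1] of the expected log-likelihood under the power posterior p_β proportional to prior × likelihood^β, can be checked on a toy conjugate-normal problem. In this sketch the MCMC sampling at each β is replaced by direct 1-D quadrature over the parameter, purely for transparency; the prior, noise level, and data are all invented.

```python
import math

def log_lik(theta, data, sigma=1.0):
    # Gaussian log-likelihood of the data given mean theta
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - theta) ** 2 / (2 * sigma ** 2) for x in data)

def log_prior(theta, mu0=0.0, tau=2.0):
    # Gaussian prior on theta
    return (-0.5 * math.log(2 * math.pi * tau ** 2)
            - (theta - mu0) ** 2 / (2 * tau ** 2))

def thermo_log_evidence(data, n_beta=200):
    # log Z = integral_0^1 E_beta[log L] d(beta), where p_beta is
    # proportional to prior * likelihood^beta (the power posterior)
    grid = [-10 + i * 0.01 for i in range(2001)]
    lp = [log_prior(t) for t in grid]
    ll = [log_lik(t, data) for t in grid]
    means = []
    for i in range(n_beta + 1):
        b = i / n_beta
        logw = [p + b * l for p, l in zip(lp, ll)]
        m = max(logw)  # subtract the max for numerical stability
        w = [math.exp(v - m) for v in logw]
        tot = sum(w)
        # expectation of log L under the power posterior at temperature b
        means.append(sum(wi * li for wi, li in zip(w, ll)) / tot)
    # trapezoidal rule over the uniform temperature ladder
    return sum(0.5 * (means[i] + means[i + 1]) / n_beta for i in range(n_beta))
```

For a real groundwater model, the expectations would come from MCMC chains run at each power coefficient, and the β ladder is usually spaced non-uniformly toward β = 0 where the integrand changes fastest.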
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1984-01-01
An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.
A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews
Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao
2014-01-01
Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios and diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the non-convergence problem, which is non-trivial when the number of studies is relatively small, its computational simplicity, and some robustness to model misspecification. Simulation studies show that the composite likelihood method maintains high relative efficiency compared to the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma.
Llacer, J; Solberg, T D; Promberger, C
2001-10-01
This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) the Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
A SIMPLE LIKELIHOOD METHOD FOR QUASAR TARGET SELECTION
Kirkpatrick, Jessica A.; Schlegel, David J.; Ross, Nicholas P.; Myers, Adam D.; Hennawi, Joseph F.; Sheldon, Erin S.; Schneider, Donald P.; Weaver, Benjamin A.
2011-12-20
We present a new method for quasar target selection using photometric fluxes and a Bayesian probabilistic approach. For our purposes, we target quasars using Sloan Digital Sky Survey (SDSS) photometry to a magnitude limit of g = 22. The efficiency and completeness of this technique are measured using the Baryon Oscillation Spectroscopic Survey (BOSS) data taken in 2010. This technique was used for the uniformly selected (CORE) sample of targets in BOSS year-one spectroscopy to be realized in the ninth SDSS data release. When targeting at a density of 40 objects deg^-2 (the BOSS quasar targeting density), the efficiency of this technique in recovering z > 2.2 quasars is 40%. The completeness compared to all quasars identified in BOSS data is 65%. This paper also describes possible extensions and improvements for this technique.
Huang, Jinxin; Lee, Kye-sung; Clarkson, Eric; Kupinski, Matthew; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.
2016-01-01
In this Letter, we implement a maximum-likelihood estimator to interpret optical coherence tomography (OCT) data for the first time, based on Fourier-domain OCT and a two-interface tear film model. We use the root mean square error as a figure of merit to quantify the system performance of estimating the tear film thickness. With the methodology of task-based assessment, we study the trade-off between system imaging speed (temporal resolution of the dynamics) and the precision of the estimation. Finally, the estimator is validated with a digital tear-film dynamics phantom.
A maximum likelihood method for determining the distribution of galaxies in clusters
NASA Astrophysics Data System (ADS)
Sarazin, C. L.
1980-02-01
A maximum likelihood method is proposed for the analysis of the projected distribution of galaxies in clusters. It has many advantages compared to the standard method; principally, it does not require binning of the galaxy positions, applies to asymmetric clusters, and can simultaneously determine all cluster parameters. A rapid method of solving the maximum likelihood equations is given which also automatically gives error estimates for the parameters. Monte Carlo tests indicate this method applies even for rather sparse clusters. The Godwin-Peach data on the Coma cluster are analyzed; the core sizes derived agree reasonably with those of Bahcall. Some slight evidence of mass segregation is found.
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
NASA Astrophysics Data System (ADS)
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique of LBS is map building for unknown environments, a technique also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, each scan is unavoidably distorted to some extent. To compensate for the scan distortion caused by the motion of the UGV, we integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. In addition, to reduce the effect of dynamic objects, such as the walking pedestrians that often exist in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
NASA Astrophysics Data System (ADS)
Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei
In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
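The log-determinant issue the abstract raises is generic: forming det(Σ) directly underflows or overflows for large correlation matrices, while factorizing first keeps everything in log space. A sketch of the idea using a Cholesky factorization (the paper uses an LU decomposition; Cholesky is the special case for symmetric positive-definite matrices), with a made-up exponential spatial correlation matrix standing in for measured test-chip data:

```python
import math

def cholesky_logdet(a):
    # log det of a symmetric positive-definite matrix via Cholesky:
    # det(A) = prod(L[i][i])**2, so log det(A) = 2 * sum(log L[i][i]).
    # Summing logs of the diagonal avoids forming det(A) explicitly.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return 2.0 * sum(math.log(L[i][i]) for i in range(n))

# hypothetical 1-D site coordinates with correlation rho(d) = exp(-d / 2)
xs = [0.0, 1.0, 2.5, 4.0]
corr = [[math.exp(-abs(u - v) / 2.0) for v in xs] for u in xs]
logdet = cholesky_logdet(corr)
```

The resulting log-determinant plugs directly into the Gaussian log-likelihood term -0.5 * (log det Σ + r^T Σ^{-1} r) that the joint likelihood over test chips is built from.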
Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-sung; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.
2013-01-01
Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. Assuming that the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision.
Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood
NASA Technical Reports Server (NTRS)
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km² of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Estimation of bias errors in measured airplane responses using maximum likelihood method
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
Simulated likelihood methods for complex double-platform line transect surveys.
Schweder, T; Skaug, H J; Langaas, M; Dimakos, X K
1999-09-01
The conventional line transect approach of estimating effective search width from the perpendicular distance distribution is inappropriate in certain types of surveys, e.g., when an unknown fraction of the animals on the track line is detected, the animals can be observed only at discrete points in time, there are errors in positional measurements, and covariate heterogeneity exists in detectability. For such situations a hazard probability framework for independent observer surveys is developed. The likelihood of the data, including observed positions of both initial and subsequent observations of animals, is established under the assumption of no measurement errors. To account for measurement errors and possibly other complexities, this likelihood is modified by a function estimated from extensive simulations. This general method of simulated likelihood is explained and the methodology applied to data from a double-platform survey of minke whales in the northeastern Atlantic in 1995. PMID:11314993
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Xia, Xuhua
2016-09-01
While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA, for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences, even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure of ML+MSA to recover the true topology is due not to insufficient search of tree space, but to the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing. PMID:27377322
A likelihood reformulation method in non-normal random effects models.
Liu, Lei; Yu, Zhangsheng
2008-07-20
In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
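The reformulation described above can be sketched numerically. Below is a minimal illustration under assumed ingredients (a single-cluster random-intercept model, a normal-mixture random effect, and a hand-coded 7-point Gauss-Hermite rule standing in for the adaptive quadrature that SAS Proc NLMIXED performs); it only demonstrates the identity, not the authors' full estimation procedure:

```python
import math

# Sketch of the likelihood reformulation: for a cluster y_ij = b + e_ij with
# a NON-normal random-effect density g, the cluster likelihood
#   L = integral of f(y|b) g(b) db
# is rewritten, by multiplying and dividing by the standard normal density
# phi, as the integral of [f(y|b) g(b) / phi(b)] against phi(b), so that
# ordinary Gauss-Hermite quadrature applies.

def norm_pdf(b, mu=0.0, sd=1.0):
    z = (b - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def g(b):
    # non-normal random-effect density: equal mixture of N(-1,1) and N(1,1)
    return 0.5 * norm_pdf(b, -1.0) + 0.5 * norm_pdf(b, 1.0)

def f_cond(y, b, sigma=1.0):
    # conditional likelihood of the cluster's responses given the effect b
    out = 1.0
    for yij in y:
        out *= norm_pdf(yij, b, sigma)
    return out

# 7-point Gauss-Hermite rule (physicists' convention, weight exp(-x^2))
GH = [(0.0, 0.8102646175568073),
      (0.8162878828589647, 0.4256072526101278),
      (-0.8162878828589647, 0.4256072526101278),
      (1.6735516287674714, 0.05451558281912703),
      (-1.6735516287674714, 0.05451558281912703),
      (2.6519613568352334, 0.0009717812450995192),
      (-2.6519613568352334, 0.0009717812450995192)]

def likelihood_gh(y):
    # reformulated integrand h(b) = f(y|b) g(b) / phi(b), integrated vs phi
    total = 0.0
    for x, w in GH:
        b = math.sqrt(2.0) * x  # change of variable from exp(-x^2) to phi
        total += w * f_cond(y, b) * g(b) / norm_pdf(b)
    return total / math.sqrt(math.pi)

def likelihood_direct(y, lo=-10.0, hi=10.0, n=4000):
    # brute-force trapezoid integral of f(y|b) g(b), for checking
    h = (hi - lo) / n
    s = 0.5 * (f_cond(y, lo) * g(lo) + f_cond(y, hi) * g(hi))
    for i in range(1, n):
        b = lo + i * h
        s += f_cond(y, b) * g(b)
    return s * h

y = [0.2, -0.1]
print(likelihood_gh(y), likelihood_direct(y))  # the two values agree closely
```

The point of the trick is visible in `likelihood_gh`: the non-normal density enters only through the ratio `g(b) / norm_pdf(b)`, so any software that already integrates against a normal random effect can be reused unchanged.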
NASA Astrophysics Data System (ADS)
He, Jun; Zuo, Tian; Sun, Bo; Wu, Xuewen; Chen, Chao
2014-06-01
This paper aims to apply sparse representation based classification (SRC) to face recognition with disguise or illumination variation. Having analyzed the characteristics of general object recognition and the principle of the SRC classifier, the authors focus on evaluating blocks of a probe sample and propose an optimized SRC method based on position-preserving weighted blocks and a maximum likelihood model. The principle and implementation of the proposed method are introduced in the article, and experiments on the Yale and AR face databases are reported. The experimental results show that the proposed optimized SRC method performs better than existing methods.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
A calibration method of self-referencing interferometry based on maximum likelihood estimation
NASA Astrophysics Data System (ADS)
Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei; Guo, Guangrao
2015-05-01
Self-referencing interferometry has been widely used in wavefront sensing. However, the results of wavefront measurement currently include two parts: the real phase information of the wavefront under test and the system error of the self-referencing interferometer. In this paper, a method based on maximum likelihood estimation is presented to calibrate the system error in a self-referencing interferometer. First, at least three phase difference distributions are obtained from three position measurements of the tested component: one basic position, one rotation and one lateral translation. Then, the three phase difference data sets are combined through the maximum likelihood method into a maximum likelihood function, and the wavefront under test and the system errors are reconstructed by least-squares estimation with Zernike polynomials. The simulation results show that the proposed method can deal with the calibration of a self-referencing interferometer. The method can be used to reduce the effect of system errors on extracting and reconstructing the wavefront under test, and to improve the measurement accuracy of the self-referencing interferometer.
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
Maximum-likelihood methods in cryo-EM. Part II: application to experimental data
Scheres, Sjors H.W.
2010-01-01
With the advent of computationally feasible approaches to maximum likelihood image processing for cryo-electron microscopy, these methods have proven particularly useful in the classification of structurally heterogeneous single-particle data. A growing number of experimental studies have applied these algorithms to study macromolecular complexes with a wide range of structural variability, including non-stoichiometric complex formation, large conformational changes and combinations of both. This chapter aims to share the practical experience that has been gained from the application of these novel approaches. Current insights on how to prepare the data and how to perform two- or three-dimensional classifications are discussed together with aspects related to high-performance computing. Thereby, this chapter will hopefully be of practical use for those microscopists wanting to apply maximum likelihood methods in their own investigations. PMID:20888966
Incorrect Likelihood Methods Were Used to Infer Scaling Laws of Marine Predator Search Behaviour
Edwards, Andrew M.; Freeman, Mervyn P.; Breed, Greg A.; Jonsen, Ian D.
2012-01-01
Background: Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. Principal Findings: We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. Conclusions: We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that
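The correct procedure the authors call for, evaluating the likelihood equations of the actual probability distribution being tested, has a closed form for a continuous power-law (Lévy-type) step distribution. A minimal sketch with made-up parameters (exponent mu = 2.5, lower cutoff a = 1; not the bigeye tuna data):

```python
import math, random

random.seed(42)

# Simulate step lengths from a continuous power law (Levy-type tail),
# p(x) = (mu - 1) * a**(mu - 1) * x**(-mu) for x >= a, via inverse-CDF sampling.
def sample_powerlaw(mu, a, n):
    return [a * (1.0 - random.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]

def fit_powerlaw(xs, a):
    # Closed-form MLE of the exponent, derived from the actual density,
    # not from residuals of a regression fit.
    n = len(xs)
    mu_hat = 1.0 + n / sum(math.log(x / a) for x in xs)
    # log-likelihood at the MLE, usable for a correct AIC (k = 1 parameter)
    loglik = (n * math.log(mu_hat - 1.0)
              + n * (mu_hat - 1.0) * math.log(a)
              - mu_hat * sum(math.log(x) for x in xs))
    aic = 2.0 * 1 - 2.0 * loglik
    return mu_hat, aic

xs = sample_powerlaw(2.5, 1.0, 20000)
mu_hat, aic = fit_powerlaw(xs, 1.0)
print(mu_hat)  # close to the true exponent 2.5
```

An AIC computed from a log-likelihood of this kind can be compared across genuine candidate distributions (e.g. power law versus exponential), which is the model-selection step the abstract says was done incorrectly in the criticized studies.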
Retrospective likelihood-based methods for analyzing case-cohort genetic association studies.
Shen, Yuanyuan; Cai, Tianxi; Chen, Yu; Yang, Ying; Chen, Jinbo
2015-12-01
The case cohort (CCH) design is a cost-effective design for assessing genetic susceptibility with time-to-event data especially when the event rate is low. In this work, we propose a powerful pseudo-score test for assessing the association between a single nucleotide polymorphism (SNP) and the event time under the CCH design. The pseudo-score is derived from a pseudo-likelihood which is an estimated retrospective likelihood that treats the SNP genotype as the dependent variable and time-to-event outcome and other covariates as independent variables. It exploits the fact that the genetic variable is often distributed independent of covariates or only related to a low-dimensional subset. Estimates of hazard ratio parameters for association can be obtained by maximizing the pseudo-likelihood. A unique advantage of our method is that it allows the censoring distribution to depend on covariates that are only measured for the CCH sample while not requiring the knowledge of follow-up or covariate information on subjects not selected into the CCH sample. In addition to these flexibilities, the proposed method has high relative efficiency compared with commonly used alternative approaches. We study large sample properties of this method and assess its finite sample performance using both simulated and real data examples. PMID:26177343
Washeleski, Robert L; Meyer, Edmond J; King, Lyon B
2013-10-01
Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed. PMID:24182157
New method to compute Rcomplete enables maximum likelihood refinement for small datasets
Luebben, Jens; Gruene, Tim
2015-01-01
The crystallographic reliability index Rcomplete is based on a method proposed more than two decades ago. Because its calculation is computationally expensive, its use did not spread into the crystallographic community, which instead favored the cross-validation method known as Rfree. The importance of Rfree has grown beyond a pure validation tool. However, its application requires a sufficiently large dataset. In this work we assess the reliability of Rcomplete and compare it with k-fold cross-validation, bootstrapping, and jackknifing. As opposed to proper cross-validation as realized with Rfree, Rcomplete relies on a method of reducing bias from the structural model. We compare two different methods of reducing model bias and question the widely spread notion that random parameter shifts are required for this purpose. We show that Rcomplete has as little statistical bias as Rfree, with the benefit of a much smaller variance. Because the calculation of Rcomplete is based on the entire dataset instead of a small subset, it allows the estimation of maximum likelihood parameters even for small datasets. Rcomplete enables maximum likelihood-based refinement to be extended to virtually all areas of crystallographic structure determination, including high-pressure studies, neutron diffraction studies, and datasets from free electron lasers. PMID:26150515
Maximum-likelihood methods for array processing based on time-frequency distributions
NASA Astrophysics Data System (ADS)
Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.
1999-11-01
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark
2015-01-01
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
A likelihood method for the detection of selection and recombination using nucleotide sequences.
Grassly, N C; Holmes, E C
1997-03-01
Different regions along nucleotide sequences are often subject to different evolutionary forces. Recombination will result in regions having different evolutionary histories, while selection can cause regions to evolve at different rates. This paper presents a statistical method based on likelihood for detecting such processes by identifying the regions which do not fit with a single phylogenetic topology and nucleotide substitution process along the entire sequence. Subsequent reanalysis of these anomalous regions may then be possible. The method is tested using simulations, and its application is demonstrated using the primate psi eta-globin pseudogene, the V3 region of the envelope gene of HIV-1, and argF sequences from Neisseria bacteria. Reanalysis of anomalous regions is shown to reveal possible immune selection in HIV-1 and recombination in Neisseria. A computer program which implements the method is available. PMID:9066792
An efficient frequency recognition method based on likelihood ratio test for SSVEP-based BCI.
Zhang, Yangsong; Dong, Li; Zhang, Rui; Yao, Dezhong; Zhang, Yu; Xu, Peng
2014-01-01
An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this aspect, for the first time, likelihood ratio test (LRT) was utilized to propose a novel multichannel frequency recognition method for SSVEP data. The essence of this new method is to calculate the association between multichannel EEG signals and the reference signals which were constructed according to the stimulus frequency with LRT. For the simulation and real SSVEP data, the proposed method yielded higher recognition accuracy with shorter time window length and was more robust against noise in comparison with the popular canonical correlation analysis- (CCA-) based method and the least absolute shrinkage and selection operator- (LASSO-) based method. The recognition accuracy and information transfer rate (ITR) obtained by the proposed method was higher than those of the CCA-based method and LASSO-based method. The superior results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-BCI. PMID:25250058
Determination of instrumentation errors from measured data using maximum likelihood method
NASA Technical Reports Server (NTRS)
Keskar, D. A.; Klein, V.
1980-01-01
The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. PMID:26837056
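The summing of per-study LogLR functions can be sketched under a normal approximation to each study's likelihood. The study numbers below are invented for illustration, and the 1/8 support cutoff is one conventional likelihood-ratio choice, not necessarily the cutoff used in the paper:

```python
import math

# Made-up per-study effect estimates (log scale) and standard errors;
# illustration only, not the CAPRIE or statin data.
studies = [(0.25, 0.10), (0.10, 0.08), (0.40, 0.20)]

def combined_log_lik(theta):
    # sum of per-study normal-approximation log-likelihoods; referencing
    # this to its maximum gives the combined LogLR function
    return sum(-0.5 * ((theta - est) / se) ** 2 for est, se in studies)

# Maximizing the summed function gives the inverse-variance weighted mean,
# i.e. the same point estimate as a traditional fixed-effect meta-analysis.
w = [1.0 / se ** 2 for _, se in studies]
theta_hat = sum(wi * est for wi, (est, _) in zip(w, studies)) / sum(w)
se_comb = 1.0 / math.sqrt(sum(w))

# An 'intrinsic' interval from the likelihood ratio: all theta whose LR
# against theta_hat is at least 1/8.
half_width = se_comb * math.sqrt(2.0 * math.log(8.0))
interval = (theta_hat - half_width, theta_hat + half_width)
print(theta_hat, interval)
```

Because the per-study functions are simply added, the same code updates naturally when a new study arrives: append it to `studies` and recompute, with no coverage correction needed.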
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
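The core reward-study estimator can be sketched with hypothetical counts (the numbers below are invented, and reward rings are assumed to be reported essentially 100% of the time):

```python
import math

# Hypothetical ring-recovery counts (NOT from Conroy's study): bandings and
# hunter-reported recoveries for ordinary rings and for reward rings.
standard_banded, standard_reported = 12000, 540
reward_banded, reward_reported = 1500, 150

f_std = standard_reported / standard_banded   # recovery rate, ordinary rings
f_rwd = reward_reported / reward_banded       # recovery rate, reward rings
# Assuming reward rings are reported ~100% of the time, the ratio of the two
# recovery rates estimates the ordinary-ring reporting rate.
reporting_rate = f_std / f_rwd

# Delta-method standard error on the log scale (ratio of two binomial
# proportions), giving an approximate 95% interval.
se_log = math.sqrt((1 - f_std) / standard_reported
                   + (1 - f_rwd) / reward_reported)
ci_low = reporting_rate * math.exp(-1.96 * se_log)
ci_high = reporting_rate * math.exp(1.96 * se_log)
print(reporting_rate, ci_low, ci_high)
```

The paper's full models go further, stratifying these rates by region and year and comparing constrained versus unconstrained fits by likelihood ratio; the ratio above is only the single-stratum building block.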
Accelerated molecular dynamics methods
Perez, Danny
2011-01-04
The molecular dynamics method, although extremely powerful for materials simulations, is limited to time scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.
Yang, Z
1994-09-01
Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum, or near-optimum fit by the model to the data, and also an acceptable approximation to the continuous distribution. The second method, called "fixed-rates model", classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated. Analyses of the data sets suggest that this method can produce reasonable results, but it seems to share some properties of a least-squares pairwise comparison; for example, interior branch lengths in nonbest trees are often found to be zero. The computational requirements of the two methods are comparable to that of Felsenstein's (1981, J Mol Evol 17:368-376) model, which assumes a single rate for all the sites. PMID:7932792
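The discrete gamma approximation lends itself to a short sketch. The exact equal-probability category means require incomplete gamma functions, so the code below substitutes a Monte Carlo stand-in: sort many draws from a mean-one gamma distribution and average K equal-probability blocks.

```python
import random

random.seed(0)

def discrete_gamma_rates(alpha, k=4, n=200000):
    """Approximate the K equal-probability category means of a
    Gamma(alpha) rate distribution with mean 1 by Monte Carlo.
    (A simulation stand-in for Yang's closed-form category means,
    which require incomplete gamma functions.)"""
    # gammavariate(shape, scale) has mean shape*scale = 1 here
    draws = sorted(random.gammavariate(alpha, 1.0 / alpha) for _ in range(n))
    block = n // k
    return [sum(draws[i * block:(i + 1) * block]) / block for i in range(k)]

rates = discrete_gamma_rates(alpha=0.5, k=4)
print(rates)  # four increasing rates whose average is ~1
```

In a likelihood calculation, each site's likelihood would then be computed at each of the K rates and averaged with weight 1/K, which is why the abstract's finding that four categories suffice matters for the computational cost.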
Pseudo-empirical Likelihood-Based Method Using Calibration for Longitudinal Data with Drop-Out
Chen, Baojiang; Zhou, Xiao-Hua; Chan, Kwun Chuen Gary
2014-01-01
In observational studies, interest mainly lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often undertaken using a sample of longitudinal data. In some situations, the longitudinal data sample features biases and loss of estimation efficiency due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we propose an empirical likelihood-based method to incorporate population-level information in a longitudinal study with drop-out. The population-level information is incorporated via constraints on functions of the parameters, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. We provide a three-step estimation procedure that makes computation easier. Some commonly used methods are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. In some situations, the efficiency improvement is substantial. Finally, we apply this method to an Alzheimer’s disease study. PMID:25587200
NASA Astrophysics Data System (ADS)
Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong
2014-09-01
An image restoration method based on Poisson maximum likelihood estimation (PMLE) for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, with an automatic acceleration method used to speed up the iterative process, and an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and PSNR (peak signal-to-noise ratio) are chosen to validate the restoration effect. The simulation results show that the number of iterations affects the PSNR of the processed image and the operation time, and that the method can restore images of earthquake ruin scenes effectively and is practical.
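The basic Poisson maximum-likelihood restoration iteration (not the paper's accelerated variant, whose details the abstract does not give) is the Richardson-Lucy update: multiply the current estimate by the back-projected ratio of the data to the blurred estimate. A toy 1-D sketch with an invented blur kernel and signal:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Basic Poisson-ML (Richardson-Lucy) restoration in 1-D.
    Each iteration multiplies the estimate by the back-projected
    ratio of observed data to the current blurred estimate."""
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy example: recover a two-spike signal from a blurred observation.
truth = np.zeros(32)
truth[10] = 5.0
truth[20] = 3.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, n_iter=200)
```

The multiplicative update keeps the estimate non-negative, which is one reason this iteration is standard for Poisson data.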
Likelihood methods for binary responses of present components in a cluster
Li, Xiaoyun; Bandyopadhyay, Dipankar; Lipsitz, Stuart; Sinha, Debajyoti
2010-01-01
In some biomedical studies involving clustered binary responses (say, disease status), cluster sizes can vary because some components of the cluster may be absent. When both the presence of a cluster component and the binary disease status of a present component are treated as responses of interest, we propose a novel two-stage random effects logistic regression framework. For ease of interpretation of regression effects, both the marginal probability of presence/absence of a component and the conditional probability of disease status of a present component preserve approximate logistic regression forms. We present a maximum likelihood method of estimation implementable using standard statistical software. We compare our models and the physical interpretation of regression effects with competing methods from the literature. We also present a simulation study to assess the robustness of our procedure to misspecification of the random effects distribution and to compare the finite-sample performance of our estimates with existing methods. The methodology is illustrated by analyzing a study of periodontal health status in a diabetic Gullah population. PMID:20825395
Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods.
Benevides, Leandro de Jesus; Carvalho, Daniel Santana de; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos
2016-07-14
Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, such as Maximum Likelihood (ML), and compared the results using a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from ML method, as well as from the constructed networks showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The accordance in results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27419397
Likelihood Methods for Testing Group Problem Solving Models with Censored Data.
ERIC Educational Resources Information Center
Regal, Ronald R.; Larntz, Kinley
1978-01-01
Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)
FITTING STATISTICAL DISTRIBUTIONS TO AIR QUALITY DATA BY THE MAXIMUM LIKELIHOOD METHOD
A computer program has been developed for fitting statistical distributions to air pollution data using maximum likelihood estimation. Appropriate uses of this software are discussed and a grouped data example is presented. The program fits the following continuous distributions:...
Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods
NASA Astrophysics Data System (ADS)
Marchant, Ben; Mackay, Jonathan; Bloomfield, John
2016-09-01
Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov Chain Monte Carlo approach the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the
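A toy version of the residual-model comparison described in this abstract can be written directly: simulate autocorrelated residuals at irregular times, then compare an independent Gaussian error model against an exponential temporal covariance model by AIC = 2k - 2 log L. The covariance range is fixed at its true value for brevity, unlike a full analysis, and all data here are invented.

```python
import numpy as np

def gaussian_loglik(resid, cov):
    """Log-likelihood of zero-mean Gaussian residuals with covariance cov."""
    sign, logdet = np.linalg.slogdet(cov)
    quad = resid @ np.linalg.solve(cov, resid)
    n = len(resid)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(0)
# Irregular observation times, as with many groundwater-level records.
t = np.sort(rng.uniform(0, 100, 120))
n = len(t)

# Simulate autocorrelated residuals from an exponential covariance model.
true_cov = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)
resid = np.linalg.cholesky(true_cov + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# Model 1: independent Gaussian errors (1 parameter: the variance).
var = resid.var()
aic_iid = 2 * 1 - 2 * gaussian_loglik(resid, var * np.eye(n))

# Model 2: exponential temporal covariance with the range fixed at its
# true value (2 parameters: variance and range).
aic_corr = 2 * 2 - 2 * gaussian_loglik(resid, var * true_cov)
```

For strongly autocorrelated residuals the correlated model attains a much higher likelihood, so its AIC is lower despite the extra parameter, mirroring the abstract's finding.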
Dooley, Thomas P; Curto, Ernest V; Davis, Richard L; Grammatico, Paola; Robinson, Edward S; Wilborn, Teresa W
2003-06-01
In this article, some of the advantages and limitations of DNA microarray technologies for gene expression profiling are summarized. As a model experiment, DermArray DNA microarrays were utilized to identify potential biomarkers of cultured normal human melanocytes in two different experimental comparisons. In the first case, melanocyte RNA was compared with vastly dissimilar non-melanocytic RNA samples of normal skin keratinocytes and fibroblasts. In the second case, melanocyte RNA was compared with a primary cutaneous melanoma line (MS7) and a metastatic melanoma cell line (SKMel-28). The alternative approaches provide dramatically different lists of 'normal melanocyte' biomarkers. The most robust biomarkers were identified using principal component analysis bioinformatic methods related to likelihood ratios. Only three of 25 robust biomarkers in the melanocyte-proximal study (i.e. melanocytes vs. melanoma cells) were coincidentally identified in the melanocyte-distal study (i.e. melanocytes vs. non-melanocytic cells). Selected up-regulated biomarkers of melanocytes (i.e. TRP-1, melan-A/MART-1, silver/Pmel17, and nidogen-2) were validated by qRT-PCR. Some of the melanocytic biomarkers identified here may be useful in molecular diagnostics, as potential molecular targets for drug discovery, and for understanding the biochemistry of melanocytic cells. PMID:12753397
An alternative method to measure the likelihood of a financial crisis in an emerging market
NASA Astrophysics Data System (ADS)
Özlale, Ümit; Metin-Özcan, Kıvılcım
2007-07-01
This paper utilizes an early warning system to measure the likelihood of a financial crisis in an emerging market economy. We introduce a methodology with which we can both obtain a likelihood series and analyze the time-varying effects of several macroeconomic variables on this likelihood. Since the issue is analyzed in a non-linear state space framework, the extended Kalman filter emerges as the optimal estimation algorithm. Taking the Turkish economy as our laboratory, the results indicate that both the derived likelihood measure and the estimated time-varying parameters are meaningful and can successfully explain the path that the Turkish economy followed between 2000 and 2006. The estimated parameters also suggest that an overvalued domestic currency, a current account deficit and an increase in default risk raise the likelihood of an economic crisis. Overall, the findings suggest that the estimation methodology introduced here can also be applied to other emerging market economies.
NASA Astrophysics Data System (ADS)
Stollenwerk, Nico
2009-09-01
Basic stochastic processes, like the SIS and SIR epidemics, are used to describe data from an internet-based surveillance system, the InfluenzaNet. Via generating functions, analytic expressions for the probability can be derived in some simplified situations, and from these, likelihood functions for parameter estimation are constructed. This is a nice application in which partial differential equations appear in epidemiological applications without invoking any explicitly spatial aspect. All steps can eventually be bridged by numerical simulation in case of analytical difficulties [1, 2].
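As a much-simplified stand-in for the generating-function machinery, the likelihood step can be illustrated with the maximum likelihood estimate of an epidemic recovery rate from exponentially distributed infectious periods (toy data; the rate 0.3 is an invented example value):

```python
import numpy as np

rng = np.random.default_rng(42)
true_gamma = 0.3  # recovery rate in an SIS/SIR-type model (invented)
# Infectious periods are exponential with mean 1/gamma in such models.
periods = rng.exponential(1.0 / true_gamma, size=500)

# For exponential waiting times the log-likelihood is
#   l(gamma) = n * log(gamma) - gamma * sum(t_i),
# which is maximized at gamma_hat = n / sum(t_i).
gamma_hat = len(periods) / periods.sum()
```

The same logic, with far more bookkeeping, underlies likelihood construction for the full stochastic epidemic processes the abstract describes.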
ERIC Educational Resources Information Center
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua
2012-01-01
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
NASA Astrophysics Data System (ADS)
Lovreglio, Ruggiero; Ronchi, Enrico; Nilsson, Daniel
2015-11-01
The formulation of pedestrian floor field cellular automaton models is generally based on hypothetical assumptions to represent reality. This paper proposes a novel methodology to calibrate these models using experimental trajectories. The methodology is based on likelihood function optimization and allows verifying whether the parameters defining a model statistically affect pedestrian navigation. Moreover, it allows comparing different model specifications or the parameters of the same model estimated using different data collection techniques, e.g. virtual reality experiment, real data, etc. The methodology is here implemented using navigation data collected in a Virtual Reality tunnel evacuation experiment including 96 participants. A trajectory dataset in the proximity of an emergency exit is used to test and compare different metrics, i.e. Euclidean and modified Euclidean distance, for the static floor field. In the present case study, modified Euclidean metrics provide better fitting with the data. A new formulation using random parameters for pedestrian cellular automaton models is also defined and tested.
A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations
Anderson, Amy D.; Weir, Bruce S.
2007-01-01
A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212
Hu, Yuxiang; Lu, Jing; Qiu, Xiaojun
2015-08-01
Open-sphere microphone arrays are preferred over rigid-sphere arrays when minimal interaction between the array and the measured sound field is required. However, open-sphere arrays suffer from poor robustness at the null frequencies of the spherical Bessel function. This letter proposes a maximum likelihood method for direction-of-arrival estimation in the spherical harmonic domain, which avoids division by the spherical Bessel function and can be used at arbitrary frequencies. Furthermore, the method can easily be extended to wideband implementation. Simulation and experiment results demonstrate the superiority of the proposed method over the commonly used methods in open-sphere configurations. PMID:26328695
A likelihood method to cross-calibrate air-shower detectors
NASA Astrophysics Data System (ADS)
Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko
2016-01-01
We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.
Comparisons of Four Methods for Estimating a Dynamic Factor Model
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.
2008-01-01
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
McGee, Steven
2002-01-01
Likelihood ratios are one of the best measures of diagnostic accuracy, although they are seldom used, because interpreting them requires a calculator to convert back and forth between “probability” and “odds” of disease. This article describes a simpler method of interpreting likelihood ratios, one that avoids calculators, nomograms, and conversions to “odds” of disease. Several examples illustrate how the clinician can use this method to refine diagnostic decisions at the bedside.
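The odds arithmetic that the bedside shortcut approximates can be sketched directly (the 0.4 pretest probability below is an arbitrary example):

```python
def posttest_probability(pretest_p, lr):
    """Exact Bayes update: convert probability to odds, multiply by the
    likelihood ratio, convert back to a probability."""
    odds = pretest_p / (1 - pretest_p)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# The bedside approximation: likelihood ratios of 2, 5 and 10 raise the
# probability by roughly 15, 30 and 45 percentage points (for mid-range
# pretest probabilities), sparing the clinician the odds conversion.
p = posttest_probability(0.4, 2)  # exact value, approximately 0.571
```

Comparing the exact value (0.571) with the shortcut (0.40 + 0.15 = 0.55) shows why the approximation is accurate enough for bedside use.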
A method for selecting M dwarfs with an increased likelihood of unresolved ultracool companionship
NASA Astrophysics Data System (ADS)
Cook, N. J.; Pinfield, D. J.; Marocco, F.; Burningham, B.; Jones, H. R. A.; Frith, J.; Zhong, J.; Luo, A. L.; Qi, Z. X.; Lucas, P. W.; Gromadzki, M.; Day-Jones, A. C.; Kurtev, R. G.; Guo, Y. X.; Wang, Y. F.; Bai, Y.; Yi, Z. P.; Smart, R. L.
2016-04-01
Locating ultracool companions to M dwarfs is important for constraining low-mass formation models, the measurement of substellar dynamical masses and radii, and for testing ultracool evolutionary models. We present an optimized method for identifying M dwarfs which may have unresolved ultracool companions. We construct a catalogue of 440 694 M dwarf candidates, from Wide-Field Infrared Survey Explorer, Two Micron All-Sky Survey and Sloan Digital Sky Survey, based on optical- and near-infrared colours and reduced proper motion. With strict reddening, photometric and quality constraints we isolate a subsample of 36 898 M dwarfs and search for possible mid-infrared M dwarf + ultracool dwarf candidates by comparing M dwarfs which have similar optical/near-infrared colours (chosen for their sensitivity to effective temperature and metallicity). We present 1082 M dwarf + ultracool dwarf candidates for follow-up. Using simulated ultracool dwarf companions to M dwarfs, we estimate that the occurrence of unresolved ultracool companions amongst our M dwarf + ultracool dwarf candidates should be at least four times the average for our full M dwarf catalogue. We discuss possible contamination and bias and predict yields of candidates based on our simulations.
A method for modeling bias in a person's estimates of likelihoods of events
NASA Technical Reports Server (NTRS)
Nygren, Thomas E.; Morera, Osvaldo
1988-01-01
It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.
NASA Astrophysics Data System (ADS)
Chang, Yen-Ching
2015-10-01
Efficiency and accuracy are two unavoidable considerations in estimating the Hurst exponent. Recently, an efficient implementation of the maximum likelihood estimator (MLE) for the Hurst exponent (simply called the fast MLE) was proposed based on a combination of the Levinson algorithm and Cholesky decomposition; the fast MLE covers all four possible cases: known mean, unknown mean, known variance, and unknown variance. In this paper, four cases of an approximate MLE (AMLE) were obtained based on two approximations of the logarithmic determinant and the inverse of the covariance matrix. The computational cost of the AMLE is much lower than that of the MLE, but a little higher than that of the fast MLE. To raise the computational efficiency of the proposed AMLE, the required power spectral density (PSD) was calculated indirectly by interpolating two suitable PSDs chosen from a set of established PSDs. Experimental results show that the AMLE through interpolation (simply called the interpolating AMLE) speeds up computation: it is on average over 24 times faster than the fast MLE while keeping its accuracy very close to that of the MLE or the fast MLE.
NASA Astrophysics Data System (ADS)
Osmaston, Miles
2013-04-01
In my oral(?) contribution to this session [1] I use my studies of the fundamental physics of gravitation to derive a reason for expecting the vertical gradient of electron density (= radial electric field) in the ionosphere to be closely affected by another field, directly associated with the ordinary gravitational potential (g) present at the Earth's surface. I have called that other field the Gravity-Electric (G-E) field. A calibration of this linkage relationship could be provided by noting corresponding co-seismic changes in (g) and in the ionosphere when, for example, a major normal-fault slippage occurs. But we are here concerned with precursory changes. This means we are looking for mechanisms which, on suitably short timescales, would generate pre-quake elastic deformation that changes the local (g). This poster supplements my talk by noting, for more relaxed discussion, what I see as potentially relevant plate dynamical mechanisms. Timescale constraints. If monitoring for ionospheric precursors is on only short timescales, their detectability is limited to correspondingly tectonically active regions. But as our monitoring becomes more precise and over longer terms, this constraint will relax. Most areas of the Earth are undergoing very slow heating or cooling and corresponding volume or epeirogenic change; major earthquakes can result but we won't have detected any accumulating ionospheric precursor. Transcurrent faulting. In principle, slip on a straight fault, even in a stick-slip manner, should produce little vertical deformation, but a kink, such as has caused the Transverse Ranges on the San Andreas Fault, would seem worth monitoring for precursory build-up in the ionosphere. Plate closure - subducting plate downbend. The traditionally presumed elastic flexure downbend mechanism is incorrect. 'Seismic coupling' has long been recognized by seismologists, invoking the repeated occurrence of 'asperities' to temporarily lock subduction and allow stress
Şentürk, Damla; Dalrymple, Lorien S.; Mu, Yi; Nguyen, Danh V.
2014-01-01
We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship between covariates and (a) the probability of cardiovascular events, a binary process, and (b) the rate of events once the realization is positive - when the 'hurdle' is crossed - using a zero-truncated Poisson distribution. When the observation period or follow-up time from the start of dialysis varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation is a weighted likelihood approach in which standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting. PMID:24930810
Tibshirani, R.J.
1984-12-01
In this work, we extend the idea of local averaging to likelihood-based regression models. One application is in the class of generalized linear models (Nelder and Wedderburn, 1972). We enlarge this class by replacing the linear covariate form xβ with an unspecified smooth function s(x). This function is estimated from the data by a technique we call Local Likelihood Estimation - a type of local averaging. Multiple covariates are incorporated through a forward stepwise algorithm. In a number of real data examples, the local likelihood technique proves to be effective in uncovering non-linear dependencies. Finally, we give some asymptotic results for local likelihood estimates and provide some methods for inference.
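For Gaussian errors, local likelihood estimation reduces to kernel-weighted least squares, which makes the "local averaging" idea concrete. A minimal sketch; the bandwidth, kernel, and test data are all invented for illustration:

```python
import numpy as np

def local_likelihood_fit(x0, x, y, h=0.3):
    """Local (Gaussian) likelihood estimate of s(x0): maximize a
    kernel-weighted log-likelihood, which for Gaussian errors is a
    weighted linear least-squares problem in the basis (1, x - x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local linear basis
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                  # fitted value at x0

# Recover a smooth nonlinear dependence from noisy data.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 300))
y = np.sin(x) + 0.1 * rng.standard_normal(300)
s_hat = np.array([local_likelihood_fit(x0, x, y) for x0 in (1.0, np.pi, 5.0)])
```

For non-Gaussian families the inner weighted least-squares step is replaced by a kernel-weighted iteratively reweighted fit, but the structure is the same.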
Maximum likelihood method to correct for missed levels based on the Δ₃(L) statistic
Mulhall, Declan
2011-05-15
The Δ₃(L) statistic of random matrix theory is defined as the average of a set of random numbers {δ} derived from a spectrum. The distribution p(δ) of these random numbers is used as the basis of a maximum likelihood method to gauge the fraction x of levels missed in an experimental spectrum. The method is tested on an ensemble of depleted spectra from the Gaussian orthogonal ensemble (GOE) and accurately returned the correct fraction of missed levels. Neutron resonance data and acoustic spectra of an aluminum block were analyzed. All results were compared with an analysis based on an established expression for Δ₃(L) for a depleted GOE spectrum. The effects of intruder levels are examined and seen to be very similar to those of missed levels. Shell model spectra were seen to give the same p(δ) as the GOE.
Yoshida, Ruriko; Nei, Masatoshi
2016-06-01
At the present time it is often stated that the maximum likelihood or the Bayesian method of phylogenetic construction is more accurate than the neighbor joining (NJ) method. Our computer simulations, however, have shown that the converse is true if we use the p distance in the NJ procedure and judge accuracy by the probability of obtaining the true tree (Pc, expressed as a percentage) or by a quantity c combining Pc with Robinson-Foulds' average topological error index dT. This c is given by Pc(1 - dT/dTmax) = Pc(m - 3 - dT/2)/(m - 3), where m is the number of taxa used and dTmax, the maximum possible value of dT, is 2(m - 3). The neighbor joining method with p distance (NJp method) will be shown generally to give the best data-fit model. c takes a value between 0 and 1, and a tree-making method giving a high value of c is considered good. Our computer simulations have shown that the NJp method generally performs better than the other methods, and therefore this method should be used in general, whether or not the gene is compositional or contains mosaic DNA regions. PMID:26929244
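The two forms of c quoted in this abstract are algebraically identical, since 1 - dT/(2(m - 3)) = (m - 3 - dT/2)/(m - 3); a quick check (with arbitrary example values) confirms this:

```python
def c_direct(pc, dt, m):
    """c = Pc * (1 - dT/dTmax) with dTmax = 2(m - 3)."""
    return pc * (1 - dt / (2 * (m - 3)))

def c_factored(pc, dt, m):
    """The equivalent factored form: Pc * (m - 3 - dT/2) / (m - 3)."""
    return pc * (m - 3 - dt / 2) / (m - 3)

# Example: 10 taxa, Pc = 0.8, average topological error dT = 4.
c_example = c_direct(0.8, 4, 10)
```

With Pc = 0.8, dT = 4 and m = 10, both forms give 0.8 × 5/7 ≈ 0.571, between 0 and 1 as the abstract states.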
Calibrating CAT Pools and Online Pretest Items Using Marginal Maximum Likelihood Methods.
ERIC Educational Resources Information Center
Pommerich, Mary; Segall, Daniel O.
Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1976-01-01
A maximum likelihood estimation method was applied to flight data, together with procedures to facilitate the routine analysis of large amounts of flight data. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for the overall analysis are also discussed.
Priiatkina, S N
2002-05-01
For mapping nonlinked interacting genes relative to marker loci, log-likelihood functions were derived that permit estimation of recombination fractions by solving the ML equations on the basis of F2 data under various types of interaction. In some cases the recombination fraction estimates are obtained in analytical form, while in others they are computed numerically from the experimental data. With the same type of epistasis, the log-likelihood functions were shown to differ depending on the functional role (suppression or epistasis) of the mapped gene. Methods for testing the correspondence of the model and the recombination fraction estimates to the experimental data are discussed. In ambiguous cases, analysis of the behavior of linked markers makes it possible to differentiate gene interaction from distorted single-locus segregation, which at some forms of interaction imitates phenotypic ratios. PMID:12068553
The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction
Llacer, J.; Veklerov, E.
1987-01-01
Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge towards strongly deteriorated versions of the original source image. The deterioration is caused by an excessive attempt by the algorithm to match the projection data with high counts, and this effect can be modulated. Comparing reconstructions of a source image by filtered backprojection and by the MLE algorithm, we show that if the iterative procedure is stopped at an appropriate point, the MLE images can have noise similar to the filtered backprojection images in regions of high activity, and very low noise, comparable to the source image, in regions of low activity.
Evaluating the performance of likelihood methods for detecting population structure and migration.
Abdo, Zaid; Crandall, Keith A; Joyce, Paul
2004-04-01
A plethora of statistical models have recently been developed to estimate components of population genetic history. Very few of these methods, however, have been adequately evaluated for their performance in accurately estimating population genetic parameters of interest. In this paper, we continue a research program of evaluating population genetic methods through computer simulation. Specifically, we examine the software MIGRATE-N 1.6.8 and test its accuracy in estimating genetic diversity (Theta), migration rates, and confidence intervals. We simulated nucleotide sequence data under a neutral coalescent model with lengths of 500 bp and 1000 bp, and with three different per-site Theta values (0.00025, 0.0025, 0.025) crossed with four different migration rates (0.0000025, 0.025, 0.25, 2.5), to construct 1000 evolutionary trees per combination per sequence length. We found that while MIGRATE-N 1.6.8 performs reasonably well in estimating genetic diversity (Theta), it does poorly at estimating migration rates and the confidence intervals associated with them. We recommend researchers use this software with caution under conditions similar to those used in this evaluation. PMID:15012759
Chen, Jinbo; Lin, Dongyu; Hochner, Hagit
2012-09-01
Case-control mother-child pair design represents a unique advantage for dissecting genetic susceptibility of complex traits because it allows the assessment of both maternal and offspring genetic compositions. This design has been widely adopted in studies of obstetric complications and neonatal outcomes. In this work, we developed an efficient statistical method for evaluating joint genetic and environmental effects on a binary phenotype. Using a logistic regression model to describe the relationship between the phenotype and maternal and offspring genetic and environmental risk factors, we developed a semiparametric maximum likelihood method for the estimation of odds ratio association parameters. Our method is novel because it exploits two unique features of the study data for the parameter estimation. First, the correlation between maternal and offspring SNP genotypes can be specified under the assumptions of random mating, Hardy-Weinberg equilibrium, and Mendelian inheritance. Second, environmental exposures are often not affected by offspring genes conditional on maternal genes. Our method yields more efficient estimates compared with the standard prospective method for fitting logistic regression models to case-control data. We demonstrated the performance of our method through extensive simulation studies and the analysis of data from the Jerusalem Perinatal Study. PMID:22587881
New methods to assess severity and likelihood of urban flood risk from intense rainfall
NASA Astrophysics Data System (ADS)
Fewtrell, Tim; Foote, Matt; Bates, Paul; Ntelekos, Alexandros
2010-05-01
the construction of appropriate probabilistic flood models. This paper will describe new research being undertaken to assess the practicality of ultra-high resolution, ground based laser-scanner data for flood modelling in urban centres, using new hydraulic propagation methods to determine the feasibility of such data to be applied within stochastic event models. Results from the collection of 'point cloud' data collected from a mobile terrestrial laser-scanner system in a key urban centre, combined with appropriate datasets, will be summarized here and an initial assessment of the potential for the use of such data in stochastic event sets will be made. Conclusions are drawn from comparisons with previous studies and underlying DEM products of similar resolutions in terms of computational time, flood extent and flood depth. Based on the above, the study provides some current recommendations on the most appropriate resolution of input data for urban hydraulic modelling.
The Likelihood Function and Likelihood Statistics
NASA Astrophysics Data System (ADS)
Robinson, Edward L.
2016-01-01
The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
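The Poisson example can be made concrete. Below is a minimal sketch (hypothetical counts, model mu_i = a*t_i) showing that the Poisson maximum-likelihood estimate of the amplitude differs from the least-squares estimate and attains a higher Poisson likelihood:

```python
import math

def poisson_nll(a, t, y):
    # Negative Poisson log-likelihood for the model mu_i = a * t_i
    # (the log(y!) term is dropped since it does not depend on a)
    return sum(a * ti - yi * math.log(a * ti) for ti, yi in zip(t, y))

t = [1.0, 2.0, 3.0, 4.0]   # exposure times (hypothetical)
y = [2, 3, 8, 11]          # observed counts (hypothetical)

a_mle = sum(y) / sum(t)    # closed-form Poisson MLE: d/da sum(y_i/a - t_i) = 0
a_lsq = sum(yi * ti for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

print(round(a_mle, 3), round(a_lsq, 3))
print(poisson_nll(a_mle, t, y) < poisson_nll(a_lsq, t, y))  # True
```

The two estimates disagree because least squares weights all residuals equally, whereas the Poisson likelihood implicitly downweights high-count bins less than a Gaussian model would.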
Terwilliger, J.D.
1995-03-01
Historically, most methods for detecting linkage disequilibrium were designed for use with diallelic marker loci, for which the analysis is straightforward. With the advent of polymorphic markers with many alleles, the normal approach to their analysis has been either to extend the methodology for two-allele systems (leading to an increase in df and to a corresponding loss of power) or to select the allele believed to be associated and then collapse the other alleles, reducing, in a biased way, the locus to a diallelic system. I propose a likelihood-based approach to testing for linkage disequilibrium, an approach that becomes more conservative as the number of alleles increases, and as the number of markers considered jointly increases in a multipoint test for linkage disequilibrium, while maintaining high power. Properties of this method for detecting associations and fine mapping the location of disease traits are investigated. It is found to be, in general, more powerful than conventional methods, and it provides a tractable framework for the fine mapping of new disease loci. Application to the cystic fibrosis data of Kerem et al. is included to illustrate the method. 12 refs., 4 figs., 4 tabs.
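The general shape of such a likelihood-based test — maximize under the null and the alternative, then refer twice the log-likelihood difference to a chi-square distribution — can be sketched for the simple diallelic case (hypothetical allele counts; this is the textbook 1-df test, not Terwilliger's multiallelic statistic):

```python
import math

def binom_ll(k, n, p):
    # Binomial log-likelihood (combinatorial constant dropped)
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Hypothetical counts of the tested allele out of n chromosomes
k_case, n_case = 60, 100
k_ctrl, n_ctrl = 40, 100

# H0: one shared allele frequency; H1: separate case/control frequencies
p0 = (k_case + k_ctrl) / (n_case + n_ctrl)
ll0 = binom_ll(k_case, n_case, p0) + binom_ll(k_ctrl, n_ctrl, p0)
ll1 = (binom_ll(k_case, n_case, k_case / n_case)
       + binom_ll(k_ctrl, n_ctrl, k_ctrl / n_ctrl))

lrt = 2.0 * (ll1 - ll0)        # asymptotically chi-square with 1 df under H0
print(round(lrt, 2), lrt > 3.84)  # 3.84 = 5% critical value for 1 df
```

With many alleles, the naive extension of this test spends one degree of freedom per extra allele; parameterizations that keep the test at a single degree of freedom are what preserve power in the multiallelic setting.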
Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick
2015-01-01
Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulations demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
Wen, Yalu; Lu, Qing
2016-09-01
Although compelling evidence suggests that the genetic etiology of complex diseases could be heterogeneous in subphenotype groups, little attention has been paid to phenotypic heterogeneity in genetic association analysis of complex diseases. Simply ignoring phenotypic heterogeneity in association analysis could result in attenuated estimates of genetic effects and low power of association tests if subphenotypes with similar clinical manifestations have heterogeneous underlying genetic etiologies. To facilitate the family-based association analysis allowing for phenotypic heterogeneity, we propose a clustered multiclass likelihood-ratio ensemble (CMLRE) method. The proposed method provides an alternative way to model the complex relationship between disease outcomes and genetic variants. It allows for heterogeneous genetic causes of disease subphenotypes and can be applied to various pedigree structures. Through simulations, we found CMLRE outperformed the commonly adopted strategies in a variety of underlying disease scenarios. We further applied CMLRE to a family-based dataset from the International Consortium to Identify Genes and Interactions Controlling Oral Clefts (ICOC) to investigate the genetic variants and interactions predisposing to subphenotypes of oral clefts. The analysis suggested that two subphenotypes, nonsyndromic cleft lip without palate (CL) and cleft lip with palate (CLP), shared similar genetic etiologies, while cleft palate only (CP) had its own genetic mechanism. The analysis further revealed that rs10863790 (IRF6), rs7017252 (8q24), and rs7078160 (VAX1) were jointly associated with CL/CLP, while rs7969932 (TBK1), rs227731 (17q22), and rs2141765 (TBK1) jointly contributed to CP. PMID:27321816
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.; Wilson, H. B.
1991-01-01
The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.
ERIC Educational Resources Information Center
Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya
2014-01-01
This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Stepwise Signal Extraction via Marginal Likelihood
Du, Chao; Kao, Chu-Lan Michael
2015-01-01
This paper studies the estimation of stepwise signals. To determine the number and locations of change-points of a stepwise signal, we formulate a maximum marginal likelihood estimator, which can be computed with a quadratic cost using dynamic programming. We carry out an extensive investigation of the choice of the prior distribution and study the asymptotic properties of the maximum marginal likelihood estimator. We propose to treat each possible set of change-points equally and adopt an empirical Bayes approach to specify the prior distribution of segment parameters. A detailed simulation study is performed to compare the effectiveness of this method with other existing methods. We demonstrate our method on single-molecule enzyme reaction data and on DNA array CGH data. Our study shows that this method is applicable to a wide range of models and offers appealing results in practice. PMID:27212739
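The quadratic-cost dynamic program has the same structure regardless of the segment criterion. As a sketch, the version below uses a penalized least-squares cost in place of the marginal likelihood (an illustrative simplification, with made-up data):

```python
def segment(y, penalty):
    """Penalized least-squares segmentation of a 1-D signal via dynamic
    programming: cost(i) = min over j < i of cost(j) + sse(y[j:i]) + penalty."""
    n = len(y)
    # Prefix sums give O(1) segment sums of squared deviations
    s1 = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        s1[i + 1] = s1[i] + v
        s2[i + 1] = s2[i] + v * v

    def sse(j, i):  # squared deviations of y[j:i] from its mean
        m = (s1[i] - s1[j]) / (i - j)
        return (s2[i] - s2[j]) - m * (s1[i] - s1[j])

    cost = [0.0] + [float("inf")] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            c = cost[j] + sse(j, i) + penalty
            if c < cost[i]:
                cost[i], back[i] = c, j
    # Recover change-points by walking the back-pointers
    cps, i = [], n
    while i > 0:
        cps.append(back[i])
        i = back[i]
    return sorted(cps[:-1])  # drop the leading 0

y = [0.1, -0.2, 0.0, 0.1, 5.1, 4.9, 5.0, 5.2, 1.0, 1.1, 0.9]
print(segment(y, penalty=1.0))  # change-points at indices 4 and 8
```

Swapping the per-segment term for a marginal likelihood (integrating out each segment's mean under a prior) changes only `sse` and the penalty; the O(n²) recursion is unchanged.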
Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.; Whitney, Paul D.; Coles, Garill A.; Henderson, Cindy L.; Wolf, Katherine E.; Hoopes, Bonnie L.
2008-12-01
The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies – a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.
NASA Technical Reports Server (NTRS)
Gayman, W. H.
1974-01-01
Test method and apparatus determine fluid effective mass and damping in frequency range where effective mass may be considered as total mass less sum of slosh masses. Apparatus is designed so test tank and its mounting yoke are supported from structural test wall by series of flexures.
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.; Spagnuolo, Joelle M.
1992-01-01
The research being conducted pertains to the determination of the stability and control derivatives of the F/A-18 High Alpha Research Vehicle (HARV) from flight data using the Maximum Likelihood Method. The document outlines the approach used in the parameter identification (PID) process and briefly describes the mathematical modeling of the F/A-18 HARV and the maneuvers designed to generate a sufficient data base for the PID research.
Kosakovsky Pond, Sergei L.; Poon, Art F.Y.; Leigh Brown, Andrew J.; Frost, Simon D.W.
2008-01-01
We develop a model-based phylogenetic maximum likelihood test for evidence of preferential substitution toward a given residue at individual positions of a protein alignment—directional evolution of protein sequences (DEPS). DEPS can identify both the target residue and sites evolving toward it, help detect selective sweeps and frequency-dependent selection—scenarios that confound most existing tests for selection, and achieve good power and accuracy on simulated data. We applied DEPS to alignments representing different genomic regions of influenza A virus (IAV), sampled from avian hosts (H5N1 serotype) and human hosts (H3N2 serotype), and identified multiple directionally evolving sites in 5/8 genomic segments of H5N1 and H3N2 IAV. We propose a simple descriptive classification of directionally evolving sites into 5 groups based on the temporal distribution of residue frequencies and document known functional correlates, such as immune escape or host adaptation. PMID:18511426
Pinou, Theodora; Vicario, Saverio; Marschner, Monique; Caccone, Adalgisa
2004-08-01
This paper focuses on the phylogenetic relationships of eight North American caenophidian snake species (Carphophis amoena, Contia tenuis, Diadophis punctatus, Farancia abacura, Farancia erytrogramma, Heterodon nasicus, Heterodon platyrhinos, and Heterodon simus) whose phylogenetic relationships remain controversial. Past studies have referred to these "relict" North American snakes either as colubrid, or as Neotropical dipsadids and/or xenodontids. Based on mitochondrial DNA ribosomal gene sequences and a likelihood-based Bayesian analysis, our study suggests that these North American snakes are not monophyletic and are nested within a group (Dipsadoidea) that contains the Dipsadidae, Xenodontidae, and Natricidae. In addition, we use the relationships proposed here to highlight putative examples of parallel evolution of hemipenial morphology among snake clades. PMID:15223038
NASA Astrophysics Data System (ADS)
Ipsen, Andreas; Ebbels, Timothy M. D.
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time—tasks that may be best addressed through engineering efforts.
Dynamic Method for Identifying Collected Sample Mass
NASA Technical Reports Server (NTRS)
Carson, John
2008-01-01
G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
NASA Astrophysics Data System (ADS)
Roth, Marshall
The Large-Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope is a pair-conversion gamma-ray telescope with unprecedented capability to image astrophysical gamma-ray sources between 20 MeV and 300 GeV. The pre-launch performance of the LAT, decomposed into effective area, energy and angular dispersions, was determined through extensive Monte Carlo (MC) simulations and beam tests. The point-spread function (PSF) characterizes the angular distribution of reconstructed photons as a function of energy and geometry in the detector. Here we present a set of likelihood analyses of LAT data based on the spatial and spectral properties of sources, including a determination of the PSF on orbit. We find that the PSF on orbit is generally broader than the MC prediction at energies above 3 GeV and consider several systematic effects to explain this difference. We also investigated several possible spatial models for pair-halo emission around BL Lac AGN and found no evidence for a component with spatial extension larger than the PSF.
Terwilliger, Thomas C.
2001-01-01
The recently developed technique of maximum-likelihood density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] allows a calculation of phase probabilities based on the likelihood of the electron-density map to be carried out separately from the calculation of any prior phase probabilities. Here, it is shown that phase-probability distributions calculated from the map-likelihood function alone can be highly accurate and that they show minimal bias towards the phases used to initiate the calculation. Map-likelihood phase probabilities depend upon expected characteristics of the electron-density map, such as a defined solvent region and expected electron-density distributions within the solvent region and the region occupied by a macromolecule. In the simplest case, map-likelihood phase-probability distributions are largely based on the flatness of the solvent region. Though map-likelihood phases can be calculated without prior phase information, they are greatly enhanced by high-quality starting phases. This leads to the technique of prime-and-switch phasing for removing model bias. In prime-and-switch phasing, biased phases such as those from a model are used to prime or initiate map-likelihood phasing, then final phases are obtained from map-likelihood phasing alone. Map-likelihood phasing can be applied in cases with solvent content as low as 30%. Potential applications of map-likelihood phasing include unbiased phase calculation from molecular-replacement models, iterative model building, unbiased electron-density maps for cases where 2Fo − Fc or σA-weighted maps would currently be used, structure validation and ab initio phase determination from solvent masks, non-crystallographic symmetry or other knowledge about expected electron density. PMID:11717488
Likelihood functions for the analysis of single-molecule binned photon sequences
Gopich, Irina V.
2011-01-01
We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, we develop the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained. PMID:22711967
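For a two-state model, the exact likelihood of a binned photon sequence can be computed with the scaled forward recursion familiar from hidden Markov models. A minimal sketch, with hypothetical rates and transition probabilities:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def log_likelihood(counts, lam, trans, start):
    """Log-likelihood of binned photon counts under a two-state hidden
    Markov model with state-dependent Poisson emission, computed with
    the scaled forward algorithm."""
    alpha = [start[s] * poisson_pmf(counts[0], lam[s]) for s in (0, 1)]
    ll = 0.0
    for k in counts[1:]:
        norm = sum(alpha)
        ll += math.log(norm)              # predictive probability of this bin
        alpha = [a / norm for a in alpha]
        alpha = [poisson_pmf(k, lam[t])
                 * sum(alpha[s] * trans[s][t] for s in (0, 1))
                 for t in (0, 1)]
    return ll + math.log(sum(alpha))

counts = [0, 1, 0, 7, 9, 8, 1, 0]     # hypothetical photons per bin
lam = (0.5, 8.0)                      # per-bin rates of the two states
trans = ((0.9, 0.1), (0.2, 0.8))      # state transition matrix per bin
ll = log_likelihood(counts, lam, trans, start=(0.5, 0.5))
print(ll < 0.0)  # a log-likelihood of discrete data is always negative
```

This discrete-time recursion corresponds to the slow-dynamics limit discussed in the abstract, where at most one transition per bin is assumed; the exact continuous-time likelihood replaces the per-bin transition matrix with matrix exponentials of the rate matrix.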
Huang, Jinxin; Hindman, Holly B; Rolland, Jannick P
2016-05-01
Dry eye disease (DED) is a common ophthalmic condition that is characterized by tear film instability and leads to ocular surface discomfort and visual disturbance. Advancements in the understanding and management of this condition have been limited by our ability to study the tear film secondary to its thin structure and dynamic nature. Here, we report a technique to simultaneously estimate the thickness of both the lipid and aqueous layers of the tear film in vivo using optical coherence tomography and maximum-likelihood estimation. After a blink, the lipid layer thickened rapidly at an average rate of 10 nm/s over the first 2.5 s before stabilizing, whereas the aqueous layer continued thinning at an average rate of 0.29 μm/s over the remainder of the 10 s blink cycle. Further development of this tear film imaging technique may allow for the elucidation of events that trigger tear film instability in DED. PMID:27128054
ERIC Educational Resources Information Center
Fennell, Mary L.; And Others
This document is part of a series of chapters described in SO 011 759. This chapter reports the results of Monte Carlo simulations designed to analyze problems of using maximum likelihood estimation (MLE: see SO 011 767) in research models which combine longitudinal and dynamic behavior data in studies of change. Four complications--censoring of…
Computational Methods for Structural Mechanics and Dynamics
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.
The Phylogenetic Likelihood Library
Flouri, T.; Izquierdo-Carrasco, F.; Darriba, D.; Aberer, A.J.; Nguyen, L.-T.; Minh, B.Q.; Von Haeseler, A.; Stamatakis, A.
2015-01-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2–10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). PMID:25358969
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction. PMID:26208310
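The core idea of the augmented Lagrangian approach — minimizing an objective subject to equality constraints by alternating an inner unconstrained minimization with a multiplier update — can be sketched on a toy quadratic problem (not the CT log-likelihood itself):

```python
def solve():
    """Minimize (x-1)^2 + (y-2)^2 subject to x + y = 1 via an augmented
    Lagrangian: inner gradient descent on
        f(x, y) + mu * c(x, y) + (rho / 2) * c(x, y)^2,  c(x, y) = x + y - 1,
    with the outer multiplier (dual) update mu += rho * c."""
    x, y = 0.0, 0.0
    mu, rho = 0.0, 10.0
    for _ in range(50):              # outer multiplier updates
        for _ in range(2000):        # inner unconstrained minimization
            c = x + y - 1.0
            gx = 2.0 * (x - 1.0) + mu + rho * c
            gy = 2.0 * (y - 2.0) + mu + rho * c
            x -= 0.01 * gx
            y -= 0.01 * gy
        mu += rho * (x + y - 1.0)    # dual update drives c -> 0
    return x, y

x, y = solve()
print(round(x, 3), round(y, 3))
```

The analytic constrained minimizer is (0, 1); unlike a pure quadratic penalty, the multiplier update enforces the constraint exactly without sending rho to infinity.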
ERIC Educational Resources Information Center
Lee, Sik-Yum; Xia, Ye-Mao
2006-01-01
By means of more than a dozen user-friendly packages, structural equation models (SEMs) are widely used in behavioral, educational, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
NASA Astrophysics Data System (ADS)
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-10-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
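The ABC scheme described in this abstract can be illustrated with a generic rejection sampler. This is only a sketch on a hypothetical toy problem (estimating the mean of Gaussian data, with the sample mean as the summary statistic), not the authors' atom-maser analysis; all function names and numbers below are invented for illustration.

```python
import random

def abc_rejection(data, prior_sample, simulate, summary, eps, n_draws=10000):
    """ABC by rejection: keep parameter draws whose simulated summary
    statistic lands within eps of the observed summary."""
    s_obs = summary(data)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(summary(simulate(theta, len(data))) - s_obs) < eps:
            accepted.append(theta)
    return accepted  # approximate sample from the posterior

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(200)]
post = abc_rejection(
    data,
    prior_sample=lambda: random.uniform(0.0, 4.0),
    simulate=lambda th, n: [random.gauss(th, 1.0) for _ in range(n)],
    summary=lambda xs: sum(xs) / len(xs),
    eps=0.1,
)
post_mean = sum(post) / len(post)
```

As the abstract emphasizes, the quality of the approximation hinges on the summary statistic: a sufficient statistic (here the sample mean) makes the accepted draws track the true posterior closely, while a poor statistic would not.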
Dynamic atomic force microscopy methods
NASA Astrophysics Data System (ADS)
García, Ricardo; Pérez, Rubén
2002-09-01
In this report we review the fundamentals, applications and future tendencies of dynamic atomic force microscopy (AFM) methods. Our focus is on understanding why the changes observed in the dynamic properties of a vibrating tip that interacts with a surface make it possible to obtain molecular-resolution images of membrane proteins in aqueous solutions or to resolve atomic-scale surface defects in ultra-high vacuum (UHV). Our description of the two major dynamic AFM modes, amplitude modulation atomic force microscopy (AM-AFM) and frequency modulation atomic force microscopy (FM-AFM), emphasises their common points without ignoring the differences in experimental set-ups and operating conditions. Those differences are introduced by the different feedback parameters, oscillation amplitude in AM-AFM and frequency shift and excitation amplitude in FM-AFM, used to track the topography and composition of a surface. The theoretical analysis of AM-AFM (also known as tapping mode) emphasises the coexistence, in many situations of interest, of two stable oscillation states, a low- and a high-amplitude solution. The coexistence of those oscillation states is a consequence of the presence of attractive and repulsive components in the interaction force and their non-linear dependence on the tip-surface separation. We show that key experimental properties such as the lateral resolution, image contrast and sample deformation are highly dependent on the oscillation state chosen to operate the instrument. AM-AFM allows one to obtain simultaneous topographic and compositional contrast in heterogeneous samples by recording the phase-angle difference between the external excitation and the tip motion (phase imaging). Significant applications of AM-AFM such as high-resolution imaging of biomolecules and polymers, large-scale patterning of silicon surfaces, manipulation of single nanoparticles or the fabrication of single-electron devices are also reviewed. FM-AFM (also called non
Andrews, Steven S; Rutherford, Suzannah
2016-01-01
Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a "1-step calibration method" reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional "2-step" method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use. PMID:26908370
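The idea of computing all calibration results jointly from all measurements can be sketched with a toy model (this is not the authors' software, which is likelihood-based with an explicit noise model): each batch has an unknown multiplicative scale factor, replicated samples link the batches, and all scales and sample values are fit jointly by alternating least squares. All names and data below are hypothetical.

```python
def one_step_calibrate(measure, n_iter=500):
    """measure[batch][sample] = scale_batch * value_sample (plus noise).
    Alternating least squares for all scales and values jointly; the first
    batch's scale is fixed to 1 to set the overall unit."""
    batches = list(measure)
    samples = sorted({s for b in batches for s in measure[b]})
    scale = {b: 1.0 for b in batches}
    value = {s: measure[batches[0]].get(s, 1.0) for s in samples}
    for _ in range(n_iter):
        for s in samples:      # best value for each sample, scales held fixed
            obs = [(measure[b][s], scale[b]) for b in batches if s in measure[b]]
            value[s] = sum(y * k for y, k in obs) / sum(k * k for _, k in obs)
        for b in batches[1:]:  # best scale for each batch, values held fixed
            obs = [(measure[b][s], value[s]) for s in measure[b]]
            scale[b] = sum(y * v for y, v in obs) / sum(v * v for _, v in obs)
    return scale, value

# Noise-free toy data: batch1 measures at twice the scale of batch0; the
# replicated sample "B" is what links the two batches together.
measure = {"batch0": {"A": 1.0, "B": 2.0}, "batch1": {"B": 4.0, "C": 6.0}}
scale, value = one_step_calibrate(measure)
```

Note that sample "C" never appears alongside a standard in batch0, yet it is still calibrated, which mirrors the abstract's point that the joint approach can use batches lacking standards as long as some samples are replicated across batches.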
NASA Astrophysics Data System (ADS)
Olivares, G.; Teferle, F. N.
2013-12-01
Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which provides a sample of the posterior distribution of all parameters and thereby, using Monte Carlo integration, estimates all parameters and their uncertainties simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and also apply it to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, it provides the spectral index uncertainty without further computations, is computationally stable and detects multimodality.
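A minimal random-walk Metropolis sampler illustrates the MCMC idea behind such an analysis. This sketch uses a fixed step size and a toy Gaussian model (mean and log standard deviation as parameters), whereas the method described above auto-tunes the step and targets power-law noise parameters; all numbers are invented for illustration.

```python
import math, random

def log_post(theta, data):
    # Gaussian log-likelihood with a flat prior on (mu, log sigma)
    mu, log_sigma = theta
    sigma = math.exp(log_sigma)
    return -len(data) * log_sigma - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.05, seed=0):
    rng = random.Random(seed)
    theta = [0.0, 0.0]  # start at mu = 0, sigma = 1
    lp = log_post(theta, data)
    chain = []
    for _ in range(n_steps):
        prop = [t + rng.gauss(0, step) for t in theta]
        lp_prop = log_post(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta[0])
    return chain

data_rng = random.Random(42)
data = [data_rng.gauss(1.0, 0.5) for _ in range(300)]
chain = metropolis(data)
mu_hat = sum(chain[5000:]) / len(chain[5000:])  # posterior mean after burn-in
```

The chain itself is the product: its spread after burn-in gives parameter uncertainties directly, which is the "no further computations" advantage over MLE noted in the abstract.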
Integration methods for molecular dynamics
Leimkuhler, B.J.; Reich, S.; Skeel, R.D.
1996-12-31
Classical molecular dynamics simulation of a macromolecule requires the use of an efficient time-stepping scheme that can faithfully approximate the dynamics over many thousands of timesteps. Because these problems are highly nonlinear, accurate approximation of a particular solution trajectory on meaningful time intervals is neither obtainable nor desired, but some restrictions, such as symplecticness, can be imposed on the discretization which tend to imply good long-term behavior. The presence of a variety of types and strengths of interatomic potentials in standard molecular models places severe restrictions on the timestep used in explicit numerical integration schemes, so much recent research has concentrated on the search for alternatives that possess (1) proper dynamical properties, and (2) a relative insensitivity to the fastest components of the dynamics. We survey several recent approaches. 48 refs., 2 figs.
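A standard example of the explicit, symplectic time-stepping schemes surveyed here is velocity Verlet. The sketch below applies it to a simple harmonic oscillator rather than a molecular model, to show the bounded long-term energy error that motivates imposing symplecticness.

```python
import math

def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    """Velocity-Verlet: an explicit, symplectic integrator whose energy
    error stays bounded (oscillates) rather than drifting over long runs."""
    traj = [(q, p)]
    f = force(q)
    for _ in range(n_steps):
        p_half = p + 0.5 * dt * f       # half kick
        q = q + dt * p_half / mass      # drift
        f = force(q)
        p = p_half + 0.5 * dt * f       # half kick
        traj.append((q, p))
    return traj

# Harmonic oscillator (force = -q): exact energy E = (p^2 + q^2)/2 = 0.5 here.
traj = velocity_verlet(q=1.0, p=0.0, force=lambda q: -q, dt=0.05, n_steps=10000)
drift = max(abs((p * p + q * q) / 2 - 0.5) for q, p in traj)
```

Over 10,000 steps the energy error remains a small bounded oscillation of order dt^2; a non-symplectic scheme such as forward Euler would instead show systematic energy growth.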
Salas-Leiva, Dayana E.; Meerow, Alan W.; Calonje, Michael; Griffith, M. Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W.; Lewis, Carl E.; Namoff, Sandra
2013-01-01
Background and aims Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree–species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. Methods DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree–species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Key Results Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia–Lepidozamia–Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. Conclusions A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial
Jerling, M; Merlé, Y; Mentré, F; Mallet, A
1994-01-01
Therapeutic drug monitoring data for nortriptyline (674 analyses from 578 patients) were evaluated with the nonparametric maximum likelihood (NPML) method in order to determine the population kinetic parameters of this drug and their relation to age, body weight and duration of treatment. Clearance of nortriptyline during monotherapy exhibited a large interindividual variability and a skewed distribution. A small, separate fraction with a very high clearance, constituting between 0.5% and 2% of the population, was seen in both men and women. This may be explained by the recent discovery of subjects with multiple copies of the gene encoding the cytochrome-P450-enzyme CYP2D6, which catalyses the hydroxylation of nortriptyline. However, erratic compliance with the prescription may also add to this finding. A separate distribution of low clearance values with a frequency corresponding to that of poor metabolizers of CYP2D6 (circa 7% in Caucasian populations) could not be detected. Concomitant therapy with drugs that inhibit CYP2D6 resulted in a major increase in the plasma nortriptyline concentrations. This was caused by a decrease in nortriptyline clearance, whereas the volume of distribution was unchanged. The demographic factors age and body weight had a minor influence on the clearance of nortriptyline which was also unaffected by the duration of treatment. PMID:7893588
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we sought to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches: approximate Bayesian computation via an existing sequential Monte Carlo method (ABC-SMC), to compute credible intervals for the parameters, and profile likelihood estimation (PLE), to improve the calculation of confidence intervals for the same parameters, the parameters in both cases being derived from experimental data from forward-shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the number of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
Likelihood and clinical trials.
Hill, G; Forbes, W; Kozak, J; MacNeill, I
2000-03-01
The history of the application of statistical theory to the analysis of clinical trials is reviewed. The current orthodoxy is a somewhat illogical hybrid of the original theory of significance tests of Edgeworth, Karl Pearson, and Fisher, and the subsequent decision theory approach of Neyman, Egon Pearson, and Wald. This hegemony is under threat from Bayesian statisticians. A third approach is that of likelihood, stemming from the work of Fisher and Barnard. This approach is illustrated using hypothetical data from the Lancet articles by Bradford Hill, which introduced clinicians to statistical theory. PMID:10760630
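The likelihood approach described in this abstract reports relative support for hypotheses directly, rather than a p-value. A minimal illustration with hypothetical trial numbers (30 responses among 50 treated patients): the likelihood ratio for two candidate response rates, and a 1/8 likelihood ("support") interval.

```python
import math

def binom_loglik(p, successes, n):
    """Binomial log-likelihood (up to the constant binomial coefficient)."""
    return successes * math.log(p) + (n - successes) * math.log(1 - p)

# Hypothetical trial data: 30 responses among 50 treated patients.
s, n = 30, 50

# Support for p = 0.6 relative to p = 0.4, reported directly as a likelihood ratio.
lr = math.exp(binom_loglik(0.6, s, n) - binom_loglik(0.4, s, n))

# A 1/8 likelihood interval: all p whose likelihood is within a factor 8 of the max.
mle = s / n
inside = [p / 1000 for p in range(1, 1000)
          if binom_loglik(p / 1000, s, n) > binom_loglik(mle, s, n) - math.log(8)]
lo, hi = min(inside), max(inside)
```

The likelihood ratio (about 58 to 1 in favor of p = 0.6 here) and the likelihood interval are statements about relative evidence under the observed data alone, with no reference to hypothetical repetitions, which is the contrast with the Neyman-Pearson framework that the article draws.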
Spectral methods in fluid dynamics
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Zang, T. A.
1986-01-01
Fundamental aspects of spectral methods are introduced. Recent developments in spectral methods are reviewed with an emphasis on collocation techniques. Their applications to both compressible and incompressible flows, to viscous as well as inviscid flows, and also to chemically reacting flows are surveyed. The key role that these methods play in the simulation of stability, transition, and turbulence is brought out. A perspective is provided on some of the obstacles that prohibit a wider use of these methods, and how these obstacles are being overcome.
Model Fit after Pairwise Maximum Likelihood
Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.
2016-01-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
Sampling variability and estimates of density dependence: a composite-likelihood approach.
Lele, Subhash R
2006-01-01
It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redd count data. PMID:16634310
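How a composite likelihood built from pairwise transition densities accommodates missing observations can be sketched as follows. This toy simulates a Gompertz (log-linear AR(1)) series and estimates only the density-dependence parameter b, holding the other parameters at their true values; it is far simpler than the paper's treatment, which also models sampling error.

```python
import math, random

def gompertz_cl(x, a, b, sigma):
    """Composite log-likelihood summed over observed consecutive pairs;
    missing values (None) simply drop the pairs they touch."""
    cl = 0.0
    for t in range(len(x) - 1):
        if x[t] is None or x[t + 1] is None:
            continue
        mu = a + b * x[t]
        cl += -math.log(sigma) - (x[t + 1] - mu) ** 2 / (2 * sigma ** 2)
    return cl

# Simulate log-abundance under the Gompertz model, then mimic missing data.
rng = random.Random(3)
a, b_true, sigma = 0.5, 0.8, 0.2
x = [2.5]
for _ in range(400):
    x.append(a + b_true * x[-1] + rng.gauss(0, sigma))
for i in range(50, 400, 7):
    x[i] = None  # knocked-out observations

# Crude grid search for the maximum composite-likelihood estimate of b.
best_b = max((bb / 100 for bb in range(50, 100)),
             key=lambda bb: gompertz_cl(x, a, bb, sigma))
```

Each missing value removes just the two pairs it touches, so no imputation or marginalization is needed, which is the "straightforward fashion" the abstract refers to.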
Numerical methods for molecular dynamics
Skeel, R.D.
1991-01-01
This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
Galerkin Method for Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead
A Galerkin method is presented for control-oriented reduced-order models (ROM). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROM are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROM which compress flow data in the proper orthogonal decomposition (POD). The chapter shall provide a complete description for construction of straightforward ROM as well as the physical understanding and teste
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Mehra, R. K.
1974-01-01
This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
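The likelihood being maximized for a linear dynamical system in state-vector form can be made concrete with a scalar example: a Kalman filter evaluates the innovations log-likelihood, which is then maximized over the state-transition coefficient. The sketch below uses a crude grid search in place of the gradient-based optimizers the paper discusses, and all numbers are hypothetical.

```python
import math, random

def kalman_loglik(y, phi, q, r):
    """Innovations log-likelihood of the scalar state-space model
    x[t] = phi*x[t-1] + w, w~N(0,q);  y[t] = x[t] + v, v~N(0,r)."""
    x, p, ll = 0.0, 1.0, 0.0
    for obs in y:
        x, p = phi * x, phi * phi * p + q           # time update (predict)
        s = p + r                                   # innovation variance
        ll += -0.5 * (math.log(2 * math.pi * s) + (obs - x) ** 2 / s)
        k = p / s                                   # Kalman gain
        x, p = x + k * (obs - x), (1 - k) * p       # measurement update
    return ll

# Simulate data from the model, then recover phi by maximum likelihood.
rng = random.Random(7)
phi_true, q, r = 0.9, 0.1, 0.1
state, y = 0.0, []
for _ in range(500):
    state = phi_true * state + rng.gauss(0, math.sqrt(q))
    y.append(state + rng.gauss(0, math.sqrt(r)))

phi_hat = max((i / 100 for i in range(100)),
              key=lambda f: kalman_loglik(y, f, q, r))
```

In practice one would replace the grid with a Newton or quasi-Newton iteration on this same log-likelihood surface, which is where the singular-Hessian issues discussed in the paper arise.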
Quasi-likelihood for Spatial Point Processes
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
2014-01-01
Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood-based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970
Disequilibrium mapping: Composite likelihood for pairwise disequilibrium
Devlin, B.; Roeder, K.; Risch, N.
1996-08-15
The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a Composite Likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction θ between the disease locus and an arbitrary marker locus, the age of the mutation, and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.
Likelihood approaches for proportional likelihood ratio model with right-censored data.
Zhu, Hong
2014-06-30
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumptions or lack of a ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as the generalized linear model and the density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. PMID:24500821
Dynamic discretization method for solving Kepler's equation
NASA Astrophysics Data System (ADS)
Feinstein, Scott A.; McLaughlin, Craig A.
2006-09-01
Kepler’s equation needs to be solved many times for a variety of problems in Celestial Mechanics. Therefore, computing the solution to Kepler’s equation in an efficient manner is of great importance to that community. There are some historical and many modern methods that address this problem. Of the methods known to the authors, Fukushima’s discretization technique performs the best. By taking more of a system approach and combining the use of discretization with the standard computer science technique known as dynamic programming, we were able to achieve even better performance than Fukushima. We begin by defining Kepler’s equation for the elliptical case and describe existing solution methods. We then present our dynamic discretization method and show the results of a comparative analysis. This analysis will demonstrate that, for the conditions of our tests, dynamic discretization performs the best.
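The flavor of combining a precomputed discretization with reuse of stored results can be sketched for the elliptical case. This is a simplified illustration of the general idea, not Fukushima's scheme or the authors' actual method: solve E - e·sin(E) = M by Newton's method, seeded from a table of solutions precomputed once on a grid of mean anomaly.

```python
import math

def make_kepler_solver(e, n_grid=64):
    """Solve Kepler's equation E - e*sin(E) = M (0 <= e < 1). A table of
    solutions is precomputed on a grid of M (the discretization); each
    later call reuses the nearest stored solution as its Newton seed."""
    grid = []
    for i in range(n_grid + 1):
        M = math.pi * i / n_grid
        E = M
        for _ in range(50):
            E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        grid.append(E)

    def solve(M):
        M = M % (2 * math.pi)
        mirror = M > math.pi          # symmetry: E(2*pi - M) = 2*pi - E(M)
        if mirror:
            M = 2 * math.pi - M
        E = grid[round(M * n_grid / math.pi)]   # table-lookup starting guess
        for _ in range(5):                      # a few Newton refinements
            E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        return 2 * math.pi - E if mirror else E

    return solve

solve = make_kepler_solver(e=0.3)
E1 = solve(1.0)
E2 = solve(5.0)
residual1 = abs(E1 - 0.3 * math.sin(E1) - 1.0)
residual2 = abs(E2 - 0.3 * math.sin(E2) - 5.0)
```

Because the table puts every Newton iteration close to its root, only a few refinement steps are needed per call, which is the performance argument behind discretization-based solvers.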
Likelihoods for fixed rank nomination networks.
Hoff, Peter; Fosdick, Bailey; Volfovsky, Alex; Stovel, Katherine
2013-12-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Growing local likelihood network: Emergence of communities
NASA Astrophysics Data System (ADS)
Chen, S.; Small, M.
2015-10-01
In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method: at each step we modify the network so that its likelihood moves closer to that implied by the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.
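The local-information growth step can be sketched as follows (a hypothetical toy version, not the authors' algorithm, and the likelihood-adjustment step is omitted): each new node sees only the d-neighborhood of a randomly chosen center and attaches within it.

```python
import random

def grow_local_network(n, d=2, m=1, seed=0):
    """Grow a network where each new node connects to m nodes chosen
    from the d-neighborhood of a random 'center' node -- the local
    (path-length) information regime described in the abstract."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}              # start from a single edge
    for new in range(2, n):
        center = rng.choice(list(adj))
        # breadth-first collection of nodes within distance d of center
        frontier, seen = {center}, {center}
        for _ in range(d):
            frontier = {v for u in frontier for v in adj[u]} - seen
            seen |= frontier
        targets = rng.sample(sorted(seen), min(m, len(seen)))
        adj[new] = set(targets)
        for t in targets:
            adj[t].add(new)
    return adj

g = grow_local_network(50)
```

In the paper this local attachment is combined with a likelihood-driven rewiring step; the sketch above only illustrates the restricted-information aspect.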
Maximum-likelihood density modification
Terwilliger, Thomas C.
2000-01-01
A likelihood-based approach to density modification is developed that can be applied to a wide variety of cases where some information about the electron density at various points in the unit cell is available. The key to the approach consists of developing likelihood functions that represent the probability that a particular value of electron density is consistent with prior expectations for the electron density at that point in the unit cell. These likelihood functions are then combined with likelihood functions based on experimental observations and with others containing any prior knowledge about structure factors to form a combined likelihood function for each structure factor. A simple and general approach to maximizing the combined likelihood function is developed. It is found that this likelihood-based approach yields greater phase improvement in model and real test cases than either conventional solvent flattening and histogram matching or a recent reciprocal-space solvent-flattening procedure [Terwilliger (1999), Acta Cryst. D55, 1863–1871]. PMID:10944333
A pairwise likelihood-based approach for changepoint detection in multivariate time series models
Ma, Ting Fung; Yau, Chun Yip
2016-01-01
This paper develops a composite likelihood-based approach for multiple changepoint estimation in multivariate time series. We derive a criterion based on pairwise likelihood and minimum description length for estimating the number and locations of changepoints and for performing model selection in each segment. The number and locations of the changepoints can be consistently estimated under mild conditions and the computation can be conducted efficiently with a pruned dynamic programming algorithm. Simulation studies and real data examples demonstrate the statistical and computational efficiency of the proposed method. PMID:27279666
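A hedged, univariate stand-in for the criterion above (the actual method uses pairwise likelihoods of multivariate series and a pruned dynamic program; here a plain O(n²) dynamic program with a Gaussian segment likelihood and a log(n) MDL-style penalty illustrates the structure):

```python
import math

def neg_log_lik(x):
    """Gaussian negative log-likelihood of a segment, with the mean
    and variance fitted to the segment itself."""
    n = len(x)
    mu = sum(x) / n
    var = max(sum((v - mu) ** 2 for v in x) / n, 1e-12)
    return 0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def mdl_changepoints(x, min_len=5):
    """O(n^2) dynamic program minimizing total segment cost plus a
    log(n) MDL-style penalty per changepoint (toy criterion)."""
    n = len(x)
    pen = math.log(n)
    best = [0.0] + [float("inf")] * n   # best[i]: optimal cost of x[:i]
    prev = [0] * (n + 1)
    for i in range(min_len, n + 1):
        for j in range(0, i - min_len + 1):
            c = best[j] + neg_log_lik(x[j:i]) + (pen if j else 0.0)
            if c < best[i]:
                best[i], prev[i] = c, j
    cps, i = [], n                      # backtrack the optimal partition
    while i > 0:
        if prev[i]:
            cps.append(prev[i])
        i = prev[i]
    return sorted(cps)

# mean shift at index 30 with small deterministic "noise"
data = [0.0] * 30 + [5.0] * 30
data = [v + 0.01 * ((i * 37) % 7 - 3) for i, v in enumerate(data)]
cps = mdl_changepoints(data)
```

The pruning step of the paper accelerates exactly this kind of recursion by discarding candidate split points that can never become optimal.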
A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution
Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The simple linear iterative clustering (SLIC) method is a recently proposed and popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to the effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840
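The "likelihood information" step can be illustrated with the generalized gamma log-density; the parameters below are illustrative values, not estimates from SAR data:

```python
import math

def ggd_loglik(x, kappa, sigma, nu):
    """Log-density of the generalized gamma distribution
    f(x) = nu / (sigma * Gamma(kappa)) * (x/sigma)^(kappa*nu - 1)
           * exp(-(x/sigma)^nu),  for x > 0."""
    z = x / sigma
    return (math.log(nu) - math.log(sigma) - math.lgamma(kappa)
            + (kappa * nu - 1.0) * math.log(z) - z ** nu)

def cluster_loglik(pixels, kappa, sigma, nu):
    """Sum of log-densities of pixel intensities under one cluster's
    GGD model -- the kind of likelihood score used to refine
    cluster assignments (parameter names are illustrative)."""
    return sum(ggd_loglik(p, kappa, sigma, nu) for p in pixels)

# sanity check: with kappa = nu = sigma = 1 the GGD reduces to the
# exponential distribution, so log f(1) = -1
ll = ggd_loglik(1.0, 1.0, 1.0, 1.0)
```

Its three parameters let the GΓD fit the heavy-tailed intensity statistics of speckled SAR imagery better than a single gamma or Rayleigh model.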
Interfacial gauge methods for incompressible fluid dynamics.
Saye, Robert
2016-06-01
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567
Evaluation of Dynamic Methods for Earthwork Assessment
NASA Astrophysics Data System (ADS)
Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína
2015-05-01
The rapid pace of road construction demands fast, high-quality methods for evaluating earthwork. Dynamic methods are now adopted in numerous branches of civil engineering; in particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results can be obtained immediately after measurement, and they are thus suitable when construction works must be performed in a short period of time.
Maximum likelihood topographic map formation.
Van Hulle, Marc M
2005-03-01
We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004
Spectral Methods for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Streett, C. L.; Hussaini, M. Y.
1994-01-01
As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral
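The transform method mentioned above rests on a simple fact: differentiation of a periodic function is diagonal in Fourier space, so each mode c_k is multiplied by ik. A minimal sketch (generic illustration, assuming periodic samples on [0, 2π)):

```python
import numpy as np

def spectral_derivative(u):
    """Differentiate periodic samples on [0, 2*pi) via the FFT:
    multiply each Fourier mode by i*k, then transform back -- the
    core of the transform method for spectral computations."""
    n = len(u)
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)   # i*k in FFT ordering
    k[n // 2] = 0.0                          # drop the Nyquist mode
    return np.fft.ifft(k * np.fft.fft(u)).real

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
du = spectral_derivative(np.sin(x))
```

For smooth periodic data this derivative is accurate to machine precision, which is the "high resolving power" that made spectral methods attractive once FFT-based transforms made the nonlinear terms affordable.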
Mesoscopic Simulation Methods for Polymer Dynamics
NASA Astrophysics Data System (ADS)
Larson, Ronald
2015-03-01
We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent "particles" to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
Development of semiclassical molecular dynamics simulation method.
Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi
2016-04-28
Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories, and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. A numerical demonstration with the simple collinear chemical reaction O + HCl → OH + Cl is presented to help the reader comprehend the proposed method. Generalization to the on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections with the Zhu-Nakamura theory, new semiclassical MD methods can be developed. PMID:27067383
Comparing Methods for Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Lai, Chok Fung
2011-01-01
This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with two times today's demand traffic. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to cut the delay of their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
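The basic approximation properties referred to above can be illustrated with the Cox-de Boor recursion; the partition-of-unity check below is a generic sketch, not code from the paper:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis
    function of degree p at parameter t for a given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = ((knots[i + p + 1] - t) / d2
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# the four cubic basis functions active on a uniform knot vector
# form a partition of unity in the interior of the parameter range
knots = [0, 1, 2, 3, 4, 5, 6, 7]
s = sum(bspline_basis(i, 3, 3.5, knots) for i in range(4))
```

Compact support is visible in the recursion: the i-th degree-p function is nonzero only on [knots[i], knots[i+p+1]], which is what keeps B-spline discretizations banded and efficient.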
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
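For orientation, the second-order trapezoidal baseline mentioned above can be sketched for a scalar stiff ODE, with a Newton solve for the implicit stage (illustrative only; production dislocation-dynamics solvers handle large systems, expensive force evaluations, and topological events):

```python
import math

def trapezoidal_step(f, dfdy, y, t, h, tol=1e-12, max_iter=25):
    """One step of the implicit trapezoidal rule
    y1 = y + (h/2) * (f(t, y) + f(t + h, y1)),
    solving for y1 with Newton's method (scalar ODE for clarity)."""
    fy = f(t, y)
    y1 = y + h * fy                            # explicit Euler predictor
    for _ in range(max_iter):
        g = y1 - y - 0.5 * h * (fy + f(t + h, y1))
        gp = 1.0 - 0.5 * h * dfdy(t + h, y1)   # dg/dy1 for Newton
        dy = g / gp
        y1 -= dy
        if abs(dy) < tol:
            break
    return y1

# stiff test problem: y' = -1000 * (y - cos(t)), y(0) = 0; the step
# size h is far beyond explicit Euler's stability limit of 0.002
f = lambda t, y: -1000.0 * (y - math.cos(t))
dfdy = lambda t, y: -1000.0
y, t, h = 0.0, 0.0, 0.01
for _ in range(100):
    y = trapezoidal_step(f, dfdy, y, t, h)
    t += h
```

The A-stability visible here (large stable steps on a stiff problem) is what the paper's higher-order implicit Runge-Kutta schemes retain while improving accuracy per step.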
Evaluating network models: A likelihood analysis
NASA Astrophysics Data System (ADS)
Wang, Wen-Qiang; Zhang, Qian-Ming; Zhou, Tao
2012-04-01
Many models are put forward to mimic the evolution of real networked systems. A well-accepted way to judge the validity is to compare the modeling results with real networks subject to several structural features. Even for a specific real network, we cannot fairly evaluate the goodness of different models since there are too many structural features while there is no criterion to select and assign weights to them. Motivated by the studies on link prediction algorithms, we propose a unified method to evaluate the network models via the comparison of the likelihoods of the currently observed network driven by different models, with an assumption that the higher the likelihood is, the more accurate the model is. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both are better than the Barabási-Albert (BA) and Erdös-Rényi (ER) models. Our method can be further applied in determining the optimal values of parameters that correspond to the maximal likelihood. The experiment indicates that the parameters obtained by our method can better capture the characteristics of newly added nodes and links in the AS-level Internet than the original methods in the literature.
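The simplest instance of this comparison, the likelihood of an observed graph under the Erdős-Rényi model with its maximum-likelihood edge density, can be sketched as follows (illustrative; the GLP and Tang likelihoods used in the paper are more involved):

```python
import math

def er_log_likelihood(n_nodes, edges):
    """Log-likelihood of an observed simple graph under the
    Erdos-Renyi model with p fitted by maximum likelihood
    (p equals the observed edge density)."""
    m = len(edges)
    pairs = n_nodes * (n_nodes - 1) // 2
    p = m / pairs
    if p in (0.0, 1.0):
        return 0.0       # degenerate case: the observed graph is certain
    return m * math.log(p) + (pairs - m) * math.log(1.0 - p)

# a triangle plus an isolated node: 3 edges out of 6 possible pairs,
# so p = 0.5 and every pair contributes log(0.5)
ll = er_log_likelihood(4, [(0, 1), (1, 2), (0, 2)])
```

Ranking candidate models by log-likelihoods of this kind is the unified criterion the abstract proposes, replacing ad hoc comparisons over many structural features.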
NASA Astrophysics Data System (ADS)
Shang, Yilun
2016-08-01
How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.
New methods for quantum mechanical reaction dynamics
Thompson, W.H.
1996-12-01
Quantum mechanical methods are developed to describe the dynamics of bimolecular chemical reactions. We focus on developing approaches for directly calculating the desired quantity of interest. Methods for the calculation of single matrix elements of the scattering matrix (S-matrix) and initial state-selected reaction probabilities are presented. This is accomplished by the use of absorbing boundary conditions (ABC) to obtain a localized (L²) representation of the outgoing wave scattering Green's function. This approach enables the efficient calculation of only a single column of the S-matrix with a proportionate savings in effort over the calculation of the entire S-matrix. Applying this method to the calculation of the initial (or final) state-selected reaction probability, a more averaged quantity, requires even less effort than the state-to-state S-matrix elements. It is shown how the same representation of the Green's function can be effectively applied to the calculation of negative ion photodetachment intensities. Photodetachment spectroscopy of the anion ABC⁻ can be a very useful method for obtaining detailed information about the neutral ABC potential energy surface, particularly if the ABC⁻ geometry is similar to the transition state of the neutral ABC. Total and arrangement-selected photodetachment spectra are calculated for the H₃O⁻ system, providing information about the potential energy surface for the OH + H₂ reaction when compared with experimental results. Finally, we present methods for the direct calculation of the thermal rate constant from the flux-position and flux-flux correlation functions. The spirit of transition state theory is invoked by concentrating on the short time dynamics in the area around the transition state that determine reactivity. These methods are made efficient by evaluating the required quantum mechanical trace in the basis of eigenstates of the Boltzmannized flux operator.
Dynamic data filtering system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-04-29
A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.
NASA Astrophysics Data System (ADS)
Reid, Beth A.
2013-06-01
This software computes likelihoods for the Luminous Red Galaxies (LRG) data from the Sloan Digital Sky Survey (SDSS). It includes a patch to the existing CAMB software (the February 2009 release) to calculate the theoretical LRG halo power spectrum for various models. The code is written in Fortran 90 and has been tested with the Intel Fortran 90 and GFortran compilers.
A dynamic transformation method for modal synthesis.
NASA Technical Reports Server (NTRS)
Kuhar, E. J.; Stahle, C. V.
1973-01-01
This paper presents a condensation method for large discrete parameter vibration analysis of complex structures that greatly reduces truncation errors and provides accurate definition of modes in a selected frequency range. A dynamic transformation is obtained from the partitioned equations of motion that relates modes not explicitly in the condensed solution to the retained modes at a selected system frequency. The generalized mass and stiffness matrices, obtained with existing modal synthesis methods, are reduced using this transformation and solved. Revised solutions are then obtained using new transformations at the calculated eigenvalues and are also used to assess the accuracy of the results. If all the modes of interest have not been obtained, the results are used to select a new set of retained coordinates and a new transformation frequency, and the procedure is repeated for another group of modes.
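The dynamic transformation described above can be sketched for a partitioned stiffness/mass pair; `retained` and `omega` are illustrative names, and setting ω = 0 recovers static (Guyan) condensation:

```python
import numpy as np

def dynamic_condensation(K, M, retained, omega):
    """Reduce (K, M) to the retained DOFs using the dynamic
    transformation at frequency omega: the omitted DOFs are related
    to the retained ones by
        x_o = -(K_oo - w^2 M_oo)^{-1} (K_or - w^2 M_or) x_r."""
    n = K.shape[0]
    r = list(retained)
    o = [i for i in range(n) if i not in r]
    D = K - omega ** 2 * M                    # dynamic stiffness
    Too = -np.linalg.solve(D[np.ix_(o, o)], D[np.ix_(o, r)])
    T = np.zeros((n, len(r)))                 # assemble transformation
    T[r, :] = np.eye(len(r))
    T[o, :] = Too
    return T.T @ K @ T, T.T @ M @ T

# 3-DOF spring-mass chain, retaining only DOF 0; omega = 0 gives the
# static (Guyan) reduction, whose condensed stiffness is 1.0 here
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = np.eye(3)
Kr, Mr = dynamic_condensation(K, M, [0], 0.0)
```

Re-evaluating the transformation at each calculated eigenvalue, as the paper proposes, tightens the reduction around the frequency range of interest instead of freezing it at a single ω.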
An empirical method for dynamic camouflage assessment
NASA Astrophysics Data System (ADS)
Blitch, John G.
2011-06-01
As camouflage systems become increasingly sophisticated in their potential to conceal military personnel and precious cargo, evaluation methods need to evolve as well. This paper presents an overview of one such attempt to explore alternative methods for empirical evaluation of dynamic camouflage systems which aspire to keep pace with a soldier's movement through the rapidly changing environments that are typical of urban terrain. Motivating factors are covered first, followed by a description of the Blitz Camouflage Assessment (BCA) process and results from an initial proof-of-concept experiment conducted in November 2006. The conclusion drawn from these results, related literature, and the author's personal experience suggests that operational evaluation of personal camouflage needs to be expanded beyond its foundation in signal detection theory to embrace the challenges posed by high levels of cognitive processing.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1995-01-01
This report is a compilation of PID (parameter identification) results for both longitudinal and lateral directional analysis that was completed during Fall 1994. It had earlier been established that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C_Nβ) or dynamic (i.e., C_Npf) derivatives. This was due to the fact that these maneuvers were designed with the goal of minimizing any lateral directional motion during longitudinal maneuvers and vice versa. This allows for greater simplification in the aerodynamic model as far as coupling between the longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze this data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms, such that more accurate estimates of the classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot and thetadot were used. Three lateral directional flights of PID static maneuvers were also examined. The classical state equations already available in pEst for betadot, pdot, rdot and phidot were used. Enclosed with this document are the full set of longitudinal and lateral directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of maneuver tested at each angle of attack is also enclosed.
Llacer, J.; Bajamonde, A.C.
1990-06-01
The frequency spectral characteristics, bias and variance of images reconstructed from real Positron Emission Tomography (PET) data have been studied. Feasible images obtained from statistically based reconstruction methods have been compared to Filtered Backprojection (FBP) images. Feasible images have been described as those images that are compatible with the measured data by consideration of the Poisson nature of the emission process. The results show that the spectral characteristics of reconstructions obtained by statistically based methods are at least as good as those obtained by the FBP methods. With some exceptions, statistically based reconstructions do not exhibit abnormal amounts of bias. The most significant difference between the two groups of reconstructions is in the image variance, where the statistically based methods yield substantially smaller variances in the regions with smaller image intensity than the FBP images. 14 refs., 12 figs., 3 tabs.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1992-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
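The substructuring idea can be seen in miniature on a linear model problem. The sketch below runs an alternating (multiplicative) Schwarz iteration with two overlapping subdomains on the 1-D Poisson problem -u'' = 2, u(0) = u(1) = 0; the grid sizes and overlap are invented for illustration and the paper's backstep flow problem is far richer:

```python
# Alternating-Schwarz sketch on -u'' = 2 over [0,1], u(0)=u(1)=0,
# whose exact solution is u(x) = x(1-x).

def solve_tridiag(n, h, left, right, f):
    """Solve the second-difference system for -u'' = f on n interior points
    with Dirichlet boundary values `left` and `right` (Thomas algorithm)."""
    a, b, c = -1.0, 2.0, -1.0
    d = [h * h * f + (left if i == 0 else 0.0) + (right if i == n - 1 else 0.0)
         for i in range(n)]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

N = 41                       # grid points including boundaries
h = 1.0 / (N - 1)
u = [0.0] * N                # global iterate; boundary entries stay zero
s1_end, s2_start = 25, 15    # subdomain 1: nodes 1..25, subdomain 2: nodes 15..39 (overlap)
for _ in range(50):
    # subdomain 1 takes its right boundary value from the current iterate
    u[1:s1_end + 1] = solve_tridiag(s1_end, h, 0.0, u[s1_end + 1], 2.0)
    # subdomain 2 takes its left boundary value from the freshly updated iterate
    u[s2_start:N - 1] = solve_tridiag(N - 1 - s2_start, h, u[s2_start - 1], 0.0, 2.0)

err = max(abs(u[i] - (i * h) * (1 - i * h)) for i in range(N))
```

Because the exact solution is quadratic, the second-difference discretization is exact here, and the Schwarz sweeps contract the error geometrically at a rate set by the overlap width.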
NMR Methods to Study Dynamic Allostery
Grutsch, Sarina; Brüschweiler, Sven; Tollinger, Martin
2016-01-01
Nuclear magnetic resonance (NMR) spectroscopy provides a unique toolbox of experimental probes for studying dynamic processes on a wide range of timescales, ranging from picoseconds to milliseconds and beyond. Along with NMR hardware developments, recent methodological advancements have enabled the characterization of allosteric proteins at unprecedented detail, revealing intriguing aspects of allosteric mechanisms and increasing the proportion of the conformational ensemble that can be observed by experiment. Here, we present an overview of NMR spectroscopic methods for characterizing equilibrium fluctuations in free and bound states of allosteric proteins that have been most influential in the field. By combining NMR experimental approaches with molecular simulations, atomistic-level descriptions of the mechanisms by which allosteric phenomena take place are now within reach. PMID:26964042
Methods and systems for combustion dynamics reduction
Kraemer, Gilbert Otto; Varatharajan, Balachandar; Srinivasan, Shiva; Lynch, John Joseph; Yilmaz, Ertan; Kim, Kwanwoo; Lacy, Benjamin; Crothers, Sarah; Singh, Kapil Kumar
2009-08-25
Methods and systems for combustion dynamics reduction are provided. A combustion chamber may include a first premixer and a second premixer. Each premixer may include at least one fuel injector, at least one air inlet duct, and at least one vane pack for at least partially mixing the air from the air inlet duct or ducts and fuel from the fuel injector or injectors. Each vane pack may include a plurality of fuel orifices through which at least a portion of the fuel and at least a portion of the air may pass. The vane pack or packs of the first premixer may be positioned at a first axial position and the vane pack or packs of the second premixer may be positioned at a second axial position axially staggered with respect to the first axial position.
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2
Semiclassical methods in chemical reaction dynamics
Keshavamurthy, S.
1994-12-01
Semiclassical approximations, simple as well as rigorous, are formulated in order to be able to describe gas phase chemical reactions in large systems. We formulate a simple but accurate semiclassical model for incorporating multidimensional tunneling in classical trajectory simulations. This model is based on the existence of locally conserved actions around the saddle point region on a multidimensional potential energy surface. Using classical perturbation theory and monitoring the imaginary action as a function of time along a classical trajectory we calculate state-specific unimolecular decay rates for a model two dimensional potential with coupling. Results compare well with exact quantum results for the potential over a wide range of coupling constants. We propose a new semiclassical hybrid method to calculate state-to-state S-matrix elements for bimolecular reactive scattering. The accuracy of the Van Vleck-Gutzwiller propagator and the short time dynamics of the system make this method self-consistent and accurate. We also go beyond the stationary phase approximation by doing the resulting integrals exactly (numerically). As a result, classically forbidden probabilities are calculated with purely real time classical trajectories within this approach. Application to the one dimensional Eckart barrier demonstrates the accuracy of this approach. Successful application of the semiclassical hybrid approach to collinear reactive scattering is prevented by the phenomenon of chaotic scattering. The modified Filinov approach to evaluating the integrals is discussed, but application to collinear systems requires a more careful analysis. In three and higher dimensional scattering systems, chaotic scattering is suppressed and hence the accuracy and usefulness of the semiclassical method should be tested for such systems.
Estimating the Likelihood of Extreme Seismogenic Tsunamis
NASA Astrophysics Data System (ADS)
Geist, E. L.
2011-12-01
Because of high levels of destruction to coastal communities and critical facilities from recent tsunamis, estimating the likelihood of extreme seismogenic tsunamis has gained increased attention. Seismogenic tsunami generating capacity is directly related to the scalar seismic moment of the earthquake. As such, earthquake size distributions and recurrence can inform the likelihood of tsunami occurrence. The probability of extreme tsunamis is dependent on how the right-hand tail of the earthquake size distribution is specified. As evidenced by the 2004 Sumatra-Andaman and 2011 Tohoku earthquakes, it is likely that there is insufficient historical information to estimate the maximum earthquake magnitude (Mmax) for any specific subduction zone. Mmax may in fact not be a useful concept for subduction zones of significant length. Earthquake size distributions with a soft corner moment appear more consistent with global observations. Estimating the likelihood of extreme local tsunami runup is complicated by the fact that there is significant uncertainty in the scaling relationship between seismic moment and maximum local tsunami runup. This uncertainty arises from variations in source parameters specific to tsunami generation and the near-shore hydrodynamic response. The primary source effect is how slip is distributed along the fault relative to the overlying water depth. For high slip beneath deep water, shoaling amplification of the tsunami increases substantially according to Green's Law, compared to an equivalent amount of slip beneath shallow water. Both stochastic slip models and dynamic rupture models of tsunamigenic earthquakes are explored in a probabilistic context. The nearshore hydrodynamic response includes attenuating mechanisms, such as wave breaking, and amplifying mechanisms, such as constructive interference of trapped and non-trapped modes. Probabilistic estimates of extreme tsunamis are therefore site specific, as indicated by significant variations
Object Orientated Methods in Computational Fluid Dynamics.
NASA Astrophysics Data System (ADS)
Tabor, Gavin; Weller, Henry; Jasak, Hrvoje; Fureby, Christer
1997-11-01
We outline the aims of the FOAM code, a Finite Volume Computational Fluid Dynamics code written in C++, and discuss the use of Object Orientated Programming (OOP) methods to achieve these aims. The intention when writing this code was to make it as easy as possible to alter the modelling : this was achieved by making the top level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object orientation enables us to define classes for both types of objects, and the operator overloading possible in C++ allows normal symbols to be used for the basic operations. The introduction of features such as automatic dimension checking of equations helps to enforce correct coding of models. We also discuss the use of OOP techniques such as data encapsulation and code reuse. As examples of the flexibility of this approach, we discuss the implementation of turbulence modelling using RAS and LES. The code is used to simulate turbulent flow for a number of test cases, including fully developed channel flow and flow around obstacles. We also demonstrate the use of the code for solving structures calculations and magnetohydrodynamics.
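FOAM itself is written in C++, but the two OOP techniques the abstract highlights, operator overloading for mathematical notation and automatic dimension checking of equations, can be sketched in a few lines of Python. The class and field names below are invented for illustration and are not FOAM's API:

```python
# Toy analogue of FOAM-style operator overloading with automatic dimension
# checking. Dimensions are tracked as exponent tuples (length, time).

class DimensionedField:
    def __init__(self, values, dims):
        self.values = list(values)
        self.dims = dims                      # e.g. (1, -1) for m/s

    def __add__(self, other):
        # addition is only meaningful between fields of identical dimensions
        if self.dims != other.dims:
            raise ValueError("dimension mismatch: %r vs %r" % (self.dims, other.dims))
        return DimensionedField([a + b for a, b in zip(self.values, other.values)],
                                self.dims)

    def __mul__(self, other):
        # multiplying fields adds their dimension exponents
        dims = tuple(d1 + d2 for d1, d2 in zip(self.dims, other.dims))
        return DimensionedField([a * b for a, b in zip(self.values, other.values)],
                                dims)

velocity = DimensionedField([1.0, 2.0], (1, -1))   # m s^-1
dt       = DimensionedField([0.1, 0.1], (0, 1))    # s
displacement = velocity * dt                        # dimensions (1, 0): metres

try:
    velocity + dt                                   # dimensionally inconsistent
    mismatch_caught = False
except ValueError:
    mismatch_caught = True
```

The same mechanism is what lets top-level CFD code read like the governing equations while incorrect physics fails at run time rather than producing silently wrong numbers.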
Intelligence's likelihood and evolutionary time frame
NASA Astrophysics Data System (ADS)
Bogonovich, Marc
2011-04-01
This paper outlines hypotheses relevant to the evolution of intelligent life and encephalization in the Phanerozoic. If general principles are inferable from patterns of Earth life, implications could be drawn for astrobiology. Many of the outlined hypotheses, relevant data, and associated evolutionary and ecological theory are not frequently cited in astrobiological journals. Thus opportunity exists to evaluate reviewed hypotheses with an astrobiological perspective. A quantitative method is presented for testing one of the reviewed hypotheses (hypothesis i; the diffusion hypothesis). Questions are presented throughout, which illustrate that the question of intelligent life's likelihood can be expressed as multiple, broadly ranging, more tractable questions.
Maximum Likelihood Estimation in Generalized Rasch Models.
ERIC Educational Resources Information Center
de Leeuw, Jan; Verhelst, Norman
1986-01-01
Maximum likelihood procedures are presented for a general model to unify the various models and techniques that have been proposed for item analysis. Unconditional maximum likelihood estimation, proposed by Wright and Haberman, and conditional maximum likelihood estimation, proposed by Rasch and Andersen, are shown as important special cases. (JAZ)
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
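The standard route to the maximum likelihood fit of a two-component normal mixture is the EM algorithm. The sketch below runs EM on synthetic data with invented parameters (the paper uses financial series, not this simulation):

```python
import math, random

random.seed(1)
# Synthetic sample from a two-component normal mixture (parameters invented):
# 40% from N(-2, 1) and 60% from N(3, 1.5^2).
data = [random.gauss(-2.0, 1.0) for _ in range(400)] + \
       [random.gauss(3.0, 1.5) for _ in range(600)]

def pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# EM for the maximum likelihood estimates of weights, means, and std devs.
w, mu, sigma = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    resp = []
    for x in data:
        p = [w[k] * pdf(x, mu[k], sigma[k]) for k in (0, 1)]
        s = p[0] + p[1]
        resp.append([p[0] / s, p[1] / s])
    # M-step: responsibility-weighted moment updates
    for k in (0, 1):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk)
```

Each EM cycle is guaranteed not to decrease the likelihood, which is why it is the workhorse for mixture fitting; consistency of the resulting estimates as the sample grows is the property the abstract appeals to.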
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as
Sensor registration using airlanes: maximum likelihood solution
NASA Astrophysics Data System (ADS)
Ong, Hwa-Tung
2004-01-01
In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.
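In the simplest version of this idea, additive range and azimuth biases with zero-mean Gaussian measurement noise, and with the target positions along the airlane taken as known, maximizing the likelihood reduces to least squares, i.e., averaging the residuals. The geometry and numbers below are invented; the paper's formulation is richer (a probabilistic airlane model with unknown target state):

```python
import math, random

random.seed(7)
# Known airlane waypoints (x, y) in km; geometry invented for illustration.
waypoints = [(float(x), 40.0 + 0.2 * x) for x in range(5, 50, 5)]
true_bias_r, true_bias_az = 0.8, math.radians(1.5)   # range (km), azimuth biases

# Simulated radar measurements: truth + bias + Gaussian noise.
meas = []
for (x, y) in waypoints:
    r = math.hypot(x, y)
    az = math.atan2(x, y)
    meas.append((r + true_bias_r + random.gauss(0, 0.05),
                 az + true_bias_az + random.gauss(0, 0.002)))

# With zero-mean Gaussian noise and known reference positions, the maximum
# likelihood bias estimate is the mean residual against the airlane.
res_r  = [mr - math.hypot(x, y)  for (mr, _),  (x, y) in zip(meas, waypoints)]
res_az = [maz - math.atan2(x, y) for (_, maz), (x, y) in zip(meas, waypoints)]
bias_r_hat  = sum(res_r)  / len(res_r)
bias_az_hat = sum(res_az) / len(res_az)
```

The appeal of the airlane approach is visible even in this toy: no second sensor or measurement association is needed, only a reference trajectory.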
Model-free linkage analysis using likelihoods
Curtis, D.; Sham, P.C.
1995-09-01
Misspecification of transmission model parameters can produce artifactually high lod scores at small recombination fractions and in multipoint analysis. To avoid this problem, we have tried to devise a test that aims to detect a genetic effect at a particular locus, rather than attempting to estimate the map position of a locus with specified effect. Maximizing likelihoods over transmission model parameters, as well as linkage parameters, can produce seriously biased parameter estimates and so yield tests that lack power for the detection of linkage. However, constraining the transmission model parameters to produce the correct population prevalence largely avoids this problem. For computational convenience, we recommend that the likelihoods under linkage and nonlinkage be independently maximized over a limited set of transmission models, ranging from Mendelian dominant to null effect and from null effect to Mendelian recessive. In order to test for a genetic effect at a given map position, the likelihood under linkage is maximized over admixture, the proportion of families linked. Application to simulated data for a wide range of transmission models in both affected sib pairs and pedigrees demonstrates that the new method is well behaved under the null hypothesis and provides a powerful test for linkage when it is present. This test requires no specification of transmission model parameters, apart from an approximate estimate of the population prevalence. It can be applied equally to sib pairs and pedigrees, and, since it does not diminish the lod score at test positions very close to a marker, it is suitable for application to multipoint data. 24 refs., 1 fig., 4 tabs.
A hybrid likelihood algorithm for risk modelling.
Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D
1995-03-01
The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPICURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general. PMID:7604154
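The L1/L0 partition is easiest to see in the degenerate case of a constant hazard rate, where the event-free term collapses to a single expression in the total person-time, independent of cohort size. The toy cohort below is invented; the paper's models have dose-, age- and time-dependent rates:

```python
import math

# Toy cohort under a constant hazard lam: each subject contributes an
# observation period, some of which end in an 'event'. Numbers are invented.
follow_up = [3.2, 5.0, 1.7, 4.4, 2.9, 6.1, 0.8, 3.5]   # years observed
event     = [1,   0,   1,   0,   1,   0,   0,   1  ]   # 1 = tumour observed

def log_likelihood(lam):
    # L1: contribution of the events, one log-rate term per tumour.
    L1 = sum(math.log(lam) for e in event if e)
    # L0: contribution of the event-free observation time. For a constant
    # hazard it reduces to one term in the total person-time, so the number
    # of computational steps does not grow with cohort size.
    L0 = -lam * sum(follow_up)
    return L1 + L0

person_time = sum(follow_up)
n_events = sum(event)
lam_hat = n_events / person_time       # closed-form MLE: events / person-time

# grid check that lam_hat indeed maximises L1 + L0
grid = [lam_hat * (0.5 + 0.01 * i) for i in range(101)]
best = max(grid, key=log_likelihood)
```

In the paper's setting L0 involves integrated, time-dependent rates per subject, and the contribution is showing that it can still be reduced to a form whose cost is independent of cohort size.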
Discrepancy principle for the dynamical systems method
NASA Astrophysics Data System (ADS)
Ramm, A. G.
2005-02-01
Assume that Au = f is a solvable linear equation in a Hilbert space, ∥A∥ < ∞, and R(A), the range of the linear operator A, is not closed, so this problem is ill-posed. A dynamical systems method for solving this problem consists of solving the following Cauchy problem: u̇ = -u + (B + ɛ(t))⁻¹A∗f, u(0) = u₀, where B := A∗A, u̇ := du/dt, u₀ is arbitrary, and ɛ(t) > 0 is a continuously differentiable function, monotonically decaying to zero as t → ∞. Ramm has proved [Commun Nonlin Sci Numer Simul 9(4) (2004) 383] that, for any u₀, the Cauchy problem has a unique solution for all t > 0; there exists y := u(∞) := lim_{t→∞} u(t); Ay = f; and y is the unique minimal-norm solution to Au = f. If f_δ is given, such that ∥f - f_δ∥ ⩽ δ, then u_δ(t) is defined as the solution to the Cauchy problem with f replaced by f_δ. The stopping time is defined as a number t_δ such that lim_{δ→0} ∥u_δ(t_δ) - y∥ = 0 and lim_{δ→0} t_δ = ∞. A discrepancy principle is proposed and proved in this paper. This principle yields t_δ as the unique solution to the equation ∥A(B + ɛ(t))⁻¹A∗f_δ - f_δ∥ = δ, where it is assumed that ∥f_δ∥ > δ and f_δ ⊥ N(A∗). The last assumption is removed, and if it does not hold, then the right-hand side of the above equation is replaced by Cδ, where C = const > 1, and one assumes that ∥f_δ∥ > Cδ. For nonlinear monotone A a discrepancy principle is formulated and justified.
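The Cauchy problem above can be integrated numerically for a finite-dimensional stand-in. The sketch below uses an ill-conditioned diagonal 2x2 matrix and noise-free data (both invented) with ɛ(t) = e^{-t}, and checks that the trajectory approaches the minimal-norm solution y:

```python
import math

# Dynamical-systems-method sketch on an ill-conditioned 2x2 system A u = f.
# A and y are chosen for illustration, not taken from the paper.
A = [[1.0, 0.0],
     [0.0, 0.01]]                      # severely unequal singular values
y = [1.0, 1.0]                         # minimal-norm solution we hope to recover
f = [A[0][0] * y[0], A[1][1] * y[1]]   # exact right-hand side A y

def rhs(u, t):
    eps = math.exp(-t)                 # eps(t): smooth, monotone, -> 0
    # B = A^T A is diagonal here, so (B + eps I)^{-1} A^T f is componentwise.
    g0 = (A[0][0] * f[0]) / (A[0][0] ** 2 + eps)
    g1 = (A[1][1] * f[1]) / (A[1][1] ** 2 + eps)
    return [-u[0] + g0, -u[1] + g1]

# forward-Euler integration of u' = -u + (B + eps(t))^{-1} A^T f, u(0) = 0
u, t, dt = [0.0, 0.0], 0.0, 0.01
for _ in range(2500):                  # integrate to t = 25
    du = rhs(u, t)
    u = [u[0] + dt * du[0], u[1] + dt * du[1]]
    t += dt
```

The small singular value is what makes the problem nearly ill-posed: the second component of the regularized fixed point only approaches 1 once ɛ(t) has decayed well below that singular value squared, which is exactly the behaviour the stopping-time analysis controls when the data are noisy.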
Constraint likelihood analysis for a network of gravitational wave detectors
Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.
2005-12-15
We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.
System and Method for Dynamic Aeroelastic Control
NASA Technical Reports Server (NTRS)
Suh, Peter M. (Inventor)
2015-01-01
The present invention proposes a hardware and software architecture for dynamic modal structural monitoring that uses a robust modal filter to monitor a potentially very large-scale array of sensors in real time, is tolerant of asymmetric sensor noise and sensor failures, and achieves aircraft performance optimization such as minimizing aircraft flutter and drag and maximizing fuel efficiency.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
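For the normal distribution the method of scoring takes a particularly clean form: the update is theta_new = theta + I(theta)^{-1} score(theta) with Fisher information I = diag(n/sigma^2, 2n/sigma^2) for theta = (mu, sigma). The sketch below runs it on synthetic data with invented parameters and compares against the closed-form MLE:

```python
import math, random

random.seed(3)
# Synthetic sample; the true parameters (mu=4, sigma=2) are invented.
x = [random.gauss(4.0, 2.0) for _ in range(500)]
n = len(x)

# Method of scoring for the normal log-likelihood.
mu, sigma = 0.0, 1.0
for _ in range(100):
    score_mu = sum(xi - mu for xi in x) / sigma ** 2
    score_sigma = -n / sigma + sum((xi - mu) ** 2 for xi in x) / sigma ** 3
    # scoring step: scale each score component by the inverse Fisher information
    mu += (sigma ** 2 / n) * score_mu
    sigma += (sigma ** 2 / (2 * n)) * score_sigma

# closed-form maximum likelihood estimates for comparison
mu_hat = sum(x) / n
sigma_hat = math.sqrt(sum((xi - mu_hat) ** 2 for xi in x) / n)
```

A pleasant aside: the first scoring step already sets mu to the sample mean, and the sigma update then reduces to sigma_new = (sigma + m2/sigma)/2 with m2 the mean squared deviation, i.e., Heron's iteration for the square root, so convergence is quadratic.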
Modified maximum likelihood registration based on information fusion
NASA Astrophysics Data System (ADS)
Qi, Yongqing; Jing, Zhongliang; Hu, Shiqiang
2007-11-01
The bias estimation of passive sensors is considered based on information fusion in multi-platform multi-sensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.
Assumed modes method and flexible multibody dynamics
NASA Technical Reports Server (NTRS)
Tadikonda, S. S. K.; Mordfin, T. G.; Hu, T. G.
1993-01-01
The use of assumed modes in flexible multibody dynamics algorithms requires the evaluation of several domain dependent integrals that are affected by the type of modes used. The implications of these integrals - often called zeroth, first and second order terms - are investigated in this paper, for arbitrarily shaped bodies. Guidelines are developed for the use of appropriate boundary conditions while generating the component modal models. The issue of whether and which higher order terms must be retained is also addressed. Analytical results, and numerical results using the Shuttle Remote Manipulator System as the multibody system, are presented to qualitatively and quantitatively address these issues.
Extrapolation methods for dynamic partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1978-01-01
Several extrapolation procedures are presented for increasing the order of accuracy in time for evolutionary partial differential equations. These formulas are based on finite difference schemes in both the spatial and temporal directions. On practical grounds the methods are restricted to schemes that are fourth order in time and either second, fourth or sixth order in space. For hyperbolic problems the second order in space methods are not useful while the fourth order methods offer no advantage over the Kreiss-Oliger method unless very fine meshes are used. Advantages are first achieved using sixth order methods in space coupled with fourth order accuracy in time. Computational results are presented confirming the analytic discussions.
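The mechanism behind these procedures is Richardson extrapolation in the time direction: combining solutions computed with steps h and h/2 cancels the leading error term and raises the temporal order. The sketch below shows the first-order case (forward Euler on u' = -u), a deliberately simpler setting than the fourth-order-in-time schemes the abstract discusses:

```python
import math

# Richardson extrapolation in time: forward Euler (first order) on
# u' = -u, u(0) = 1, combined across step sizes h and h/2.
def euler(h, T):
    u, steps = 1.0, int(round(T / h))
    for _ in range(steps):
        u += h * (-u)
    return u

T = 1.0
u_h  = euler(0.1, T)
u_h2 = euler(0.05, T)
u_ext = 2.0 * u_h2 - u_h            # eliminates the O(h) error term

exact = math.exp(-T)
err_h, err_h2, err_ext = (abs(v - exact) for v in (u_h, u_h2, u_ext))
```

For a first-order base scheme the weights (2, -1) come from matching the h and h/2 error expansions; a higher-order base scheme needs different weights but follows the same pattern.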
Multiscale likelihood analysis and image reconstruction
NASA Astrophysics Data System (ADS)
Willett, Rebecca M.; Nowak, Robert D.
2003-11-01
The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.
MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600]
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
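A scalar toy version of this workflow: simulate noisy observations of a dynamic system with one unknown parameter, form the likelihood function, and maximize it. Everything below (the decay model, noise level, search interval) is invented for illustration and is far simpler than MXLKID's general nonlinear setting:

```python
import math, random

random.seed(5)
# Toy maximum-likelihood parameter identification: estimate the decay rate a
# in x' = -a x from noisy samples of the trajectory x(t) = x0 exp(-a t).
a_true, x0, noise_sd = 1.3, 2.0, 0.05
times = [0.1 * k for k in range(1, 31)]
obs = [x0 * math.exp(-a_true * t) + random.gauss(0, noise_sd) for t in times]

def neg_log_likelihood(a):
    # Gaussian measurement noise => the NLL is, up to constants, a sum of squares.
    return sum((yk - x0 * math.exp(-a * t)) ** 2 for t, yk in zip(times, obs))

# maximize the likelihood (minimize the NLL) by golden-section search on [0, 5]
lo, hi = 0.0, 5.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(60):
    m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if neg_log_likelihood(m1) < neg_log_likelihood(m2):
        hi = m2
    else:
        lo = m1
a_hat = 0.5 * (lo + hi)
```

MXLKID replaces the one-dimensional search with gradient-based maximization over many parameters and propagates the state through a general nonlinear model, but the structure (simulate, evaluate LF, maximize) is the same.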
A maximum likelihood framework for protein design
Kleinman, Claudia L; Rodrigue, Nicolas; Bonnard, Cécile; Philippe, Hervé; Lartillot, Nicolas
2006-01-01
Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces shaping protein sequences, and
Dynamic fiber Bragg grating sensing method
NASA Astrophysics Data System (ADS)
Ho, Siu Chun Michael; Ren, Liang; Li, Hongnan; Song, Gangbing
2016-02-01
The measurement of high frequency vibrations is important in many scientific and engineering problems. This paper presents a novel, cost-effective method using fiber Bragg gratings (FBGs) for the measurement of high frequency vibrations. The method uses wavelength-matched FBG sensors, with the first sensor acting as a transmission filter and the second sensor acting as the sensing portion. Energy fluctuations in the reflection spectrum of the second FBG due to wavelength mismatch between the sensors are captured by a photodiode. An in-depth analysis of the optical circuit is provided to predict the behavior of the method as well as identify ways to optimize it. Simple demonstrations of the method were performed with the FBG sensing system installed on a piezoelectric transducer and on a wind turbine blade. Vibrations were measured with sampling frequencies up to 1 MHz for demonstrative purposes. The sensing method can be multiplexed for use with multiple sensors, and with care, can be retrofitted to work with FBG sensors already installed on a structure.
Song, Dong; Wang, Haonan; Tu, Catherine Y.; Marmarelis, Vasilis Z.; Hampson, Robert E.; Deadwyler, Sam A.; Berger, Theodore W.
2013-01-01
One key problem in computational neuroscience and neural engineering is the identification and modeling of functional connectivity in the brain using spike train data. To reduce model complexity, alleviate overfitting, and thus facilitate model interpretation, sparse representation and estimation of functional connectivity is needed. Sparsities include global sparsity, which captures the sparse connectivities between neurons, and local sparsity, which reflects the active temporal ranges of the input-output dynamical interactions. In this paper, we formulate a generalized functional additive model (GFAM) and develop the associated penalized likelihood estimation methods for such a modeling problem. A GFAM consists of a set of basis functions convolving the input signals, and a link function generating the firing probability of the output neuron from the summation of the convolutions weighted by the sought model coefficients. Model sparsities are achieved by using various penalized likelihood estimations and basis functions. Specifically, we introduce two variations of the GFAM using a global basis (e.g., Laguerre basis) and group LASSO estimation, and a local basis (e.g., B-spline basis) and group bridge estimation, respectively. We further develop an optimization method based on quadratic approximation of the likelihood function for the estimation of these models. Simulation and experimental results show that both group-LASSO-Laguerre and group-bridge-B-spline can capture faithfully the global sparsities, while the latter can replicate accurately and simultaneously both global and local sparsities. The sparse models outperform the full models estimated with the standard maximum likelihood method in out-of-sample predictions. PMID:23674048
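The GFAM's first stage, a causal basis convolved with the input spike train, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the basis is built by Gram-Schmidt on decaying polynomials (which span the same space as the discrete Laguerre functions), and the basis parameter, sizes, and toy spike train are all invented.

```python
import math

# Build a small Laguerre-like causal basis by Gram-Schmidt orthonormalization
# of the decaying polynomials alpha^(n/2) * n^j, then convolve a spike train
# with each basis function. A link function (e.g. logistic) applied to a
# weighted sum of such features would give the output firing probability,
# with the group penalties acting on each input's group of weights.
def laguerre_like_basis(alpha, n_basis, length):
    raw = [[(alpha ** (n / 2)) * (n ** j) for n in range(length)]
           for j in range(n_basis)]
    basis = []
    for v in raw:
        for b in basis:
            dot = sum(x * y for x, y in zip(v, b))
            v = [x - dot * y for x, y in zip(v, b)]
        norm = math.sqrt(sum(x * x for x in v))
        basis.append([x / norm for x in v])
    return basis

def convolve(spikes, kernel):
    # causal discrete convolution, truncated at the start of the record
    return [sum(kernel[k] * spikes[t - k] for k in range(min(len(kernel), t + 1)))
            for t in range(len(spikes))]

spikes = [1 if t % 7 == 0 else 0 for t in range(50)]   # toy input spike train
basis = laguerre_like_basis(0.6, 3, 20)
features = [convolve(spikes, b) for b in basis]
```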
Methods for modeling contact dynamics of capture mechanisms
NASA Technical Reports Server (NTRS)
Williams, Philip J.; Tobbe, Patrick A.; Glaese, John
1991-01-01
In this paper, an analytical approach for studying the contact dynamics of space-based vehicles during docking/berthing maneuvers is presented. Methods for modeling physical contact between docking/berthing mechanisms, examples of how these models have been used to evaluate the dynamic behavior of automated capture mechanisms, and experimental verification of predicted results are shown.
Numerical methods for molecular dynamics. Progress report
Skeel, R.D.
1991-12-31
This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
Proposal on dynamic correction method for resonance ionization mass spectrometry
NASA Astrophysics Data System (ADS)
Noto, Takuma; Tomita, Hideki; Richter, Sven; Schneider, Fabian; Wendt, Klaus; Iguchi, Tetsuo; Kawarabayashi, Jun
2013-04-01
For high precision and accuracy in isotopic ratio measurement of transuranic elements using laser ablation assisted resonance ionization mass spectrometry, a dynamic correction method based on correlation of ion signals with energy and timing of each laser pulse was proposed. The feasibility of this dynamic correction method was investigated through the use of a programmable electronics device for fast acquisition of the energy and timing of each laser pulse.
MARGINAL EMPIRICAL LIKELIHOOD AND SURE INDEPENDENCE FEATURE SCREENING
Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao
2013-01-01
We study a marginal empirical likelihood approach in scenarios when the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios as functions of the parameters of interest are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable is contributing to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and the generalized linear models. Different from most existing feature screening approaches that rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach is capable of further incorporating the level of uncertainties of such estimators. Such a merit inherits the self-studentization property of the empirical likelihood approach, and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive to distributional assumptions, and can be conveniently adapted to be applied in a broad range of scenarios such as models specified using general moment conditions. Our theoretical results and extensive numerical examples by simulations and data analysis demonstrate the merits of the marginal empirical likelihood approach. PMID:24415808
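The marginal statistic at the heart of the screening procedure can be illustrated with a one-dimensional empirical likelihood ratio for the moment condition E[xy] = 0, computed feature by feature. The toy data, the bisection solver, and the thresholds are assumptions for illustration, not the paper's algorithm.

```python
import math, random

random.seed(6)

def el_ratio_at_zero(w):
    # -2 log empirical likelihood ratio for the hypothesis E[w] = 0,
    # with the Lagrange multiplier found by bisection on the score equation
    if min(w) >= 0 or max(w) <= 0:
        return float("inf")          # zero lies outside the convex hull
    lo = -1.0 / max(w) + 1e-10
    hi = -1.0 / min(w) - 1e-10
    def score(lam):                  # strictly decreasing in lam on (lo, hi)
        return sum(wi / (1 + lam * wi) for wi in w)
    for _ in range(200):
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return 2 * sum(math.log(1 + lam * wi) for wi in w)

n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]   # active feature
x2 = [random.gauss(0, 1) for _ in range(n)]   # noise feature
y = [2 * a + random.gauss(0, 1) for a in x1]

r_active = el_ratio_at_zero([a * b for a, b in zip(x1, y)])
r_noise = el_ratio_at_zero([a * b for a, b in zip(x2, y)])
# the active feature's ratio dwarfs the noise feature's (roughly chi-square
# with 1 degree of freedom under the null), which is the screening signal
```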
Fast multipole methods for particle dynamics
Kurzak, J.; Pettitt, B. M.
2008-01-01
The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems. PMID:19194526
Engineering applications of a dynamical state feedback chaotification method
NASA Astrophysics Data System (ADS)
Şahin, Savaş; Güzeliş, Cüneyt
2012-09-01
This paper presents two engineering applications of a chaotification method which can be applied to any input-state linearizable (nonlinear) system, including linear controllable ones as special cases. In the chaotification method used, a reference chaotic system and a linear system are combined into a special form by a dynamical state feedback that increases the order of the open-loop system, so that the closed-loop system has the same chaotic dynamics as the reference chaotic system. Promising dc motor applications of the method are implemented by the proposed dynamical state feedback, which is based on matching the closed-loop dynamics to the well-known Chua and Lorenz chaotic systems. The first application, a chaotified dc motor used for mixing a corn-syrup-added acid-base mixture, is implemented via a personal computer and a microcontroller-based circuit. As a second application, a chaotified dc motor with a tachogenerator used in the feedback is realized using fully analog circuit elements.
Likelihood-Free Inference in High-Dimensional Models.
Kousathanas, Athanasios; Leuenberger, Christoph; Helfer, Jonas; Quinodoz, Mathieu; Foll, Matthieu; Wegmann, Daniel
2016-06-01
Methods that bypass analytical evaluations of the likelihood function have become an indispensable tool for statistical inference in many fields of science. These so-called likelihood-free methods rely on accepting and rejecting simulations based on summary statistics, which limits them to low-dimensional models for which the value of the likelihood is large enough to result in manageable acceptance rates. To get around these issues, we introduce a novel, likelihood-free Markov chain Monte Carlo (MCMC) method combining two key innovations: updating only one parameter per iteration and accepting or rejecting this update based on subsets of statistics approximately sufficient for this parameter. This increases acceptance rates dramatically, rendering this approach suitable even for models of very high dimensionality. We further derive that for linear models, a one-dimensional combination of statistics per parameter is sufficient and can be found empirically with simulations. Finally, we demonstrate that our method readily scales to models of very high dimensionality, using toy models as well as by jointly inferring the effective population size, the distribution of fitness effects (DFE) of segregating mutations, and selection coefficients for each locus from data of a recent experiment on the evolution of drug resistance in influenza. PMID:27052569
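The one-parameter-per-iteration accept/reject idea can be sketched on a toy problem. Everything below (the Gaussian toy model, the per-parameter sample-mean statistic, the tolerance, and the pilot start) is an invented illustration of the component-wise scheme, not the authors' method for realistic models.

```python
import random, statistics

random.seed(0)

# Toy setup: y_ij ~ Normal(theta_j, 1), and the summary statistic used for
# parameter j is simply the j-th sample mean (approximately sufficient here).
def summaries(theta, n=50):
    return [statistics.mean(random.gauss(t, 1.0) for _ in range(n))
            for t in theta]

true_theta = [1.0, -2.0, 0.5]
obs = summaries(true_theta)

def abc_mcmc(obs, iters=3000, eps=0.3, step=0.3):
    theta = obs[:]                 # start from a pilot estimate near the data
    for it in range(iters):
        j = it % len(theta)        # update one parameter per iteration
        prop = theta[:]
        prop[j] += random.gauss(0, step)
        sim = summaries(prop)
        # accept using only the statistic relevant to parameter j, the trick
        # that keeps acceptance rates workable in high dimensions
        if abs(sim[j] - obs[j]) < eps:
            theta = prop
    return theta

est = abc_mcmc(obs)
```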
Refining clinical diagnosis with likelihood ratios.
Grimes, David A; Schulz, Kenneth F
Likelihood ratios can refine clinical diagnosis on the basis of signs and symptoms; however, they are underused for patients' care. A likelihood ratio is the percentage of ill people with a given test result divided by the percentage of well individuals with the same result. Ideally, abnormal test results should be much more typical in ill individuals than in those who are well (high likelihood ratio) and normal test results should be more frequent in well people than in sick people (low likelihood ratio). Likelihood ratios near unity have little effect on decision-making; by contrast, high or low ratios can greatly shift the clinician's estimate of the probability of disease. Likelihood ratios can be calculated not only for dichotomous (positive or negative) tests but also for tests with multiple levels of results, such as creatine kinase or ventilation-perfusion scans. When combined with an accurate clinical diagnosis, likelihood ratios from ancillary tests improve diagnostic accuracy in a synergistic manner. PMID:15850636
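The arithmetic of likelihood ratios is short enough to show directly: LR+ and LR- follow from sensitivity and specificity, and a pre-test probability is updated through the odds form of Bayes' theorem. The sensitivity, specificity, and prevalence figures below are illustrative only.

```python
# Positive and negative likelihood ratios from sensitivity and specificity,
# and their use to update a pre-test probability via pre/post-test odds.
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # ill with abnormal result / well with it
    lr_neg = (1 - sensitivity) / specificity   # ill with normal result / well with it
    return lr_pos, lr_neg

def post_test_probability(pretest_prob, lr):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr          # odds form of Bayes' theorem
    return posttest_odds / (1 + posttest_odds)

lr_pos, lr_neg = likelihood_ratios(0.90, 0.95)   # LR+ = 18.0
p = post_test_probability(0.10, lr_pos)          # 10% prior rises to 2/3
```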
Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.
2015-04-21
Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that
A Particle Population Control Method for Dynamic Monte Carlo
NASA Astrophysics Data System (ADS)
Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony
2014-06-01
A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK) and examples of its use are shown for both super-critical and sub-critical systems.
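The comb itself is a compact algorithm: lay equally spaced "teeth" across the cumulative weight of the population, keep one copy of a particle per tooth that falls in its weight interval, and give every survivor the same weight. High-weight particles are thereby split and low-weight ones rouletted, conserving total weight. The sketch below is a generic illustration of this idea, not MCATK code.

```python
import random

random.seed(1)

# Particle population comb: resample a weighted particle list to a target
# size while conserving total weight (splitting + Russian roulette in one).
def comb(particles, weights, target_n):
    total = sum(weights)
    spacing = total / target_n
    offset = random.uniform(0, spacing)        # random start removes bias
    teeth = [offset + k * spacing for k in range(target_n)]
    out, cum, i = [], 0.0, 0
    for p, w in zip(particles, weights):
        cum += w
        while i < target_n and teeth[i] < cum:
            out.append(p)                      # one copy per tooth in [cum-w, cum)
            i += 1
    return out, [spacing] * len(out)           # every survivor gets equal weight

parts = list(range(1000))
wts = [random.random() for _ in parts]
new_parts, new_wts = comb(parts, wts, 100)
```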
Altazimuth mount based dynamic calibration method for GNSS attitude measurement
NASA Astrophysics Data System (ADS)
Jiang, Nan; He, Tao; Sun, Shaohua; Gu, Qing
2015-02-01
As the key process to ensure test accuracy and quality, the dynamic calibration of GNSS attitude measuring instruments is often hampered by the lack of a sufficiently rigid test platform and a sufficiently accurate calibration reference. To solve these problems, a novel dynamic calibration method for GNSS attitude measurement based on an altazimuth mount is put forward in this paper. The principle and implementation of this method are presented, and then the feasibility and usability of the method are analyzed in detail, covering the applicability of the mount, calibration precision, calibration range, baseline rigidity, and factors involving the satellite signal. Furthermore, to verify and test the method, a confirmatory experiment is carried out with a survey ship's GPS attitude measuring instrument, and the experimental results prove that it is a feasible approach to the dynamic calibration of GNSS attitude measurement.
Method to describe stochastic dynamics using an optimal coordinate.
Krivov, Sergei V
2013-12-01
A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function. PMID:24483410
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2003-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
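The estimation problem can be made concrete in the two-state special case, where the transition probabilities at a finite observation interval have a closed form and the discrete-observation likelihood can be maximized directly. The grid search below stands in for a proper optimizer; this is a textbook special case, not the paper's general estimator.

```python
import math, random

random.seed(2)

# Two-state CTMC with rates a (0 -> 1) and b (1 -> 0), observed at lag tau.
def trans_matrix(a, b, tau):
    s = a + b
    decay = 1 - math.exp(-s * tau)             # closed form for 2 states
    p01 = (a / s) * decay
    p10 = (b / s) * decay
    return [[1 - p01, p01], [p10, 1 - p10]]

def simulate(a, b, tau, n):
    P = trans_matrix(a, b, tau)
    x, out = 0, []
    for _ in range(n):
        out.append(x)
        x = 1 - x if random.random() < P[x][1 - x] else x
    return out

obs = simulate(2.0, 1.0, 0.1, 20000)

# sufficient statistics: transition counts between consecutive observations
counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
for i, j in zip(obs, obs[1:]):
    counts[i, j] += 1

def log_likelihood(a, b, tau=0.1):
    P = trans_matrix(a, b, tau)
    return sum(n * math.log(P[i][j]) for (i, j), n in counts.items())

# crude grid search for the MLE (a real implementation would use gradients)
a_hat, b_hat = max(((a / 10, b / 10) for a in range(1, 60) for b in range(1, 60)),
                   key=lambda ab: log_likelihood(*ab))
```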
NASA Astrophysics Data System (ADS)
Suh, Youngjoo; Kim, Hoirin
2014-12-01
In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
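The scoring rule being modified is simple: identification picks the speaker model maximizing a sum of frame-level log-likelihoods, and the proposed method reweights that sum. The sketch below uses 1-D GMMs and hand-set weights standing in for the discriminatively learned acoustic-phonetic weights; all numbers are illustrative.

```python
import math

# Frame-weighted GMM scoring: each speaker is a small 1-D Gaussian mixture,
# and the utterance score is a weighted sum of per-frame log-likelihoods.
def gmm_loglik(x, gmm):
    return math.log(sum(w * math.exp(-(x - m) ** 2 / (2 * v)) /
                        math.sqrt(2 * math.pi * v) for w, m, v in gmm))

def score(frames, weights, gmm):
    return sum(w * gmm_loglik(x, gmm) for x, w in zip(frames, weights))

speaker_a = [(0.6, 0.0, 1.0), (0.4, 3.0, 0.5)]   # (weight, mean, variance)
speaker_b = [(0.5, -2.0, 1.0), (0.5, 1.0, 2.0)]
frames = [0.1, 2.9, 3.2, -0.2, 2.8]
weights = [1.0, 1.5, 1.5, 1.0, 1.5]              # emphasize informative frames
best = max([("A", speaker_a), ("B", speaker_b)],
           key=lambda s: score(frames, weights, s[1]))[0]
```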
Likelihood Analysis for Mega Pixel Maps
NASA Technical Reports Server (NTRS)
Kogut, Alan J.
1999-01-01
The derivation of cosmological parameters from astrophysical data sets routinely involves operation counts which scale as O(N^3), where N is the number of data points. Currently planned missions, including MAP and Planck, will generate sky maps with N_d = 10^6 or more pixels. Simple "brute force" analysis, applied to such mega-pixel data, would require years of computing even on the fastest computers. We describe an algorithm which allows estimation of the likelihood function in the direct pixel basis. The algorithm uses a conjugate gradient approach to evaluate chi^2 and a geometric approximation to evaluate the determinant. Monte Carlo simulations provide a correction to the determinant, yielding an unbiased estimate of the likelihood surface in an arbitrary region surrounding the likelihood peak. The algorithm requires O(N_d^(3/2)) operations and O(N_d) storage for each likelihood evaluation, and allows for significant parallel computation.
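The conjugate gradient trick is that chi^2 = d^T C^-1 d can be evaluated by solving C x = d iteratively and taking d . x, never forming C^-1. The sketch below uses a tiny dense matrix for clarity; in the mapmaking setting C would only ever be applied matrix-free.

```python
# Evaluate chi^2 = d^T C^{-1} d by conjugate-gradient solution of C x = d.
def matvec(C, v):
    return [sum(cij * vj for cij, vj in zip(row, v)) for row in C]

def conjugate_gradient(C, d, tol=1e-10, max_iter=200):
    x = [0.0] * len(d)
    r = d[:]                         # residual = d - C x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Cp = matvec(C, p)
        alpha = rs / sum(pi * ci for pi, ci in zip(p, Cp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ci for ri, ci in zip(r, Cp)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

C = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # toy SPD covariance
d = [1.0, 2.0, 3.0]
x = conjugate_gradient(C, d)
chi2 = sum(di * xi for di, xi in zip(d, x))                # = 43/9 here
```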
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.
Automated Maximum Likelihood Separation of Signal from Baseline in Noisy Quantal Data
Bruno, William J.; Ullah, Ghanim; Daniel Mak, Don-On; Pearson, John E.
2013-01-01
Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system’s behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225
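The expectation-maximization step can be sketched in a deliberately simplified setting: observations y = q*n + Gaussian noise with the integer count n in {0..K} hidden, successive frames independent, and no baseline drift (the full method also tracks the drifting baseline). All parameters and the synthetic trace are illustrative.

```python
import math, random

random.seed(3)

# Synthetic quantal trace: y = q*n + noise, n uniform on {0..K}.
K, true_q, true_sigma = 3, 2.0, 0.3
ns = [random.randint(0, K) for _ in range(4000)]
ys = [true_q * n + random.gauss(0, true_sigma) for n in ns]

def em(ys, K, iters=50):
    q, sigma = 1.0, 1.0                        # deliberately poor start
    for _ in range(iters):
        num = den = ss = 0.0
        for y in ys:
            # E-step: posterior over the hidden count n (uniform prior)
            w = [math.exp(-(y - q * n) ** 2 / (2 * sigma ** 2))
                 for n in range(K + 1)]
            z = sum(w)
            r = [wi / z for wi in w]
            num += sum(r[n] * n * y for n in range(K + 1))
            den += sum(r[n] * n * n for n in range(K + 1))
            ss += sum(r[n] * (y - q * n) ** 2 for n in range(K + 1))
        # M-step: closed-form updates for quantal step size and noise level
        q = num / den
        sigma = math.sqrt(ss / len(ys))
    return q, sigma

q_hat, sigma_hat = em(ys, K)
```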
Dynamic tread wear measurement method for train wheels against vibrations.
Chen, Xu; Sun, Junhua; Liu, Zhen; Zhang, Guangjun
2015-06-10
Dynamic tread wear measurement is difficult but significant for railway transportation safety and efficiency. The accuracy of existing methods is inclined to be affected by environmental vibrations since they are highly dependent on the accurate calibration of the relative pose between vision sensors. In this paper, we present a method to obtain full wheel profiles based on automatic registration of vision sensor data instead of traditional global calibrations. We adopt two structured light vision sensors to recover the inner and outer profiles of each wheel, and register them by the iterative closest point algorithm. Computer simulations show that the proposed method is insensitive to noise and relative-pose vibrations. Static experiments demonstrate that our method has high accuracy and great repeatability. Dynamic experiments show that the measurement accuracy of our method is about 0.18 mm, which is a twofold improvement over traditional methods. PMID:26192824
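The registration step can be sketched as a minimal 2-D iterative closest point loop: alternate nearest-neighbour matching with a closed-form rigid (rotation plus translation) fit. The toy "profile" curve and perturbation below are invented stand-ins for real wheel-tread data.

```python
import math

# Minimal 2-D ICP: repeatedly match each source point to its nearest
# destination point, then apply the best-fit rigid transform.
def rigid_fit(P, Q):
    n = len(P)
    cpx = sum(p[0] for p in P) / n; cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n; cqy = sum(q[1] for q in Q) / n
    s_sin = s_cos = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay, bx, by = px - cpx, py - cpy, qx - cqx, qy - cqy
        s_sin += ax * by - ay * bx
        s_cos += ax * bx + ay * by
    th = math.atan2(s_sin, s_cos)              # closed-form 2-D rotation
    c, s = math.cos(th), math.sin(th)
    return c, s, cqx - (c * cpx - s * cpy), cqy - (s * cpx + c * cpy)

def icp(src, dst, iters=20):
    cur = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                   for p in cur]
        c, s, tx, ty = rigid_fit(cur, matched)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur

# non-symmetric toy profile, plus a slightly rotated/translated copy
profile = [(x, 0.2 * math.sin(3 * x)) for x in [i * 0.02 for i in range(100)]]
c0, s0 = math.cos(0.05), math.sin(0.05)
moved = [(c0 * x - s0 * y + 0.03, s0 * x + c0 * y - 0.02) for x, y in profile]
aligned = icp(moved, profile)
mean_err = sum(min(math.dist(p, q) for q in profile) for p in aligned) / len(aligned)
```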
Application of bifurcation methods to nonlinear flight dynamics problems
NASA Astrophysics Data System (ADS)
Goman, M. G.; Zagainov, G. I.; Khramtsovsky, A. V.
Applications of global stability and bifurcational analysis methods are presented for different nonlinear flight dynamics problems, such as roll coupling, stall, spin, etc. Based on results for several real aircraft (the F-4, F-14, F-15, and the High Incidence Research Model (HIRM)), the general methods developed by many authors are presented. An outline of basic concepts and methods from dynamical system theory is also introduced.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841
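The causal condition-to-mode-to-symptom structure can be illustrated with exact inference by enumeration on a tiny hand-made network; each new symptom observed tightens the posterior on the failure mode, which is what the multistep mechanism exploits. All probabilities below are invented for illustration.

```python
# Tiny diagnosis net: overload (abnormal condition) -> winding fault
# (failure mode) -> two symptoms. All numbers are illustrative.
p_overload = 0.1
p_fault_given = {True: 0.4, False: 0.02}      # P(winding fault | overload)
p_temp_given = {True: 0.9, False: 0.1}        # P(high temperature | fault)
p_gas_given = {True: 0.8, False: 0.05}        # P(dissolved gas | fault)

def posterior_fault(temp, gas):
    # exact inference by enumeration over the hidden variables
    num = den = 0.0
    for overload in (True, False):
        p_o = p_overload if overload else 1 - p_overload
        for fault in (True, False):
            p_f = p_fault_given[overload] if fault else 1 - p_fault_given[overload]
            p_e = ((p_temp_given[fault] if temp else 1 - p_temp_given[fault]) *
                   (p_gas_given[fault] if gas else 1 - p_gas_given[fault]))
            joint = p_o * p_f * p_e
            den += joint
            if fault:
                num += joint
    return num / den

p1 = posterior_fault(temp=True, gas=False)    # after the first symptom
p2 = posterior_fault(temp=True, gas=True)     # a second test adds evidence
```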
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
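The load-dependent Ritz recurrence is short: the first vector is the static response to the spatial load pattern, and each subsequent vector is a static solve against the inertia of the previous one, kept M-orthonormal. The sketch below (using NumPy, on an invented 4-DOF spring-mass chain) shows the standard recurrence, not the paper's specific implementation.

```python
import numpy as np

# Load-dependent Ritz vectors: generated from the load pattern f and kept
# M-orthonormal, then used to project K and M for a reduced-order model.
def ritz_vectors(K, M, f, n_vec):
    vecs = []
    x = np.linalg.solve(K, f)                 # static response to the load
    x /= np.sqrt(x @ M @ x)                   # M-normalize
    vecs.append(x)
    for _ in range(n_vec - 1):
        x = np.linalg.solve(K, M @ vecs[-1])  # solve against previous inertia
        for v in vecs:                        # M-orthogonalize (Gram-Schmidt)
            x -= (v @ M @ x) * v
        x /= np.sqrt(x @ M @ x)
        vecs.append(x)
    return np.column_stack(vecs)

# toy 4-DOF fixed-free spring-mass chain with a tip load
K = np.array([[2., -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 1]])
M = np.eye(4)
f = np.array([0., 0, 0, 1])
R = ritz_vectors(K, M, f, 2)
# the reduced dynamics then use R.T @ K @ R and R.T @ M @ R
```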
Can the ring polymer molecular dynamics method be interpreted as real time quantum dynamics?
Jang, Seogjoo; Sinitskiy, Anton V.; Voth, Gregory A.
2014-04-21
The ring polymer molecular dynamics (RPMD) method has gained popularity in recent years as a simple approximation for calculating real time quantum correlation functions in condensed media. However, the extent to which RPMD captures real dynamical quantum effects and why it fails under certain situations have not been clearly understood. Addressing this issue has been difficult in the absence of a genuine justification for the RPMD algorithm starting from the quantum Liouville equation. To this end, a new and exact path integral formalism for the calculation of real time quantum correlation functions is presented in this work, which can serve as a rigorous foundation for the analysis of the RPMD method as well as providing an alternative derivation of the well established centroid molecular dynamics method. The new formalism utilizes the cyclic symmetry of the imaginary time path integral in the most general sense and enables the expression of Kubo-transformed quantum time correlation functions as that of physical observables pre-averaged over the imaginary time path. Upon filtering with a centroid constraint function, the formulation results in the centroid dynamics formalism. Upon filtering with the position representation of the imaginary time path integral, we obtain an exact quantum dynamics formalism involving the same variables as the RPMD method. The analysis of the RPMD approximation based on this approach clarifies that an explicit quantum dynamical justification does not exist for the use of the ring polymer harmonic potential term (imaginary time kinetic energy) as implemented in the RPMD method. It is analyzed why this can cause substantial errors in nonlinear correlation functions of harmonic oscillators. Such errors can be significant for general correlation functions of anharmonic systems. We also demonstrate that the short time accuracy of the exact path integral limit of RPMD is of lower order than those for finite discretization of path. The
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Dimension-independent likelihood-informed MCMC
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2016-01-01
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
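The discretization-invariance property that DILI builds on can be illustrated with its simplest special case, the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance ratio involves only the likelihood and is therefore independent of the discretization dimension. The sketch below assumes a standard Gaussian prior and a toy likelihood; the Hessian-informed, operator-weighted adaptation that distinguishes DILI is omitted:

```python
import numpy as np

def pcn_mcmc(neg_log_lik, dim, n_steps, beta=0.2, seed=0):
    """Preconditioned Crank-Nicolson sampler for a N(0, I) prior.

    The proposal u' = sqrt(1 - beta^2) * u + beta * xi (xi ~ prior) is
    prior-reversible, so the acceptance ratio involves only the
    likelihood; this is what keeps performance independent of `dim`.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(dim)
    phi = neg_log_lik(u)
    chain, accepted = [], 0
    for _ in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
        phi_prop = neg_log_lik(prop)
        if np.log(rng.uniform()) < phi - phi_prop:   # likelihood-only ratio
            u, phi = prop, phi_prop
            accepted += 1
        chain.append(u.copy())
    return np.array(chain), accepted / n_steps

# Toy likelihood: noisy observation of the mean of the discretized function.
neg_log_lik = lambda u: 0.5 * ((u.mean() - 1.0) / 0.1) ** 2
samples, acc = pcn_mcmc(neg_log_lik, dim=500, n_steps=2000)
```

Refining the grid (increasing `dim`) leaves the acceptance rate essentially unchanged, which is the behavior the DILI samplers generalize.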
Accelerated molecular dynamics methods: introduction and recent developments
Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G
2009-01-01
A long-standing limitation in the use of molecular dynamics (MD) simulation is that it can only be applied directly to processes that take place on very short timescales: nanoseconds if empirical potentials are employed, or picoseconds if we rely on electronic structure methods. Many processes of interest in chemistry, biochemistry, and materials science require study over microseconds and beyond, due either to the natural timescale for the evolution or to the duration of the experiment of interest. Ignoring the case of liquids, the dynamics on these time scales is typically characterized by infrequent-event transitions, from state to state, usually involving an energy barrier. There is a long and venerable tradition in chemistry of using transition state theory (TST) [10, 19, 23] to directly compute rate constants for these kinds of activated processes. If needed, dynamical corrections to the TST rate, and even quantum corrections, can be computed to achieve an accuracy suitable for the problem at hand. These rate constants then allow us to understand the system behavior on longer time scales than we can directly reach with MD. For complex systems with many reaction paths, the TST rates can be fed into a stochastic simulation procedure such as kinetic Monte Carlo, and a direct simulation of the advance of the system through its possible states can be obtained in a probabilistically exact way. A problem that has become more evident in recent years, however, is that for many systems of interest there is a complexity that makes it difficult, if not impossible, to determine all the relevant reaction paths to which TST should be applied. This is a serious issue, as omitted transition pathways can have uncontrollable consequences on the simulated long-time kinetics. Over the last decade or so, we have been developing a new class of methods for treating the long-time dynamics in these complex, infrequent-event systems. Rather than trying to guess in advance what
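The TST-rates-into-kinetic-Monte-Carlo pipeline described above can be illustrated with a minimal rejection-free KMC loop. The rate constants below are made-up values for illustration; this is not the accelerated-MD machinery the paper develops:

```python
import numpy as np

def kmc(rates, n_steps, seed=0):
    """Rejection-free kinetic Monte Carlo: pick each event with
    probability rate/total and advance the clock by an exponential
    waiting time with mean 1/total."""
    rng = np.random.default_rng(seed)
    names = list(rates)
    r = np.array([rates[k] for k in names], dtype=float)
    total = r.sum()
    t, counts = 0.0, {k: 0 for k in names}
    for _ in range(n_steps):
        event = names[rng.choice(len(names), p=r / total)]
        counts[event] += 1
        t += rng.exponential(1.0 / total)
    return t, counts

# Hypothetical TST rate constants (per unit time) for three barriers:
t, counts = kmc({"hop_left": 2.0, "hop_right": 2.0, "exchange": 0.5}, n_steps=1000)
```

Because waiting times are sampled from the exact exponential distribution implied by the total rate, the trajectory through states is probabilistically exact given the rate catalog, which is why missing pathways (the problem the abstract raises) silently corrupt the long-time kinetics.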
Screw-matrix method in dynamics of multibody systems
NASA Astrophysics Data System (ADS)
Yanzhu, Liu
1988-05-01
In the present paper the concept of screw in classical mechanics is expressed in matrix form, in order to formulate the dynamical equations of multibody systems. This method retains the advantages of screw theory while avoiding the shortcomings of the dual number notation. Combining the screw-matrix method with the tools of graph theory in the Roberson/Wittenberg formalism, we can extend the application of screw theory to the general case of multibody systems. For a tree system, the dynamical equations for each j-th subsystem, composed of all the outboard bodies connected by the j-th joint, can be formulated without the constraint reaction forces in the joints. For a nontree system, the dynamical equations of subsystems and the kinematical consistency conditions of the joints can be derived using the loop matrix. The whole process of calculation is unified in matrix form. A three-segment manipulator is discussed as an example.
Dynamic subcriticality measurements using the CF neutron noise method: Videotape
Mihalczo, J.T.; Blakeman, E.D.; Ragan, G.E.; Johnson, E.B.
1987-01-01
The capability to measure the subcriticality of a multiplying system with k-effective values as low as 0.3 was demonstrated for measurement times of approximately 10 s; the measured k-effective values do not depend on the speed with which the solution height is changed or on whether the tank is filling or draining. As in previous experiments, the low-frequency ratios of spectral densities are all that is needed to obtain the k-effective value. The method's demonstrated effectiveness for systems whose conditions change with time probably exceeds the dynamic requirements of most nuclear fuel plant processing applications. The calculated k-effective values using the KENO code and Hansen-Roach cross sections compare well with the experimental values. Before the dynamic capability of the method can be considered fully explored, additional dynamic experiments are required for other geometries and fuel concentrations.
A method for dynamic system characterization using hydraulic series resistance.
Kim, Dongshin; Chesler, Naomi C; Beebe, David J
2006-05-01
The pressure required to drive flow through a microfluidic device is an important characteristic of that device. We present a method to measure the flow rate through microfluidic components and systems, including micropumps and microvalves. The measurement platform is composed of two pressure sensors and a glass tube, which provides series resistance. The principle of the measurement is the fluid dynamical equivalent of Ohm's law, which defines the relationship between current, resistance, and voltage; these are analogues of flow rate, hydraulic resistance, and pressure drop, respectively. Once the series resistance is known, it is possible to compute the flow rate through a device based on pressure alone. In addition, the dynamic system characteristics of the device (resistance and capacitance) can be computed. The benefits of this method are its simple configuration, its ability to measure flow rate accurately from the more easily measured pressure, and its ability to predict the dynamic response of microfluidic devices. PMID:16652179
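The fluidic Ohm's-law relation can be sketched numerically. The series resistance of a circular glass tube is given by the standard Hagen-Poiseuille formula; the radius, length, viscosity, and pressures below are illustrative values (in the actual method the series resistance would be calibrated, not computed):

```python
import math

def poiseuille_resistance(radius, length, viscosity):
    """Hydraulic resistance of a circular tube (Hagen-Poiseuille):
    R = 8 * mu * L / (pi * r**4), in Pa s / m^3."""
    return 8.0 * viscosity * length / (math.pi * radius**4)

def flow_from_pressure(p_in, p_out, r_hyd):
    """Fluidic Ohm's law: Q = dP / R, the relation the method exploits."""
    return (p_in - p_out) / r_hyd

# Illustrative numbers: 0.5 mm bore, 5 cm glass tube, water at ~20 C,
# 1 kPa pressure drop measured across the tube by the two sensors.
R = poiseuille_resistance(radius=250e-6, length=0.05, viscosity=1.0e-3)
Q = flow_from_pressure(p_in=2000.0, p_out=1000.0, r_hyd=R)  # m^3/s
```

Once R is fixed, the two pressure readings alone determine Q, which is the point of the series-resistance design.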
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062
The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.
ERIC Educational Resources Information Center
Buchanan, Patricia A.; Ulrich, Beverly D.
2001-01-01
Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…
Proposed method of rotary dynamic balancing by laser
NASA Technical Reports Server (NTRS)
Perkins, W. E.
1967-01-01
Laser method, where high energies of monochromatic light can be precisely collimated to perform welding and machining processes, is proposed for rotary dynamic balancing. The unbalance, as detected with the velocity pickup, would trigger the laser system which would emit high energy pulses directed at the heavy side of the component.
Forced vibration of flexible body systems. A dynamic stiffness method
NASA Astrophysics Data System (ADS)
Liu, T. S.; Lin, J. C.
1993-10-01
Due to the development of high-speed machinery, robots, and aerospace structures, research on flexible body systems undergoing both gross motion and elastic deformation has grown in importance. The finite element method and modal analysis are often used in formulating equations of motion for dynamic analysis of such systems, which entails time-domain, forced vibration analysis. This study develops a new method based on dynamic stiffness to investigate forced vibration of flexible body systems. In contrast to the conventional finite element method, the shape functions and stiffness matrices used in this study are derived from the equations of motion for continuum beams. Hence, the resulting shape functions are termed dynamic shape functions. By applying the dynamic shape functions, the mass and stiffness matrices of a beam element are derived. The virtual work principle is employed to formulate the equations of motion. Not only the coupling of gross motion and elastic deformation, but also the stiffening effect of axial forces, is taken into account. Simulation results for a cantilever beam, a rotating beam, and a slider crank mechanism are compared with the literature to verify the proposed method.
Continuation Methods for Qualitative Analysis of Aircraft Dynamics
NASA Technical Reports Server (NTRS)
Cummings, Peter A.
2004-01-01
A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.
Hybrid finite element and Brownian dynamics method for charged particles
NASA Astrophysics Data System (ADS)
Huber, Gary A.; Miao, Yinglong; Zhou, Shenggao; Li, Bo; McCammon, J. Andrew
2016-04-01
Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but was limited by the lack of interactions among the particles; the force on each particle had to be from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.
Extended Molecular Dynamics Methods for Vortex Dynamics in Nano-structured Superconductors
NASA Astrophysics Data System (ADS)
Kato, Masaru; Sato, Osamu
Using an improved molecular dynamics simulation method, we study vortex dynamics in nano-scaled superconductors. Heat generation during vortex motion, heat transfer in the superconductor, and the entropic forces on vortices are incorporated. Quasi-particle relaxation after vortex motion, and the resulting attractive "retarded" forces on other vortices, are also incorporated using the condensation-energy field. We show the time development of the formation of vortex channel flow in a superconducting Corbino disk.
Review of dynamic optimization methods in renewable natural resource management
Williams, B.K.
1989-01-01
In recent years, applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Dynamic Rupture Benchmarking of the ADER-DG Method
NASA Astrophysics Data System (ADS)
Pelties, C.; Gabriel, A.
2012-12-01
We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that these features also hold for more advanced setups, e.g., a branching fault system, heterogeneous background stresses, and bimaterial faults. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool for studying earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009; Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068; Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR - Solid Earth, vol. 117, B02309, 2012
Dynamic Rupture Benchmarking of the ADER-DG Method
NASA Astrophysics Data System (ADS)
Gabriel, Alice; Pelties, Christian
2013-04-01
We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that these features also hold for more advanced setups, e.g., a branching fault system, heterogeneous background stresses, and bimaterial faults. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool for studying earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009; Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068; Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR - Solid Earth, vol. 117, B02309, 2012
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Collaborative double robust targeted maximum likelihood estimation.
van der Laan, Mark J; Gruber, Susan
2010-01-01
Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q(0) in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for "collaborative double robustness," demonstrating that the collaborative targeted maximum
Likelihood alarm displays. [for human operator
NASA Technical Reports Server (NTRS)
Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.
1988-01-01
In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.
Dynamic Optical Grating Device and Associated Method for Modulating Light
NASA Technical Reports Server (NTRS)
Park, Yeonjoon (Inventor); Choi, Sang H. (Inventor); King, Glen C. (Inventor); Chu, Sang-Hyon (Inventor)
2012-01-01
A dynamic optical grating device and associated method for modulating light is provided that is capable of controlling the spectral properties and propagation of light without moving mechanical components, through the use of a dynamic electric and/or magnetic field. By changing the electric field and/or magnetic field, the index of refraction, the extinction coefficient, the transmittivity, and the reflectivity of the optical grating device may be controlled in order to control the spectral properties of the light reflected or transmitted by the device.
Population-dynamics method with a multicanonical feedback control
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Bouchet, Freddy; Jack, Robert L.; Lecomte, Vivien
2016-06-01
We discuss the Giardinà-Kurchan-Peliti population dynamics method for evaluating large deviations of time-averaged quantities in Markov processes [Phys. Rev. Lett. 96, 120603 (2006), 10.1103/PhysRevLett.96.120603]. This method exhibits systematic errors which can be large in some circumstances, particularly for systems with weak noise, with many degrees of freedom, or close to dynamical phase transitions. We show how these errors can be mitigated by introducing control forces within the algorithm. These forces are determined by an iteration-and-feedback scheme, inspired by multicanonical methods in equilibrium sampling. We demonstrate substantially improved results in a simple model, and we discuss potential applications to more complex systems.
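A bare-bones version of the Giardinà-Kurchan-Peliti algorithm that the paper's feedback scheme improves on might look like the following. The Ornstein-Uhlenbeck dynamics, observable, and parameter values are illustrative; the control forces and multicanonical feedback of the paper are not included:

```python
import numpy as np

def cloning_scgf(step, observable, s, n_clones=200, n_steps=300, seed=0):
    """Minimal population-dynamics (cloning) estimator.

    Each clone evolves independently, is weighted by exp(s * a) where a
    is its observable increment, and the population is resampled back to
    fixed size; the running log of the mean weight estimates the scaled
    cumulant generating function (here reported per time step)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_clones)
    log_z = 0.0
    for _ in range(n_steps):
        x = step(x, rng)
        w = np.exp(s * observable(x))
        log_z += np.log(w.mean())
        idx = rng.choice(n_clones, size=n_clones, p=w / w.sum())  # resample
        x = x[idx]
    return log_z / n_steps

# Illustrative setup: Ornstein-Uhlenbeck dynamics, time-averaged position.
dt = 0.05
step = lambda x, rng: x - x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.size)
scgf = cloning_scgf(step, observable=lambda x: x * dt, s=0.5)
```

The resampling step is where the systematic errors discussed in the abstract arise: with few clones, weak noise, or many degrees of freedom, a handful of clones dominate the weights, and this is the degeneracy the iteration-and-feedback control forces are designed to mitigate.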
Analysis of the human electroencephalogram with methods from nonlinear dynamics
Mayer-Kress, G.; Holzfuss, J.
1986-09-08
We apply several different methods from nonlinear dynamical systems to the analysis of the degree of temporal disorder in data from human EEG. Among these are methods of geometrical reconstruction, dimensional complexity, mutual information content, and two different approaches for estimating Lyapunov characteristic exponents. We show how naive interpretation of numerical results can lead to a considerable underestimation of the dimensional complexity. This is true even when the errors from least squares fits are small. We present more realistic error estimates and show that they seem to contain additional, important information. By applying independent methods of analysis to the same data sets for a given lead, we find that the degree of temporal disorder is minimal in a "resting awake" state and increases in sleep as well as in fluroxene-induced general anesthesia. At the same time the statistical errors appear to decrease, which can be interpreted as a transition to a more uniform dynamical state. 29 refs., 10 figs.
Development of a transfer function method for dynamic stability measurement
NASA Technical Reports Server (NTRS)
Johnson, W.
1977-01-01
A flutter testing method based on transfer function measurements is developed. The error statistics of several dynamic stability measurement methods are reviewed. It is shown that the transfer function measurement controls the error level by averaging the data and correlating the input and output. The method also gives a direct estimate of the error in the response measurement. An algorithm is developed for obtaining the natural frequency and damping ratio of lightly damped modes of the system, using integrals of the transfer function in the vicinity of a resonant peak. Guidelines are given for selecting the parameters in the transfer function measurement. Finally, the dynamic stability measurement technique is applied to data from a wind tunnel test of a proprotor and wing model.
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
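The core trick the abstract describes, replacing a covariate sum in the exact log-likelihood with an expectation, can be sketched for a toy one-dimensional Poisson GLM with standard-normal covariates. This is a minimal illustration under invented parameters, not the paper's code:

```python
import math
import random

random.seed(0)

# Toy 1-D Poisson GLM: y_i ~ Poisson(exp(theta * x_i)), x_i ~ N(0, 1).
theta_true, N = 0.5, 5000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]

def poisson_sample(lam):
    # Knuth's multiplicative method (fine for the small rates used here).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

ys = [poisson_sample(math.exp(theta_true * x)) for x in xs]

def exact_loglik(theta):
    # Exact log-likelihood up to the theta-independent log(y_i!) terms.
    return sum(y * theta * x - math.exp(theta * x) for x, y in zip(xs, ys))

def expected_loglik(theta):
    # Replace sum_i exp(theta * x_i) by N * E[exp(theta * x)], which for
    # x ~ N(0, 1) is N * exp(theta**2 / 2); the data now enter only
    # through the sufficient statistic sum_i y_i * x_i.
    syx = sum(y * x for x, y in zip(xs, ys))
    return theta * syx - N * math.exp(0.5 * theta * theta)

# The two objectives (and hence their maximizers) agree closely.
print(abs(exact_loglik(0.5) - expected_loglik(0.5)) / N)
```

Here `syx` is recomputed on each call for clarity, but once it is precomputed the expected log-likelihood costs O(1) per evaluation versus O(N) for the exact one, which is where the computational savings come from.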
A Non-smooth Newton Method for Multibody Dynamics
Erleben, K.; Ortiz, R.
2008-09-01
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Parallel methods for dynamic simulation of multiple manipulator systems
NASA Technical Reports Server (NTRS)
Mcmillan, Scott; Sadayappan, P.; Orin, David E.
1993-01-01
In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
Molecular Dynamics and Energy Minimization Based on Embedded Atom Method
1995-03-01
This program performs atomic scale computer simulations of the structure and dynamics of metallic systems using energetics based on the Embedded Atom Method. The program performs two types of calculations. First, it performs local energy minimization of all atomic positions to determine ground state and saddle point energies and structures. Second, it performs molecular dynamics simulations to determine thermodynamics or microscopic dynamics of the system. In both cases, various constraints can be applied to the system. The volume of the system can be varied automatically to achieve any desired external pressure. The temperature in molecular dynamics simulations can be controlled by a variety of methods. Further, the temperature control can be applied either to the entire system or just a subset of the atoms that would act as a thermal source/sink. The motion of one or more of the atoms can be constrained to either simulate the effects of bulk boundary conditions or to facilitate the determination of saddle point configurations. The simulations are performed with periodic boundary conditions.
Fast computation of genetic likelihoods on human pedigree data.
Goradia, T M; Lange, K; Miller, P L; Nadkarni, P M
1992-01-01
Gene mapping and genetic epidemiology require large-scale computation of likelihoods based on human pedigree data. Although computation of such likelihoods has become increasingly sophisticated, fast calculations are still impeded by complex pedigree structures, by models with many underlying loci and by missing observations on key family members. The current paper introduces a new method of array factorization that substantially accelerates linkage calculations with large numbers of markers. This method is not limited to nuclear families or to families with complete phenotyping. Vectorization and parallelization are two general-purpose hardware techniques for accelerating computations. These techniques can assist in the rapid calculation of genetic likelihoods. We describe our experience using both of these methods with the existing program MENDEL. A vectorized version of MENDEL was run on an IBM 3090 supercomputer. A parallelized version of MENDEL was run on parallel machines of different architectures and on a network of workstations. Applying these revised versions of MENDEL to two challenging linkage problems yields substantial improvements in computational speed. PMID:1555846
Numerical likelihood analysis of cosmic ray anisotropies
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
Inertial stochastic dynamics. I. Long-time-step methods for Langevin dynamics
NASA Astrophysics Data System (ADS)
Beard, Daniel A.; Schlick, Tamar
2000-05-01
Two algorithms are presented for integrating the Langevin dynamics equation with long numerical time steps while treating the mass terms as finite. The development of these methods is motivated by the need for accurate methods for simulating slow processes in polymer systems such as two-site intermolecular distances in supercoiled DNA, which evolve over the time scale of milliseconds. Our new approaches refine the common Brownian dynamics (BD) scheme, which approximates the Langevin equation in the highly damped diffusive limit. Our LTID ("long-time-step inertial dynamics") method is based on an eigenmode decomposition of the friction tensor. The less costly integrator IBD ("inertial Brownian dynamics") modifies the usual BD algorithm by the addition of a mass-dependent correction term. To validate the methods, we evaluate the accuracy of LTID and IBD and compare their behavior to that of BD for the simple example of a harmonic oscillator. We find that the LTID method produces the expected correlation structure for Langevin dynamics regardless of the level of damping. In fact, LTID is the only consistent method among the three, with error vanishing as the time step approaches zero. In contrast, BD is accurate only for highly overdamped systems. For cases of moderate overdamping, and for the appropriate choice of time step, IBD is significantly more accurate than BD. IBD is also less computationally expensive than LTID (though both are the same order of complexity as BD), and thus can be applied to simulate systems of size and time scale ranges previously accessible to only the usual BD approach. Such simulations are discussed in our companion paper, for long DNA molecules modeled as wormlike chains.
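For context, the plain BD update that both LTID and IBD refine can be sketched on the abstract's harmonic-oscillator test case. This is a toy stand-in with made-up constants; the mass-dependent IBD correction itself is not reproduced here:

```python
import math
import random

random.seed(1)

# Overdamped Brownian dynamics (BD) for a harmonic oscillator:
#   gamma * dx/dt = -k_spring * x + random force.
# BD drops the inertial (mass) terms; the paper's IBD scheme adds a
# mass-dependent correction on top of an update like this one (the
# correction itself is not reproduced here). All constants are made up.
kT, k_spring, gamma, dt = 1.0, 1.0, 1.0, 0.1
nsteps, burn = 200_000, 2_000

x, acc = 0.0, 0.0
for step in range(nsteps):
    x += (-(k_spring / gamma) * x * dt
          + math.sqrt(2.0 * kT * dt / gamma) * random.gauss(0.0, 1.0))
    if step >= burn:
        acc += x * x

var = acc / (nsteps - burn)
print(var)  # should sit near the equipartition value kT / k_spring = 1.0
```

Checking the stationary variance against equipartition, as above, is a standard sanity test for any such integrator before trusting its dynamical correlations.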
Dynamic Multiscale Quantum Mechanics/Electromagnetics Simulation Method.
Meng, Lingyi; Yam, ChiYung; Koo, SiuKong; Chen, Quan; Wong, Ngai; Chen, GuanHua
2012-04-10
A newly developed hybrid quantum mechanics and electromagnetics (QM/EM) method [Yam et al. Phys. Chem. Chem. Phys. 2011, 13, 14365] is generalized to simulate the real time dynamics. Instead of the electric and magnetic fields, the scalar and vector potentials are used to integrate Maxwell's equations in the time domain. The TDDFT-NEGF-EOM method [Zheng et al. Phys. Rev. B 2007, 75, 195127] is employed to simulate the electronic dynamics in the quantum mechanical region. By allowing the penetration of a classical electromagnetic wave into the quantum mechanical region, the electromagnetic wave for the entire simulation region can be determined consistently by solving Maxwell's equations. The transient potential distributions and current density at the interface between quantum mechanical and classical regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. Charge distribution, current density, and potentials at different temporal steps and spatial scales are integrated seamlessly within a unified computational framework. PMID:26596737
Sensitivity evaluation of dynamic speckle activity measurements using clustering methods
Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H.
2010-07-01
We evaluate and compare the use of competitive neural networks, self-organizing maps, the expectation-maximization algorithm, K-means, and fuzzy C-means techniques as partitional clustering methods, when the sensitivity of the activity measurement of dynamic speckle images needs to be improved. The temporal history of the acquired intensity generated by each pixel is analyzed in a wavelet decomposition framework, and it is shown that the mean energy of its corresponding wavelet coefficients provides a suitable feature space for clustering purposes. The sensitivity obtained by using the evaluated clustering techniques is also compared with the well-known methods of Konishi-Fujii, weighted generalized differences, and wavelet entropy. The performance of the partitional clustering approach is evaluated using simulated dynamic speckle patterns and also experimental data.
Analysis methods for wind turbine control and electrical system dynamics
NASA Technical Reports Server (NTRS)
Hinrichsen, E. N.
1995-01-01
The integration of new energy technologies into electric power systems requires methods which recognize the full range of dynamic events in both the new generating unit and the power system. Since new energy technologies are initially perceived as small contributors to large systems, little attention is generally paid to system integration, i.e. dynamic events in the power system are ignored. As a result, most new energy sources are only capable of base-load operation, i.e. they have no load following or cycling capability. Wind turbines are no exception. Greater awareness of this implicit (and often unnecessary) limitation is needed. Analysis methods are recommended which include very low penetration (infinite bus) as well as very high penetration (stand-alone) scenarios.
Relaxation method and TCLE method of linear response in terms of thermo-field dynamics
NASA Astrophysics Data System (ADS)
Saeki, Mizuhiko
2008-03-01
The general formulae of the dynamic susceptibility are derived using the relaxation method and the TCLE method for the linear response by introducing the renormalized hat-operator in terms of thermo-field dynamics (TFD). In the former method, the Kubo formula is calculated for systems with no external driving fields, while in the latter method the admittance is directly calculated from time-convolutionless equations with external driving terms. The relation between the two methods is analytically investigated, and also the fluctuation-dissipation theorem is examined for the two methods in terms of TFD. The TCLE method is applied to an interacting spin system, and a formula of the transverse magnetic susceptibility is derived for such a system. The transverse magnetic susceptibility of an interacting spin system with S = 1 / 2 spins is obtained up to the first order in powers of the spin-spin interaction.
Least-squares finite element method for fluid dynamics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1989-01-01
An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.
Parallel processing numerical method for confined vortex dynamics and applications
NASA Astrophysics Data System (ADS)
Bistrian, Diana Alina
2013-10-01
This paper explores a combined analytical and numerical technique to investigate the hydrodynamic instability of confined swirling flows, with application to vortex rope dynamics in a Francis turbine diffuser under sophisticated boundary constraints. We present a new approach based on the method of orthogonal decomposition in the Hilbert space, implemented with a spectral descriptor scheme in discrete space. A parallel implementation of the numerical scheme is conducted, reducing the computational time compared to other techniques.
On the Dynamics of Implicit Linear Multistep Methods
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Rai, Man Mohan (Technical Monitor)
1995-01-01
Some new guidelines on the usage of implicit linear multistep methods (LMMs) as time-dependent approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) are explored. The commonly used implicit LMMs in CFD belong to the class of superstable time discretizations. It can be shown that the nonlinear asymptotic behavior, in terms of bifurcation diagrams and basins of attraction, of these schemes can provide an improved range of initial data and time step over the linearized stability limit.
Gravimetric method for the dynamic measurement of urine flow.
Steele, J E; Skarlatos, S; Brand, P H; Metting, P J; Britton, S L
1993-10-01
The rate of urine formation is a primary index of renal function, but no techniques are currently available to accurately measure low rates of urine flow on a continuous basis, such as are normally found in rats. We developed a gravimetric method for the dynamic measurement of urine flow in anesthetized rats. Catheters were inserted directly into the ureters close to the renal pelves, and a siphon was created to collect all of the urine formed as rapidly as it was produced. Urine flow was determined by measuring the weight of the urine using a direct-reading analytical balance interfaced to a computer. Basal urine flow was measured at 2-sec intervals for 30 to 60 min. The dynamic response of urine flow to a rapid decrease in arterial pressure produced by a bolus intravenous injection of acetylcholine (0.5 micrograms) was also measured. Intrinsic drift, evaporative losses, and the responsiveness of the system to several fixed pump flows in the low physiologic range were evaluated in vitro. The gravimetric method described was able to continuously measure basal urine flows that averaged 37.3 +/- 12.4 microliters/min. Error due to drift and evaporation was negligible, totaling less than 1% of the measured urine flow. Acetylcholine-induced declines in arterial pressure were followed within 8 sec by a decline in urine flow. These data demonstrate that this new gravimetric method provides a simple, inexpensive, dynamic measurement of urine flow in the microliter/min range. PMID:8372099
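The post-processing step, converting cumulative balance readings to a flow rate, amounts to numerical differentiation. The following sketch uses central differences; the sampling interval matches the abstract's 2-second readings, but the unit density and the synthetic data are assumptions, not values from the study:

```python
# Converting cumulative balance readings (grams) to a volume flow rate
# (microliters/min) by central differences -- a sketch of the kind of
# post-processing a gravimetric setup like the one described might use.
interval = 2.0            # seconds between balance readings
density = 1.0             # g/mL, an assumed approximation for urine
true_flow_ul_min = 37.3   # target flow, microliters per minute

# Synthetic noise-free cumulative weights accumulating at the target rate.
per_sample_g = true_flow_ul_min / 60.0 * interval * density / 1000.0
weights = [i * per_sample_g for i in range(31)]

def flows_ul_per_min(w, dt, rho):
    # Central difference dW/dt, converted mass -> volume -> uL/min.
    return [(w[i + 1] - w[i - 1]) / (2.0 * dt) / rho * 1000.0 * 60.0
            for i in range(1, len(w) - 1)]

f = flows_ul_per_min(weights, interval, density)
print(f[0])  # recovers 37.3 uL/min on noise-free data
```

On real balance data one would additionally smooth (e.g., a short moving average) before differencing, since differentiation amplifies reading noise.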
A Method for Evaluating Dynamical Friction in Linear Ball Bearings
Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P.; Mitatha, Somsak
2010-01-01
A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in its internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the acceleration of two different points on it is measured using a dual-axis optical interferometer. PMID:22163457
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance. So, static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient model.
Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation
NASA Astrophysics Data System (ADS)
Fowler, J. W.
2014-08-01
Straightforward methods for adapting the familiar χ² statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K fluorescence spectrum, a poor choice of χ² can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for χ² minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
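The bias in question is easy to reproduce in the simplest case: fitting a constant rate to Poisson counts, the "Neyman" chi-squared fit (per-bin variance taken as the observed count) returns the harmonic mean of the counts, which is biased low, while the Poisson maximum-likelihood fit returns the arithmetic mean. A toy sketch with invented numbers, not the paper's code:

```python
import math
import random

random.seed(2)

def poisson(lam):
    # Knuth's multiplicative method (adequate for moderate lam > 0).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

# Histogram of Poisson counts with true per-bin mean 20.
counts = [poisson(20.0) for _ in range(1000)]

# Neyman chi^2 (sigma_i^2 = n_i) minimized over a constant mu gives the
# harmonic mean of the counts (biased low); the Poisson maximum-
# likelihood fit gives the arithmetic mean.
mu_ml = sum(counts) / len(counts)
mu_chi2 = len(counts) / sum(1.0 / n for n in counts)  # needs all n_i > 0

def cash(mu, ns):
    # Poisson deviance (Cash statistic); its minimum is the ML fit.
    return 2.0 * sum(mu - n + (n * math.log(n / mu) if n else 0.0)
                     for n in ns)

print(mu_chi2, mu_ml)  # the chi^2 estimate sits below the ML estimate
```

Minimizing `cash` in place of chi-squared is what a Poisson maximum-likelihood modification of a Levenberg-Marquardt fitter amounts to for a general parametric model.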
A Targeted Maximum Likelihood Estimator for Two-Stage Designs
Rose, Sherri; van der Laan, Mark J.
2011-01-01
We consider two-stage sampling designs, including so-called nested case control studies, where one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample, collecting additional data on the subsample. This data structure can be viewed as a missing data structure on the full-data structure collected in the second-stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) in two-stage sampling designs and present simulation studies featuring this estimator. PMID:21556285
Maximum-likelihood registration of range images with missing data.
Sharp, Gregory C; Lee, Sang W; Wehe, David K
2008-01-01
Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329
A maximum-likelihood estimation of pairwise relatedness for autopolyploids
Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G
2015-01-01
Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210
Comparison of nonlinear dynamic methods and perturbation methods for voice analysis
NASA Astrophysics Data System (ADS)
Zhang, Yu; Jiang, Jack J.; Wallace, Stephanie M.; Zhou, Liang
2005-10-01
Nonlinear dynamic methods and perturbation methods are compared in terms of the effects of signal length, sampling rate, and noise. Results of theoretical and experimental studies quantitatively show that measurements representing frequency and amplitude perturbations are not applicable to chaotic signals because of difficulties in pitch tracking and sensitivity to initial state differences. Perturbation analyses are only reliable when applied to nearly periodic voice samples of sufficiently long signal lengths that were obtained at high sampling rates and low noise levels. In contrast, nonlinear dynamic methods, such as correlation dimension, allow the quantification of chaotic time series. Additionally, the correlation dimension method presents a more stable analysis of nearly periodic voice samples for shorter signal lengths, lower sampling rates, and higher noise levels. The correlation dimension method avoids some of the methodological issues associated with perturbation methods, and may potentially improve the ability for real time analysis as well as reduce costs in experimental designs for objectively assessing voice disorders.
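A minimal sketch of the correlation-dimension calculation referenced above (the Grassberger-Procaccia correlation sum), run on synthetic geometric data rather than a voice signal; sample size and radii are arbitrary choices:

```python
import math
import random

random.seed(3)

# Grassberger-Procaccia correlation sum on synthetic data: points on a
# unit circle (a curve), so the estimated correlation dimension should
# come out near 1.
N = 500
pts = [(math.cos(t), math.sin(t))
       for t in (random.uniform(0.0, 2.0 * math.pi) for _ in range(N))]

def corr_sum(r):
    # C(r): fraction of distinct point pairs closer than r.
    close = sum(1 for i in range(N) for j in range(i + 1, N)
                if math.dist(pts[i], pts[j]) < r)
    return close / (N * (N - 1) / 2)

# Dimension estimate: slope of log C(r) versus log r over a small range.
r1, r2 = 0.05, 0.2
slope = ((math.log(corr_sum(r2)) - math.log(corr_sum(r1)))
         / (math.log(r2) - math.log(r1)))
print(slope)
```

For a measured scalar signal one would first delay-embed the time series into a higher-dimensional state space before forming the pairwise distances; that step is omitted here.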
A reduced basis method for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Vincent-Finley, Rachel Elisabeth
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions, occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
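The PCA step, extracting a dominant collective mode from trajectory frames, can be sketched on synthetic data. This is a toy stand-in for an MD trajectory, using power iteration rather than a full eigendecomposition; all names and parameters are invented:

```python
import math
import random

random.seed(4)

# PCA of a synthetic "trajectory": one dominant collective mode plus
# small isotropic noise. The leading principal component is found by
# power iteration on the covariance matrix; a reduced model would keep
# only a few such modes and integrate the equations of motion in that
# subspace.
dim, nframes = 3, 2000
mode = [1.0 / math.sqrt(3)] * dim     # hidden dominant direction
frames = []
for t in range(nframes):
    a = 5.0 * math.sin(0.01 * t)      # slow, large-amplitude motion
    frames.append([a * m + random.gauss(0.0, 0.1) for m in mode])

# Mean-center the frames and form the covariance matrix.
mean = [sum(f[d] for f in frames) / nframes for d in range(dim)]
X = [[f[d] - mean[d] for d in range(dim)] for f in frames]
C = [[sum(x[i] * x[j] for x in X) / nframes for j in range(dim)]
     for i in range(dim)]

# Power iteration converges to the leading eigenvector of C.
v = [1.0, 0.0, 0.0]
for _ in range(100):
    w = [sum(C[i][j] * v[j] for j in range(dim)) for i in range(dim)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    v = [wi / norm for wi in w]

overlap = abs(sum(vi * mi for vi, mi in zip(v, mode)))
print(overlap)  # near 1: the leading PC aligns with the true mode
```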
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and fail at some random time are studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods are promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
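A minimal sketch of maximum likelihood fitting for the smallest extreme-value distribution (without the competing failure modes or stress covariates that the report treats): negating the data reduces the problem to the standard Gumbel maximum fit, whose scale parameter solves a one-dimensional equation that can be bracketed and bisected.

```python
import math, random

def fit_smallest_extreme_value(data, tol=1e-9):
    """Maximum likelihood fit of the smallest-extreme-value distribution,
    F(x) = 1 - exp(-exp((x - u)/b)), returning (u, b)."""
    y = [-x for x in data]            # minimum-type(u, b) -> maximum-type(-u, b)
    n, ybar = len(y), sum(y) / len(y)

    def g(b):
        # Stationarity condition in b for the Gumbel-maximum likelihood;
        # weights are shifted by ybar for numerical stability.
        w = [math.exp(-(yi - ybar) / b) for yi in y]
        return b - ybar + sum(yi * wi for yi, wi in zip(y, w)) / sum(w)

    lo, hi = 1e-6, max(y) - min(y) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    b = 0.5 * (lo + hi)
    u_max = -b * math.log(sum(math.exp(-yi / b) for yi in y) / n)
    return -u_max, b

# Sanity check on synthetic data with known parameters u=10, b=2,
# sampled by inversion: x = u + b * ln(-ln(1 - p)).
random.seed(2)
u_true, b_true = 10.0, 2.0
sample = [u_true + b_true * math.log(-math.log(1.0 - random.random()))
          for _ in range(5000)]
u_hat, b_hat = fit_smallest_extreme_value(sample)
print(abs(u_hat - u_true) < 0.2, abs(b_hat - b_true) < 0.2)
```

The competing-modes setting of the report would multiply per-mode likelihood contributions, but the scalar root-finding step stays the same in spirit.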
A support-operator method for 3-D rupture dynamics
NASA Astrophysics Data System (ADS)
Ely, Geoffrey P.; Day, Steven M.; Minster, Jean-Bernard
2009-06-01
We present a numerical method to simulate spontaneous shear crack propagation within a heterogeneous, 3-D, viscoelastic medium. Wave motions are computed on a logically rectangular hexahedral mesh, using the generalized finite-difference method of Support Operators (SOM). This approach enables modelling of non-planar surfaces and non-planar fault ruptures. Our implementation, the Support Operator Rupture Dynamics (SORD) code, is highly scalable, enabling large-scale multiprocessor calculations. The fault surface is modelled by coupled double nodes, where rupture occurs as dictated by the local stress conditions and a frictional failure law. The method successfully performs test problems developed for the Southern California Earthquake Center (SCEC)/U.S. Geological Survey (USGS) dynamic earthquake rupture code validation exercise, showing good agreement with semi-analytical boundary integral method results. We undertake further dynamic rupture tests to quantify numerical errors introduced by shear deformations to the hexahedral mesh. We generate a family of meshes distorted by simple shearing, in the along-strike direction, up to a maximum of 73°. For SCEC/USGS validation problem number 3, grid-induced errors increase with mesh shear angle, with the logarithm of error approximately proportional to angle over the range tested. At 73°, rms misfits are about 10 per cent for peak slip rate, and 0.5 per cent for both rupture time and total slip, indicating that the method (which, up to now, we have applied mainly to near-vertical strike-slip faulting) is also capable of handling geometries appropriate to low-angle surface-rupturing thrust earthquakes. Additionally, we demonstrate non-planar rupture effects, by modifying the test geometry to include, respectively, cylindrical curvature and sharp kinks.
Role of Molecular Dynamics and Related Methods in Drug Discovery.
De Vivo, Marco; Masetti, Matteo; Bottegoni, Giovanni; Cavalli, Andrea
2016-05-12
Molecular dynamics (MD) and related methods are close to becoming routine computational tools for drug discovery. Their main advantage is in explicitly treating structural flexibility and entropic effects. This allows a more accurate estimate of the thermodynamics and kinetics associated with drug-target recognition and binding, as better algorithms and hardware architectures increase their use. Here, we review the theoretical background of MD and enhanced sampling methods, focusing on free-energy perturbation, metadynamics, steered MD, and other methods most consistently used to study drug-target binding. We discuss unbiased MD simulations that nowadays allow the observation of unsupervised ligand-target binding, assessing how these approaches help optimize target affinity and drug residence time toward improved drug efficacy. Further issues discussed include allosteric modulation and the role of water molecules in ligand binding and optimization. We conclude by calling for more prospective studies to attest to these methods' utility in discovering novel drug candidates. PMID:26807648
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C.
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
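The abstract's central idea, reading parameter uncertainties off the curvature of the log-likelihood at its maximum, can be illustrated on a much simpler one-parameter model (exponential waiting times rather than the two-state FRET likelihood); the data and step size are illustrative:

```python
import math, random

def neg_log_likelihood(rate, times):
    """Negative log-likelihood of i.i.d. exponential waiting times."""
    return -(len(times) * math.log(rate) - rate * sum(times))

def curvature_std(rate_hat, times, h=1e-4):
    """Standard error from the curvature (observed Fisher information) of
    the log-likelihood at its maximum, via a central finite difference."""
    f = lambda r: neg_log_likelihood(r, times)
    second = (f(rate_hat + h) - 2 * f(rate_hat) + f(rate_hat - h)) / h**2
    return 1.0 / math.sqrt(second)

random.seed(3)
true_rate = 2.0
times = [random.expovariate(true_rate) for _ in range(10000)]

rate_hat = len(times) / sum(times)              # closed-form MLE
se_numeric = curvature_std(rate_hat, times)
se_analytic = rate_hat / math.sqrt(len(times))  # exact for this model
print(abs(se_numeric - se_analytic) < 1e-6)
```

For the two-state FRET model the same recipe applies, with the likelihood of a photon sequence in place of the exponential one and a Hessian in place of the scalar second derivative.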
Maximum-likelihood density modification using pattern recognition of structural motifs
Terwilliger, Thomas C.
2001-12-01
A likelihood-based density-modification method is extended to include pattern recognition of structural motifs. The likelihood-based approach to density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] is extended to include the recognition of patterns of electron density. Once a region of electron density in a map is recognized as corresponding to a known structural element, the likelihood of the map is reformulated to include a term that reflects how closely the map agrees with the expected density for that structural element. This likelihood is combined with other aspects of the likelihood of the map, including the presence of a flat solvent region and the electron-density distribution in the protein region. This likelihood-based pattern-recognition approach was tested using the recognition of helical segments in a largely helical protein. The pattern-recognition method yields a substantial phase improvement over both conventional and likelihood-based solvent-flattening and histogram-matching methods. The method can potentially be used to recognize any common structural motif and incorporate prior knowledge about that motif into density modification.
A Method for Molecular Dynamics on Curved Surfaces.
Paquay, Stefan; Kusters, Remy
2016-03-29
Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focused on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to take such interactions into account by combining standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates, allowing for the reuse of many other standard tools without modifications, including parallelization through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, we obtain confined Brownian motion, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: 1) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes; and 2) the self-assembly of a coarse-grained virus capsid protein model. PMID:27028633
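A highly simplified sketch of the constrained velocity Verlet idea for a single particle on the unit sphere; the paper's method handles arbitrarily curved surfaces, interacting particles, and Langevin thermostats, all of which are omitted here:

```python
import math

def constrained_verlet_sphere(r, v, force, dt, steps):
    """Velocity Verlet for one particle constrained to the unit sphere,
    with SHAKE/RATTLE-style projections: positions are rescaled onto the
    sphere and velocities are projected onto the tangent plane.
    A simplified sketch of the constraint idea, not a production integrator."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))

    for _ in range(steps):
        f = force(r)
        # unconstrained position step, then project onto the sphere
        r = [ri + dt * vi + 0.5 * dt * dt * fi for ri, vi, fi in zip(r, v, f)]
        n = norm(r)
        r = [ri / n for ri in r]
        # velocity update, then remove the radial component (RATTLE part)
        f2 = force(r)
        v = [vi + 0.5 * dt * (fi + gi) for vi, fi, gi in zip(v, f, f2)]
        radial = dot(v, r)
        v = [vi - radial * ri for vi, ri in zip(v, r)]
    return r, v

# Free motion on the sphere (zero force) should stay on the sphere and
# approximately conserve speed: a great-circle geodesic.
r0, v0 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
r, v = constrained_verlet_sphere(r0, v0, lambda r: [0.0, 0.0, 0.0],
                                 dt=1e-3, steps=5000)
radius = math.sqrt(sum(x * x for x in r))
speed = math.sqrt(sum(x * x for x in v))
print(abs(radius - 1.0) < 1e-9, abs(speed - 1.0) < 1e-2)
```

Because everything stays in Cartesian coordinates, as the paper emphasizes, standard force fields and domain-decomposition parallelism could be reused unchanged around this kernel.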
Nonholonomic Hamiltonian Method for Molecular Dynamics Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Bass, Joseph
2015-06-01
Conventional molecular dynamics simulations of reacting shocks employ a holonomic Hamiltonian formulation: the breaking and forming of covalent bonds is described by potential functions. In general these potential functions: (a) are algebraically complex, (b) must satisfy strict smoothness requirements, and (c) contain many fitted parameters. In recent research the authors have developed a new nonholonomic formulation of reacting molecular dynamics. In this formulation bond orders are determined by rate equations and the bonding-debonding process need not be described by differentiable functions. This simplifies the representation of complex chemistry and reduces the number of fitted model parameters. Example applications of the method show molecular-level shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
System and method for reducing combustion dynamics in a combustor
Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David
2015-09-01
A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend from the upstream surface through the downstream surface. A divider inside a tube bundle defines a diluent passage that extends axially through the downstream surface, and a diluent supply in fluid communication with the divider provides diluent flow to the diluent passage. A method for reducing combustion dynamics in a combustor includes flowing a fuel through tube bundles, flowing a diluent through a diluent passage inside a tube bundle, wherein the diluent passage extends axially through at least a portion of the end cap into a combustion chamber, and forming a diluent barrier in the combustion chamber between the tube bundle and at least one other adjacent tube bundle.
System and method for reducing combustion dynamics in a combustor
Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David
2013-08-20
A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend through the end cap. A diluent supply in fluid communication with the end cap provides diluent flow to the end cap. Diluent distributors circumferentially arranged inside at least one tube bundle extend downstream from the downstream surface and provide fluid communication for the diluent flow through the end cap. A method for reducing combustion dynamics in a combustor includes flowing fuel through tube bundles that extend axially through an end cap, flowing a diluent through diluent distributors into a combustion chamber, wherein the diluent distributors are circumferentially arranged inside at least one tube bundle and each diluent distributor extends downstream from the end cap, and forming a diluent barrier in the combustion chamber between at least one pair of adjacent tube bundles.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M. M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
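cosmoabc implements a Population Monte Carlo refinement, but the core accept/reject loop of ABC can be sketched in a few lines; the Gaussian toy model, prior range, and tolerance below are illustrative:

```python
import random, statistics

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_accept):
    """Basic ABC rejection: draw parameters from the prior, simulate mock
    data, and keep parameters whose summary statistic lies within eps of
    the observed summary. No likelihood is ever evaluated."""
    accepted = []
    obs_summary = statistics.mean(observed)
    while len(accepted) < n_accept:
        theta = prior_draw()
        mock = simulate(theta, len(observed))
        if distance(statistics.mean(mock), obs_summary) < eps:
            accepted.append(theta)
    return accepted

random.seed(4)
true_mu = 3.0
data = [random.gauss(true_mu, 1.0) for _ in range(100)]

posterior = abc_rejection(
    observed=data,
    simulate=lambda mu, n: [random.gauss(mu, 1.0) for _ in range(n)],
    prior_draw=lambda: random.uniform(0.0, 6.0),  # flat prior on the mean
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    n_accept=200,
)
est = statistics.mean(posterior)
print(abs(est - true_mu) < 0.4)
```

The Population Monte Carlo variant replaces the fixed prior draws with importance-weighted proposals from the previous accepted population, shrinking eps adaptively.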
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
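A toy version of the spectral likelihood expansion: fit Legendre coefficients of the likelihood by linear least squares, then read the evidence and posterior mean off the coefficients via orthogonality (one parameter, uniform prior on [-1, 1]; all numbers illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

# Gaussian likelihood in theta, uniform prior on [-1, 1]
theta_obs, sigma = 0.3, 0.4
likelihood = lambda t: np.exp(-0.5 * ((t - theta_obs) / sigma) ** 2)

# Least-squares expansion of the likelihood in Legendre polynomials,
# replacing Markov chain sampling as in the abstract.
grid = np.linspace(-1.0, 1.0, 400)
coeffs = legendre.legfit(grid, likelihood(grid), deg=20)

# With prior pi(t) = 1/2 and orthogonality of the P_k:
#   evidence       Z = c_0
#   posterior mean   = (c_1 / 3) / c_0   (since t = P_1 and int P_1^2 dt = 2/3)
evidence = coeffs[0]
post_mean = (coeffs[1] / 3.0) / coeffs[0]
print(round(float(post_mean), 3))
```

Higher posterior moments follow the same way from products of Legendre series, which is what makes the quantities semi-analytic once the coefficients are in hand.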
Likelihood-Based Climate Model Evaluation
NASA Technical Reports Server (NTRS)
Braverman, Amy; Cressie, Noel; Teixeira, Joao
2012-01-01
Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a summary statistic, computed from a set of observations, arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and obtain the posterior distribution of a model given the observations.
Informative Parameters of Dynamic Geo-electricity Methods
NASA Astrophysics Data System (ADS)
Tursunmetov, R.
With the growing complexity of geological tasks and the need to reveal anomalous zones connected with ore, oil, gas and water availability, methods of dynamic geo-electricity have come into use. In these methods the geological environment is treated as an interphase, irregular medium. The main dynamic element of this environment is the double electric layer, which develops on the boundary between the solid and liquid phases. In ore- or water-saturated environments, double electric layers become electrochemically or electrokinetically active elements of the geo-electric environment, which in turn form a natural electric field. This field influences the distribution of artificially created fields, and the interaction bears a complicated superposition or nonlinear character. The geological environment is therefore considered an active one, able to accumulate and transform artificially superposed fields. Its main dynamic property is the nonlinear behavior of specific electric resistance and soil polarization as functions of current density and measurement frequency, which serve as informative parameters for dynamic geo-electricity methods. The study of the electric properties of disperse soils in an impulse-frequency regime, together with the temporal and frequency characteristics of the electric field, is of main interest for defining geo-electric anomalies. The study of volt-ampere characteristics of the electromagnetic field has great practical significance; these characteristics are determined by electrochemically active ore- and water-saturated fields. These parameters depend on the polarity of the initiating field, in particular on the character, composition and mineralization of the ore-saturated zone and on the availability of a natural electric field under cathode and anode mineralization. The nonlinear behavior of the environment's dynamic properties affects the structure of the initiated field, which allows the location of anomalous zones to be defined. Finally, the study of the dynamic properties of soil anisotropy in space will allow the identification of filtration flows
Score-based likelihood ratios for handwriting evidence.
Hepler, Amanda B; Saunders, Christopher P; Davis, Linda J; Buscaglia, JoAnn
2012-06-10
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation, in particular, how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure. The study was performed using a data set composed of cursive writing
The Feldenkrais Method: a dynamic approach to changing motor behavior.
Buchanan, P A; Ulrich, B D
2001-12-01
This tutorial describes the Feldenkrais Method and points to parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais is an educational system designed to use movement and perception to foster individualized improvement in function. Moshe Feldenkrais, its originator, believed his method enhanced people's ability to discover flexible and adaptable behavior and that behaviors are self-organized. Similarly, DST explains that a human-environment system is continually adapting to changing conditions and assembling behaviors accordingly. Despite little research, Feldenkrais is being used with people of widely ranging ages and abilities in varied settings. We propose that DST provides an integrated foundation for research on the Feldenkrais Method, suggest research questions, and encourage researchers to test the fundamental tenets of Feldenkrais. PMID:11770781
Statistical method of evaluation of flip-flop dynamical parameters
NASA Astrophysics Data System (ADS)
Wieczorek, P. Z.; Opalski, L. J.
2008-01-01
This paper presents a statistical algorithm and measurement system for precise evaluation of flip-flop dynamical parameters in asynchronous operation. The analyzed flip-flop parameters are failure probability, MTBF and propagation delay. It is shown how these parameters depend on the metastable operation of flip-flops. The numerical and hardware solutions shown in the article allow for precise and reliable comparison of flip-flops. The presented statistical method also makes it possible to analyze the influence of flip-flop electrical parameters on metastable operation. Statistical estimation of the parameters of flip-flops in which metastability occurs seems to be more reliable than standard empirical methods of flip-flop analysis, and the presented method reveals inaccuracies in the theoretical model of metastability.
Communications overlapping in fast multipole particle dynamics methods
Kurzak, Jakub; Pettitt, B. Montgomery. E-mail: pettitt@uh.edu
2005-03-01
In molecular dynamics the fast multipole method (FMM) is an attractive alternative to Ewald summation for calculating electrostatic interactions due to the operation counts. However when applied to small particle systems and taken to many processors it has a high demand for interprocessor communication. In a distributed memory environment this demand severely limits applicability of the FMM to systems with O(10 K atoms). We present an algorithm that allows for fine grained overlap of communication and computation, while not sacrificing synchronization and determinism in the equations of motion. The method avoids contention in the communication subsystem making it feasible to use the FMM for smaller systems on larger numbers of processors. Our algorithm also facilitates application of multiple time stepping techniques within the FMM. We present scaling at a reasonably high level of accuracy compared with optimized Ewald methods.
Development of a dynamically adaptive grid method for multidimensional problems
NASA Astrophysics Data System (ADS)
Holcomb, J. E.; Hindman, R. G.
1984-06-01
An approach to solution adaptive grid generation for use with finite difference techniques, previously demonstrated on model problems in one space dimension, has been extended to multidimensional problems. The method is based on the popular elliptic steady grid generators, but is 'dynamically' adaptive in the sense that a grid is maintained at all times satisfying the steady grid law driven by a solution-dependent source term. Testing has been carried out on Burgers' equation in one and two space dimensions. Results appear encouraging both for inviscid wave propagation cases and viscous boundary layer cases, suggesting that application to practical flow problems is now possible. In the course of the work, obstacles relating to grid correction, smoothing of the solution, and elliptic equation solvers have been largely overcome. Concern remains, however, about grid skewness, boundary layer resolution and the need for implicit integration methods. Also, the method in 3-D is expected to be very demanding of computer resources.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
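MALCOM itself builds continuity maps, but the likelihood-based anomaly scoring workflow it is compared against can be sketched with the N-gram models the abstract mentions; the procedure codes below are invented for illustration:

```python
import math
from collections import Counter

def train_bigram(sequences, alpha=1.0):
    """Train an add-alpha smoothed bigram model over event sequences and
    return a scorer giving average per-transition log-likelihood."""
    vocab = {e for seq in sequences for e in seq}
    pair_counts, ctx_counts = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            ctx_counts[a] += 1
    V = len(vocab)

    def log_prob(seq):
        lp = 0.0
        for a, b in zip(seq, seq[1:]):
            p = (pair_counts[(a, b)] + alpha) / (ctx_counts[a] + alpha * V)
            lp += math.log(p)
        return lp / max(1, len(seq) - 1)

    return log_prob

# "Medical histories" as sequences of procedure codes: typical care
# follows exam -> test -> treat; an anomalous history repeatedly bills
# the same expensive procedure.
typical = [["exam", "test", "treat"], ["exam", "test", "test", "treat"],
           ["exam", "treat"], ["exam", "test", "treat", "exam"]] * 5
score = train_bigram(typical)

normal = ["exam", "test", "treat"]
anomalous = ["treat", "treat", "treat", "treat"]
print(score(normal) > score(anomalous))
```

MALCOM generalizes this idea by mapping sequences into a continuous space, so it handles real-valued as well as categorical events.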
Approximate maximum likelihood estimation of scanning observer templates
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.
2015-03-01
In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed-form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.
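Exponentiating the template output and normalizing over candidate locations, as the paper's approximation does, amounts to a softmax likelihood; here is a sketch with a grid-search maximum likelihood fit of a single internal-noise-like parameter (all values illustrative, not the paper's simulation):

```python
import math, random

def location_log_likelihood(template_out, chosen, beta):
    """Approximate log-likelihood that the observer reports `chosen`:
    exponentiating the template output and normalizing gives a softmax,
    with beta playing the role of an inverse internal-noise parameter."""
    scaled = [beta * t for t in template_out]
    m = max(scaled)                      # log-sum-exp for stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scaled))
    return scaled[chosen] - log_z

def fit_beta(trials, betas):
    """Grid-search maximum likelihood over the noise parameter."""
    def total_ll(b):
        return sum(location_log_likelihood(out, loc, b) for out, loc in trials)
    return max(betas, key=total_ll)

# Simulate trials: template responses at 10 candidate locations; the
# observer picks location i with probability softmax(beta_true * output)_i.
random.seed(5)
beta_true = 2.0
trials = []
for _ in range(2000):
    out = [random.gauss(0, 1) for _ in range(10)]
    weights = [math.exp(beta_true * t) for t in out]
    r, acc, loc = random.uniform(0, sum(weights)), 0.0, 0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            loc = i
            break
    trials.append((out, loc))

betas = [0.5 * k for k in range(1, 9)]   # candidate grid 0.5 .. 4.0
beta_hat = fit_beta(trials, betas)
print(beta_hat)
```

In the paper the template profile itself is also parameterized and estimated the same way, with Monte-Carlo sampling standing in for the unavailable closed-form distribution.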
A novel method to study cerebrospinal fluid dynamics in rats
Karimy, Jason K.; Kahle, Kristopher T.; Kurland, David B.; Yu, Edward; Gerzanich, Volodymyr; Simard, J. Marc
2014-01-01
Background: Cerebrospinal fluid (CSF) flow dynamics play critical roles in both the immature and adult brain, with implications for neurodevelopment and disease processes such as hydrocephalus and neurodegeneration. Remarkably, the only reported method to date for measuring CSF formation in laboratory rats is the indirect tracer dilution method (a.k.a. ventriculocisternal perfusion), which has limitations. New Method: Anesthetized rats were mounted in a stereotaxic apparatus, both lateral ventricles were cannulated, and the Sylvian aqueduct was occluded. Fluid exited one ventricle at a rate equal to the rate of CSF formation plus the rate of infusion (if any) into the contralateral ventricle. Pharmacological agents infused at a constant known rate into the contralateral ventricle were tested for their effect on CSF formation in real time. Results: The measured rate of CSF formation was increased by blockade of the Sylvian aqueduct but was not changed by increasing the outflow pressure (0–3 cm of H2O). In male Wistar rats, CSF formation was age-dependent: 0.39±0.06, 0.74±0.05, 1.02±0.04 and 1.40±0.06 µL/min at 8, 9, 10 and 12 weeks, respectively. CSF formation was reduced 57% by intraventricular infusion of the carbonic anhydrase inhibitor acetazolamide. Comparison with existing methods: Tracer dilution methods do not permit ongoing real-time determination of the rate of CSF formation, are not readily amenable to pharmacological manipulations, and require critical assumptions. Direct measurement of CSF formation overcomes these limitations. Conclusions: Direct measurement of CSF formation in rats is feasible. Our method should prove useful for studying CSF dynamics in normal physiology and disease models. PMID:25554415
Applicability of optical scanner method for fine root dynamics
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi
2016-04-01
Fine root dynamics is one of the important components in forest carbon cycling, as ~60% of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production, and mortality in order to understand carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count the increase and decrease of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows us to observe root behavior directly without destruction, but has several weaknesses, e.g., the difficulty of scaling up the results to stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield insufficiently quantitative and objective data. Recently, the scanner method has been proposed, which can produce much larger (A4-size) images at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: 1) the impact of different observers, including root-study professionals, semi-professionals, and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition, and 2) the impact of window size on the results using a random sampling basis exercise. We applied our new protocol to analyze temporal changes of root behavior from sequential scanner images derived from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in temporal changes in the abundance and the growth of fine roots but less in the decomposition. We also examined
A whirlwind tour of statistical methods in structural dynamics.
Booker, J. M.
2004-01-01
Several statistical methods and their corresponding principles of application to structural dynamics problems will be presented. This set was chosen based upon the projects and their corresponding challenges in the Engineering Sciences & Applications (ESA) Division at Los Alamos National Laboratory and focuses on variance-based uncertainty quantification. Our structural dynamics applications are heavily involved in modeling and simulation, often with sparse data availability. In addition to models, heavy reliance is placed upon the use of expertise and experience. Beginning with principles of inference and prediction, some statistical tools for verification and validation are introduced. Among these are the principles of good experimental design for test and model computation planning, and the combination of data, models and knowledge through the use of Bayes Theorem. A brief introduction to multivariate methods and exploratory data analysis will be presented as part of understanding relationships and variation among important parameters, physical quantities of interest, measurements, inputs and outputs. Finally, the use of these methods and principles will be discussed in drawing conclusions from the validation assessment process under uncertainty.
A dynamic calibration method for the pressure transducer
NASA Astrophysics Data System (ADS)
Wang, Zhongyu; Wang, Zhuoran; Li, Qiang
2016-01-01
Pressure transducers are widely used in industry, and a well-calibrated transducer improves the performance of the precision instruments it serves within a closed mechanical loop. Calibration is the key to ensuring that a pressure transducer achieves high precision and good dynamic characteristics. Unfortunately, current calibration methods can usually be applied only under well-controlled laboratory conditions, and only one pressure transducer can be calibrated at a time, so calibration efficiency falls short of the requirements of modern, high-throughput industry. A dynamic, fast calibration technology, comprising a calibration device and a corresponding data-processing method, is proposed in this paper. First, the pressure transducers to be calibrated are placed in a small cavity chamber; the calibration process consists of a single loop, and the outputs of each calibrated transducer are recorded automatically by the control terminal. Second, LabVIEW programming is used for data acquisition and processing, so the repeatability and nonlinearity indicators can be computed directly. Finally, several pressure transducers are calibrated simultaneously in an experiment to verify the suggested calibration technology. The experimental results show that this method can be used to calibrate pressure transducers in practical engineering measurement.
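The repeatability and nonlinearity indicators mentioned in the abstract can be computed directly from the recorded outputs. The sketch below assumes a hypothetical data layout (repeated runs by pressure points) and expresses both indicators as a percentage of full scale; it is an illustration, not the paper's LabVIEW implementation.

```python
import numpy as np

def calibration_indicators(pressures, outputs):
    """Nonlinearity and repeatability (% of full scale) from calibration runs.

    pressures: sequence of n_points applied pressures
    outputs:   array-like of shape (n_runs, n_points), transducer readings
    """
    pressures = np.asarray(pressures, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    mean_out = outputs.mean(axis=0)
    slope, intercept = np.polyfit(pressures, mean_out, 1)   # best-fit straight line
    full_scale = mean_out.max() - mean_out.min()
    # nonlinearity: worst deviation of the mean curve from the straight line
    nonlinearity = np.max(np.abs(mean_out - (slope * pressures + intercept))) / full_scale * 100.0
    # repeatability: worst spread across runs at any single pressure point
    repeatability = np.max(outputs.max(axis=0) - outputs.min(axis=0)) / full_scale * 100.0
    return nonlinearity, repeatability
```

For a perfectly linear, perfectly repeatable transducer both indicators are zero; any bow in the mean curve or scatter between runs shows up directly.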
Quantum dynamics by the constrained adiabatic trajectory method
Leclerc, A.; Jolicard, G.; Guerin, S.; Killingbeck, J. P.
2011-03-15
We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schrödinger equation while constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose form is derived from the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems, since its eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.
Applications of the molecular dynamics flexible fitting method.
Trabuco, Leonardo G; Schreiner, Eduard; Gumbart, James; Hsin, Jen; Villa, Elizabeth; Schulten, Klaus
2011-03-01
In recent years, cryo-electron microscopy (cryo-EM) has established itself as a key method in structural biology, permitting the structural characterization of large biomolecular complexes in various functional states. The data obtained through single-particle cryo-EM has recently seen a leap in resolution thanks to landmark advances in experimental and computational techniques, resulting in sub-nanometer resolution structures being obtained routinely. The remaining gap between these data and revealing the mechanisms of molecular function can be closed through hybrid modeling tools that incorporate known atomic structures into the cryo-EM data. One such tool, molecular dynamics flexible fitting (MDFF), uses molecular dynamics simulations to combine structures from X-ray crystallography with cryo-EM density maps to derive atomic models of large biomolecular complexes. The structures furnished by MDFF can be used subsequently in computational investigations aimed at revealing the dynamics of the complexes under study. In the present work, recent applications of MDFF are presented, including the interpretation of cryo-EM data of the ribosome at different stages of translation and the structure of a membrane-curvature-inducing photosynthetic complex. PMID:20932910
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization, and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive, and self-learning characteristics of EM, which are inspired by biological natural selection, mutation, and genetic inheritance. The performance of the new method is demonstrated with various numerical tests on the classic chaotic model, the Lorenz equations (Lorenz 1963). The results indicate that the new method provides fast and effective parameter estimation whether some or all parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Since noise is inevitable in observational data, the influence of observational noise on the performance of the presented method has also been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g., an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
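A minimal evolution-strategy sketch of the idea above, assuming a single unknown parameter (sigma), a forward-Euler integrator, and an elitist selection-mutation loop; these are illustrative choices, not the authors' exact EM algorithm.

```python
import numpy as np

def lorenz_traj(sigma, rho=28.0, beta=8.0 / 3.0, dt=0.01, steps=100):
    """Short Lorenz trajectory from a fixed initial state (Euler: fine for a sketch)."""
    x, y, z = 1.0, 1.0, 1.0
    out = []
    for _ in range(steps):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append((x, y, z))
    return np.array(out)

def fit_sigma(observed, rng, pop=20, gens=30):
    """Elitist (mu+lambda)-style search: selection, mutation, decaying spread."""
    candidates = rng.uniform(2.0, 20.0, size=pop)
    spread = 2.0
    for _ in range(gens):
        loss = [np.mean((lorenz_traj(s) - observed) ** 2) for s in candidates]
        elite = candidates[np.argsort(loss)[:5]]                      # selection
        children = rng.choice(elite, pop - 5) + rng.normal(0, spread, pop - 5)
        candidates = np.concatenate([elite, children])                # mutation
        spread *= 0.8                                                 # anneal mutation
    loss = [np.mean((lorenz_traj(s) - observed) ** 2) for s in candidates]
    return float(candidates[int(np.argmin(loss))])

observed = lorenz_traj(10.0)                     # synthetic "data" with sigma = 10
sigma_hat = fit_sigma(observed, np.random.default_rng(0))
```

A short fitting horizon keeps the mean-squared trajectory misfit a smooth function of sigma; over long horizons chaotic divergence would make the loss surface rugged.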
3-D dynamic rupture simulations by a finite volume method
NASA Astrophysics Data System (ADS)
Benjemaa, M.; Glinsky-Olivier, N.; Cruz-Atienza, V. M.; Virieux, J.
2009-07-01
Dynamic rupture of a 3-D spontaneous crack of arbitrary shape is investigated using a finite volume (FV) approach. The full domain is decomposed into tetrahedra, while the surface on which the rupture takes place is discretized with triangles that are faces of tetrahedra. First, the elastodynamic equations are recast in a pseudo-conservative form for easy application of the FV discretization. Explicit boundary conditions are given using criteria based on the conservation of discrete energy through the crack surface. Using a stress-threshold criterion, these conditions specify fluxes through those triangles that have suffered rupture. On these broken surfaces, stress follows a linear slip-weakening law, although other friction laws can be implemented. For Problem Version 3 of the dynamic-rupture code verification exercise conducted by the SCEC/USGS, numerical solutions on a planar fault exhibit a very high convergence rate and are in good agreement with the reference solution provided by a finite difference (FD) technique. For a non-planar fault of parabolic shape, numerical solutions agree well with those obtained with a semi-analytical boundary integral method in terms of shear stress amplitudes, stopping-phase arrival times, and stress overshoots. Differences between solutions are attributed to the low-order interpolation of the FV approach, whose results are particularly sensitive to mesh regularity (structured/unstructured). We expect this method, which is well adapted to multiprocessor parallel computing, to become competitive for solving large-scale dynamic rupture scenarios of seismic sources in the near future.
Satellite attitude dynamics and estimation with the implicit midpoint method
NASA Astrophysics Data System (ADS)
Hellström, Christian; Mikkola, Seppo
2009-07-01
We describe the application of the implicit midpoint integrator to the problem of attitude dynamics for low-altitude satellites without the use of quaternions. Initially, we consider the satellite to rotate without external torques applied to it. We compare the numerical solution with the exact solution in terms of Jacobi's elliptic functions. Then, we include the gravity-gradient torque, where the implicit midpoint integrator proves to be a fast, simple and accurate method. Higher-order versions of the implicit midpoint scheme are compared to Gauss-Legendre Runge-Kutta methods in terms of accuracy and processing time. Finally, we investigate the performance of a parameter-adaptive Kalman filter based on the implicit midpoint integrator for the determination of the principal moments of inertia through observations.
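A minimal sketch of the implicit midpoint scheme discussed above, on the harmonic oscillator rather than the paper's attitude equations; the implicit stage is solved by fixed-point iteration. The scheme conserves quadratic invariants such as energy, which is what makes it attractive for rigid-body dynamics.

```python
import numpy as np

def implicit_midpoint(f, y0, h, steps, tol=1e-12):
    """Integrate y' = f(y) with the implicit midpoint rule:
    y_{n+1} = y_n + h * f((y_n + y_{n+1}) / 2), solved by fixed-point iteration."""
    y = np.array(y0, dtype=float)
    path = [y.copy()]
    for _ in range(steps):
        y_next = y + h * f(y)                    # explicit Euler predictor
        for _ in range(50):                      # fixed-point iteration on the stage
            y_new = y + h * f(0.5 * (y + y_next))
            if np.max(np.abs(y_new - y_next)) < tol:
                break
            y_next = y_new
        y = y_next
        path.append(y.copy())
    return np.array(path)

osc = lambda y: np.array([y[1], -y[0]])          # x' = v, v' = -x
path = implicit_midpoint(osc, [1.0, 0.0], h=0.1, steps=1000)
energy = 0.5 * (path[:, 0] ** 2 + path[:, 1] ** 2)
```

Over 1000 steps the energy drift stays at the fixed-point tolerance rather than growing secularly, in contrast to explicit Runge-Kutta methods of comparable cost.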
Photobleaching Methods to Study Golgi Complex Dynamics in Living Cells
Snapp, Erik Lee
2014-01-01
The Golgi complex (GC) is a highly dynamic organelle that constantly receives and exports proteins and lipids from both the endoplasmic reticulum and the plasma membrane. While protein trafficking can be monitored with traditional biochemical methods, these approaches average the behaviors of millions of cells, provide modest temporal information and no spatial information. Photobleaching methods enable investigators to monitor protein trafficking in single cells or even single GC stacks with subsecond precision. Furthermore, photobleaching can be exploited to monitor the behaviors of resident GC proteins to provide insight into mechanisms of retention and trafficking. In this chapter, general photobleaching approaches with laser scanning confocal microscopes are described. Importantly, the problems associated with many fluorescent proteins (FPs) and their uses in the secretory pathway are discussed and appropriate choices are suggested. For example, Enhanced Green Fluorescent Protein (EGFP) and most red FPs are extremely problematic. Finally, options for data analyses are described. PMID:24295308
Computational methods. [Calculation of dynamic loading to offshore platforms
Maeda, H. (Inst. of Industrial Science)
1993-02-01
With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is first discussed. General computational methods are then reviewed, the state of the art and the uncertainties of flow problems in offshore technology are surveyed, with problems categorized as developed, developing, or undeveloped, and future work is outlined. Marine hydrodynamics consists of fluid dynamics at and below the water surface. It covers not only hydrodynamics proper, such as cavitation and underwater noise, but also aerodynamics, such as wind load and current-wave-wind interaction; multi-phase flow, such as two-phase flow in pipes, air bubbles in water, and surface and internal waves; and magneto-hydrodynamics, such as propulsion based on superconductivity. Among these, two key concepts are singled out as the identification of marine hydrodynamics in offshore technology: the free surface and vortex shedding.
Dynamic permeability of porous media by the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Pazdniakou, A.; Adler, P. M.
2013-12-01
The lattice Boltzmann method (LBM) is applied to calculate the dynamic permeability K(ω) of porous media; an oscillating macroscopic pressure gradient is imposed in order to generate oscillating flows. The LBM simulation yields the time-dependent seepage velocity of amplitude A and phase shift B, which are used to calculate K(ω). The procedure is validated for plane Poiseuille flows, where excellent agreement with the analytical solution is obtained. The limitations of the method are discussed. When the ratio between the kinematic viscosity and the characteristic size of the pores is high, the corresponding Knudsen number Kn is high and the numerical values of K(ω) are incorrect, with a positive imaginary part; only when Kn is small enough are correct values obtained. The influence of the time discretization of the oscillating body force is studied; simulation results are degraded by an insufficient discretization, i.e., it is necessary to avoid using too high frequencies. The influence of absolute errors in the seepage velocity amplitude δA and the phase shift δB on K(ω) shows that for high ω even small errors in B can cause drastic errors in Re K(ω). The dynamic permeability of reconstructed and real (sandstone) porous media is calculated for a large range of frequencies, and the universal scaling behavior is verified. Very good agreement with the theoretical predictions is observed.
Some methods for dynamic analysis of the scalp recorded EEG.
Pribram, K H; King, J S; Pierce, T W; Warren, A
1996-01-01
This paper describes methods for quantifying the spatiotemporal dynamics of EEG. Development of these methods was motivated by watching computer-generated animations of EEG voltage records. These animations contain a wealth of information about the pattern of change across time in the voltages observed across the surface of the scalp. In an effort to quantify this pattern of changing voltages, we elected to extract a single quantifiable feature from each measurement epoch: the highest squared voltage among the various electrode sites. Nineteen channels of EEG were collected from subjects using an electrode cap with standard 10-20 system placements. Two-minute records were obtained, each sampled at 200 samples per second, and 30 seconds of artifact-free data were extracted from each record. An algorithm then determined the location of the channel with the greatest amplitude for each 5 msec sampling epoch. We quantified these spatiotemporal dynamics as scalars, vectors, and cluster-analytic plots of EEG activity for finger tapping, cognitive effort (counting backwards), and relaxation to illustrate the utility of the techniques. PMID:8813416
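The core feature extraction described above reduces to an argmax of squared voltage across channels at each sampling epoch; a sketch, with array shapes and synthetic data as assumptions.

```python
import numpy as np

def peak_channel_sequence(eeg):
    """eeg: array of shape (n_channels, n_samples); for each sampling epoch,
    return the index of the electrode with the highest squared voltage."""
    return np.argmax(np.asarray(eeg) ** 2, axis=0)

# 19 channels at 200 samples/s: one epoch = one 5 ms sample; synthetic data here
rng = np.random.default_rng(1)
eeg = rng.normal(size=(19, 200 * 30))          # 30 s of artifact-free "EEG"
hotspots = peak_channel_sequence(eeg)
counts = np.bincount(hotspots, minlength=19)   # how often each site "wins"
```

The resulting index sequence is the raw material for the scalar, vector, and cluster-analytic summaries the paper derives.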
A spatiotemporal characterization method for the dynamic cytoskeleton.
Alhussein, Ghada; Shanti, Aya; Farhat, Ilyas A H; Timraz, Sara B H; Alwahab, Noaf S A; Pearson, Yanthe E; Martin, Matthew N; Christoforou, Nicolas; Teo, Jeremy C M
2016-05-01
The significant gap between quantitative and qualitative understanding of cytoskeletal function is a pressing problem; microscopy and labeling techniques have improved qualitative investigations of localized cytoskeleton behavior, whereas quantitative analyses of whole cell cytoskeleton networks remain challenging. Here we present a method that accurately quantifies cytoskeleton dynamics. Our approach digitally subdivides cytoskeleton images using interrogation windows, within which box-counting is used to infer a fractal dimension (Df) to characterize spatial arrangement, and gray value intensity (GVI) to determine actin density. A partitioning algorithm further obtains cytoskeleton characteristics from the perinuclear, cytosolic, and periphery cellular regions. We validated our measurement approach on Cytochalasin-treated cells using transgenically modified dermal fibroblast cells expressing fluorescent actin cytoskeletons. This method differentiates between normal and chemically disrupted actin networks, and quantifies rates of cytoskeletal degradation. Furthermore, GVI distributions were found to be inversely proportional to Df, having several biophysical implications for cytoskeleton formation/degradation. We additionally demonstrated detection sensitivity of differences in Df and GVI for cells seeded on substrates with varying degrees of stiffness, and coated with different attachment proteins. This general approach can be further implemented to gain insights on dynamic growth, disruption, and structure of the cytoskeleton (and other complex biological morphology) due to biological, chemical, or physical stimuli. © 2016 Wiley Periodicals, Inc. PMID:27015595
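The two measurements named above, box-counting Df and GVI, can be sketched in a few lines; the power-of-two image size and the binary thresholding are simplifying assumptions, and the paper's interrogation-window partitioning is omitted.

```python
import numpy as np

def box_count_dimension(img):
    """Box-counting fractal dimension of a binary image with side a power of two:
    count occupied boxes at dyadic scales, then fit the log-log slope."""
    img = np.asarray(img) > 0
    n = img.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        blocks = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def gray_value_intensity(img):
    """Mean pixel intensity as a simple actin-density proxy."""
    return float(np.mean(img))
```

Sanity checks match intuition: a completely filled square gives Df = 2, and a single isolated pixel gives Df = 0.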
New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes
Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.
2014-01-01
Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long-term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing, over all DTRs, a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods, for example Q-learning, which attempt such maximization indirectly and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062
Efficient Pairwise Composite Likelihood Estimation for Spatial-Clustered Data
Bai, Yun; Kang, Jian; Song, Peter X.-K.
2015-01-01
Spatial-clustered data refer to high-dimensional correlated measurements collected from units or subjects that are spatially clustered. Such data arise frequently from studies in the social and health sciences. We propose a unified modeling framework, termed GeoCopula, to characterize both large-scale and small-scale variation for various data types, including continuous data, binary data, and count data as special cases. To overcome challenges in the estimation and inference for the model parameters, we propose an efficient composite likelihood approach in which estimation efficiency results from the construction of over-identified joint composite estimating equations. The statistical theory for the proposed estimation is developed by extending the classical theory of the generalized method of moments. A clear advantage of the proposed estimation method is its computational feasibility. We conduct several simulation studies to assess the performance of the proposed models and estimation methods for both Gaussian and binary spatial-clustered data; the results show a clear improvement in estimation efficiency over the conventional composite likelihood method. An illustrative data example is included to motivate and demonstrate the proposed method. PMID:24945876
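A toy sketch of the pairwise composite-likelihood principle (not GeoCopula itself), assuming standardized Gaussian margins and a single within-cluster correlation rho, estimated by grid search over summed bivariate-normal log likelihoods.

```python
import numpy as np

def pairwise_cl(rho, clusters):
    """Sum of bivariate standard-normal log likelihoods over all within-cluster
    pairs (additive constants dropped, since only the maximizer matters)."""
    ll = 0.0
    for c in clusters:
        for i in range(len(c)):
            for j in range(i + 1, len(c)):
                x, y = c[i], c[j]
                q = (x * x - 2 * rho * x * y + y * y) / (1 - rho ** 2)
                ll += -0.5 * np.log(1 - rho ** 2) - 0.5 * q
    return ll

def fit_rho(clusters, grid=np.linspace(-0.9, 0.9, 91)):
    scores = [pairwise_cl(r, clusters) for r in grid]
    return float(grid[int(np.argmax(scores))])

# Exchangeable Gaussian clusters with true within-cluster correlation 0.5
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 1))
clusters = np.sqrt(0.5) * shared + np.sqrt(0.5) * rng.normal(size=(200, 4))
```

Replacing the full joint likelihood with a sum over low-dimensional pairs is what makes the approach computationally feasible for high-dimensional clustered data.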
Modelling autoimmune rheumatic disease: a likelihood rationale.
Ulvestad, E
2003-07-01
Immunoglobulins (Igs) and autoantibodies are commonly tested in sera from patients with suspected rheumatic disease. To evaluate the clinical utility of the tests in combination, we investigated sera from 351 patients with autoimmune rheumatic disease (ARD), i.e., rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), and Sjögren's syndrome (SS), and 96 patients with nonautoimmune rheumatic disease (NAD), e.g., fibromyalgia and osteoarthritis. Antinuclear antibodies (ANA), rheumatoid factor (RF), antibodies against DNA and extractable nuclear antigens (anti-ENA), IgG, IgA, and IgM were measured for all patients. Logistic regression analysis of the test results was used to calculate each patient's probability of belonging to the ARD or NAD group, as well as likelihood ratios for disease. Test accuracy was investigated using receiver-operating characteristic (ROC) plots and nonparametric ROC analysis. Concentrations of IgG, IgA, and IgM, anti-DNA, and anti-ENA had no significant effect on diagnostic outcome. Probabilities for disease and likelihood ratios calculated by combining RF and ANA performed significantly better at predicting ARD than the diagnostic tests used in isolation (P < 0.001). At a cut-off level of P = 0.73 and likelihood ratio = 1, the logistic model gave a specificity of 93% and a sensitivity of 75% for the differentiation between ARD and NAD. When compared at the same level of specificity, ANA gave a sensitivity of 37% and RF a sensitivity of 56.6%. Dichotomizing ANA and RF as positive or negative did not reduce the performance characteristics of the model. Combining results from serological analysis of ANA and RF according to this model will increase the diagnostic utility of the tests in rheumatological practice. PMID:12828565
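The logistic combination of two dichotomized tests can be sketched with synthetic data (the prevalences and test sensitivities below are hypothetical, not the study's): fit the two-test model by Newton-Raphson and convert the post-test probability into a likelihood ratio against the pre-test odds.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson (IRLS) fit of a logistic model with intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        W = p * (1 - p)
        H = X1.T @ (X1 * W[:, None])                 # observed information
        beta += np.linalg.solve(H, X1.T @ (y - p))   # Newton step
    return beta

def post_test_probability(beta, ana, rf):
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * ana + beta[2] * rf)))

# Hypothetical cohort: two binary tests, both more often positive in disease
rng = np.random.default_rng(2)
n = 2000
disease = rng.random(n) < 0.5
ana = (rng.random(n) < np.where(disease, 0.6, 0.1)).astype(float)
rf = (rng.random(n) < np.where(disease, 0.7, 0.1)).astype(float)
beta = fit_logistic(np.column_stack([ana, rf]), disease.astype(float))

p_both = post_test_probability(beta, 1, 1)
lr_both = (p_both / (1 - p_both)) / (0.5 / 0.5)      # post-test odds / pre-test odds
```

The likelihood ratio falls out of the fitted probability: dividing post-test odds by pre-test odds gives the evidence contributed by the combined test result.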
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term as a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both convergence and computational expense. Numerical experiments on a set of problems selected from the literature illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed with the direct differentiation sensitivity analysis method.
Dynamic characterization of satellite components through non-invasive methods
Mullins, Joshua G; Wiest, Heather K; Mascarenas, David D. L.; Macknelly, David
2010-10-21
The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. This harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modelling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.
A note on the asymptotic distribution of likelihood ratio tests to test variance components.
Visscher, Peter M
2006-08-01
When using maximum likelihood methods to estimate genetic and environmental components of (co)variance, it is common to test hypotheses using likelihood ratio tests, since such tests have desirable asymptotic properties. In particular, the standard likelihood ratio test statistic is assumed asymptotically to follow a chi2 distribution with degrees of freedom equal to the number of parameters tested. Using the relationship between least squares and maximum likelihood estimators for balanced designs, it is shown why the asymptotic distribution of the likelihood ratio test for variance components does not follow a chi2 distribution with degrees of freedom equal to the number of parameters tested when the null hypothesis is true. Instead, the distribution of the likelihood ratio test is a mixture of chi2 distributions with different degrees of freedom. Implications for testing variance components in twin designs and for quantitative trait loci mapping are discussed. The appropriate distribution of the likelihood ratio test statistic should be used in hypothesis testing and model selection. PMID:16899155
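The correction described above can be sketched for the simplest case, a single variance component tested on its boundary, where the likelihood ratio statistic follows a 50:50 mixture of chi-squared distributions with 0 and 1 degrees of freedom, so the naive 1-df p-value is exactly twice too large. A stdlib-only sketch:

```python
import math

def chi2_1_sf(q):
    """P(chi^2 with 1 df > q), via the complementary error function."""
    return math.erfc(math.sqrt(q / 2.0))

def mixture_p_value(lrt):
    """Boundary-adjusted p-value: 0.5 * point mass at 0 + 0.5 * chi^2_1."""
    return 0.5 * chi2_1_sf(lrt) if lrt > 0 else 1.0

naive = chi2_1_sf(3.84)        # ~0.05 under the (wrong) 1-df assumption
mixed = mixture_p_value(3.84)  # ~0.025 under the correct 50:50 mixture
```

Using the naive chi-squared reference therefore makes the test conservative, which is the practical point of the note for twin designs and QTL mapping.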
Maximum likelihood estimation for distributed parameter models of flexible spacecraft
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Williams, J. L.
1989-01-01
A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.
An implicit finite element method for discrete dynamic fracture
Jobie M. Gerken
1999-12-01
A method for modeling the discrete fracture of two-dimensional linear elastic structures with a distribution of small cracks subject to dynamic conditions has been developed. The foundation for this numerical model is a plane element formulated from the Hu-Washizu energy principle. The distribution of small cracks is incorporated into the numerical model by including a small crack at each element interface. The additional strain field in an element adjacent to such a crack is treated as an externally applied strain field in the Hu-Washizu energy principle. The resulting stiffness matrix is that of a standard plane element; the resulting load vector is that of a standard plane element with an additional term for the externally applied strain field. Except for the crack strain field equations, all terms of the stiffness matrix and load vector are integrated symbolically in Maple V, so that fully integrated plane stress and plane strain elements are constructed; the crack strain field equations are integrated numerically. The modeling of the dynamic behavior of simple structures was demonstrated to be within acceptable engineering accuracy. In models of the axial and transverse vibration of a beam and the breathing mode of vibration of a thin ring, the dynamic characteristics were shown to be within expected limits. The models dominated by tensile forces (the axially loaded beam and the pressurized ring) were within 0.5% of the theoretical values, while the shear-dominated model (the transversely loaded beam) was within 5% of the calculated theoretical value. The constant strain field of the tensile problems can be modeled exactly by the numerical model, so the numerical results should, in principle, be exact; the discrepancies can be accounted for by errors in the calculation of frequency from the numerical results. The linear strain field of the transverse model must be modeled by a series of constant strain elements. This is an approximation to the true strain field, so some
Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.
Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens
2016-01-01
In systems biology, one of the major tasks is to tailor model complexity to the information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small a model will not be able to describe the data, whereas a model that is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data have to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source, MATLAB-based modelling environment Data2Dynamics (http://www.data2dynamics.org/), as well as in the R packages dMod/cOde (https://github.com/dkaschek/). Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood. PMID:27588423
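The profile-likelihood machinery can be illustrated on a toy normal model (not one of the paper's systems-biology examples): profile out sigma in closed form, scan mu, and threshold twice the log-likelihood drop at the 95% chi-squared cutoff.

```python
import numpy as np

def profile_loglik_mu(x, mu_grid):
    """Profile log likelihood of the mean of a normal sample: for each mu, the
    inner maximization over sigma^2 has the closed form sigma^2 = SS(mu)/n."""
    n = len(x)
    ss = np.array([np.sum((x - mu) ** 2) for mu in mu_grid])
    sigma2_hat = ss / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

rng = np.random.default_rng(3)
x = rng.normal(5.0, 2.0, size=200)               # synthetic data, true mean 5
mu_grid = np.linspace(3.0, 7.0, 401)
pl = profile_loglik_mu(x, mu_grid)
inside = 2.0 * (pl.max() - pl) <= 3.84           # chi^2_1 cutoff at 95%
ci = (float(mu_grid[inside].min()), float(mu_grid[inside].max()))
```

A flat profile would signal a non-identifiable parameter and hence a reduction candidate; here the profile is sharply curved, so mu is well determined.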
How much to trust the senses: likelihood learning.
Sato, Yoshiyuki; Kording, Konrad P
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called the likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (the likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn the likelihood. PMID:25398975
LIKELIHOOD OF THE POWER SPECTRUM IN COSMOLOGICAL PARAMETER ESTIMATION
Sun, Lei; Wang, Qiao; Zhan, Hu
2013-11-01
The likelihood function is a crucial element of parameter estimation. In analyses of galaxy overdensities and weak lensing shear, one often approximates the likelihood of the power spectrum with a Gaussian distribution. The posterior probability derived from such a likelihood deviates considerably from the exact posterior on the largest scales probed by any survey, where the central limit theorem does not apply. We show that various forms of Gaussian likelihoods can have a significant impact on the estimation of the primordial non-Gaussianity parameter f_NL from the galaxy angular power spectrum. The Gaussian plus log-normal likelihood, which has been applied successfully in analyses of the cosmic microwave background, outperforms the Gaussian likelihoods. Nevertheless, even if the exact likelihood of the power spectrum is used, the estimated parameters may still be biased. As such, the likelihoods and estimators need to be thoroughly examined for potential systematic errors.
Space station static and dynamic analyses using parallel methods
NASA Technical Reports Server (NTRS)
Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S.
1993-01-01
Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques, which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degrees-of-freedom (DOF) FEMs. The high-performance computers at NASA afforded effective software development and testing, and efficient, accurate solution with timely system response and graphical interpretation of results rarely found in industry. Based on the authors' experience, similar cooperation between industry and government should be encouraged for similar large-scale projects in the future.
Applications of Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.
2004-01-01
Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the work.
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Indentation Measurements to Validate Dynamic Elasticity Imaging Methods.
Altahhan, Khaldoon N; Wang, Yue; Sobh, Nahil; Insana, Michael F
2016-09-01
We describe macro-indentation techniques for estimating the elastic modulus of soft hydrogels. Our study describes (a) conditions under which quasi-static indentation can validate dynamic shear-wave imaging estimates and (b) how each of these techniques uniquely biases modulus estimates as they couple to the sample geometry. Harmonic shear waves between 25 and 400 Hz were imaged using ultrasonic Doppler and optical coherence tomography methods to estimate shear dispersion. From the shear-wave speed of sound, average elastic moduli of homogeneous samples were estimated. These results are compared directly with macroscopic indentation measurements made in two ways. One set of measurements applied Hertzian theory to the loading phase of the force-displacement curves using samples treated to minimize surface adhesion forces. A second set of measurements applied Johnson-Kendall-Roberts theory to the unloading phase of the force-displacement curve when surface adhesions were significant. All measurements were made using gelatin hydrogel samples of different sizes and concentrations. Agreement within 5% among elastic modulus estimates was achieved for a range of experimental conditions. Consequently, a simple quasi-static indentation measurement using a common gel can provide elastic modulus measurements that help validate dynamic shear-wave imaging estimates. PMID:26376923
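The Hertzian loading-phase analysis can be sketched for a spherical indenter as follows (an illustrative reconstruction, not the authors' code; Hertz theory gives F = (4/3)·E/(1−ν²)·√R·δ^{3/2}, and incompressibility, ν = 0.5, is a common assumption for hydrogels):

```python
import numpy as np

def hertz_modulus(force, depth, R, nu=0.5):
    """Young's modulus from a spherical-indenter loading curve.

    A least-squares line through the origin of force vs. depth**1.5
    gives the Hertzian prefactor (4/3) * E / (1 - nu**2) * sqrt(R),
    from which E follows.
    """
    x = depth ** 1.5
    slope = np.sum(x * force) / np.sum(x * x)
    return 0.75 * slope * (1.0 - nu ** 2) / np.sqrt(R)

# Synthetic check: a 10 kPa gel probed with a 2 mm radius indenter.
E_true, R = 1.0e4, 2.0e-3
d = np.linspace(0.0, 5.0e-4, 50)
F = (4.0 / 3.0) * E_true / (1.0 - 0.25) * np.sqrt(R) * d ** 1.5
print(round(hertz_modulus(F, d, R)))  # 10000
```

The JKR unloading analysis used for adhesive samples replaces this prefactor with one that accounts for the work of adhesion and is not shown here.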
Maximum likelihood decoding of Reed Solomon Codes
Sudan, M.
1996-12-31
We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed-Solomon codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
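The decoding task itself is easy to state in code. The sketch below solves it by exhaustive search over a small prime field, for intuition only; Sudan's contribution is producing the same output list in polynomial time via bivariate interpolation:

```python
from itertools import product

def eval_poly(coeffs, x, p):
    # Horner evaluation over GF(p); coeffs are low-order first.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def list_decode_bruteforce(points, d, t, p):
    """All polynomials of degree <= d over GF(p) agreeing with >= t points.

    Exhaustive toy version of the list-decoding task described in the
    abstract; feasible only for tiny fields and degrees.
    """
    hits = []
    for coeffs in product(range(p), repeat=d + 1):
        agree = sum(1 for x, y in points if eval_poly(coeffs, x, p) == y)
        if agree >= t:
            hits.append(coeffs)
    return hits

# f(x) = 1 + 2x over GF(7), with the point at x = 3 corrupted.
pts = [(0, 1), (1, 3), (2, 5), (3, 4), (4, 2)]
print(list_decode_bruteforce(pts, d=1, t=4, p=7))  # [(1, 2)]
```

The returned coefficient tuple (1, 2) is the unique line agreeing with four of the five points, i.e., the maximum-likelihood codeword despite one error.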
CORA: Emission Line Fitting with Maximum Likelihood
NASA Astrophysics Data System (ADS)
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data-reduction software with its associated steep learning curve. In many cases, a simple tool that does just one task, and does it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows line fluxes to be obtained efficiently. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
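The Poisson-likelihood amplitude fit that CORA performs can be sketched as follows (a toy fixed-point iteration for a single line amplitude on a known background, not CORA's actual update rule):

```python
import numpy as np

def fit_line_flux(counts, profile, background, n_iter=200):
    """Maximum-likelihood line amplitude under Poisson statistics.

    Per-bin model: m_i = A * profile_i + background_i.  The ML
    stationarity condition sum_i profile_i * (counts_i / m_i - 1) = 0
    is solved by a multiplicative fixed-point iteration.
    """
    A = max(counts.sum() / profile.sum(), 1e-6)
    for _ in range(n_iter):
        m = A * profile + background
        A *= np.sum(profile * counts / m) / np.sum(profile)
    return A

# Noise-free sanity check: a Gaussian line of amplitude 5 on a flat
# background of 2 counts per bin is recovered exactly.
x = np.linspace(-3.0, 3.0, 61)
profile = np.exp(-0.5 * x**2)
background = np.full_like(profile, 2.0)
counts = 5.0 * profile + background
print(round(fit_line_flux(counts, profile, background), 3))  # 5.0
```

Unlike a chi-square fit, this estimator remains unbiased at the very low counts per bin typical of grating spectra.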
CORA - emission line fitting with Maximum Likelihood
NASA Astrophysics Data System (ADS)
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data-reduction software with its associated steep learning curve. In many cases, a simple tool that does just one task, and does it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows line fluxes to be obtained efficiently. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
Maximum likelihood analysis of bubble incidence for mixed gas diving.
Tikuisis, P; Gault, K; Carrod, G
1990-03-01
The method of maximum likelihood has been applied to predict the incidence of bubbling in divers for both air and helium diving. Data were obtained from 108 air man-dives and 622 helium man-dives conducted experimentally in a hyperbaric chamber. Divers were monitored for bubbles using Doppler ultrasonics during the period from surfacing until approximately 2 h after surfacing. Bubble grades were recorded according to the K-M code, and the maximum value in the precordial region for each diver was used in the likelihood analysis. Prediction models were based on monoexponential gas kinetics using one and two parallel-compartment configurations. The model parameters were of three types: gas kinetics, gas potency, and compartment gain. When the potency of the gases was not distinguished, the risk criterion used was inherently based on the gas supersaturation ratio, otherwise it was based on the potential bubble volume. The two-compartment model gave a significantly better prediction than the one-compartment model only if the kinetics of nitrogen and helium were distinguished. A further significant improvement with the two-compartment model was obtained when the potency of the two gases was distinguished, thereby making the potential bubble volume criterion a better choice than the gas pressure criterion. The results suggest that when the method of maximum likelihood is applied for the prediction of the incidence of bubbling, more than one compartment should be used and if more than one is used consideration should be given to distinguishing the potencies of the inert gases. PMID:2181767
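The likelihood machinery behind such an analysis can be sketched with a deliberately simplified risk model (a Bernoulli likelihood with a logistic risk function of the supersaturation ratio; the parameters and grid search below are illustrative stand-ins for the paper's compartment models, not its actual formulation):

```python
import math

def bernoulli_loglik(a, b, ratios, bubbled):
    """Log-likelihood of observed bubble outcomes.

    Per-dive bubbling risk is modeled as a logistic function of the
    supersaturation ratio r, with hypothetical slope a and threshold b.
    """
    ll = 0.0
    for r, y in zip(ratios, bubbled):
        p = 1.0 / (1.0 + math.exp(-a * (r - b)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        ll += math.log(p if y else 1.0 - p)
    return ll

def fit_ml(ratios, bubbled):
    # Coarse grid search for the maximum-likelihood (a, b).
    grid = ((a / 2.0, b / 20.0)
            for a in range(1, 41) for b in range(10, 61))
    return max(grid, key=lambda ab: bernoulli_loglik(*ab, ratios, bubbled))

ratios = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
bubbled = [0, 0, 0, 1, 1, 1]
a_hat, b_hat = fit_ml(ratios, bubbled)
print(b_hat)  # threshold estimate near 1.5
```

Comparing maximized log-likelihoods between nested risk models (one vs. two compartments, shared vs. distinct gas potencies) is exactly the model-selection step the abstract describes.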
Dynamically controlled crystallization method and apparatus and crystals obtained thereby
NASA Technical Reports Server (NTRS)
Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)
1999-01-01
A method and apparatus for dynamically controlling the crystallization of proteins including a crystallization chamber or chambers for holding a protein in a salt solution, one or more salt solution chambers, two communication passages respectively coupling the crystallization chamber with each of the salt solution chambers, and transfer mechanisms configured to respectively transfer salt solution between each of the salt solution chambers and the crystallization chamber. The transfer mechanisms are interlocked to maintain the volume of salt solution in the crystallization chamber substantially constant. Salt solution of different concentrations is transferred into and out of the crystallization chamber to adjust the salt concentration in the crystallization chamber to achieve precise control of the crystallization process.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Two topics are emphasized with respect to large space structures: quantifying the uncertainty of frequency response using the fuzzy set method, and predicting on-orbit response by using laboratory test data to refine an analytical model. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Implementing efficient dynamic formal verification methods for MPI programs.
Vakkalanka, S.; DeLisi, M.; Gopalakrishnan, G.; Kirby, R. M.; Thakur, R.; Gropp, W.; Mathematics and Computer Science; Univ. of Utah; Univ. of Illinois
2008-01-01
We examine the problem of formally verifying MPI programs for safety properties through an efficient dynamic (runtime) method in which the processes of a given MPI program are executed under the control of an interleaving scheduler. To ensure full coverage for given input test data, the algorithm must take into consideration MPI's out-of-order completion semantics. The algorithm must also ensure that nondeterministic constructs (e.g., MPI wildcard receive matches) are executed in all possible ways. Our new algorithm rewrites wildcard receives to specific receives, one for each sender that can potentially match with the receive. It then recursively explores each case of the specific receives. The list of potential senders matching a receive is determined through a runtime algorithm that exploits MPI's operation ordering semantics. Our verification tool ISP that incorporates this algorithm efficiently verifies several programs and finds bugs missed by existing informal verification tools.
Modern wing flutter analysis by computational fluid dynamics methods
NASA Technical Reports Server (NTRS)
Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.
1988-01-01
The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, a first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Freitas, Vânia; van der Veer, Henk W.; van der Meer, Jaap; Wijsman, Johannes W. M.; Pecquerie, Laure; Kooijman, Sebastiaan A. L. M.
2011-11-01
The Dynamic Energy Budget (DEB) theory for metabolic organisation captures the processes of development, growth, maintenance, reproduction and ageing for any kind of organism throughout its life-cycle. However, the application of DEB theory is challenging because the state variables and parameters are abstract quantities that are not directly observable. Here we present a new approach to parameter estimation, the covariation method, that permits all parameters of the standard Dynamic Energy Budget (DEB) model to be estimated from standard empirical datasets. Parameter estimates are based on the simultaneous minimization of a weighted sum of squared deviations between a number of data sets and model predictions, or the minimization of the negative log-likelihood function, both in a single-step procedure. The structure of DEB theory permits the unusual situation of using single data points (such as the maximum reproduction rate), which we call "zero-variate" data, for estimating parameters. We also introduce the concept of "pseudo-data", exploiting the rules for the covariation of parameter values among species that are implied by the standard DEB model. This allows us to introduce the concept of a generalised animal, which has specified parameter values. Here we outline the philosophy behind the approach and its technical implementation. In a companion paper, we assess the behaviour of the estimation procedure and present preliminary findings of emerging patterns in parameter values across diverse taxa.
Introduction to finite-difference methods for numerical fluid dynamics
Scannapieco, E.; Harlow, F.H.
1995-09-01
This work is intended to be a beginner's exercise book for the study of basic finite-difference techniques in computational fluid dynamics. It is written for a student level ranging from high-school senior to university senior. Equations are derived from basic principles using algebra. Some discussion of partial-differential equations is included, but knowledge of calculus is not essential. The student is expected, however, to have some familiarity with the FORTRAN computer language, as the syntax of the computer codes themselves is not discussed. Topics examined in this work include: one-dimensional heat flow, one-dimensional compressible fluid flow, two-dimensional compressible fluid flow, and two-dimensional incompressible fluid flow with additions of the equations of heat flow and the k-ε model for turbulence transport. Emphasis is placed on numerical instabilities and methods by which they can be avoided, techniques that can be used to evaluate the accuracy of finite-difference approximations, and the writing of the finite-difference codes themselves. Concepts introduced in this work include: flux and conservation, implicit and explicit methods, Lagrangian and Eulerian methods, shocks and rarefactions, donor-cell and cell-centered advective fluxes, compressible and incompressible fluids, the Boussinesq approximation for heat flow, Cartesian tensor notation, the Boussinesq approximation for the Reynolds stress tensor, and the modeling of transport equations. A glossary is provided which defines these and other terms.
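The flavor of the first exercise, translated from FORTRAN to Python for brevity, can be sketched with the explicit (FTCS) scheme for one-dimensional heat flow, including the stability constraint the text emphasizes:

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit (FTCS) finite-difference step of the 1-D heat equation.

    Stability requires alpha * dt / dx**2 <= 0.5; violating it produces
    exactly the numerical instability the text warns about.
    """
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    T_new = T.copy()  # endpoints stay fixed (Dirichlet boundaries)
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new

# A temperature spike between two cold walls diffuses away.
T = np.zeros(11)
T[5] = 100.0
for _ in range(500):
    T = heat_step(T, alpha=1.0, dx=1.0, dt=0.4)
print(T.round(2))  # essentially zero everywhere
```

Writing the update in flux form, as the book's donor-cell discussion does, makes the conservation property explicit; the array-slice version above is the same arithmetic.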
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
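The low-order truncation at the heart of HDMR can be sketched with a first-order cut-HDMR surrogate (a generic illustration of the expansion, not the fuzzy finite-element implementation in the paper):

```python
import numpy as np

def hdmr_first_order(f, c, X):
    """First-order cut-HDMR surrogate around a reference point c.

    f(x) ~= f(c) + sum_i [ f(c with component i set to x_i) - f(c) ].
    Exact for additive functions; the weak-interaction assumption in
    the abstract is what justifies truncating at low order.
    """
    c = np.asarray(c, dtype=float)
    f0 = f(c)
    approx = []
    for x in np.atleast_2d(X):
        s = f0
        for i in range(c.size):
            ci = c.copy()
            ci[i] = x[i]       # one-variable cut through the reference point
            s += f(ci) - f0
        approx.append(s)
    return np.array(approx)

# Additive test function: the first-order surrogate reproduces it exactly.
f = lambda v: v[0] ** 2 + 3.0 * v[1] - v[2]
X = np.array([[1.0, 2.0, 3.0], [0.5, -1.0, 2.0]])
print(hdmr_first_order(f, [0.0, 0.0, 0.0], X).tolist())  # [4.0, -4.75]
```

The cost scales with the number of one-variable cuts rather than with the full parameter grid, which is the polynomial-vs-exponential saving the abstract cites.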
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data is fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. Because the spatial region used in borderland research usually covers several neighboring countries' borderland regions, the data is difficult for any single research institution or government to acquire. VGI has proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, and is therefore a reasonable source of borderland spatial data. OpenStreetMap (OSM) is the best-known and most successful VGI resource, but the OSM data model differs substantially from traditional authoritative geographic information, so OSM data must be converted to the researcher's customized data model. Because the real world changes fast, the converted data also needs to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine-learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file is presented, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
An extinction/reignition dynamic method for turbulent combustion
NASA Astrophysics Data System (ADS)
Knaus, Robert; Pantano, Carlos
2011-11-01
Quasi-randomly distributed locations of high strain in turbulent combustion can cause a nonpremixed or partially premixed flame to develop local regions of extinction called "flame holes". The presence and extent of these holes can increase certain pollutants and reduce the amount of fuel burned. Accurately modeling the dynamics of these interacting regions can improve the accuracy of combustion simulations by effectively incorporating finite-rate chemistry effects. In the proposed method, the flame hole state is characterized by a progress variable that nominally exists on the stoichiometric surface. The evolution of this field is governed by a partial-differential equation embedded in the time-dependent two-manifold of the flame surface. This equation includes advection, propagation, and flame hole formation (flame hole healing or collapse is handled naturally by propagation). We present a computational algorithm that solves this equation by embedding it in the usual three-dimensional space. A piecewise parabolic WENO scheme combined with a compression algorithm is used to evolve the flame hole progress variable. A key aspect of the method is the extension of the surface data to the three-dimensional space in an efficient manner. We present results of this method applied to canonical turbulent combusting flows where the flame holes interact and describe their statistics.
The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-03-22
We derived and implemented the ONIOM-molecular dynamics (MD) method for biochemical applications. The implementation allows the characterization of the functions of real enzymes taking account of their thermal motion. In this method, the direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site. TM and MA were supported in part by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy DOE. Battelle operates Pacific Northwest National Laboratory for DOE.
Communications Overlapping in Fast Multipole Particle Dynamics Methods
Kurzak, Jakub; Pettitt, Bernard M.
2005-03-01
The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. In molecular dynamics the fast multipole method (FMM) is an attractive alternative to Ewald summation for calculating electrostatic interactions due to the operation counts. However when applied to small particle systems and taken to many processors it has a high demand for interprocessor communication. In a distributed memory environment this demand severely limits applicability of the FMM to systems with O(10 K atoms). We present an algorithm that allows for fine grained overlap of communication and computation, while not sacrificing synchronization and determinism in the equations of motion. The method avoids contention in the communication subsystem making it feasible to use the FMM for smaller systems on larger numbers of processors. Our algorithm also facilitates application of multiple time stepping techniques within the FMM. We present scaling at a reasonably high level of accuracy compared with optimized Ewald methods.
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.
Comprehensive boundary method for solid walls in dissipative particle dynamics
Visser, D.C. (visser@science.uva.nl); Hoefsloot, H.C.J.; Iedema, P.D. (piet@science.uva.nl)
2005-05-20
Dissipative particle dynamics (DPD) is a particle-based mesoscopic simulation technique, especially useful for studying hydrodynamic behaviour in the field of complex fluid flow. Most studies with DPD have focused on bulk behaviour by considering a part of an infinite region using periodic boundaries. To model a finite system instead, boundary conditions of the solid walls confining the system must be addressed. These conditions depend on the time and length scales of the phenomena studied, i.e., the level of coarse graining. Here we focus on a mesoscopic level at which small-scale atomistic effects near the wall are no longer visible. At this, more macroscopic, level a solid wall should be impenetrable, show no-slip and should not affect the fluid properties. Solid walls used in previous studies were unable to meet all three of these conditions, or met them with limited success. Here, we describe a method to create solid walls that does satisfy all requirements, producing the correct boundary conditions. The introduction of periodic conditions for curved boundaries makes this new wall method fit for curved geometries as well, and an improved reflection mechanism makes the walls impenetrable without causing side effects. The method described here could also be implemented in other particle-based models.
Numerical Simulations of Granular Dynamics: Method and Tests
NASA Astrophysics Data System (ADS)
Richardson, Derek C.; Walsh, K. J.; Murdoch, N.; Michel, P.; Schwartz, S. R.
2010-10-01
We present a new particle-based numerical method for the simulation of granular dynamics, with application to motions of particles (regolith) on small solar system bodies and planetary surfaces [1]. The method employs the parallel N-body tree code pkdgrav [2] to search for collisions and compute particle trajectories. Particle confinement is achieved by combining arbitrary combinations of four provided wall primitives, namely infinite plane, finite disk, infinite cylinder, and finite cylinder, and degenerate cases of these. Various wall movements, including translation, oscillation, and rotation, are supported. Several tests of the method are presented, including a model granular "atmosphere" that achieves correct energy equipartition, and a series of tumbler simulations that compare favorably with actual laboratory experiments [3]. DCR and SRS acknowledge NASA Grant No. NNX08AM39G and NSF Grant No. AST0524875; KJW, the Poincaré Fellowship at OCA; NM, Thales Alenia Space and The Open University; and PM and NM, the French Programme National de Planétologie. References: [1] Richardson et al. (2010), Icarus, submitted; [2] Cf. Richardson et al. (2009), P&SS 57, 183 and references therein; [3] Brucks et al. (2007), PRE 75, 032301-1-032301-4.
Likelihood-free Bayesian computation for structural model calibration: a feasibility study
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2016-04-01
Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for condition assessment. FE model updating is an inverse problem and prone to being ill-posed and ill-conditioned when there are many errors and uncertainties in both an FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly. Bayesian FE model updating is one of the well-known methods to quantify parameter uncertainty by updating our prior belief on the parameters with the available measurements. In Bayesian inference, likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements. Therefore, the likelihood should be carefully chosen to reflect the characteristics of the residuals. It is generally known that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be an independent identically distributed Gaussian distribution with zero mean and constant variance. However, this assumption may cause biased and over/underestimated parameter estimates, so that the uncertainty quantification and prediction become questionable. To alleviate the potential misuse of an inadequate likelihood, this study introduced approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing the behavioral similarities between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the results of the numerical study, we observed that likelihood-free Bayesian computation can quantify the updating parameters correctly, and that its predictive capability for measurements not used in the calibration is also maintained.
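The likelihood-free idea can be sketched with the most basic ABC rejection sampler (a toy scalar calibration, not the MCMC variant used in the paper):

```python
import numpy as np

def abc_rejection(simulate, observed, prior_draw, n, eps):
    """Likelihood-free rejection sampler (basic ABC).

    Keep prior draws whose simulated summary falls within eps of the
    observed summary; no explicit likelihood is ever evaluated.
    """
    kept = []
    for _ in range(n):
        theta = prior_draw()
        if abs(simulate(theta) - observed) < eps:
            kept.append(theta)
    return np.array(kept)

# Toy calibration: recover the mean of a noisy measurement process.
rng = np.random.default_rng(0)
simulate = lambda th: th + rng.normal(0.0, 0.1)
post = abc_rejection(simulate, observed=2.0,
                     prior_draw=lambda: rng.uniform(0.0, 4.0),
                     n=20000, eps=0.05)
print(post.mean())  # close to 2.0
```

The tolerance eps trades off approximation quality against acceptance rate, which is why practical applications move from rejection sampling to likelihood-free MCMC.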
Transfer entropy as a log-likelihood ratio.
Barnett, Lionel; Bossomaier, Terry
2012-09-28
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense. PMID:23030125
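In the Gaussian (vector-autoregressive) case discussed above, the transfer entropy estimate reduces to half the log ratio of residual variances from the restricted and full regressions, i.e., the per-sample log-likelihood ratio statistic. A sketch on a synthetic coupled pair of AR(1) processes (the coupling strength and lag structure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives y

def rss(design, target):
    """Residual sum of squares of an OLS regression."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return resid @ resid

target = y[1:]
restricted = y[:-1, None]                    # past of y only
full = np.column_stack([y[:-1], x[:-1]])     # past of y and past of x

# Per-sample log-likelihood ratio statistic for H0 "no flow x -> y";
# in the Gaussian case this estimates the transfer entropy T(x -> y).
te_hat = 0.5 * np.log(rss(restricted, target) / rss(full, target))
```

With the coupling set to zero, `te_hat` fluctuates near zero; 2n·te_hat is then the χ²-distributed likelihood ratio statistic mentioned in the abstract.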
Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty
ERIC Educational Resources Information Center
Lagattuta, Kristin Hansen; Sayfan, Liat
2011-01-01
Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…
Maximum-likelihood density modification using pattern recognition of structural motifs
Terwilliger, Thomas C.
2001-01-01
The likelihood-based approach to density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] is extended to include the recognition of patterns of electron density. Once a region of electron density in a map is recognized as corresponding to a known structural element, the likelihood of the map is reformulated to include a term that reflects how closely the map agrees with the expected density for that structural element. This likelihood is combined with other aspects of the likelihood of the map, including the presence of a flat solvent region and the electron-density distribution in the protein region. This likelihood-based pattern-recognition approach was tested using the recognition of helical segments in a largely helical protein. The pattern-recognition method yields a substantial phase improvement over both conventional and likelihood-based solvent-flattening and histogram-matching methods. The method can potentially be used to recognize any common structural motif and incorporate prior knowledge about that motif into density modification. PMID:11717487
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple-filter-based techniques and residual-based methods, including the multiple model and generalized likelihood ratio methods, are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
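A minimal sketch of the generalized likelihood ratio idea for a mean jump at an unknown onset time (known noise variance, synthetic residuals; a simplification of the methods surveyed, not code from the report):

```python
import numpy as np

rng = np.random.default_rng(2)

# White Gaussian residuals with an abrupt mean jump at t = 60.
r = rng.normal(0.0, 1.0, size=120)
r[60:] += 1.5

def glr_onset(resid, sigma=1.0):
    """Maximize the likelihood ratio over the unknown onset time.

    For each candidate onset k, the nuisance jump size is replaced by
    its MLE (the post-onset sample mean), giving the GLR statistic
    (n - k) * mean(r[k:])**2 / (2 * sigma**2).
    """
    best_k, best_stat = None, -np.inf
    for k in range(1, len(resid)):
        seg = resid[k:]
        stat = len(seg) * seg.mean() ** 2 / (2.0 * sigma ** 2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

k_hat, glr_stat = glr_onset(r)
```

The search over all candidate onsets is exactly the unknown-onset-time complexity issue the abstract mentions: the bank of matched filters grows with the data length unless a sliding window is used.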
A maximum likelihood approach to the inverse problem of scatterometry.
Henn, Mark-Alexander; Gross, Hermann; Scholze, Frank; Wurm, Matthias; Elster, Clemens; Bär, Markus
2012-06-01
Scatterometry is frequently used as a non-imaging indirect optical method to reconstruct the critical dimensions (CD) of periodic nanostructures. A particularly promising direction is EUV scatterometry with wavelengths in the range of 13-14 nm. The conventional approach to determining CDs is the minimization of a least squares function (LSQ). In this paper, we introduce an alternative method based on maximum likelihood estimation (MLE) that determines the statistical error model parameters directly from measurement data. Using simulation data, we show that the MLE method is able to correct the systematic errors present in LSQ results and improves the accuracy of scatterometry. In a second step, the MLE approach is applied to measurement data from both extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Using MLE removes the systematic disagreement of EUV with other methods such as scanning electron microscopy and gives consistent results for DUV. PMID:22714306
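The contrast between LSQ and MLE can be sketched on a toy model where the noise scales with the signal, so the error-model parameter is estimated from the data jointly with the model parameter (the linear model and noise law here are invented stand-ins, not the scatterometry forward model):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy forward model y = m * x with signal-proportional noise.
x = np.linspace(1.0, 10.0, 200)
m_true, c_true = 2.0, 0.1
y = m_true * x + rng.normal(0.0, c_true * m_true * x)

def nll(params):
    """Joint negative log-likelihood of model and error parameters."""
    m, c = params
    if c <= 0.0:
        return np.inf
    mu = m * x
    sigma = c * np.abs(mu)
    return np.sum(np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2)

# Plain LSQ would fix the weights a priori; MLE instead estimates the
# noise law (here the proportionality constant c) from the data.
res = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
m_hat, c_hat = res.x
```

The `log(sigma)` term is what distinguishes the MLE objective from a weighted LSQ; without it, inflating the error model would trivially lower the objective.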
Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes
NASA Astrophysics Data System (ADS)
Boyer, E.; Petitdidier, M.; Larzabal, P.
2004-11-01
This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data; this paper fills that gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
Algebraic and analytic reconstruction methods for dynamic tomography.
Desbat, L; Rit, S; Clackdoyle, R; Mennessier, C; Promayon, E; Ntalampeki, S
2007-01-01
In this work, we discuss algebraic and analytic approaches for dynamic tomography. We present a framework of dynamic tomography for both algebraic and analytic approaches. We finally present numerical experiments. PMID:18002059
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation is an asymptotically unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
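Maximum likelihood fitting of a two-component Gaussian mixture is typically done with the EM algorithm; a minimal sketch on synthetic data (the mixture parameters are invented, not the rubber-price series analysed in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-component data standing in for the analysed series.
data = np.concatenate([rng.normal(-2.0, 1.0, 300),
                       rng.normal(3.0, 1.0, 700)])

# EM iterations for the maximum likelihood estimates.
w = np.array([0.5, 0.5])          # mixing weights
mu = np.array([-1.0, 1.0])        # component means
var = np.array([1.0, 1.0])        # component variances
for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    pdf = (np.exp(-0.5 * (data[:, None] - mu) ** 2 / var)
           / np.sqrt(2.0 * np.pi * var))
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: closed-form maximum likelihood updates.
    nk = resp.sum(axis=0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk
```

Each EM iteration is guaranteed not to decrease the likelihood, which is why it is the standard route to the MLE for mixtures despite the likelihood having no closed-form maximizer.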
NASA Astrophysics Data System (ADS)
Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang
2015-12-01
In this paper we propose an independent sequential maximum likelihood approach to address joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association situations, estimating a bias for each association. Then we calculate the likelihood of each association after bias compensation. Finally, we choose the association situation with the maximum likelihood as the association result, and the corresponding bias estimate is the registration result. Considering the high false alarm and interference rates, we adopt independent sequential association to calculate the likelihood. Simulation results show that our proposed method produces the correct association results while simultaneously estimating the bias precisely for a small number of targets in a multi-sensor fusion system.
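The enumerate-compensate-score loop described above can be sketched in one dimension (three targets, a single additive bias, Gaussian noise; all numbers invented, and without the sequential refinement of the paper):

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D scene: sensor B reports the same 3 targets as sensor A,
# in unknown order and with an unknown additive bias.
truth = np.array([0.0, 5.0, 9.0])
bias_true, noise = 1.5, 0.1
tracks_a = truth + rng.normal(0.0, noise, 3)
tracks_b = truth[[2, 0, 1]] + bias_true + rng.normal(0.0, noise, 3)

best = None
for perm in itertools.permutations(range(3)):
    paired = tracks_b[list(perm)]
    bias_hat = np.mean(paired - tracks_a)     # ML bias for this pairing
    resid = paired - tracks_a - bias_hat      # bias-compensated residuals
    loglik = -0.5 * np.sum(resid ** 2) / noise ** 2
    if best is None or loglik > best[0]:
        best = (loglik, perm, bias_hat)

_, assoc, bias_est = best   # association and registration results
```

Enumeration over permutations grows factorially, which is why the exhaustive version is only practical for a small number of targets, as the abstract notes.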
Ellis, J. A.; Siemens, X.; Van Haasteren, R.
2013-05-20
Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low frequency regime (10^-7 Hz-10^-9 Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full likelihood function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate likelihood function reproduces results obtained from the full likelihood function. We further show, both analytically and through simulations, that, on average, this approximate likelihood function gives unbiased parameter estimates for astrophysically realistic stochastic background amplitudes.
A fast, always positive definite and normalizable approximation of non-Gaussian likelihoods
NASA Astrophysics Data System (ADS)
Sellentin, Elena
2015-10-01
In this paper we extend the previously published DALI approximation for likelihoods to cases in which the parameter dependence is in the covariance matrix. The approximation recovers non-Gaussian likelihoods, and reduces to the Fisher matrix approach in the case of Gaussianity. It works with the minimal assumptions of having Gaussian errors on the data, and a covariance matrix that possesses a converging Taylor approximation. The resulting approximation works in cases of severe parameter degeneracies and in cases where the Fisher matrix is singular. It is at least 1000 times faster than a typical Markov chain Monte Carlo run over the same parameter space. Two example applications to cases of extremely non-Gaussian likelihoods are presented; one demonstrates how the method succeeds in completely reconstructing a ring-shaped likelihood. A public code is released at http://lnasellentin.github.io/DALI/.
Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.
Ramírez, C L; Martí, M A; Roitberg, A E
2016-01-01
One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments, such as an enzyme or aqueous solution, and determine the corresponding free energy profile, which provides a direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows obtaining the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are chosen inappropriately. In this short review, using the extensively studied chorismate-to-prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and, in turn, the accuracy of the free energy obtained with JR. Second, in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows obtaining more accurate free energy profiles using faster pulling speeds and a smaller number of trajectories, and thus a lower computational cost. PMID:27497165
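The exponential work average at the heart of JR can be sketched directly. Here the work values are drawn from a Gaussian chosen to satisfy the known Gaussian-work identity ΔF = ⟨W⟩ − σ²/(2kT); all numbers are invented, and this stands in for the work profiles harvested from steered trajectories:

```python
import numpy as np

rng = np.random.default_rng(6)
kT = 0.596            # kcal/mol at ~300 K
dF_true = 5.0         # free energy difference to recover
sigma_W = 0.8         # spread of the nonequilibrium work values

# For a Gaussian work distribution, <W> = dF + sigma_W**2 / (2 kT),
# so these synthetic work values are consistent with dF_true.
W = rng.normal(dF_true + sigma_W ** 2 / (2.0 * kT), sigma_W, size=500)

# Jarzynski's Relationship: exponential average of the work values.
dF_hat = -kT * np.log(np.mean(np.exp(-W / kT)))
```

The exponential average is dominated by the rare low-work trajectories, which is precisely why fast pulling (wide work distributions) demands many more trajectories for a converged estimate.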
Dynamically controlled crystallization method and apparatus and crystals obtained thereby
NASA Technical Reports Server (NTRS)
Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)
2003-01-01
A method and apparatus for dynamically controlling the crystallization of molecules including a crystallization chamber (14) or chambers for holding molecules in a precipitant solution, one or more precipitant solution reservoirs (16, 18), communication passages (17, 19) respectively coupling the crystallization chamber(s) with each of the precipitant solution reservoirs, and transfer mechanisms (20, 21, 22, 24, 26, 28) configured to respectively transfer precipitant solution between each of the precipitant solution reservoirs and the crystallization chamber(s). The transfer mechanisms are interlocked to maintain a constant volume of precipitant solution in the crystallization chamber(s). Precipitant solutions of different concentrations are transferred into and out of the crystallization chamber(s) to adjust the concentration of precipitant in the crystallization chamber(s) to achieve precise control of the crystallization process. The method and apparatus can be used effectively to grow crystals under reduced gravity conditions such as microgravity conditions of space, and under conditions of reduced or enhanced effective gravity as induced by a powerful magnetic field.
CMBFIT: Rapid WMAP likelihood calculations with normal parameters
NASA Astrophysics Data System (ADS)
Sandvik, Håvard B.; Tegmark, Max; Wang, Xiaomin; Zaldarriaga, Matias
2004-03-01
We present a method for ultrafast confrontation of the Wilkinson Microwave Anisotropy Probe (WMAP) cosmic microwave background observations with theoretical models, implemented as a publicly available software package called CMBFIT, useful for anyone wishing to measure cosmological parameters by combining WMAP with other observations. The method takes advantage of the underlying physics by transforming into a set of parameters where the WMAP likelihood surface is accurately fit by the exponential of a quartic or sextic polynomial. Building on previous physics-based approximations by Hu et al., Kosowsky et al., and Chu et al., it combines their speed with precision-cosmology-grade accuracy. A FORTRAN code for computing the WMAP likelihood for a given set of parameters is provided, precalibrated against CMBFAST, accurate to Δ ln L ≈ 0.05 over the entire 2σ region of the parameter space for 6-parameter “vanilla” ΛCDM models. We also provide 7-parameter fits including spatial curvature, gravitational waves and a running spectral index.
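The core trick, replacing an expensive likelihood by a polynomial fit of its logarithm in well-chosen parameters, can be sketched in one dimension (the log-likelihood below is an invented quartic stand-in, not the WMAP surface):

```python
import numpy as np

# Invented 1-D stand-in for a skewed log-likelihood surface.
def lnL(p):
    return -0.5 * p ** 2 - 0.1 * p ** 3 - 0.05 * p ** 4

# Fit the log-likelihood by a quartic polynomial on a coarse grid ...
grid = np.linspace(-1.5, 1.5, 25)
coeffs = np.polyfit(grid, lnL(grid), deg=4)

# ... and evaluate the cheap polynomial in place of the full likelihood.
p = np.linspace(-1.0, 1.0, 101)
max_err = np.max(np.abs(np.polyval(coeffs, p) - lnL(p)))
```

The approach stands or falls on the parameter transformation: in "normal parameters" the surface is nearly polynomial in the log, so a low-degree fit calibrated once can replace millions of likelihood calls.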
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
Groups, information theory, and Einstein's likelihood principle
NASA Astrophysics Data System (ADS)
Sicuro, Gabriele; Tempesta, Piergiulio
2016-04-01
We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.
Modelling default and likelihood reasoning as probabilistic
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.
Likelihood-based error correction for holographic storage systems
NASA Astrophysics Data System (ADS)
Neifeld, Mark A.; Chou, Wu-Chun
1999-11-01
We consider a volume holographic memory (VHM) system that is corrupted by interpixel interference (IPI) and detector noise. We compare hard-decision Reed-Solomon (RS) decoding with both hard- and soft-decision algorithms for 2D array decoding. RS codes are shown to provide larger VHM storage capacity and density as compared with array codes when hard-decision methods are employed. A new likelihood-based soft-decision algorithm for 2D array decoding is described. The new decoding algorithm is motivated by iterative turbo-decoding methods and is capable of incorporating a priori knowledge of the corrupting IPI channel during decoding. The new algorithm is shown to offer VHM capacity and density performance superior to hard-decision RS methods.
Immersed boundary conditions method for computational fluid dynamics problems
NASA Astrophysics Data System (ADS)
Husain, Syed Zahid
This dissertation presents implicit spectrally-accurate algorithms based on the concept of immersed boundary conditions (IBC) for solving a range of computational fluid dynamics (CFD) problems where the physical domains involve boundary irregularities. Both fixed and moving irregularities are considered, with particular emphasis placed on two-dimensional moving boundary problems. The physical model problems considered comprise the Laplace operator, the biharmonic operator and the Navier-Stokes equations, and thus cover the most commonly encountered types of operators in CFD analyses. The IBC algorithm uses a fixed and regular computational domain with the flow domain immersed inside the computational domain. Boundary conditions along the edges of the time-dependent flow domain enter the algorithm in the form of internal constraints. Spectral spatial discretization for two-dimensional problems is based on Fourier expansions in the stream-wise direction and Chebyshev expansions in the normal-to-the-wall direction. Up to fourth-order implicit temporal discretization methods have been implemented. The IBC algorithm is shown to deliver the theoretically predicted accuracy in both time and space. Construction of the boundary constraints in the IBC algorithm provides degrees of freedom in excess of that required to formulate a closed system of algebraic equations. The 'classical IBC formulation' retains only the number of boundary constraints just sufficient to form a closed system of equations. The use of additional boundary constraints leads to the 'over-determined formulation' of the IBC algorithm. Over-determined systems are explored in order to improve the accuracy of the IBC method and to expand its applicability to more extreme geometries. Standard direct over-determined solvers based on evaluation of pseudo-inverses of the complete coefficient matrices have been tested on three model problems, namely, the Laplace equation, the biharmonic equation
Simulation of dynamic interface fracture using spectral boundary integral method
NASA Astrophysics Data System (ADS)
Harish, Ajay Bangalore
Simulation of three-dimensional dynamic fracture events constitutes one of the most challenging topics in the field of computational mechanics. Spontaneous dynamic fracture along the interface of two elastic solids is of great importance and interest to a number of disciplines in engineering and science. Applications include dynamic fractures in aircraft structures, earthquakes, thermal shocks in nuclear containment vessels and delamination in layered composite materials.
Robinson, Lucy F; Atlas, Lauren Y; Wager, Tor D
2015-03-01
We present a new method, State-based Dynamic Community Structure, that detects time-dependent community structure in networks of brain regions. Most analyses of functional connectivity assume that network behavior is static in time, or differs between task conditions with known timing. Our goal is to determine whether brain network topology remains stationary over time, or if changes in network organization occur at unknown time points. Changes in network organization may be related to shifts in neurological state, such as those associated with learning, drug uptake or experimental conditions. Using a hidden Markov stochastic blockmodel, we define a time-dependent community structure. We apply this approach to data from a functional magnetic resonance imaging experiment examining how contextual factors influence drug-induced analgesia. Results reveal that networks involved in pain, working memory, and emotion show distinct profiles of time-varying connectivity. PMID:25534114
Analysis methods for fast impurity ion dynamics data
Den Hartog, D.J.; Almagri, A.F.; Prager, S.C.; Fonck, R.J.
1994-08-01
A high resolution spectrometer has been developed and used on the MST reversed-field pinch (RFP) to passively measure impurity ion temperatures and flow velocities with 10 μs temporal resolution. Such measurements of MHD-scale fluctuations are particularly relevant in the RFP because the flow-velocity-fluctuation-induced transport of current (the "MHD dynamo") may produce the magnetic field reversal characteristic of an RFP. This instrument will also be used to measure rapid changes in the equilibrium flow velocity, such as occur during locking and H-mode transitions. The precision of measurements made to date is <0.6 km/s. The authors are developing accurate analysis techniques appropriate to the reduction of these fast ion dynamics data. Moment analysis and curve-fitting routines have been evaluated for noise sensitivity and robustness. Also presented is an analysis method which correctly separates the flux-surface average of the correlated fluctuations in u and B from the fluctuations due to rigid shifts of the plasma column.
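Moment analysis of a Doppler-broadened line reduces to weighted sums over the spectrum: the centroid gives the flow velocity and the second moment the thermal width (and hence temperature). A sketch on a synthetic Gaussian line (the wavelength and all numbers are invented, not MST data):

```python
import numpy as np

# Synthetic Doppler-shifted Gaussian emission line.
lam0 = 343.4          # rest wavelength, nm (invented)
c = 3.0e5             # speed of light, km/s
v_flow = 20.0         # true flow velocity, km/s
width = 0.05          # true thermal Doppler width, nm

lam = np.linspace(343.0, 343.8, 400)
line = np.exp(-0.5 * ((lam - lam0 * (1.0 + v_flow / c)) / width) ** 2)

# Moment analysis: normalized first and second moments of the line.
wgt = line / line.sum()
centroid = (lam * wgt).sum()
v_hat = (centroid / lam0 - 1.0) * c                       # flow velocity
width_hat = np.sqrt(((lam - centroid) ** 2 * wgt).sum())  # thermal width
```

In practice the second moment is the noise-sensitive quantity (detector noise in the wings is weighted by the squared distance from the centroid), which is why the abstract compares moment analysis against curve fitting for robustness.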
Method for increasing the dynamic range of mass spectrometers
Belov, Mikhail; Smith, Richard D.; Udseth, Harold R.
2004-09-07
A method for enhancing the dynamic range of a mass spectrometer by first passing a sample of ions through the mass spectrometer having a quadrupole ion filter, whereupon the intensities of the mass spectrum of the sample are measured. From the mass spectrum, ions within this sample are then identified for subsequent ejection. As further sampling introduces more ions into the mass spectrometer, the appropriate rf voltages are applied to a quadrupole ion filter, thereby selectively ejecting the undesired ions previously identified. In this manner, the desired ions may be collected for longer periods of time in an ion trap, thus allowing better collection and subsequent analysis of the desired ions. The ion trap used for accumulation may be the same ion trap used for mass analysis, in which case the mass analysis is performed directly, or it may be an intermediate trap. In the case where collection is an intermediate trap, the desired ions are accumulated in the intermediate trap, and then transferred to a separate mass analyzer. The present invention finds particular utility where the mass analysis is performed in an ion trap mass spectrometer or a Fourier transform ion cyclotron resonance mass spectrometer.
Applying dynamic methods in off-line signature recognition
NASA Astrophysics Data System (ADS)
Igarza, Juan Jose; Hernaez, Inmaculada; Goirizelaia, Inaki; Espinosa, Koldo
2004-08-01
In this paper we present the work developed on off-line signature verification using Hidden Markov Models (HMM). HMM is a well-known technique used by other biometric features, for instance, in speaker recognition and dynamic or on-line signature verification. Our goal here is to extend Left-to-Right (LR)-HMM to the field of static or off-line signature processing using results provided by image connectivity analysis. The chain encoding of perimeter points for each blob obtained by this analysis is an ordered set of points in the space, clockwise around the perimeter of the blob. We discuss two different ways of generating the models depending on the way the blobs obtained from the connectivity analysis are ordered. In the first proposed method, blobs are ordered according to their perimeter length. In the second proposal, blobs are ordered in their natural reading order, i.e. from the top to the bottom and left to right. Finally, two LR-HMM models are trained using the parameters obtained by the mentioned techniques. Verification results of the two techniques are compared and some improvements are proposed.
PARTICLE-GAS DYNAMICS WITH ATHENA: METHOD AND CONVERGENCE
Bai Xuening; Stone, James M. E-mail: jstone@astro.princeton.ed
2010-10-15
The Athena magnetohydrodynamics code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag in order to study the dynamics of gas and solids in protoplanetary disks (PPDs) and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector method. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to PPDs. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for dimensionless stopping time τ_s = 0.1) and an equal number of particles and grid cells. On the other hand, particle clumping properties converge only at higher resolutions, and finer resolution leads to stronger clumping before convergence is reached. Finally, we find that the measurement of particle transport properties resulting from the streaming instability may be subject to error of about ±20%.
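The value of an implicit drag update can be seen in a scalar sketch of the stopping-time equation dv/dt = (u − v)/τ (a one-line model problem illustrating unconditional stability, not the Athena scheme itself):

```python
import numpy as np

def explicit_step(v, u, dt, tau):
    """Forward Euler drag update: unstable once dt > 2 * tau."""
    return v + dt * (u - v) / tau

def implicit_step(v, u, dt, tau):
    """Drag evaluated at the new time level: stable for any dt."""
    return (v + (dt / tau) * u) / (1.0 + dt / tau)

u, tau, dt = 1.0, 0.01, 0.1     # stiff regime: dt = 10 stopping times
v_exp = v_imp = 0.0
for _ in range(50):
    v_exp = explicit_step(v_exp, u, dt, tau)
    v_imp = implicit_step(v_imp, u, dt, tau)
# v_imp relaxes monotonically to the gas velocity u; v_exp blows up.
```

The amplification factor of the explicit step is (1 − dt/τ), which exceeds 1 in magnitude for dt > 2τ, while the implicit factor 1/(1 + dt/τ) is below 1 for every time step, the defining property of strongly coupled particle-gas integrators.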
Brief communication "Likelihood of societal preparedness for global change"
NASA Astrophysics Data System (ADS)
Vogel, R. M.; Rosner, A.; Kirshen, P. H.
2013-01-01
Anthropogenic influences on earth system processes are now pervasive, resulting in trends in river discharge, pollution levels, ocean levels, precipitation, temperature, wind, landslides, bird and plant populations and a myriad of other important natural hazards relating to earth system state variables. Thousands of trend detection studies have been published which report the statistical significance of observed trends. Unfortunately, such studies concentrate only on the null hypothesis of "no trend". Little or no attention is given to the power of such statistical trend tests, which would quantify the likelihood that we might ignore a trend if it really existed. The probability of missing the trend if it exists, known as the type II error, informs us about the likelihood that society is prepared to accommodate and respond to such trends. We describe how the power, or probability of detecting a trend if it exists, depends critically on our ability to develop improved multivariate deterministic and statistical methods for predicting future trends in earth system processes. Several other research and policy implications for improving our understanding of trend detection and our societal response to those trends are discussed.
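The power of a trend test is readily estimated by Monte Carlo: simulate series with a known trend and count how often the test rejects the "no trend" null. A sketch with a simple linear-regression trend test (slopes, sample sizes and noise level all invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def trend_test_power(slope, n_years=50, sigma=1.0,
                     n_sim=2000, alpha=0.05):
    """Monte Carlo power: fraction of simulations detecting the trend."""
    t = np.arange(n_years)
    hits = 0
    for _ in range(n_sim):
        series = slope * t + rng.normal(0.0, sigma, n_years)
        if stats.linregress(t, series).pvalue < alpha:
            hits += 1
    return hits / n_sim

weak = trend_test_power(0.01)    # small trend: type II errors dominate
strong = trend_test_power(0.05)  # large trend: detected almost surely
```

One minus the computed power is exactly the type II error probability the abstract highlights: for the weak trend, society would usually conclude "no trend" even though the trend is real.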
Expected likelihood for tracking in clutter with particle filters
NASA Astrophysics Data System (ADS)
Marrs, Alan; Maskell, Simon; Bar-Shalom, Yaakov
2002-08-01
The standard approach to tracking a single target in clutter, using the Kalman filter or extended Kalman filter, is to gate the measurements using the predicted measurement covariance and then to update the predicted state using probabilistic data association. When tracking with a particle filter, an analog to the predicted measurement covariance is not directly available and could only be constructed as an approximation to the current particle cloud. A common alternative is to use a form of soft gating, based upon a Student's-t likelihood, that is motivated by the concept of score functions in classical statistical hypothesis testing. In this paper, we combine the score function and probabilistic data association approaches to develop a new method for tracking in clutter using a particle filter. This is done by deriving an expected likelihood from known measurement and clutter statistics. The performance of this new approach is assessed on a series of bearings-only tracking scenarios with uncertain sensor location and non-Gaussian clutter.
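A minimal sketch of the idea, with our own toy model (scalar random-walk target, constant detection probability, uniform clutter density): each particle's weight is an expected likelihood over the target-originated and clutter hypotheses, a soft gate rather than a hard one.

```python
import numpy as np

# Bootstrap particle filter step with an expected-likelihood weight:
#   L_i = (1 - Pd) * lambda_c  +  Pd * N(z; x_i, r^2)
# Names and the scalar model are illustrative, not the paper's formulation.
def pf_step(particles, z, q=0.1, r=0.5, p_d=0.9, lam_c=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    particles = particles + rng.normal(0, q, particles.size)   # predict
    lik = ((1 - p_d) * lam_c
           + p_d * np.exp(-0.5 * (z - particles) ** 2 / r ** 2)
             / (np.sqrt(2 * np.pi) * r))
    w = lik / lik.sum()
    idx = rng.choice(particles.size, particles.size, p=w)      # resample
    return particles[idx]
```

The constant clutter term keeps every particle's weight strictly positive, so a clutter-origin measurement does not annihilate the particle cloud the way a pure Gaussian likelihood would.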
Does Hearing Aid Use Increase the Likelihood of Cerumen Impaction?
Arthur, Jonathan; Williams, Huw
2015-01-01
Background and Objectives Impacted cerumen is a common condition in adults. It is commonly believed that wearing hearing aids may increase cerumen impaction, although no empirical evidence exists. The current study aimed to determine whether the use of hearing aids increases the likelihood of cerumen impaction. Subjects and Methods The study used a retrospective design. The study sample included 164 consecutive patients who were referred to the cerumen clinic from Royal Glamorgan Hospital, Wales. An audiologist classified the cerumen impaction into four categories (i.e., no cerumen; non-occluding cerumen; occluding cerumen; and fully occluding cerumen and debris). Chi-square analysis was performed to study the association between hearing aid use and cerumen impaction. Results The current study showed no association between hearing aid use and cerumen impaction. Also, there was no association between right/left ear and cerumen impaction. Conclusions These results are interesting and contrary to our assumption that hearing aid use increases the likelihood of cerumen impaction. More well-controlled studies with prospective designs are needed to confirm whether these results are accurate. PMID:26771016
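For readers unfamiliar with the analysis, a Pearson chi-square statistic for a 2x2 table can be computed by hand; the table below is hypothetical, not the study's data.

```python
# Pearson chi-square statistic for a 2x2 contingency table, e.g. hearing-aid
# use (rows) versus occluding cerumen (columns). Counts here are invented.
def chi_square_2x2(table):
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n   # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat  # compare with 3.84 (chi-square, 1 df, alpha = 0.05)
```

A statistic below 3.84 fails to reject independence at the 5% level, which is the kind of null result the study reports.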
Likelihood free inference for Markov processes: a comparison.
Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S
2015-04-01
Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space. PMID:25720092
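The ABC rejection idea can be shown with a deliberately trivial model (the paper's stochastic kinetic models are far less tractable): draw parameters from the prior, forward-simulate, and keep draws whose summary statistic lands within a tolerance of the observed one.

```python
import numpy as np

# Toy ABC rejection sampler (our own example, not the paper's models): infer
# the mean of a Normal(mu, 1) from its sample mean, Uniform(0, 10) prior.
def abc_rejection(obs_mean, n_obs, eps=0.05, draws=20000, seed=1):
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(draws):
        mu = rng.uniform(0, 10)                   # draw from the prior
        sim = rng.normal(mu, 1, n_obs).mean()     # forward-simulate the model
        if abs(sim - obs_mean) < eps:             # compare summary statistics
            accepted.append(mu)
    return np.array(accepted)
```

Note the likelihood is never evaluated, only simulated from, which is exactly what makes the approach applicable to intractable transition kernels; the price is the low acceptance rate visible here.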
Tandon, Ankita; Wang, Ming; Roe, Kevin C.; Patel, Surju; Ghahramani, Nasrollah
2016-01-01
Background There is wide variation in referral for kidney transplant and preemptive kidney transplant (PKT). Patient characteristics such as age, race, sex and geographic location have been cited as contributing factors to this disparity. We hypothesize that the characteristics of nephrologists interplay with the patients' characteristics to influence the referral decision. In this study, we used hypothetical case scenarios to assess nephrologists' decisions regarding transplant referral. Methods A total of 3180 nephrologists were invited to participate. Among those interested, 252 were randomly selected to receive a survey in which nephrologists were asked whether they would recommend transplant for the 25 hypothetical patients. Logistic regression models with single covariates and multiple covariates were used to identify patient characteristics associated with likelihood of being referred for transplant and to identify nephrologists' characteristics associated with likelihood of referring for transplant. Results Of the 252 potential participants, 216 completed the survey. A nephrologist's affiliation with an academic institution was associated with a higher likelihood of referral, and being ‘>10 years from fellowship’ was associated with lower likelihood of referring patients for transplant. Patient age <50 years was associated with higher likelihood of referral. Rural location and smoking history/chronic obstructive pulmonary disease were associated with lower likelihood of being referred for transplant. The nephrologist's affiliation with an academic institution was associated with higher likelihood of referring for preemptive transplant, and the patient having a rural residence was associated with lower likelihood of being referred for preemptive transplant. Conclusions The variability in transplant referral is related to patients' age and geographic location as well as the nephrologists' affiliation with an academic institution and time since completion
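The single-covariate analyses are ordinary logistic regressions; a minimal gradient-ascent fit on synthetic data (ours, not the survey's) shows the mechanics.

```python
import numpy as np

# Minimal logistic regression by batch gradient ascent. The covariate could
# encode, e.g., academic affiliation (0/1); data below are synthetic.
def fit_logistic(x, y, lr=0.1, iters=5000):
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return beta  # [intercept, log odds ratio for the covariate]
```

The fitted coefficient on the binary covariate is the log odds ratio, which is how "associated with higher likelihood of referral" is quantified in such models.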
An improved version of the Green's function molecular dynamics method
NASA Astrophysics Data System (ADS)
Kong, Ling Ti; Denniston, Colin; Müser, Martin H.
2011-02-01
This work presents an improved version of the Green's function molecular dynamics method (Kong et al., 2009; Campañá and Müser, 2004 [1,2]), which enables one to study the elastic response of a three-dimensional solid to an external stress field by taking into consideration only atoms near the surface. In the previous implementation, the effective elastic coefficients measured at the Γ-point were altered to reduce finite size effects: their eigenvalues corresponding to the acoustic modes were set to zero. This scheme was found to work well for simple Bravais lattices as long as only atoms within the last layer were treated as Green's function atoms. However, it failed to function as expected in all other cases. It turns out that a violation of the acoustic sum rule for the effective elastic coefficients at Γ (Kong, 2010 [3]) was responsible for this behavior. In the new version, the acoustic sum rule is enforced by adopting an iterative procedure, which is found to be physically more meaningful than the previous one. In addition, the new algorithm allows one to treat lattices with bases, and the Green's function slab is no longer confined to one layer. New version program summary. Program title: FixGFC/FixGFMD v1.12. Catalogue identifier: AECW_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECW_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 206 436. No. of bytes in distributed program, including test data, etc.: 4 314 850. Distribution format: tar.gz. Programming language: C++. Computer: All. Operating system: Linux. Has the code been vectorized or parallelized?: Yes; the code has been parallelized using MPI directives. RAM: Depends on the problem. Classification: 7.7. External routines: LAMMPS (http://lammps.sandia.gov/), MPI ( http
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
NASA Astrophysics Data System (ADS)
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
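The signal-to-noise eigenvector construction can be demonstrated on random covariances: whiten the noise, diagonalize the whitened signal, and keep the highest-S/N modes. Dimensions and matrices below are illustrative, not the WMAP ones; the key property is that the compressed noise covariance is exactly the identity.

```python
import numpy as np

# Signal-to-noise (Karhunen-Loeve) mode compression on random covariances.
rng = np.random.default_rng(2)
n = 40
A = rng.normal(size=(n, n)); S = A @ A.T                   # signal covariance
B = rng.normal(size=(n, n)); N = B @ B.T + n * np.eye(n)   # noise covariance

# Whiten the noise, then diagonalize the whitened signal: eigenvectors are the
# S/N modes, and large eigenvalues carry most of the signal information.
Ln = np.linalg.cholesky(N)
W = np.linalg.inv(Ln)
evals, evecs = np.linalg.eigh(W @ S @ W.T)

# Keep only the k highest-S/N modes as the compressed basis.
k = 10
P = evecs[:, -k:].T @ W   # compression operator: n pixels -> k modes
```

In this basis the compressed noise covariance is the identity and the compressed signal covariance is diagonal, which is what makes subsequent likelihood evaluations cheap and numerically well conditioned.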
Dynamic permeability of porous media by the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Adler, P.; Pazdniakou, A.
2012-04-01
The main objective of our work is to determine the dynamic permeability of three dimensional porous media by means of the Lattice Boltzmann method (LBM). The Navier-Stokes equation can be numerically solved by LBM which is widely used to address various fluid dynamics problems. Space is discretized by a three-dimensional cubic lattice and time is discretized as well. The generally accepted notation for lattice Boltzmann models is DdQq where D stands for space dimension and Q for the number of discrete velocities. The present model is denoted by D3Q19. Moreover, the Two Relaxation Times variant of the Multi Relaxation Times model is implemented. Bounce back boundary conditions are used on the solid-fluid interfaces. The porous medium is spatially periodic. Reconstructed media were used; they are obtained by imposing a porosity and a correlation function characterized by a correlation length. Real samples can be obtained by MicroCT. In contrast with other previous contributions, the dynamic permeability K(omega) which is a complex number, is derived by imposing an oscillating body force of pulsation omega on the unit cell and by deriving the amplitude and the phase shift of the resulting time dependent seepage velocity. The influence of two limiting parameters, namely the Knudsen number Kn and the discretization for high frequencies, on K(omega) is carefully studied for the first time. Kn is proportional to nu/(cs H) where nu is the kinematic viscosity, cs the speed of sound in the fluid and H a characteristic length scale of the porous medium. Several porous media such as the classical plane Poiseuille flow and the reconstructed media are used to show that it is only for small enough values of Kn that reliable results are obtained. Otherwise, the data depend on Kn and may even be totally unphysical. However, it should be noticed that the limiting value of Kn could not be derived in general since it depends very much on the structure of the medium. Problems occur at
Dynamic permeability of porous media by the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Pazdniakou, A.; Adler, P. M.
2011-12-01
The main objective of our work is to determine the dynamic permeability of three dimensional porous media by means of the Lattice Boltzmann method (LBM). The Navier-Stokes equation can be numerically solved by LBM which is widely used to address various fluid dynamics problems. Space is discretized by a three-dimensional cubic lattice and time is discretized as well. The generally accepted notation for lattice Boltzmann models is DdQq where D stands for space dimension and Q for the number of discrete velocities. The present model is denoted by D3Q19. Moreover, the Two Relaxation Times variant of the Multi Relaxation Times model is implemented. Bounce back boundary conditions are used on the solid-fluid interfaces. The porous medium is spatially periodic. Reconstructed media were used; they are obtained by imposing a porosity and a correlation function characterized by a correlation length. Real samples can be obtained by MicroCT. In contrast with other previous contributions, the dynamic permeability K(omega) which is a complex number, is derived by imposing an oscillating body force of pulsation omega on the unit cell and by deriving the amplitude and the phase shift of the resulting time dependent seepage velocity. The influence of two limiting parameters, namely the Knudsen number Kn and the discretization for high frequencies, on K(omega) is carefully studied for the first time. Kn is proportional to nu/(c_s H) where nu is the kinematic viscosity, c_s the speed of sound in the fluid and H a characteristic length scale of the porous medium. Several porous media such as the classical plane Poiseuille flow and the reconstructed media are used to show that it is only for small enough values of Kn that reliable results are obtained. Otherwise, the data depend on Kn and may even be totally unphysical. However, it should be noticed that the limiting value of Kn could not be derived in general since it depends very much on the structure of the medium. Problems occur
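For the plane Poiseuille check case mentioned in both abstracts, the dynamic permeability has a closed form against which an LBM result can be validated. The formula below is the standard oscillatory channel-flow solution, written in our own notation, for flow between plates separated by 2a; it is a benchmark, not the paper's code.

```python
import cmath

# Dynamic permeability of oscillatory flow between parallel plates
# (half-width a, kinematic viscosity nu):
#   K(omega) = (nu / (i*omega)) * (1 - tanh(q*a)/(q*a)),  q = sqrt(i*omega/nu)
def slit_dynamic_permeability(omega, a=1.0, nu=1.0):
    q = cmath.sqrt(1j * omega / nu)
    return (nu / (1j * omega)) * (1 - cmath.tanh(q * a) / (q * a))
```

As omega -> 0 this reduces to the steady Poiseuille value a^2/3, and its magnitude decays at high frequency as the viscous boundary layer thins; both limits are useful sanity checks for a numerical K(omega).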
The maximum likelihood dating of magnetostratigraphic sections
NASA Astrophysics Data System (ADS)
Man, Otakar
2011-04-01
In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.
Targeted maximum likelihood estimation in safety analysis
Lendle, Samuel D.; Fireman, Bruce; van der Laan, Mark J.
2013-01-01
Objectives To compare the performance of a targeted maximum likelihood estimator (TMLE) and a collaborative TMLE (CTMLE) to other estimators in a drug safety analysis, including a regression-based estimator, propensity score (PS)–based estimators, and an alternate doubly robust (DR) estimator in a real example and simulations. Study Design and Setting The real data set is a subset of observational data from Kaiser Permanente Northern California formatted for use in active drug safety surveillance. Both the real and simulated data sets include potential confounders, a treatment variable indicating use of one of two antidiabetic treatments and an outcome variable indicating occurrence of an acute myocardial infarction (AMI). Results In the real data example, there is no difference in AMI rates between treatments. In simulations, the double robustness property is demonstrated: DR estimators are consistent if either the initial outcome regression or PS estimator is consistent, whereas other estimators are inconsistent if the initial estimator is not consistent. In simulations with near-positivity violations, CTMLE performs well relative to other estimators by adaptively estimating the PS. Conclusion Each of the DR estimators was consistent, and TMLE and CTMLE had the smallest mean squared error in simulations. PMID:23849159
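The double robustness property is easiest to see in the simpler augmented IPW (AIPW) estimator, a non-targeted cousin of TMLE; the sketch below uses synthetic data with known nuisance models, not the Kaiser Permanente data.

```python
import numpy as np

# Augmented inverse-probability-weighted (AIPW) estimate of the average
# treatment effect, given outcome regressions mu1/mu0 and propensity scores ps.
# Consistent if EITHER the outcome model OR the propensity model is correct.
def aipw_ate(y, t, ps, mu1, mu0):
    return np.mean(t * (y - mu1) / ps
                   - (1 - t) * (y - mu0) / (1 - ps)
                   + mu1 - mu0)
```

With both nuisance models correct the estimator recovers the true effect; misspecifying one of the two degrades efficiency but not consistency, which is the property the simulations in the paper probe.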
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A.; Vaswani, Namrata; Petrich, Jacob W.
2016-02-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
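The core of the ML advantage is that, for unbinned exponential arrival times (ignoring the instrument-response convolution used in the paper), the ML lifetime estimate is just the sample mean. The simulation below mimics the paper's repeated-experiment design with synthetic numbers:

```python
import numpy as np

# ML lifetime estimation from raw photon arrival times: for an exponential
# decay the maximum-likelihood estimate of the lifetime is the sample mean
# (no binning, no residual weighting). Synthetic data, not the measurements.
def ml_lifetime(arrival_times):
    return np.mean(arrival_times)

rng = np.random.default_rng(4)
tau_true = 0.530  # ns, cf. the paper's 530 ps standard
# 50 repeated "experiments" of 200 detected photons each:
estimates = [ml_lifetime(rng.exponential(tau_true, 200)) for _ in range(50)]
```

Even with only 200 counts per experiment, the mean recovered lifetime sits within a couple of percent of the true value and the spread stays under 10%, consistent with the sparse-data behaviour the abstract reports for ML.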
A simple objective method for determining a dynamic journal collection.
Bastille, J D; Mankin, C J
1980-10-01
In order to determine the content of a journal collection responsive to both user needs and space and dollar constraints, quantitative measures of the use of a 647-title collection have been related to space and cost requirements to develop objective criteria for a dynamic collection for the Treadwell Library at the Massachusetts General Hospital, a large medical research center. Data were collected for one calendar year (1977) and stored with the elements for each title's profile in a computerized file. To account for the effect of the bulk of the journal runs on the number of uses, raw use data have been adjusted using linear shelf space required for each title to produce a factor called density of use. Titles have been ranked by raw use and by density of use with space and cost requirements for each. Data have also been analyzed for five special categories of use. Given automated means of collecting and storing data, use measures should be collected continuously. Using raw use frequency ranking to relate use to space and costs seems sensible since a decision point cutoff can be chosen in terms of the potential interlibrary loans generated. But it places new titles at risk while protecting titles with long, little used runs. Basing decisions on density of use frequency ranking seems to produce a larger yield of titles with fewer potential interlibrary loans and to identify titles with overlong runs which may be pruned or converted to microform. The method developed is simple and practical. Its design will be improved to apply to data collected in 1980 for a continuous study of journal use. The problem addressed is essentially one of inventory control. Viewed as such it makes good financial sense to measure use as part of the routine operation of the library to provide information for effective management decisions. PMID:7437589
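The density-of-use adjustment amounts to dividing uses by shelf space before ranking; the titles and figures below are invented for illustration.

```python
# "Density of use" ranking: adjust raw use by the linear shelf space each
# journal run occupies, then rank. All titles and numbers are made up.
journals = [
    # (title, uses per year, shelf metres)
    ("J. Hypothetical Med.", 120, 0.8),
    ("Arch. Example Surg.",  300, 6.0),
    ("Ann. Sample Res.",      40, 0.2),
]

by_raw_use = sorted(journals, key=lambda j: j[1], reverse=True)
by_density = sorted(journals, key=lambda j: j[1] / j[2], reverse=True)
```

A long run with many total uses can rank first on raw use but last on density (uses per metre of shelf), which is exactly how the method flags over-long runs as candidates for pruning or conversion to microform.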
The dud-alternative effect in likelihood judgment.
Windschitl, Paul D; Chambers, John R
2004-01-01
The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged likelihood of a focal outcome. This dud-alternative effect was detected for judgments involving uncertainty about trivia facts and stochastic events. Nonnumeric likelihood measures and betting measures reliably detected the effect, but numeric likelihood measures did not. Time pressure increased the magnitude of the effect. The results were consistent with a contrast-effect account: The inclusion of duds increases the perceived strength of the evidence for the focal outcome, thereby affecting its judged likelihood. PMID:14736307
Dynamic Characteristics of Penor Peat Using MASW Method
NASA Astrophysics Data System (ADS)
Zainorabidin, A.; Said, M. J. M.
2016-07-01
The dynamic behaviour of soil affects its mechanical properties, such as shear wave velocity, shear modulus, damping ratio and Poisson's ratio [1], and is an important aspect to consider for structures subject to dynamic loading. This study determines the dynamic behaviour of Penor peat, measuring shear wave velocity using MASW and estimating its shear modulus. Peat soils are very problematic soils, since they have high compressibility, low shear strength, high moisture content and low bearing capacity, making them unsuitable materials on which to found structures. Shear wave velocity ranges between 32.94 and 95.89 m/s, and shear modulus ranges between 0.93 and 8.01 MPa. The differences in both dynamic properties are due to changes in peat density and are affected by fibre content, organic content, degree of degradation and moisture content.
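The shear modulus estimate follows from G = rho * Vs^2. The bulk density below (about 870 kg/m^3) is our assumption, back-solved to reproduce the reported range; the abstract does not state it.

```python
# Small-strain shear modulus from shear wave velocity: G = rho * Vs^2.
# The peat bulk density is an assumed value, not taken from the paper.
def shear_modulus_mpa(vs_m_per_s, rho_kg_m3=870.0):
    return rho_kg_m3 * vs_m_per_s ** 2 / 1e6  # convert Pa -> MPa

g_low = shear_modulus_mpa(32.94)   # ~0.94 MPa
g_high = shear_modulus_mpa(95.89)  # ~8.0 MPa
```

With that density, the reported velocity range of 32.94 to 95.89 m/s maps onto roughly the reported 0.93 to 8.01 MPa modulus range, showing how sensitive G is to Vs (it scales quadratically).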
Regional Earthquake Likelihood Models: A realm on shaky grounds?
NASA Astrophysics Data System (ADS)
Kossobokov, V.
2005-12-01
Seismology is juvenile and its appropriate statistical tools to date may have a "medieval flavor" for those who hurry to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" are adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, cannot objectively quantify a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of statistics, which could be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw could be avoided by an accurate verification of generic probability models against the empirical data. It is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations. As a result, since the date of publication in Nature the United States Geological Survey website delivers to the public, emergency
Increasing Power of Groupwise Association Test with Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Sul, Jae Hoon; Han, Buhm; Eskin, Eleazar
Sequencing studies have been discovering a large number of rare variants, allowing the identification of the effects of rare variants on disease susceptibility. As a method to increase the statistical power of studies on rare variants, several groupwise association tests that group rare variants in genes and detect associations between groups and diseases have been proposed. One major challenge in these methods is to determine which variants are causal in a group, and to overcome this challenge, previous methods used prior information that specifies how likely each variant is causal. Another source of information that can be used to determine causal variants is observation data, because case individuals are likely to have more causal variants than control individuals. In this paper, we introduce a likelihood ratio test (LRT) that uses both data and prior information to infer which variants are causal and uses this finding to determine whether a group of variants is involved in a disease. We demonstrate through simulations that LRT achieves higher power than previous methods. We also evaluate our method on mutation screening data of the susceptibility gene for ataxia telangiectasia, and show that LRT can detect an association in real data. To increase the computational speed of our method, we show how we can decompose the computation of LRT, and propose an efficient permutation test. With this optimization, we can efficiently compute an LRT statistic and its significance at a genome-wide level. The software for our method is publicly available at http://genetics.cs.ucla.edu/rarevariants.
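A generic likelihood ratio test, stripped of the paper's causal-variant weighting, can be illustrated with a binomial toy comparing carrier counts in cases and controls:

```python
import math

# Generic likelihood ratio test (a binomial toy, not the paper's groupwise
# variant-selection model): do cases and controls share one carrier rate?
def binom_loglik(k, n, p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

def lrt_statistic(k_case, n_case, k_ctrl, n_ctrl):
    p0 = (k_case + k_ctrl) / (n_case + n_ctrl)       # null: one shared rate
    l0 = binom_loglik(k_case, n_case, p0) + binom_loglik(k_ctrl, n_ctrl, p0)
    l1 = (binom_loglik(k_case, n_case, k_case / n_case)
          + binom_loglik(k_ctrl, n_ctrl, k_ctrl / n_ctrl))  # alt: separate rates
    return 2 * (l1 - l0)  # ~ chi-square with 1 df under the null
```

The statistic is zero when the two groups have identical rates and grows as they diverge; the paper's LRT follows the same maximized-likelihood-ratio pattern, but over a richer model of which variants are causal.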
Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi
2016-04-01
Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieza Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover strongly varies the likelihood of community persistence between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interactions changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures. PMID:27220203
Identification of space shuttle main engine dynamics
NASA Technical Reports Server (NTRS)
Duyar, Ahmet; Guo, Ten-Huei; Merrill, Walter C.
1989-01-01
System identification techniques are used to represent the dynamic behavior of the Space Shuttle Main Engine. The transfer function matrices of the linearized models of both the closed loop and the open loop system are obtained by using the recursive maximum likelihood method.
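Recursive maximum likelihood reduces, for a simple ARX structure with white noise, to something close to recursive least squares; the sketch below identifies a first-order model and is our illustration, not the SSME analysis.

```python
import numpy as np

# Recursive least-squares identification of a first-order ARX model
#   y[k] = a*y[k-1] + b*u[k-1] + noise
# (a simpler relative of the recursive maximum likelihood method named above).
def rls_identify(y, u, lam=1.0):
    theta = np.zeros(2)          # parameter estimates [a, b]
    P = np.eye(2) * 1e3          # large initial covariance (weak prior)
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k - 1]])
        K = P @ phi / (lam + phi @ P @ phi)      # gain
        theta = theta + K * (y[k] - phi @ theta) # innovation update
        P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta
```

The same recursion generalizes to the multivariable transfer-function matrices the abstract mentions by stacking regressors for each input-output channel.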
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
Nix, D.A.; Hogden, J.E.
1998-12-01
The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.
H.264 SVC Complexity Reduction Based on Likelihood Mode Decision
Balaji, L.; Thyagharajan, K. K.
2015-01-01
H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on different electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, for which users demand various scalings of the same content: resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method. PMID:26221623
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
A Quasi-Likelihood Approach to Nonnegative Matrix Factorization.
Devarajan, Karthik; Cheung, Vincent C K
2016-08-01
A unified approach to nonnegative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proved using the expectation-maximization algorithm. In addition, a measure to evaluate the goodness of fit of the resulting factorization is described. The proposed methods allow modeling of nonlinear effects using appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
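The multiplicative-update family this framework generalizes can be illustrated with the classic Lee-Seung update for the generalized Kullback-Leibler divergence, the Poisson quasi-likelihood member of the family. This is a generic sketch of that well-known special case, not the authors' unified algorithm:

```python
import numpy as np

def nmf_kl(V, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimizing the generalized KL
    divergence D(V || WH) -- the Poisson quasi-likelihood case."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

# exact rank-1 nonnegative data should be recovered almost perfectly
V = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])
W, H = nmf_kl(V, 1)
```

Each update is an EM-style step that never increases D(V‖WH), which is the sense in which convergence is proved via the expectation-maximization algorithm.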
Maximum likelihood: Extracting unbiased information from complex networks
NASA Astrophysics Data System (ADS)
Garlaschelli, Diego; Loffredo, Maria I.
2008-07-01
The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
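A minimal illustration of the ML principle the authors invoke (not their World Trade Web machinery): for an Erdős-Rényi random graph the likelihood is maximized exactly when the parameter matches the observed link density, i.e. the "well-defined topological feature" associated with that parameter:

```python
import math

def er_loglik(p, n_links, n_pairs):
    """Log-likelihood of an Erdos-Renyi graph with link probability p."""
    return n_links * math.log(p) + (n_pairs - n_links) * math.log(1 - p)

# hypothetical observed graph: 10 nodes, 18 links
n_nodes, n_links = 10, 18
n_pairs = n_nodes * (n_nodes - 1) // 2
p_hat = n_links / n_pairs  # ML condition: model density = observed density
```

Any other choice of p, e.g. one tuned to reproduce a different topological property, yields a strictly lower likelihood, which is the sense in which a built-in parameter choice incompatible with the ML condition is biased.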
Maximum-likelihood estimation of photon-number distribution from homodyne statistics
NASA Astrophysics Data System (ADS)
Banaszek, Konrad
1998-06-01
We present a method for reconstructing the photon-number distribution from the homodyne statistics based on maximization of the likelihood function derived from the exact statistical description of a homodyne experiment. This method incorporates in a natural way the physical constraints on the reconstructed quantities, and the compensation for the nonunit detection efficiency.
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1980-01-01
New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
Dynamical models of elliptical galaxies - I. Simple methods
NASA Astrophysics Data System (ADS)
Agnello, A.; Evans, N. W.; Romanowsky, A. J.
2014-08-01
We study dynamical models for elliptical galaxies, deriving the projected kinematic profiles in a form that is valid for general surface brightness laws and (spherical) total mass profiles, without the need for any explicit deprojection. We provide accurate approximations of the line-of-sight and aperture-averaged velocity dispersion profiles for galaxies with total mass density profiles with slope near -2 and with modest velocity anisotropy using only single or double integrals, respectively. This is already sufficient to recover many of the kinematic properties of nearby ellipticals. As an application, we provide two different sets of mass estimators for elliptical galaxies, based on either the velocity dispersion at a location at or near the effective radius, or the aperture-averaged velocity dispersion. In the large-aperture (virial) limit, mass estimators are naturally independent of anisotropy. The spherical mass enclosed within the effective radius R_e can be estimated as 2.4 R_e ⟨σ_p^2⟩ / G, where ⟨σ_p^2⟩ is the average of the squared velocity dispersion over a finite aperture. This formula does not depend on assumptions such as mass-follows-light, and is a compromise between the cases of small and large aperture sizes. Its general agreement with results from other methods in the literature makes it a reliable means to infer masses in the absence of detailed kinematic information. If on the other hand the velocity dispersion profile is available, tight mass estimates can be found that are independent of the mass model and anisotropy profile. In particular, for a de Vaucouleurs surface brightness, the velocity dispersion measured at ≈1 R_e yields a tight mass estimate (with 10 per cent accuracy) at ≈3 R_e that is independent of the mass model and the anisotropy profile. This allows us to probe the importance of dark matter at radii where it dominates the mass budget of galaxies. Explicit formulae are given for small anisotropy, large radii and/or power
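The quoted estimator is simple enough to sketch directly. The numbers below (effective radius, aperture dispersion, and G in astronomer's units) are illustrative assumptions, not values from the paper:

```python
G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def virial_mass_estimate(r_e_kpc, sigma_ap_kms):
    """M(<R_e) ~ 2.4 R_e <sigma_p^2> / G, where sigma_ap_kms is the
    aperture-averaged projected velocity dispersion in km/s."""
    return 2.4 * r_e_kpc * sigma_ap_kms ** 2 / G

# hypothetical elliptical: R_e = 4 kpc, aperture dispersion 200 km/s
M = virial_mass_estimate(4.0, 200.0)  # roughly 9e10 solar masses
```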
Method for making a dynamic pressure sensor and a pressure sensor made according to the method
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J. (Inventor); Robbins, William E. (Inventor); Robins, Glenn M. (Inventor)
1994-01-01
A method for providing a perfectly flat top with a sharp edge on a dynamic pressure sensor using a cup-shaped stretched membrane as a sensing element is described. First, metal is deposited on the membrane and surrounding areas. Next, the side wall of the pressure sensor with the deposited metal is machined to a predetermined size. Finally, deposited metal is removed from the top of the membrane in small steps, by machining or lapping while the pressure sensor is mounted in a jig or the wall of a test object, until the true top surface of the membrane appears. A thin indicator layer having a color contrasting with the color of the membrane may be applied to the top of the membrane before metal is deposited to facilitate the determination of when to stop metal removal from the top surface of the membrane.
The repeated replacement method: a pure Lagrangian meshfree method for computational fluid dynamics.
Walker, Wade A
2012-01-01
In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids' tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly "chops out" fluid from active areas and replaces it with new "flattened" fluid cells with the same mass, momentum, and energy. We call the new cells "flattened" because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
A comparative study on the restrictions of dynamic test methods
NASA Astrophysics Data System (ADS)
Majzoobi, GH.; Lahmi, S.
2015-09-01
Dynamic behavior of materials is investigated using different devices. Each of the devices has some restrictions. For instance, the stress-strain curve of the materials can be captured at high strain rates only with Hopkinson bar. However, by using a new approach some of the other techniques could be used to obtain the constants of material models such as Johnson-Cook model too. In this work, the restrictions of some devices such as drop hammer, Taylor test, Flying wedge, Shot impact test, dynamic tensile extrusion and Hopkinson bars which are used to characterize the material properties at high strain rates are described. The level of strain and strain rate and their restrictions are very important in examining the efficiency of each of the devices. For instance, necking or bulging in tensile and compressive Hopkinson bars, fragmentation in dynamic tensile extrusion and petaling in Taylor test are restricting issues in the level of strain rate attainable in the devices.
Gait recognition under carrying condition: a static dynamic fusion method
NASA Astrophysics Data System (ADS)
Yu, Guan; Li, Chang-Tsun; Hu, Yongjian
2012-06-01
When an individual carries an object, such as a briefcase, conventional gait recognition algorithms based on average silhouette/Gait Energy Image (GEI) do not always perform well as the object carried may have the potential of being mistakenly regarded as a part of the human body. To solve such a problem, in this paper, instead of directly applying GEI to represent the gait information, we propose a novel dynamic feature template for classification. Based on this extracted dynamic information and some static feature templates (i.e., head part and trunk part), we cast gait recognition on the large USF (University of South Florida) database by adopting a static/dynamic fusion strategy. For the experiments involving carrying condition covariate, significant improvements are achieved when compared with other classic algorithms.
Augmented composite likelihood for copula modeling in family studies under biased sampling.
Zhong, Yujie; Cook, Richard J
2016-07-01
The heritability of chronic diseases can be effectively studied by examining the nature and extent of within-family associations in disease onset times. Families are typically accrued through a biased sampling scheme in which affected individuals are identified and sampled along with their relatives, who may provide right-censored or current status data on their disease onset times. We develop likelihood and composite likelihood methods for modeling the within-family association in these times through copula models in which dependencies are characterized by Kendall's τ. Auxiliary data from independent individuals are exploited by augmenting composite likelihoods to increase precision of marginal parameter estimates and consequently increase efficiency in dependence parameter estimation. An application to a motivating family study in psoriatic arthritis illustrates the method and provides some evidence of excessive paternal transmission of risk. PMID:26819481
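Copula models characterize dependence through Kendall's τ. As a small, generic illustration (using a Clayton copula, which is not necessarily the family used in the paper), the closed-form relation τ = θ/(θ+2) can be checked against an empirical τ computed from simulated pairs:

```python
import numpy as np

def clayton_sample(theta, n, seed=0):
    """Draw (U, V) pairs from a Clayton copula by conditional inversion."""
    rng = np.random.default_rng(seed)
    u, w = rng.random(n), rng.random(n)
    v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)
    return u, v

def kendall_tau(x, y):
    """Empirical Kendall's tau via an O(n^2) concordant/discordant pair count."""
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

theta = 2.0
tau_theory = theta / (theta + 2)  # Clayton: tau = theta / (theta + 2)
u, v = clayton_sample(theta, 2000)
tau_emp = kendall_tau(u, v)
```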
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Thermal dynamics of thermoelectric phenomena from frequency resolved methods
NASA Astrophysics Data System (ADS)
García-Cañadas, J.; Min, G.
2016-03-01
Understanding the dynamics of thermoelectric (TE) phenomena is important for the detailed knowledge of the operation of TE materials and devices. By analyzing the impedance response of both a single TE element and a TE device under suspended conditions, we provide new insights into the thermal dynamics of these systems. The analysis is performed employing parameters such as the thermal penetration depth, the characteristic thermal diffusion frequency and the thermal diffusion time. It is shown that in both systems the dynamics of the thermoelectric response is governed by how the Peltier heat production/absorption at the junctions evolves. In a single thermoelement, at high frequencies the thermal waves diffuse semi-infinitely from the junctions towards the half-length. When the frequency is reduced, the thermal waves can penetrate further and eventually reach the half-length where they start to cancel each other and further penetration is blocked. In the case of a TE module, semi-infinite thermal diffusion along the thickness of the ceramic layers occurs at the highest frequencies. As the frequency is decreased, heat storage in the ceramics becomes dominant and starts to compete with the diffusion of the thermal waves towards the half-length of the thermoelements. Finally, the cancellation of the waves occurs at the lowest frequencies. It is demonstrated that the analysis is able to identify and separate the different physical processes and to provide a detailed understanding of the dynamics of different thermoelectric effects.
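The frequency regimes described above are easy to quantify with the thermal penetration depth of a harmonic thermal wave, δ = sqrt(α/(πf)). The diffusivity and half-length below are assumed order-of-magnitude values, not the paper's:

```python
import math

alpha = 1.2e-6  # thermal diffusivity, m^2/s (assumed; typical of Bi2Te3)
L = 1.6e-3      # thermoelement half-length, m (assumed)

def penetration_depth(f):
    """Penetration depth of a thermal wave at frequency f: sqrt(alpha/(pi*f))."""
    return math.sqrt(alpha / (math.pi * f))

# characteristic thermal diffusion frequency: below roughly f_c, the waves
# launched at the two junctions reach the half-length and begin to cancel
f_c = alpha / (math.pi * L ** 2)
```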
Technologies and Truth Games: Research as a Dynamic Method
ERIC Educational Resources Information Center
Hassett, Dawnene D.
2010-01-01
This article offers a way of thinking about literacy instruction that critiques current reasoning, but also provides a space to dynamically think outside of prevalent practices. It presents a framework for both planning and studying literacy pedagogy that combines a practical everyday model of the reading process with Michel Foucault's (1988c)…
A Maximum Likelihood Approach to Correlational Outlier Identification.
ERIC Educational Resources Information Center
Bacon, Donald R.
1995-01-01
A maximum likelihood approach to correlational outlier identification is introduced and compared to the Mahalanobis D squared and Comrey D statistics through Monte Carlo simulation. Identification performance depends on the nature of correlational outliers and the measure used, but the maximum likelihood approach is the most robust performance…
A Survey of the Likelihood Approach to Bioequivalence Trials
Choi, Leena; Caffo, Brian; Rohde, Charles
2009-01-01
Bioequivalence trials are abbreviated clinical trials whereby a generic drug or new formulation is evaluated to determine if it is “equivalent” to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area - which we believe are indicative of the existence of the systemic defects in the frequentist approach - that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence. We discuss how the likelihood approach is useful to present the evidence for both average and population bioequivalence within a unified framework. We also examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods offer a viable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. PMID:18618422
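A profile likelihood is the full likelihood maximized over nuisance parameters at each value of the parameter of interest. A generic sketch (a normal model for log test/reference differences with σ² profiled out analytically; the data and the ±log 1.25 margin usage are illustrative, not the paper's):

```python
import numpy as np

def rel_profile_lik(mu, x):
    """Relative profile likelihood of a normal mean: sigma^2 is profiled out
    analytically (its MLE at fixed mu is the mean squared deviation)."""
    n = len(x)
    return (np.mean((x - np.mean(x)) ** 2) / np.mean((x - mu) ** 2)) ** (n / 2)

# synthetic log(test/reference) differences; +-0.223 = log(1.25) is the
# conventional bioequivalence margin (the data values here are made up)
x = np.array([0.05, -0.10, 0.12, 0.02, -0.04, 0.08, 0.01, -0.06])
support_at_margin = max(rel_profile_lik(0.223, x), rel_profile_lik(-0.223, x))
```

A small value of the relative profile likelihood at both margins represents strong evidence that the true mean difference lies inside the equivalence interval.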
The Dud-Alternative Effect in Likelihood Judgment
ERIC Educational Resources Information Center
Windschitl, Paul D.; Chambers, John R.
2004-01-01
The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged…
NASA Astrophysics Data System (ADS)
Suzuki, Yasumitsu; Watanabe, Kazuyuki; Abedi, Ali; Agostini, Federica; Min, Seung Kyu; Maitra, Neepa; Gross, E. K. U.
The exact factorization of the electron-nuclear wave function makes it possible to define the time-dependent potential energy surfaces (TDPESs) responsible for the nuclear dynamics and electron dynamics. Recently a novel coupled-trajectory mixed quantum-classical (CT-MQC) approach based on this TDPES has been developed, which accurately reproduces both nuclear and electron dynamics. Here we study the TDPES for laser-induced electron localization with a view to developing an MQC method for strong-field processes. We show our recent progress in applying the CT-MQC approach to systems with many degrees of freedom.
Computer program offers new method for constructing periodic orbits in nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Bennett, A. G.; Hanafy, L. M.; Palmore, J. I.
1968-01-01
Computer program uses an iterative method to construct precisely periodic orbits which dynamically approximate solutions that converge to precise dynamical solutions in the limit of the sequence. The method used is a modification of the generalized Newton-Raphson algorithm used in analyzing two point boundary problems.
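The generalized Newton-Raphson idea can be shown on a toy problem: locating a period-2 point of the logistic map as a root of F(x) = f²(x) − x, a one-dimensional stand-in for the program's two-point boundary problem:

```python
def logistic(x, r=3.2):
    return r * x * (1.0 - x)

def newton_periodic(x0, period=2, tol=1e-12, h=1e-7):
    """Newton-Raphson on F(x) = f^period(x) - x, with a central-difference
    derivative, to locate a periodic point of the logistic map."""
    def F(x):
        for _ in range(period):
            x = logistic(x)
        return x
    x = x0
    for _ in range(100):
        g = F(x) - x
        if abs(g) < tol:
            break
        dg = (F(x + h) - F(x - h)) / (2.0 * h) - 1.0
        x -= g / dg
    return x

x_star = newton_periodic(0.5)  # one point of the period-2 cycle at r = 3.2
```

Each Newton iterate is itself only approximately periodic, but the sequence of iterates converges to a precisely periodic point in the limit, mirroring the construction described in the abstract.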
ON THE LIKELIHOOD OF PLANET FORMATION IN CLOSE BINARIES
Jang-Condell, Hannah
2015-02-01
To date, several exoplanets have been discovered orbiting stars with close binary companions (a ≲ 30 AU). The fact that planets can form in these dynamically challenging environments implies that planet formation must be a robust process. The initial protoplanetary disks in these systems from which planets must form should be tidally truncated to radii of a few AU, which indicates that the efficiency of planet formation must be high. Here, we examine the truncation of circumstellar protoplanetary disks in close binary systems, studying how the likelihood of planet formation is affected over a range of disk parameters. If the semimajor axis of the binary is too small or its eccentricity is too high, the disk will have too little mass for planet formation to occur. However, we find that the stars in the binary systems known to have planets should have once hosted circumstellar disks that were capable of supporting planet formation despite their truncation. We present a way to characterize the feasibility of planet formation based on binary orbital parameters such as stellar mass, companion mass, eccentricity, and semimajor axis. Using this measure, we can quantify the robustness of planet formation in close binaries and better understand the overall efficiency of planet formation in general.
The Likelihood of Experiencing Relative Poverty over the Life Course
Rank, Mark R.; Hirschl, Thomas A.
2015-01-01
Research on poverty in the United States has largely consisted of examining cross-sectional levels of absolute poverty. In this analysis, we focus on understanding relative poverty within a life course context. Specifically, we analyze the likelihood of individuals falling below the 20th percentile and the 10th percentile of the income distribution between the ages of 25 and 60. A series of life tables are constructed using the nationally representative Panel Study of Income Dynamics data set. This includes panel data from 1968 through 2011. Results indicate that the prevalence of relative poverty is quite high. In particular, between the ages of 25 and 60, 61.8 percent of the population will experience a year below the 20th percentile, and 42.1 percent will experience a year below the 10th percentile. Characteristics associated with experiencing these levels of poverty include those who are younger, nonwhite, female, not married, with 12 years or less of education, or who have a work disability. PMID:26200781
A comparative study of computational methods in cosmic gas dynamics
NASA Technical Reports Server (NTRS)
Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.
1982-01-01
Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is, therefore, important for the astronomer who has to solve an astrophysical flow problem. The present investigation has the objective to provide an informational basis for such a selection by comparing a variety of numerical methods with the aid of a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.
Combining evidence using likelihood ratios in writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory
2013-01-01
Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets including synthetically generated ones and handwriting comparison shows greater flexibility of the proposed method.
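The situation the authors describe (many weak inputs matching few strong ones) follows directly from the naive product rule for independent likelihood ratios; the numbers below are contrived to make the two combined LRs nearly equal:

```python
import math

def combine_lrs(lrs, prior_odds=1.0):
    """Combine independent likelihood ratios on the log-odds scale."""
    log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in lrs)
    return math.exp(log_odds)

many_weak = combine_lrs([1.26] * 10)  # ten barely informative pieces
few_strong = combine_lrs([4.0, 2.5])  # two strong pieces; also about 10
```

Both combinations come out near 10, which is exactly why the abstract argues for confidence intervals, weights, or a discount function to distinguish the two cases.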
Dynamic analysis method of offshore jack-up platforms in regular and random waves
NASA Astrophysics Data System (ADS)
Yu, Hao; Li, Xiaoyu; Yang, Shuguang
2012-03-01
A jack-up platform, with its particular structure, shows pronounced dynamic characteristics under complex environmental loads in extreme conditions. In this paper, taking a simplified 3-D finite element dynamic model in extreme storm conditions as the research object, a transient dynamic analysis method was proposed for both regular and irregular wave loads. The steps of dynamic analysis under extreme conditions were illustrated with an applied case, and the dynamic amplification factor (DAF) was calculated for each response parameter: base shear, overturning moment and hull sway. Finally, the dynamic and static structural response results were compared and analyzed. The results indicated that static strength analysis of jack-up platforms is not sufficient under dynamic loads including wave and current; further dynamic response analysis, considering both computational efficiency and accuracy, is necessary.
Eliciting information from experts on the likelihood of rapid climate change.
Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil
2005-12-01
The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or through the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations. PMID:16506972
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.
2014-01-01
Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86μm were associated with positive LRs, i.e., LRs greater than 1; whereas RNFL thickness values higher than 86μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
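The modified Fagan nomogram mentioned above implements Bayes' theorem on the odds scale. A minimal sketch (the pretest probability and LR values are hypothetical, not from the study):

```python
def post_test_probability(pre_test_p, lr):
    """Bayes on the odds scale: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# hypothetical: 20% pre-test glaucoma probability, and an RNFL thickness
# well below 86 um that (per the abstract) corresponds to a positive LR of 5
p_post = post_test_probability(0.20, 5.0)  # about 0.56
```

An LR of exactly 1 leaves the probability unchanged, which is why the 86 μm crossover value in the abstract separates measurements that raise the probability of glaucoma from those that lower it.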
A method of measuring dynamic strain under electromagnetic forming conditions.
Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng
2016-04-01
Dynamic strain measurement is rather important for the characterization of mechanical behaviors in the electromagnetic forming process, but it has been hindered by high strain rates and serious electromagnetic interference for years. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (∼5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10^5 s^-1, which depends on the discharging conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading conditions (∼10^4 s^-1). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why the formability is much larger under electromagnetic forming conditions in comparison with conventional forging processes. PMID:27131683
Phase portrait methods for verifying fluid dynamic simulations
Stewart, H.B.
1989-01-01
As computing resources become more powerful and accessible, engineers more frequently face the difficult and challenging problem of accurately simulating nonlinear dynamic phenomena. Although mathematical models are usually available, in the form of initial value problems for differential equations, the behavior of the solutions of nonlinear models is often poorly understood. A notable example is fluid dynamics: while the Navier-Stokes equations are believed to correctly describe turbulent flow, no exact mathematical solution of these equations in the turbulent regime is known. Differential equations can of course be solved numerically, but how are we to assess numerical solutions of complex phenomena without some understanding of the mathematical problem and its solutions to guide us?
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they have become net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
Chicharro, Francisco I.; Cordero, Alicia; Torregrosa, Juan R.
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us the excellent schemes (or dreadful ones). PMID:24376386
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions, including cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Dynamic State Estimation Utilizing High Performance Computing Methods
Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw
2009-03-18
The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper presents an overview of the Kalman Filtering process and then focuses on the implementation of the prediction component on multiple processors.
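The prediction component of a linear Kalman filter, in its simplest one-dimensional form, can be sketched as follows (a generic textbook step, not the paper's parallel implementation):

```python
def kalman_predict(x, P, F, Q):
    """Prediction step of a (scalar) linear Kalman filter:
    state estimate x' = F*x and covariance P' = F*P*F + Q."""
    return F * x, F * P * F + Q

# Constant dynamics (F = 1): the state estimate carries over
# and the uncertainty grows by the process noise Q.
x_pred, P_pred = kalman_predict(x=2.0, P=1.0, F=1.0, Q=0.5)
print(x_pred, P_pred)  # 2.0 1.5
```

In the matrix case the same two lines become x' = Fx and P' = FPFᵀ + Q, and it is the repeated evaluation of these products that benefits from multiple processors.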
Electronically Nonadiabatic Dynamics via Semiclassical Initial Value Methods
Miller, William H.
2008-12-11
In the late 1970's Meyer and Miller (MM) [J. Chem. Phys. 70, 3214 (1979)] presented a classical Hamiltonian corresponding to a finite set of electronic states of a molecular system (i.e., the various potential energy surfaces and their couplings), so that classical trajectory simulations could be carried out treating the nuclear and electronic degrees of freedom (DOF) in an equivalent dynamical framework (i.e., by classical mechanics), thereby describing non-adiabatic dynamics in a more unified manner. Much later Stock and Thoss (ST) [Phys. Rev. Lett. 78, 578 (1997)] showed that the MM model is actually not a 'model', but rather a 'representation' of the nuclear-electronic system; i.e., were the MMST nuclear-electronic Hamiltonian taken as a Hamiltonian operator and used in the Schroedinger equation, the exact (quantum) nuclear-electronic dynamics would be obtained. In recent years various initial value representations (IVRs) of semiclassical (SC) theory have been used with the MMST Hamiltonian to describe electronically non-adiabatic processes. Of special interest is the fact that though the classical trajectories generated by the MMST Hamiltonian (and which are the 'input' for an SC-IVR treatment) are 'Ehrenfest trajectories', when they are used within the SC-IVR framework the nuclear motion emerges from regions of non-adiabaticity on one potential energy surface (PES) or another, and not on an average PES as in the traditional Ehrenfest model. Examples are presented to illustrate and (hopefully) illuminate this behavior.
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
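The core of such a procedure, numerical maximization of a log-likelihood, can be illustrated for the simplest one-parameter case, a binomial recovery rate (a toy stand-in for the paper's FORTRAN program; the band counts are invented):

```python
import math

def binom_loglik(f, banded, recovered):
    """Log-likelihood of recovery rate f for a one-age-class band-recovery
    model with a single recovery period (binomial sampling)."""
    return recovered * math.log(f) + (banded - recovered) * math.log(1.0 - f)

def mle_recovery_rate(banded, recovered, grid=10_000):
    """Numerical ML estimate by scanning a fine grid of candidate rates."""
    best_f, best_ll = None, -math.inf
    for i in range(1, grid):
        f = i / grid
        ll = binom_loglik(f, banded, recovered)
        if ll > best_ll:
            best_f, best_ll = f, ll
    return best_f

# 1000 banded birds, 120 recovered: the MLE sits at 120/1000 = 0.12.
print(mle_recovery_rate(1000, 120))  # 0.12
```

Real band-recovery models maximize over many survival and recovery parameters at once, so gradient-based optimizers replace the grid scan, but the principle is the same.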
Development and Evaluation of a Hybrid Dynamical-Statistical Downscaling Method
NASA Astrophysics Data System (ADS)
Walton, Daniel Burton
Regional climate change studies usually rely on downscaling of global climate model (GCM) output in order to resolve important fine-scale features and processes that govern local climate. Previous efforts have used one of two techniques: (1) dynamical downscaling, in which a regional climate model is forced at the boundaries by GCM output, or (2) statistical downscaling, which employs historical empirical relationships to go from coarse to fine resolution. Studies using these methods have been criticized because they either dynamically downscaled only a few GCMs, or used statistical downscaling on an ensemble of GCMs but missed important dynamical effects in the climate change signal. This study describes the development and evaluation of a hybrid dynamical-statistical downscaling method that utilizes aspects of both dynamical and statistical downscaling to address these concerns. The first step of the hybrid method is to use dynamical downscaling to understand the most important physical processes that contribute to the climate change signal in the region of interest. Then a statistical model is built based on the patterns and relationships identified from dynamical downscaling. This statistical model can be used to downscale an entire ensemble of GCMs quickly and efficiently. The hybrid method is first applied to a domain covering the Los Angeles region to generate projections of temperature change between the 2041-2060 and 1981-2000 periods for 32 CMIP5 GCMs. The hybrid method is also applied to a larger region covering all of California and the adjacent ocean. The hybrid method works well in both areas, primarily because a single feature, the land-sea contrast in the warming, controls the overwhelming majority of the spatial detail. Finally, the dynamically downscaled temperature change patterns are compared to those produced by two commonly-used statistical methods, BCSD and BCCA. Results show that dynamical downscaling recovers important spatial features that the
Li, Xiantao; Yang, Jerry Z.; E, Weinan
2010-05-20
We present a multiscale model for numerical simulations of the dynamics of crystalline solids. The method combines the continuum nonlinear elasto-dynamics model, which captures the stress waves and physical loading conditions, with the molecular dynamics model, which provides the nonlinear constitutive relation and resolves the atomic structures near local defects. The coupling of the two models is achieved through a general framework for multiscale modeling, the heterogeneous multiscale method (HMM). We derive an explicit coupling condition at the atomistic/continuum interface. Application to the dynamics of brittle cracks under various loading conditions is presented as a test example.
Application of lattice Boltzmann method for analysis of underwater vortex dynamics
NASA Astrophysics Data System (ADS)
Nuraiman, Dian; Viridi, Sparisoma; Purqon, Acep
2015-09-01
Vortex dynamics is one of the problems arising in fluid dynamics. Vortices are a major characteristic of turbulent flow. We perform the Lattice Boltzmann Method (LBM) with the Bhatnagar-Gross-Krook (BGK) approximation to analyze underwater vortex dynamics close to the shoreline. Additionally, the Smagorinsky turbulence model is applied to treat turbulent flow, and a special free-surface treatment is applied to handle the free surface. Furthermore, we investigate the effect of the turbulence factor and the seabed profile on vortex dynamics. The results show that a smaller turbulence factor leads to more turbulent flow, and that coral reefs reduce the movement of vortices towards the shoreline.
Davis, Mitchell A; Dunn, Andrew K
2015-06-29
Few methods exist that can accurately handle dynamic light scattering in the regime between single and highly multiple scattering. We demonstrate dynamic light scattering Monte Carlo (DLS-MC), a numerical method by which the electric field autocorrelation function may be calculated for arbitrary geometries if the optical properties and particle motion are known or assumed. DLS-MC requires no assumptions regarding the number of scattering events, the final form of the autocorrelation function, or the degree of correlation between scattering events. Furthermore, the method is capable of rapidly determining the effect of particle motion changes on the autocorrelation function in heterogeneous samples. We experimentally validated the method and demonstrated that the simulations match both the expected form and the experimental results. We also demonstrate the perturbation capabilities of the method by calculating the autocorrelation function of flow in a representation of mouse microvasculature and determining the sensitivity to flow changes as a function of depth. PMID:26191723
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
2010-01-01
Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible via the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations. PMID:21029410
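The extension of the ODE by one extra dimension can be sketched for a linear example, x' = f(x) = -a·x, where the density along a characteristic obeys ρ' = -ρ·∂f/∂x = a·ρ (a minimal illustration under these simplifying assumptions, not the paper's code):

```python
import math

def density_along_characteristic(x0, rho0, a, t_end, steps=10_000):
    """Integrate the ODE x' = -a*x together with one extra dimension for the
    probability density, which along the characteristic obeys rho' = a*rho
    (since d(rho)/dt = -rho * df/dx and f(x) = -a*x). Forward Euler."""
    dt = t_end / steps
    x, rho = x0, rho0
    for _ in range(steps):
        x, rho = x + dt * (-a * x), rho + dt * (a * rho)
    return x, rho

x, rho = density_along_characteristic(x0=1.0, rho0=1.0, a=0.5, t_end=2.0)
# Analytic solution at t = 2, a = 0.5: x = exp(-1), rho = exp(1).
print(x, rho)
```

The density value rides along as one extra state variable, which is why a single trajectory integration suffices, with no Monte Carlo sampling.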
On penalized likelihood estimation for a non-proportional hazards regression model.
Devarajan, Karthik; Ebrahimi, Nader
2013-07-01
In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times. PMID:24791034
Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model
ERIC Educational Resources Information Center
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.
2011-01-01
Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
ERIC Educational Resources Information Center
Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon
2011-01-01
The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…
Integrated likelihoods in parametric survival models for highly clustered censored data.
Cortese, Giuliana; Sartori, Nicola
2016-07-01
In studies that involve censored time-to-event data, stratification is frequently encountered due to different reasons, such as stratified sampling or model adjustment due to violation of model assumptions. Often, the main interest is not in the clustering variables, and the cluster-related parameters are treated as nuisance. When inference is about a parameter of interest in presence of many nuisance parameters, standard likelihood methods often perform very poorly and may lead to severe bias. This problem is particularly evident in models for clustered data with cluster-specific nuisance parameters, when the number of clusters is relatively high with respect to the within-cluster size. However, it is still unclear how the presence of censoring would affect this issue. We consider clustered failure time data with independent censoring, and propose frequentist inference based on an integrated likelihood. We then apply the proposed approach to a stratified Weibull model. Simulation studies show that appropriately defined integrated likelihoods provide very accurate inferential results in all circumstances, such as for highly clustered data or heavy censoring, even in extreme settings where standard likelihood procedures lead to strongly misleading results. We show that the proposed method performs generally as well as the frailty model, but it is superior when the frailty distribution is seriously misspecified. An application, which concerns treatments for a frequent disease in late-stage HIV-infected people, illustrates the proposed inferential method in Weibull regression models, and compares different inferential conclusions from alternative methods. PMID:26210670
Estimation of Maximum Likelihood of the Unextendable Dead Time Period in a Flow of Physical Events
NASA Astrophysics Data System (ADS)
Gortsev, A. M.; Solov'ev, A. A.
2016-03-01
A flow of physical events (photons, electrons, etc.) is studied. One mathematical model of such flows is the MAP flow of events. The flow is observed under conditions of an unextendable dead time period whose duration is unknown. The dead time period is estimated by the method of maximum likelihood from observations of the arrival instants of events.
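For the simpler special case of a Poisson flow (rather than the paper's MAP flow), each observed inter-arrival time is the dead time plus an exponential variate, and the maximum likelihood estimate of the dead time is the smallest observed gap; a sketch under that simplifying assumption:

```python
import random

def mle_dead_time(arrivals):
    """ML estimate of an unextendable dead time tau for a Poisson flow:
    each observed inter-arrival is tau + Exp(lambda), a shifted exponential,
    whose likelihood in tau is maximized at the smallest observed gap."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return min(gaps)

# Synthetic flow: true dead time 0.1, event rate 5.
random.seed(42)
tau, lam = 0.1, 5.0
t, arrivals = 0.0, [0.0]
for _ in range(2000):
    t += tau + random.expovariate(lam)
    arrivals.append(t)
print(mle_dead_time(arrivals))  # slightly above the true tau = 0.1
```

The estimator is biased upward by roughly 1/(λn), which shrinks as more events are observed; the MAP-flow case treated in the paper requires a full likelihood over the hidden Markov states.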
Survey of decentralized control methods. [for large scale dynamic systems
NASA Technical Reports Server (NTRS)
Athans, M.
1975-01-01
An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.
Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery
Luttman, A.
2012-03-30
The main goal (long term) of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis of the observed spatiotemporal evolution is performed using an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis combining observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the problem is computed. An application to oceanic flow based on sea surface temperature is presented.
Optimal control methods for controlling bacterial populations with persister dynamics
NASA Astrophysics Data System (ADS)
Cogan, N. G.
2016-06-01
Bacterial tolerance to antibiotics is a well-known phenomenon; however, only recent studies of bacterial biofilms have shown how multifaceted tolerance really is. By joining into a structured community and offering shared protection and gene transfer, bacterial populations can protect themselves genotypically, phenotypically and physically. In this study, we collect a line of research that focuses on phenotypic (or plastic) tolerance. The dynamics of persister formation are becoming better understood, even though major questions remain. The thrust of our results indicates that even without a detailed description of the biological mechanisms, theoretical studies can offer strategies that can eradicate bacterial populations with existing drugs.
Von Werder, Sylvie C F A; Kleiber, Tim; Disselhorst-Klug, Catherine
2015-01-01
The human motor system permits a wide variety of complex movements. Thereby, the inter-individual variability as well as the biomechanical aspects of the performed movement itself contribute to the challenge of the interpretation of sEMG signals in dynamic contractions. A procedure for the systematic analysis of sEMG recordings during dynamic contraction was introduced, which includes categorization of the data in combination with the analysis of frequency distributions of the sEMG with a probabilistic approach. Using the example of elbow flexion and extension the procedure was evaluated with 10 healthy subjects. The recorded sEMG signals of brachioradialis were categorized into a combination of constant and variable movement factors, which originate from the performed movement. Subsequently, for each combination of movement factors cumulative frequency distributions were computed for each subject separately. Finally, the probability of the difference of muscular activation in varying movement conditions was assessed. The probabilistic approach was compared to a deterministic analysis of the same data. Both approaches observed a significant change of muscular activation of brachioradialis during concentric and eccentric contractions exclusively for flexion and extension angles exceeding 30°. However, with the probabilistic approach additional information on the likelihood that the tested effect occurs can be provided. Especially for movements under uncontrollable boundary conditions, this information to assess the confidence of the detected results is of high relevance. Thus, the procedure provides new insights into the quantification and interpretation of muscular activity. PMID:25717304
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2004-01-28
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Free energy reconstruction from steered dynamics without post-processing
Athenes, Manuel; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
A general method for modeling population dynamics and its applications.
Shestopaloff, Yuri K
2013-12-01
Studying populations, be it a microbe colony or mankind, is important for understanding how complex systems evolve and exist. Such knowledge also often provides insights into evolution, history and different aspects of human life. By and large, populations' prosperity and decline is about the transformation of certain resources into the quantity and other characteristics of populations through growth, replication, expansion and acquisition of resources. We introduce a general model of population change, applicable to different types of populations, which interconnects numerous factors influencing population dynamics, such as nutrient influx and nutrient consumption, reproduction period, reproduction rate, etc. It is also possible to take into account specific growth features of individual organisms. We considered two recently discovered distinct growth scenarios: first, when organisms do not change their grown mass regardless of nutrient availability, and second, when organisms can reduce their grown mass severalfold in a nutritionally poor environment. We found that nutrient supply and reproduction period are the two major factors influencing the shape of population growth curves. There is also a difference in population dynamics between these two groups. Organisms belonging to the second group are significantly more adaptive to reduction of nutrients and far more resistant to extinction. Such organisms also have substantially more frequent, lower-amplitude fluctuations of population quantity for the same periodic nutrient supply (compared to the first group). The proposed model adequately describes virtually any possible growth scenario, including complex ones with periodic and irregular nutrient supply and other changing parameters, which present approaches cannot do. PMID:24057917
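A toy resource-driven growth model in this spirit, with Monod-type nutrient uptake and illustrative parameter values (not the author's actual equations), can be written as:

```python
def simulate_population(n0, s0, influx, r=1.0, k=1.0, d=0.2, c=0.5,
                        dt=0.01, steps=5000):
    """Euler integration of a toy nutrient-driven population model:
    dN/dt = r*u*N - d*N,  dS/dt = influx - c*u*N,  with uptake u = S/(k+S).
    Growth converts nutrient into population; death removes population."""
    n, s = n0, s0
    for _ in range(steps):
        u = s / (k + s)
        n, s = n + dt * (r * u * n - d * n), max(s + dt * (influx - c * u * n), 0.0)
    return n, s

n_fed, _ = simulate_population(n0=0.1, s0=1.0, influx=0.5)
n_starved, _ = simulate_population(n0=0.1, s0=1.0, influx=0.0)
print(n_fed > n_starved)  # True: sustained nutrient influx supports a larger population
```

With a constant influx the population settles near the level at which uptake balances influx; with no influx it grows briefly on the initial nutrient stock and then decays, mirroring the prosperity-and-decline dynamics the abstract describes.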
Modelling forest dynamics: a perspective from point process methods.
Comas, Carlos; Mateu, Jorge
2007-04-01
This paper reviews the main applications of (marked) point process theory in forestry, including functions to analyse spatial variability and the main (marked) point process models. Although correlation functions do describe spatial variability over a distinct range of scales, they are clearly restricted to the analysis of a few dominant species, since they are based on pairwise analysis. This has over-simplified the spatial analysis of complex forest dynamics involving large numbers of species. Moreover, although process models can reproduce, to some extent, real forest spatial patterns of trees, the biological forest-ecological interpretation of the resulting spatial structures is difficult, since these models usually lack biological realism. This problem is compounded because most of these point process models are defined in terms of purely spatial relationships, though in real life forests develop through time. We thus aim to discuss the applicability of such formulations for analysing and simulating "real" forest dynamics, and to highlight their shortcomings. We present a unified approach to modern spatially explicit forest growth models. Finally, we focus on a continuous space-time stochastic process as an alternative approach to generate marked point patterns evolving through space and time. PMID:17476943
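The homogeneous Poisson process, the usual null model against which forest spatial patterns are tested, is easy to simulate (a generic illustration, not tied to the paper):

```python
import math
import random

def poisson_point_pattern(intensity, width=1.0, height=1.0, rng=random):
    """Simulate a homogeneous Poisson point process on a rectangle:
    a Poisson-distributed number of points, placed uniformly at random."""
    mean = intensity * width * height
    # Knuth's multiplicative method for sampling Poisson(mean).
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        k += 1
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(k)]

random.seed(7)
pts = poisson_point_pattern(intensity=100)
print(len(pts))  # a Poisson(100) draw: close to 100 points in the unit square
```

Summary statistics such as Ripley's K or the pair correlation function are then compared between an observed tree map and many such simulated null patterns to detect clustering or inhibition.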
Dynamical Monte Carlo methods for plasma-surface reactions
NASA Astrophysics Data System (ADS)
Guerra, Vasco; Marinov, Daniil
2016-08-01
Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a 'hybrid' variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries.
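A minimal n-fold way/BKL-type kinetic Monte Carlo loop, shown here for a toy two-species desorption problem (not the authors' surface chemistry), illustrates the event-class selection and exponential waiting times:

```python
import random

def kmc_surface(rates, counts, t_end, rng=random):
    """Minimal n-fold way (BKL/Gillespie-type) kinetic Monte Carlo.
    Each event class i has counts[i] sites with per-site rate rates[i];
    one event is chosen per step with probability proportional to its
    total propensity, and time advances by an exponential waiting time."""
    counts = list(counts)
    t = 0.0
    while True:
        propensities = [c * r for c, r in zip(counts, rates)]
        total = sum(propensities)
        if total == 0.0:
            break  # nothing left to react
        dt = rng.expovariate(total)
        if t + dt > t_end:
            break
        t += dt
        u = rng.random() * total  # pick an event class by propensity
        acc = 0.0
        for i, a in enumerate(propensities):
            acc += a
            if u <= acc:
                counts[i] -= 1  # that site desorbs/recombines
                break
    return counts

random.seed(1)
# Two adsorbed species with very different desorption rates.
final = kmc_surface(rates=[1.0, 0.1], counts=[100, 100], t_end=5.0)
print(final)  # the fast class is depleted well before the slow one
```

Unlike a null-event scheme, every iteration here executes a real event, which is what makes the n-fold way efficient when many sites are inactive.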
One testing method of dynamic linearity of an accelerometer
NASA Astrophysics Data System (ADS)
Lei, Jing-Yu; Guo, Wei-Guo; Tan, Xue-Ming; Shi, Yun-Bo
2015-09-01
To effectively test the dynamic linearity of an accelerometer over a wide range of 10^4 g to about 20 × 10^4 g, one published patent technology is first experimentally verified and analysed, and its deficiencies are presented. Then, based on the theory of stress wave propagation in a thin long bar, the relation between the strain signal and the corresponding acceleration signal is obtained, and a special linkage of two coaxial projectiles is developed. These two coaxial metal cylinders (an inner cylinder and a circular tube) are used as projectiles; to prevent mutual slip inside the gun barrel during movement, one end of the two projectiles is fastened by small screws. A Ti-6Al-4V bar with a diameter of 30 mm is used to propagate the loading stress pulse. The resultant compression wave is measured by strain gauges on the bar, and a half-sine strain pulse is obtained. The accelerometer under test is attached to the other end of the bar by a vacuum clamp. In this clamp, the accelerometer bears only the compression wave; the reflected tension pulse detaches the accelerometer from the bar. Using this system, dynamic linearity measurements of accelerometers can easily be made over a wider range of acceleration values, and actual measurement results are presented.
A review of action estimation methods for galactic dynamics
NASA Astrophysics Data System (ADS)
Sanders, Jason L.; Binney, James
2016-04-01
We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.
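For a bound one-dimensional orbit the action is J = (1/2π)∮p dq, which for a libration reduces to (1/π)∫√(2(E − V)) dx between the turning points (unit mass). A minimal quadrature sketch, checked against the harmonic oscillator where J = E/ω; the full axisymmetric and triaxial machinery of the paper is of course far richer:

```python
import math

def action_1d(E, V, x_minus, x_plus, n=20000):
    """Action J = (1/2pi) closed-integral p dq for a 1-D bound orbit,
    evaluated as (1/pi) * integral of sqrt(2(E - V(x))) between the
    turning points (midpoint rule, unit mass)."""
    h = (x_plus - x_minus) / n
    s = 0.0
    for i in range(n):
        x = x_minus + (i + 0.5) * h
        s += math.sqrt(max(0.0, 2.0 * (E - V(x))))
    return s * h / math.pi

# harmonic oscillator check: V = 0.5 * omega^2 * x^2 gives J = E / omega
omega, E = 2.0, 3.0
x_turn = math.sqrt(2.0 * E) / omega
J = action_1d(E, lambda x: 0.5 * omega ** 2 * x ** 2, -x_turn, x_turn)
# J should be very close to E / omega = 1.5
```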
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander; Herrera, Claudia; Spivey, Natalie; Fladung, William; Cloutier, David
2015-01-01
This presentation describes the DIM method and how it measures the inertia properties of an object by analyzing the frequency response functions measured during a ground vibration test (GVT). The DIM method has been in development at the University of Cincinnati and has shown success on a variety of small scale test articles. The NASA AFRC version was modified for larger applications.
Method and apparatus for dynamic focusing of ultrasound energy
Candy, James V.
2002-01-01
The method and system disclosed herein noninvasively detect, separate, and destroy multiple masses (tumors, cysts, etc.) in tissue (e.g., breast tissue) through a plurality of iterations. The method and system may open new frontiers, with the implication of noninvasive treatment of masses in the biomedical area along with the expanding technology of acoustic surgery.
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1986-01-01
The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary-layer-control-device vortex generators. A method for dynamic probe selection was developed. The validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is shown that the number of dynamic probes can be reduced to as few as two while retaining good accuracy.
Piecewise-parabolic methods for astrophysical fluid dynamics
Woodward, P.R.
1983-11-01
A general description of some modern numerical techniques for the simulation of astrophysical fluid flow is presented. The methods are introduced with a thorough discussion of the especially simple case of advection. Attention is focused on the piecewise-parabolic method (PPM). A description of the SLIC method for treating multifluid problems is also given. The discussion is illustrated by a number of advection and hydrodynamics test problems. Finally, a study of Kelvin-Helmholtz instability of supersonic jets using PPM with SLIC fluid interfaces is presented.
Maximum-likelihood estimation of recent shared ancestry (ERSA)
Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W. Scott; Zhang, Yuhua; Tuohy, Therese M.; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L.; Woodward, Scott R.; Jorde, Lynn B.
2011-01-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package. PMID:21324875
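The flavour of the likelihood can be conveyed with a heavily simplified model: for relatives separated by d meioses, shared IBD segment lengths are roughly exponential with mean 100/d cM, so the most likely d maximises the exponential log-likelihood of the observed lengths. The segment data below are hypothetical, and the real ERSA likelihood also uses segment counts and a population-background term:

```python
import math

def log_likelihood(segments_cM, d):
    """Toy log-likelihood for relatives separated by d meioses:
    shared IBD segment lengths modelled as exponential with
    mean 100/d cM (a standard simplification)."""
    rate = d / 100.0
    return sum(math.log(rate) - rate * s for s in segments_cM)

segments = [42.0, 35.5, 28.0, 51.2]   # hypothetical shared segments, cM
best_d = max(range(1, 13), key=lambda d: log_likelihood(segments, d))
# for these lengths the most likely separation is d = 3 meioses
```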
Maximum likelihood estimation of shear wave speed in transient elastography.
Audière, Stéphane; Angelini, Elsa D; Sandrin, Laurent; Charbit, Maurice
2014-06-01
Ultrasonic transient elastography (TE) enables assessment, under active mechanical constraints, of the elasticity of the liver, which correlates with hepatic fibrosis stages. This technique is routinely used in clinical practice to assess liver stiffness noninvasively. The Fibroscan system used in this work generates a shear wave via an impulse stress applied on the surface of the skin and records a temporal series of radio-frequency (RF) lines using a single-element ultrasound probe. A shear wave propagation map (SWPM) is generated as a 2-D map of the displacements along depth and time, derived from the correlations of the sequential 1-D RF lines, assuming that the direction of propagation (DOP) of the shear wave coincides with the ultrasound beam axis (UBA). Under the assumption of purely elastic tissue, elasticity is proportional to the square of the shear wave speed. This paper introduces a novel approach to the processing of the SWPM, deriving the maximum-likelihood estimate of the shear wave speed by comparing the observed displacements with the estimates provided by the Green's functions. A simple parametric model is used to fit the Green's-function theoretical values to the noisy measures provided by the SWPM, taking into account depth-varying attenuation and time delay. The proposed method was evaluated on numerical simulations using a finite element method simulator and on physical phantoms. Evaluation on this test database reported very high agreement of shear wave speed measures when the DOP and UBA coincide. PMID:24835213
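Under the pure-elasticity assumption the Young's modulus is E = 3ρc², so everything hinges on estimating the speed c from the displacement map. A toy sketch that reads c off as the least-squares slope of depth versus arrival time (the paper's actual estimator is the Green's-function maximum-likelihood fit; the numbers below are illustrative):

```python
def shear_wave_speed(depths_m, arrival_times_s):
    """Toy speed estimate: least-squares slope of depth vs arrival time."""
    n = len(depths_m)
    mt = sum(arrival_times_s) / n
    mz = sum(depths_m) / n
    num = sum((t - mt) * (z - mz)
              for t, z in zip(arrival_times_s, depths_m))
    den = sum((t - mt) ** 2 for t in arrival_times_s)
    return num / den

depths = [0.02, 0.03, 0.04, 0.05]      # m
times = [0.010, 0.015, 0.020, 0.025]   # s, i.e. 2 m/s, plausible for liver
c = shear_wave_speed(depths, times)
mu = 1000.0 * c ** 2   # shear modulus, Pa, assuming density ~1000 kg/m^3
E = 3.0 * mu           # Young's modulus for incompressible elastic tissue
```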
Adaptive methods for nonlinear structural dynamics and crashworthiness analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1993-01-01
The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
NASA Technical Reports Server (NTRS)
1973-01-01
A study has been made of possible ways to improve the performance of the Langley Research Center's Transonic Dynamics Tunnel (TDT). The major effort was directed toward obtaining increased dynamic pressure in the Mach number range from 0.8 to 1.2, but methods to increase the Mach number capability were also considered. Methods studied for increasing the dynamic pressure capability were higher total pressure, auxiliary suction, reduced circuit losses, reduced test medium temperature, a smaller test section, and a higher-molecular-weight test medium. Methods investigated for increasing the Mach number were nozzle block inserts, a variable geometry nozzle, changes in the test section wall configuration, and auxiliary suction.
Dynamically balanced fuel nozzle and method of operation
Richards, George A.; Janus, Michael C.; Robey, Edward H.
2000-01-01
An apparatus and method of operation designed to reduce undesirably high pressure oscillations in lean premix combustion systems burning hydrocarbon fuels are provided. Natural combustion and nozzle acoustics are employed to generate multiple fuel pockets which, when burned in the combustor, counteract the oscillations caused by variations in heat release in the combustor. A hybrid of active and passive control techniques, the apparatus and method eliminate combustion oscillations over a wide operating range, without the use of moving parts or electronics.
Out-of-atlas likelihood estimation using multi-atlas segmentation
Asman, Andrew J.; Chambless, Lola B.; Thompson, Reid C.; Landman, Bennett A.
2013-01-01
Purpose: Multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications. However, it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects (i.e., multi-atlas segmentation is limited to “in-atlas” applications). Herein, the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand, and, therefore, identify anomalous regions that are not well represented within the atlases. Methods: The authors derive a technique to estimate the out-of-atlas (OOA) likelihood for every voxel in the target image. These estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image. Results: Using a collection of manually labeled whole-brain datasets, the authors demonstrate the efficacy of the proposed framework on two distinct applications. First, the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain—an aggressive class of central nervous system neoplasms. Second, the authors demonstrate how this OOA likelihood estimation process can be used within a quality control context for diffusion tensor imaging datasets to detect large-scale imaging artifacts (e.g., aliasing and image shading). Conclusions: The proposed OOA likelihood estimation framework shows great promise for robust and rapid identification of brain abnormalities and imaging artifacts using only weak dependencies on anomaly morphometry and appearance. The authors envision that this approach would allow for application-specific algorithms to focus directly on regions of high OOA likelihood, which would (1) reduce the need for human intervention, and (2) reduce the propensity for false positives. Using the dual perspective, this technique would allow for algorithms to focus on
Speech processing using maximum likelihood continuity mapping
Hogden, John E.
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Low-complexity approximations to maximum likelihood MPSK modulation classification
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2004-01-01
We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
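In the coherent case, the exact classifier averages the Gaussian likelihood over each hypothesis's constellation points and picks the larger sum. A toy sketch deciding between BPSK and QPSK with known noise variance (this is the full-complexity classifier, not the paper's reduced approximation):

```python
import cmath
import math
import random

def avg_loglik(samples, M, sigma2):
    """Log-likelihood of received samples under an M-PSK hypothesis on a
    complex AWGN channel (coherent reception, equiprobable symbols):
    each sample's likelihood averages the Gaussian density over the
    M constellation points."""
    total = 0.0
    for r in samples:
        like = 0.0
        for k in range(M):
            s = cmath.exp(2j * math.pi * k / M)
            like += math.exp(-abs(r - s) ** 2 / (2.0 * sigma2)) / M
        total += math.log(like)
    return total

rng = random.Random(0)
sigma2 = 0.05
# transmit 400 QPSK (M = 4) symbols over AWGN
rx = []
for _ in range(400):
    s = cmath.exp(2j * math.pi * rng.randrange(4) / 4)
    rx.append(s + complex(rng.gauss(0.0, math.sqrt(sigma2)),
                          rng.gauss(0.0, math.sqrt(sigma2))))
# classify: QPSK vs BPSK hypothesis
decision = 4 if avg_loglik(rx, 4, sigma2) > avg_loglik(rx, 2, sigma2) else 2
```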
NASA Astrophysics Data System (ADS)
Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo
2014-03-01
Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit
Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method
NASA Astrophysics Data System (ADS)
Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko
2016-09-01
Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation, in order to examine the requirements on clinical thermometer dynamic properties in standards and recommendations. Thermistors were used as temperature sensors, transient temperature measurements were performed in water and air, and the measurement data were processed to investigate the thermometer dynamic properties. The thermometers were mathematically modelled, and a Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that the dynamic uncertainty is much larger than the steady-state uncertainty. The results of the dynamic uncertainty analysis were applied to an example of clinical measurements and compared to current requirements in the ISO standard for clinical thermometers. It can be concluded that there is no need for dynamic evaluation of clinical thermometers for continuous measurement, as long as the dynamic measurement uncertainty remains within the target uncertainty. In the case of intermittent predictive thermometers, however, the thermometer dynamic properties have a significant impact on the measurement result. Estimation of dynamic uncertainty is thus crucial for the assurance of traceable and comparable measurements.
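The Monte Carlo idea can be illustrated with a first-order thermometer model: propagate an uncertain time constant (plus static sensor noise) through the step response and read the spread off the sample distribution. All numbers below are illustrative assumptions, not the study's calibration data:

```python
import math
import random

def indicated_temp(t, T0, T_body, tau):
    """First-order thermometer model: step response from initial
    temperature T0 into a medium at T_body with time constant tau."""
    return T_body + (T0 - T_body) * math.exp(-t / tau)

rng = random.Random(42)
T0, T_body, t_read = 25.0, 37.0, 30.0   # degC, degC, reading taken at 30 s
samples = []
for _ in range(20000):
    tau = rng.gauss(10.0, 1.0)          # uncertain dynamic property: 10 +/- 1 s
    noise = rng.gauss(0.0, 0.02)        # static (steady-state) noise, degC
    samples.append(indicated_temp(t_read, T0, T_body, tau) + noise)
mean = sum(samples) / len(samples)
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
# the spread is dominated by the dynamic term, not the static noise
```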
NASA Technical Reports Server (NTRS)
Weaver, D. L.
1982-01-01
Theoretical methods and solutions of the dynamics of protein folding, protein aggregation, protein structure, and the origin of life are discussed. The elements of a dynamic model representing the initial stages of protein folding are presented. The calculation and experimental determination of the model parameters are discussed. The use of computer simulation for modeling protein folding is considered.
A notion of graph likelihood and an infinite monkey theorem
NASA Astrophysics Data System (ADS)
Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone
2014-01-01
We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant and closed formulas for some infinite classes. We have to leave the computational complexity of the likelihood as an open problem.
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
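The core estimation step can be caricatured as fitting F = m·a under Gaussian force-sensor noise, for which the maximum-likelihood mass is a least-squares ratio. A toy sketch with hypothetical numbers; the real G-SAMPLE estimator uses a full spacecraft dynamics model and thruster profiles:

```python
import random

def ml_mass(accels, forces):
    """Under Gaussian force-sensor noise, the maximum-likelihood mass
    for F = m*a is the least-squares solution m = sum(a*F)/sum(a*a)."""
    return (sum(a * f for a, f in zip(accels, forces))
            / sum(a * a for a in accels))

rng = random.Random(7)
true_mass = 1.0                                       # kg, e.g. ~1000 g of sample
accels = [rng.uniform(0.5, 2.0) for _ in range(500)]  # thruster-induced, m/s^2
forces = [true_mass * a + rng.gauss(0.0, 0.1)         # noisy sensor readings, N
          for a in accels]
m_hat = ml_mass(accels, forces)
```

With 500 firings the estimate lands within a few grams of the true mass, which mirrors the paper's point that more data tightens the confidence interval.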
Dynamic multiplexed analysis method using ion mobility spectrometer
Belov, Mikhail E
2010-05-18
A method for multiplexed analysis using ion mobility spectrometer in which the effectiveness and efficiency of the multiplexed method is optimized by automatically adjusting rates of passage of analyte materials through an IMS drift tube during operation of the system. This automatic adjustment is performed by the IMS instrument itself after determining the appropriate levels of adjustment according to the method of the present invention. In one example, the adjustment of the rates of passage for these materials is determined by quantifying the total number of analyte molecules delivered to the ion trap in a preselected period of time, comparing this number to the charge capacity of the ion trap, selecting a gate opening sequence; and implementing the selected gate opening sequence to obtain a preselected rate of analytes within said IMS drift tube.
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.; Næss, S. K.; Seljebotn, D. S.; Górski, K. M.; Huey, G.; Jewell, J. B.; Rocha, G.; Wehus, I. K.
2013-11-10
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations on cosmological parameters in the transition region is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
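The factorization at the heart of both applications, the joint density of a conditionally sequential set written purely in terms of uni- and bivariate marginals, can be checked numerically on a Gaussian Markov chain (a stand-in for the banded CMB likelihood; the numbers are arbitrary):

```python
import math

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def biv_pdf(x, y, rho):
    """Standard bivariate normal density with correlation rho."""
    det = 1.0 - rho ** 2
    q = (x * x - 2.0 * rho * x * y + y * y) / det
    return math.exp(-q / 2.0) / (2.0 * math.pi * math.sqrt(det))

rho = 0.6
xs = [0.3, -0.5, 1.1, 0.2]  # a stationary first-order Markov (AR(1)) chain

# conditional (chain) form: p(x1) * prod_i p(x_i | x_{i-1})
chain = norm_pdf(xs[0], 0.0, 1.0)
for a, b in zip(xs, xs[1:]):
    chain *= norm_pdf(b, rho * a, 1.0 - rho ** 2)

# marginal form: product of bivariate marginals over consecutive pairs,
# divided by the univariate marginals of the interior variables
marg = 1.0
for a, b in zip(xs, xs[1:]):
    marg *= biv_pdf(a, b, rho)
for x in xs[1:-1]:
    marg /= norm_pdf(x, 0.0, 1.0)
```

The two expressions agree to machine precision, which is exactly the identity the paper exploits for the banded likelihood.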
Performing dynamic time history analyses by extension of the response spectrum method
Hulbert, G.M.
1983-01-01
A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.
Delayed feedback control method for dynamical systems with chaotic saddles
NASA Astrophysics Data System (ADS)
Kobayashi, Miki U.; Aihara, Kazuyuki
2012-08-01
We consider systems whose orbits diverge after a chaotic transient of finite duration, and propose a control method for preventing the divergence. These systems generally possess not chaotic attractors but chaotic saddles. Our control aim, the prevention of divergence, is achieved through the stabilization of unstable periodic orbits embedded in the chaotic saddle by making use of the delayed feedback control method. The key concept of our control strategy is the application of the Proper Interior Maximum (PIM) triple method and the method to detect unstable periodic orbits from time series, originally developed by Lathrop and Kostelich, as initial steps before adding the delayed feedback control input. We show that our control method can be applied to the Hénon map and an intermittent androgen suppression (IAS) therapy model, which is a model for the therapy of advanced prostate cancer. The fact that our method can be applied to the IAS therapy model indicates that our control strategy may be useful in the therapy of advanced prostate cancer.
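Delayed feedback control is easy to demonstrate on a one-dimensional map. The sketch below uses the chaotic logistic map as a simpler stand-in for the paper's Hénon map: the input u_n = K(x_{n-1} − x_n) vanishes on the target period-1 orbit and is switched on only near the unstable fixed point, so the chaotic transient first carries the orbit close to the target. The parameter values are assumptions chosen from the linearised stability condition:

```python
import cmath

def logistic_dfc(r=3.8, K=-0.6, steps=6000, window=0.08):
    """Delayed feedback control on the logistic map: u_n = K*(x_{n-1} - x_n),
    activated only inside a small window around the unstable fixed point
    x* = 1 - 1/r (control force is zero on the target orbit itself)."""
    xstar = 1.0 - 1.0 / r
    x_prev, x = 0.3, 0.4
    for _ in range(steps):
        near = abs(x - xstar) < window and abs(x_prev - xstar) < window
        u = K * (x_prev - x) if near else 0.0
        x_prev, x = x, r * x * (1.0 - x) + u
    return x, xstar

x_final, xstar = logistic_dfc()

# linearised controlled dynamics: delta_{n+1} = (lam - K) delta_n + K delta_{n-1},
# with lam = 2 - r the multiplier of the uncontrolled fixed point;
# both roots of z^2 - (lam - K) z - K = 0 must lie inside the unit circle
lam, K = 2.0 - 3.8, -0.6
disc = cmath.sqrt((lam - K) ** 2 + 4.0 * K)
radius = max(abs(((lam - K) + s * disc) / 2.0) for s in (1, -1))
# radius = sqrt(0.6) < 1 here: the delayed feedback stabilises x*
```

The uncontrolled multiplier is λ = −1.8 (unstable); the Jury conditions give stabilising gains K in (−1, −0.4), the same kind of calculation one would do for the Hénon fixed point.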
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2011-11-15
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.
Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori
2016-07-10
A Shack-Hartmann wavefront sensor (SHWFS), which consists of a microlens array and an image sensor, has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range depending on the diameter of each microlens, and the dynamic range cannot be easily expanded without a decrease in spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. By the proposed method, a wavefront can be correctly measured even if a spot moves beyond its detection area. The adaptive spot search method is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method enables expanding the dynamic range of an SHWFS with a single shot and short processing time. The performance of the proposed method is compared with that of a conventional SHWFS by optical experiments. Furthermore, the dynamic range of the proposed method is quantitatively evaluated by numerical simulations. PMID:27409319
Random dynamic load identification based on error analysis and weighted total least squares method
NASA Astrophysics Data System (ADS)
Jia, You; Yang, Zhichun; Guo, Ning; Wang, Le
2015-12-01
In most cases, random dynamic load identification problems in structural dynamics are ill-posed. A common approach is to reformulate them into well-posed problems by numerical regularization methods. In a previous paper by the authors, a random dynamic load identification model was built, and a weighted regularization approach based on the proper orthogonal decomposition (POD) was proposed to identify the random dynamic loads. In this paper, the upper bound of the relative load identification error in the frequency domain is derived, and the selection condition and specific form of the weighting matrix are proposed and validated analytically and experimentally. In order to improve the accuracy of random dynamic load identification, a weighted total least squares method is proposed to reduce the impact of these errors. To further validate the feasibility and effectiveness of the proposed method, comparative studies of the proposed method and other methods are conducted experimentally. The experimental results demonstrate that the weighted total least squares method is more effective than the other methods for random dynamic load identification.
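The idea behind total least squares, accounting for noise in the regressor as well as the response, can be shown in the simplest setting: fitting y ≈ a·x through the origin, where the TLS slope comes from the smallest right singular vector of the data matrix [x y] and has a closed form. A toy errors-in-variables sketch, not the paper's weighted structural-dynamics formulation:

```python
import math
import random

def tls_slope(x, y):
    """Total least squares fit of y ~ a*x through the origin, treating
    both coordinates as noisy: a follows from the smallest right
    singular vector of [x y], which reduces to this closed form."""
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(u * v for u, v in zip(x, y))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)

rng = random.Random(3)
a_true = 2.0
xs, ys = [], []
for _ in range(2000):
    t = rng.uniform(0.0, 1.0)
    xs.append(t + rng.gauss(0.0, 0.05))          # the regressor is noisy too
    ys.append(a_true * t + rng.gauss(0.0, 0.05))
a_tls = tls_slope(xs, ys)
# ordinary least squares would be biased low here (attenuation by noise in x);
# TLS with equal noise variances recovers the true slope
```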
Introduction to the quantum trajectory method and to Fermi molecular dynamics
NASA Astrophysics Data System (ADS)
La Gattuta, K. J.
2003-06-01
The quantum trajectory method (QTM) will be introduced, and an approximation to the QTM known as Fermi molecular dynamics (FMD) will be described. Results of simulations based on FMD will be mentioned for specific nonequilibrium systems dominated by Coulomb interactions.
NASA Astrophysics Data System (ADS)
Kato, Tomohiro; Hasegawa, Mikio
Chaotic dynamics has been shown to be effective in improving the performance of combinatorial optimization algorithms. In this paper, the performance of chaotic dynamics in the asymmetric traveling salesman problem (ATSP) is investigated by introducing three types of heuristic solution update methods. Numerical simulations comparing its performance with simulated annealing and tabu search demonstrate the effectiveness of using chaotic dynamics to drive heuristic methods. The chaotic method is also evaluated on a real-world combinatorial optimization problem that can be solved by the same heuristic operation as the ATSP: the DNA fragment assembly problem, which involves building a DNA sequence from several hundred fragments obtained by a genome sequencer. Our simulation results show that the proposed algorithm using chaotic dynamics in a block shift operation exhibits the best performance for the DNA fragment assembly problem.
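The acceptance idea can be caricatured as follows. This is a toy stand-in (a logistic map occasionally accepting worsening 2-opt exchanges on a symmetric TSP instance), not the paper's chaotic neural network or its three ATSP update methods.

```python
import numpy as np

def chaotic_two_opt(dist, n_steps=500, seed=7):
    # Chaos-driven local search sketch: a logistic map x -> 4x(1-x)
    # occasionally accepts worsening 2-opt exchanges, giving the
    # non-monotonic escape dynamics that plain descent lacks.
    rng = np.random.default_rng(seed)
    n = len(dist)
    tour = list(rng.permutation(n))

    def length(t):
        return sum(dist[t[i], t[(i + 1) % n]] for i in range(n))

    cur, x = length(tour), 0.3
    for _ in range(n_steps):
        i, j = sorted(rng.choice(n, size=2, replace=False))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        gain = cur - length(cand)
        x = 4.0 * x * (1.0 - x)          # chaotic state update
        if gain > 0 or x > 0.95:         # chaotic uphill acceptance
            tour, cur = cand, cur - gain
    return tour, cur
```

The threshold 0.95 and the single chaotic variable are arbitrary choices for the sketch; the cited work couples one chaotic neuron per move type.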
Maximum likelihood positioning and energy correction for scintillation detectors
NASA Astrophysics Data System (ADS)
Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten
2016-02-01
An algorithm for determining the crystal pixel and the gamma-ray energy with scintillation detectors for PET is presented. The algorithm uses likelihood maximisation (ML) and is therefore inherently robust to missing data caused by defective or paralysed photodetector pixels. We tested the algorithm on a highly integrated, MRI-compatible small-animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital silicon photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and a length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with center-of-gravity (CoG) based positioning methods for different scintillation light trigger thresholds and different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG-based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that achieved the highest energy resolution, count rate performance, and spatial resolution at the same time.
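The robustness to dead pixels follows naturally from a Poisson likelihood in which defective pixels are simply excluded. The following is a minimal sketch of ML crystal identification in that spirit; the template matrix, the crude energy estimate, and the masking convention are assumptions for illustration, and the actual detector model is more elaborate.

```python
import numpy as np

def ml_position(counts, templates, alive=None):
    # Choose the crystal whose expected light pattern maximises the
    # Poisson log-likelihood of the observed photo-detector counts.
    # Dead or paralysed pixels are handled by dropping them from the
    # likelihood -- the robustness to missing data mentioned above.
    # `templates` holds one normalised light pattern per crystal.
    if alive is None:
        alive = np.ones(counts.shape, bool)
    energy = counts[alive].sum()            # crude energy estimate
    ll = []
    for t in templates:
        lam = np.clip(energy * t[alive] / t[alive].sum(), 1e-12, None)
        ll.append(np.sum(counts[alive] * np.log(lam) - lam))
    return int(np.argmax(ll))
```

A CoG method would instead weight pixel coordinates by `counts`, which is exactly where missing pixels bias the estimate.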
Dynamic analysis methods for detecting anomalies in asynchronously interacting systems
Kumar, Akshat; Solis, John Hector; Matschke, Benjamin
2014-01-01
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
C-arm perfusion imaging with a fast penalized maximum-likelihood approach
NASA Astrophysics Data System (ADS)
Frysch, Robert; Pfeiffer, Tim; Bannasch, Sebastian; Serowy, Steffen; Gugel, Sebastian; Skalej, Martin; Rose, Georg
2014-03-01
Perfusion imaging is an essential method for stroke diagnostics, and one of the most important factors for a successful therapy is obtaining the diagnosis as fast as possible. Our approach therefore aims at perfusion imaging (PI) with a cone-beam C-arm system, providing perfusion information directly in the interventional suite. For PI, the imaging system has to provide excellent soft-tissue contrast resolution in order to allow the detection of the small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat-panel detectors, as well as the sparse sampling of the slowly rotating C-arm in combination with standard reconstruction methods, results in limited soft-tissue contrast. We choose a penalized maximum-likelihood reconstruction method to obtain suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered-subset technique with a transition to a small number of subsets, which adds sharpness to the image with fewer iterations while also suppressing noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm; the latter reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which meets the demands of cone-beam geometry. In this preliminary study we evaluate the procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between the two.
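The standard multiplicative EM correction that the authors replace with a Newton-based optimization has a compact form. A minimal sketch for a generic Poisson system follows; the function name and the use of a dense system matrix are assumptions (a real C-arm reconstruction uses on-the-fly projectors).

```python
import numpy as np

def mlem(A, y, n_iter=50, x0=None):
    # Multiplicative EM update for Poisson maximum likelihood:
    #   x <- x * A^T(y / Ax) / (A^T 1)
    # This is the baseline correction that the Newton-based optimiser
    # described above accelerates; A and y here are generic.
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.clip(A @ x, 1e-12, None)  # forward projection
        x *= (A.T @ (y / proj)) / sens      # multiplicative correction
    return x
```

Ordered subsets apply the same update using only a subset of the rows of A per sub-iteration, which is where the speedup discussed above comes from.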
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
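The underlying modified Newton-Raphson ML step can be sketched as below, here with plain finite-difference sensitivities, i.e. the costly baseline that MNRES replaces with local surface fits, plus crude step halving for robustness. Function names and the damping scheme are assumptions for the sketch.

```python
import numpy as np

def fd_sensitivities(f, theta, eps=1e-6):
    # Finite-difference output sensitivities dy/dtheta: the expensive
    # computation that MNRES approximates from fitted local surfaces.
    y0 = f(theta)
    S = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps
        S[:, j] = (f(tp) - y0) / eps
    return y0, S

def ml_newton(f, y_meas, theta0, n_iter=30):
    # Modified Newton-Raphson ML step for Gaussian output error:
    #   theta <- theta + (S^T S)^{-1} S^T (y - f(theta))
    theta = np.asarray(theta0, float).copy()
    for _ in range(n_iter):
        y0, S = fd_sensitivities(f, theta)
        step = np.linalg.solve(S.T @ S, S.T @ (y_meas - y0))
        lam, cost = 1.0, np.sum((y_meas - y0) ** 2)
        while lam > 1e-6 and np.sum((y_meas - f(theta + lam * step)) ** 2) > cost:
            lam *= 0.5                      # crude step halving
        theta = theta + lam * step
    return theta
```

For a zero-residual (noiseless) fit this converges to the true parameters from a reasonable start, which is the setting the Cramér-Rao comparison above presumes.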
Dynamic Docking of Conformationally Constrained Macrocycles: Methods and Applications.
Allen, Scott E; Dokholyan, Nikolay V; Bowers, Albert A
2016-01-15
Many natural products consist of large and flexible macrocycles that engage their targets via multiple contact points. This combination of contained flexibility and large contact area often allows natural products to bind at target surfaces rather than deep pockets, making them attractive scaffolds for inhibiting protein-protein interactions and other challenging therapeutic targets. The increasing ability to manipulate such compounds either biosynthetically or via semisynthetic modification means that these compounds can now be considered as starting points for medchem campaigns rather than solely as ends. Modern medchem benefits substantially from rational improvements made on the basis of molecular docking. As such, docking methods have been enhanced in recent years to deal with the complicated binding modalities and flexible scaffolds of macrocyclic natural products and natural product-like structures. Here, we comprehensively review methods for treating and docking these large macrocyclic scaffolds and discuss some of the resulting advances in medicinal chemistry. PMID:26575401
A self-consistent field method for galactic dynamics
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Ostriker, Jeremiah P.
1992-01-01
The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^(1/4) law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6 to 10^7 particles are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree codes. Orbits are found in a given static or time-dependent gravitational field, and the potential phi(r, t) is revised from the resultant density rho(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.
Dynamic optical methods for direct laser written waveguides
NASA Astrophysics Data System (ADS)
Salter, P. S.; Booth, M. J.
2013-03-01
Direct laser writing is widely used to fabricate 3D waveguide devices by modification of a material's refractive index. The fabrication fidelity depends strongly on focal spot quality, which in many cases is impaired by aberrations, particularly spherical aberration caused by refractive index mismatch. We use adaptive optics to correct aberration and maintain fabrication performance over a range of depths. Adaptive multifocus methods are also shown to increase the fabrication speed for single waveguides.
A new method of modelling and numerical simulation of nonlinear dynamical systems
Colosi, T.; Codreanu, S.
1996-06-01
This work presents the most significant aspects of an original method for the modelling and numerical simulation of nonlinear (linear) dynamical systems: it assures the local-iterative linearization (LIL) of nonlinear (linear) differential equations and transforms them, in the close proximity of a pivot moment, into algebraic equations. The use of this method is illustrated in the study of a particular nonlinear dynamical system. The conclusions highlight the advantages of the proposed procedure. © 1996 American Institute of Physics.
Catastrophic fault diagnosis in dynamic systems using bond graph methods
Yarom, Tamar.
1990-01-01
Detection and diagnosis of faults has become a critical issue in high performance engineering systems as well as in mass-produced equipment. It is particularly helpful when the diagnosis can be made at the initial design level with respect to a prospective fault list. A number of powerful methods have been developed for aiding in the general fault analysis of designs. Catastrophic faults represent the limit case of complete local failure of connections or components. They result in the interruption of energy transfer between corresponding points in the system. In this work the conventional approach to fault detection and diagnosis is extended by means of bond-graph methods to a wide variety of engineering systems. Attention is focused on catastrophic fault diagnosis. A catastrophic fault dictionary is generated from the system model based on topological properties of the bond graph. The dictionary is processed by existing methods to extract a catastrophic fault report to aid the engineer in performing a design analysis.
Fast WMAP Likelihood Code and GSR PC Functions
NASA Astrophysics Data System (ADS)
Dvorkin, Cora; Hu, Wayne
2010-10-01
We place functional constraints on the shape of the inflaton potential from the cosmic microwave background through a variant of the generalized slow roll approximation that allows large amplitude, rapidly changing deviations from scale-free conditions. Employing a principal component decomposition of the source function G'~3(V'/V)^2 - 2V''/V and keeping only those components measured to better than 10% results in 5 nearly independent Gaussian constraints that may be used to test any single-field inflationary model in which such deviations are expected. The first component implies < 3% variations at the 100 Mpc scale. One component shows a 95% CL preference for deviations around the 300 Mpc scale at the ~10% level, but the global significance is reduced considering the 5 components examined. This deviation also requires a change in the cold dark matter density, which in a flat LCDM model is disfavored by current supernova and Hubble constant data and can be tested with future polarization or high-multipole temperature data. Its impact resembles a local running of the tilt from multipoles 30-800 but is only marginally consistent with a constant running beyond this range. For this analysis, we have implemented a ~40x faster WMAP7 likelihood method which we have made publicly available.
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images; these estimates can then be treated as preliminary inputs to various other numerical techniques that further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
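The Delogne-Kåsa estimator mentioned above reduces to an ordinary linear least-squares problem, since points on a circle with center (a, b) and radius r satisfy x² + y² = 2ax + 2by + c with r = √(c + a² + b²). A minimal sketch (function name assumed):

```python
import numpy as np

def dke_circle_fit(x, y):
    # Delogne-Kasa estimator: linear least-squares circle fit, the
    # algebraic baseline the paper compares the convolution MLE against.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r
```

For noiseless circumferential points the fit is exact; under noise the DKE is biased, which is one motivation for the MLE refinements the abstract describes.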
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-05-01
We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.
A new uncertain analysis method and its application in vehicle dynamics
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-01-01
This paper proposes a new uncertain analysis method for vehicle dynamics involving hybrid uncertainty parameters. The Polynomial Chaos (PC) theory that accounts for the random uncertainty is systematically integrated with the Chebyshev inclusion function theory that describes the interval uncertainty, to deliver a Polynomial-Chaos-Chebyshev-Interval (PCCI) method. The PCCI method is non-intrusive, because it does not require the amendment of the original solver for different and complicated dynamics problems. Two types of evaluation indexes are established: the first includes the interval mean (IM) and interval variance (IV), and the second includes the mean of the lower bound (MLB), the variance of the lower bound (VLB), the mean of the upper bound (MUB), and the variance of the upper bound (VUB). The Monte Carlo method is combined with the scanning method to produce reference results, and a 4-DOF vehicle roll plane model is then employed to demonstrate the effectiveness of the proposed method for vehicle dynamics.
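The Chebyshev side of the PCCI method can be illustrated crudely: sample a response at Chebyshev points of the interval parameter and take the min/max of the samples as an approximate (not guaranteed) enclosure. The function below is a sketch of that sampling idea only, not the paper's inclusion function or its coupling with polynomial chaos.

```python
import numpy as np

def chebyshev_interval_bounds(f, lo, hi, n=9):
    # Evaluate f at n Chebyshev points of [lo, hi] and return the
    # min/max of the samples as an approximate interval enclosure.
    k = np.arange(n)
    xs = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos((2 * k + 1) * np.pi / (2 * n))
    ys = f(xs)
    return ys.min(), ys.max()
```

The non-intrusive character claimed in the abstract is visible here: only evaluations of `f` (the original solver) are needed, never its internals.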
Maximum likelihood representation of MIPAS profiles
NASA Astrophysics Data System (ADS)
von Clarmann, T.; Glatthor, N.; Plieninger, J.
2015-03-01
In order to avoid problems connected with the content of a priori information in volume mixing ratio vertical profiles measured with the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), a user-friendly representation of the data has been developed which will be made available in addition to the regular data product. In this representation, the data will be provided on a fixed pressure grid coarse enough to allow a virtually unconstrained retrieval. To avoid data interpolation, the grid is chosen to be a subset of the pressure grids used by the Chemistry-Climate Model Initiative and the Data Initiative within the Stratosphere-troposphere Processes And their Role in Climate (SPARC) project as well as the Intergovernmental Panel on Climate Change climatologies and model calculations. For representation, the profiles have been transformed to boxcar base functions, which means that volume mixing ratios are constant within a layer. This representation is thought to be more adequate for comparison with model data. While this method is applicable also to vertical profiles of other species, the method is discussed using ozone as an example.
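The boxcar representation amounts to constant-in-layer values on the coarse pressure grid. A toy regridding sketch follows: it uses a plain unweighted layer mean, whereas the operational transformation presumably accounts for the retrieval's averaging kernels, so names and the averaging rule are assumptions.

```python
import numpy as np

def to_boxcar(p_fine, vmr_fine, p_edges):
    # Average a fine-grid volume-mixing-ratio profile onto coarse
    # layers with one constant value per layer (boxcar base functions).
    # min/max handle pressure edges given in either order.
    out = np.empty(len(p_edges) - 1)
    for i, (lo, hi) in enumerate(zip(p_edges[:-1], p_edges[1:])):
        m = (p_fine >= min(lo, hi)) & (p_fine < max(lo, hi))
        out[i] = vmr_fine[m].mean()
    return out
```

Comparison with model data then reduces to averaging the model field onto the same layers, which is the convenience the abstract aims at.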
A multi-similarity spectral clustering method for community detection in dynamic networks
Qin, Xuanmei; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun; Yuan, Ning
2016-01-01
Community structure is one of the fundamental characteristics of complex networks. Many methods have been proposed for community detection. However, most of these methods are designed for static networks and are not suitable for dynamic networks that evolve over time. Recently, the evolutionary clustering framework was proposed for clustering dynamic data, and it can also be used for community detection in dynamic networks. In this paper, a multi-similarity spectral clustering (MSSC) method is proposed as an improvement on the former evolutionary clustering method. To detect the community structure in dynamic networks, our method considers different similarity metrics of networks. First, multiple similarity matrices are constructed for each snapshot of the dynamic network. Then, a dynamic co-training algorithm is proposed by bootstrapping the clustering of the different similarity measures. The experimental results show that the proposed MSSC method performs better than a number of baseline models on some widely used synthetic and real-world datasets with ground-truth community structures that change over time. PMID:27528179
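The spectral part of such a method can be sketched for the two-community case: average the similarity matrices (a crude stand-in for the paper's co-training step) and split nodes by the sign of the Fiedler vector of the normalised Laplacian. Everything here is a simplification for illustration, not the MSSC algorithm itself.

```python
import numpy as np

def spectral_bipartition(similarities):
    # Combine several similarity matrices by simple averaging, then
    # split the nodes by the sign of the second-smallest eigenvector
    # (Fiedler vector) of the symmetric normalised Laplacian.
    W = np.mean(similarities, axis=0)
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)  # community labels 0/1
```

The multi-similarity aspect enters only through the averaging here; the cited method instead bootstraps the clusterings of the individual similarity measures against each other.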