A maximum likelihood map of chromosome 1.
Rao, D C; Keats, B J; Lalouel, J M; Morton, N E; Yee, S
1979-01-01
Thirteen loci are mapped on chromosome 1 from genetic evidence. The maximum likelihood map presented permits confirmation that Scianna (SC) and a fourteenth locus, phenylketonuria (PKU), are on chromosome 1, although the location of the latter on the PGM1-AMY segment is uncertain. Eight other controversial genetic assignments are rejected, providing a practical demonstration of the resolution which maximum likelihood theory brings to mapping. PMID:293128
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
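A minimal sketch of the weighted-sum statistic described above, assuming the per-order maximized likelihoods under linkage and under no linkage have already been computed; the function and variable names are illustrative, not from the authors' software:

```python
import numpy as np

def weighted_lod(lik_linked, lik_unlinked, order_posteriors):
    """Weighted multipoint lod: weight each marker order's maximized
    likelihood by the posterior probability of that order."""
    w = np.asarray(order_posteriors)           # sums to 1 over plausible orders
    num = np.dot(w, np.asarray(lik_linked))    # weighted likelihood, linkage
    den = np.dot(w, np.asarray(lik_unlinked))  # weighted likelihood, no linkage
    return np.log10(num / den)

# Three candidate marker orders, the first two nearly equally likely
print(weighted_lod([2.1e5, 1.9e5, 0.8e5], [1e3, 1e3, 1e3], [0.45, 0.40, 0.15]))
```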
Speech processing using maximum likelihood continuity mapping
Hogden, John E.
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework, in which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to misspecification of the time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Geological mapping in northwestern Saudi Arabia using LANDSAT multispectral techniques
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Brown, G. F.; Moik, J. G.
1975-01-01
Various computer enhancement and data extraction systems using LANDSAT data were assessed and used to complement a continuing geologic mapping program. Interactive digital classification techniques using both the parallelepiped and maximum-likelihood statistical approaches achieved very limited success in areas of highly dissected terrain. Computer-enhanced imagery, developed by color compositing stretched MSS ratio data, was constructed for a test site in northwestern Saudi Arabia. Initial results indicate that several igneous and sedimentary rock types can be discriminated.
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C. [Santa Fe, NM]
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Reyes-Valdés, M H; Stelly, D M
1995-01-01
Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables
NASA Astrophysics Data System (ADS)
Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan
2007-06-01
Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing techniques have proved to be an effective alternative to intensive and costly ground surveys for mapping grass communities. However, the mapping accuracy of grass communities in wetlands is still unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in the Poyang Lake Natural Reserve. Through statistical analysis, elevation was selected as an environmental variable because of its strong association with the distribution of grass communities; NDVI layers stacked from images of different months were used to generate the Carex community map; the image from October was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; layered classifications were then performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree, and an artificial neural network separately. The results show that environmental variables can improve the mapping accuracy, and that the classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p=0.001). Besides, maximum likelihood (a=92.71%, k=0.90) and artificial neural network (a=94.79%, k=0.93) perform significantly better than decision tree (a=86.46%, k=0.83).
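As a rough illustration of the layered idea, the sketch below stacks multi-temporal spectral features with elevation and applies a per-pixel Gaussian maximum likelihood classifier. The feature layout, class labels, and all names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def train_ml_classifier(X, y):
    """Per-class mean and covariance for Gaussian maximum likelihood."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return stats

def classify_ml(X, stats):
    """Assign each row of X to the class with maximum Gaussian log-likelihood."""
    classes, scores = list(stats), []
    for c in classes:
        mu, cov = stats[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff) + logdet)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]

# Toy usage: 4 features per pixel = 3 monthly NDVI values + elevation
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 3] > 0).astype(int)           # elevation-driven labels in the toy data
stats = train_ml_classifier(X, y)
print((classify_ml(X, stats) == y).mean())   # training accuracy of the toy example
```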
Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia
NASA Astrophysics Data System (ADS)
Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.
2008-03-01
Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced a promising result. This indicates that land cover mapping can be carried out using remote sensing classification methods on satellite digital imagery.
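A hedged sketch of how a parallelepiped classifier with a maximum likelihood tiebreaker can work: each class gets a box of mean ± k standard deviations per band, and pixels falling in no box or in several are resolved by the Gaussian log-likelihood. The k threshold and all names are illustrative assumptions:

```python
import numpy as np

def fit_boxes(X, y, k=2.0):
    """Per-class box = mean +/- k standard deviations in each band."""
    boxes, gauss = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu, sd = Xc.mean(axis=0), Xc.std(axis=0)
        boxes[c] = (mu - k * sd, mu + k * sd)
        gauss[c] = (mu, np.cov(Xc, rowvar=False))
    return boxes, gauss

def log_likelihood(x, mu, cov):
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.inv(cov) @ diff + logdet)

def classify(x, boxes, gauss):
    hits = [c for c, (lo, hi) in boxes.items() if np.all((x >= lo) & (x <= hi))]
    # zero or multiple box hits -> maximum likelihood tiebreaker
    candidates = hits if len(hits) == 1 else (hits or list(boxes))
    return max(candidates, key=lambda c: log_likelihood(x, *gauss[c]))

# Toy usage with two well-separated 3-band classes
rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))]
y = np.r_[np.zeros(100, int), np.ones(100, int)]
boxes, gauss = fit_boxes(X, y)
print(classify(X[0], boxes, gauss), classify(X[150], boxes, gauss))
```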
Spacecraft Charging and the Microwave Anisotropy Probe Spacecraft
NASA Technical Reports Server (NTRS)
VanSant, J. Timothy; Neergaard, Linda F.
1998-01-01
The Microwave Anisotropy Probe (MAP), a MIDEX mission built in partnership between Princeton University and the NASA Goddard Space Flight Center (GSFC), will study the cosmic microwave background. It will be inserted into a highly elliptical earth orbit for several weeks and then use a lunar gravity assist to orbit around the second Lagrangian point (L2), 1.5 million kilometers anti-sunward from the earth. The charging environment for the phasing loops and at L2 was evaluated. There is a limited set of data for L2; the GEOTAIL spacecraft measured relatively low spacecraft potentials (approx. 50 V maximum) near L2. The main area of concern for charging on the MAP spacecraft is the well-established threat posed by the "geosynchronous region" between 6-10 Re. The launch in the autumn of 2000 will coincide with the declining phase of the solar maximum, a period when the likelihood of a substorm is higher than usual. The likelihood of a substorm at that time has been roughly estimated to be on the order of 20% for a typical MAP mission profile. Because of the possibility of spacecraft charging, a requirement for conductive spacecraft surfaces was established early in the program. Subsequent NASCAP/GEO analyses for the MAP spacecraft demonstrated that a significant portion of the sunlit surface (solar cell cover glass and sunshade) could have nonconductive surfaces without significantly raising differential charging. The need for conductive materials on surfaces continually in eclipse has also been reinforced by NASCAP analyses.
Averaged kick maps: less noise, more signal…and probably less bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor
2009-09-01
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σ_A) weighted. Analysis shows that they are comparable and correspond better to the final model than σ_A and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, J.
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
NASA Astrophysics Data System (ADS)
Gjaja, Marin N.
1997-11-01
Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, back propagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. An unsupervised neural network model is proposed that embodies two principal hypotheses supported by experimental data--that sensory experience guides language-specific development of an auditory neural map and that a population vector can predict psychological phenomena based on map cell activities. Model simulations show how a nonuniform distribution of map cell firing preferences can develop from language-specific input and give rise to the magnet effect.
ATAC Autocuer Modeling Analysis.
1981-01-01
The analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum …continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical…the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
A LANDSAT study of ephemeral and perennial rangeland vegetation and soils
NASA Technical Reports Server (NTRS)
Bentley, R. G., Jr. (Principal Investigator); Salmon-Drexler, B. C.; Bonner, W. J.; Vincent, R. K.
1976-01-01
The author has identified the following significant results. Several methods of computer processing were applied to LANDSAT data for mapping vegetation characteristics of perennial rangeland in Montana and ephemeral rangeland in Arizona. The choice of optimal processing technique was dependent on prescribed mapping and site condition. Single channel level slicing and ratioing of channels were used for simple enhancement. Predictive models for mapping percent vegetation cover based on data from field spectra and LANDSAT data were generated by multiple linear regression of six unique LANDSAT spectral ratios. Ratio gating logic and maximum likelihood classification were applied successfully to recognize plant communities in Montana. Maximum likelihood classification did little to improve recognition of terrain features when compared to a single channel density slice in sparsely vegetated Arizona. LANDSAT was found to be more sensitive to differences between plant communities based on percentages of vigorous vegetation than to actual physical or spectral differences among plant species.
Genetic mapping in the presence of genotyping errors.
Cartwright, Dustin A; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander
2007-08-01
Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders.
Maximum-likelihood block detection of noncoherent continuous phase modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K.; Divsalar, Dariush
1993-01-01
This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key
ERIC Educational Resources Information Center
France, Stephen L.; Batchelder, William H.
2015-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas further inside the mainland because of site-amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology to incorporate data uncertainties in mapping and propose a reliable map. The methodology is based on a hierarchical Bayesian model of normally-distributed site responses in space, where the mean (μ), site-specific variance (σ²) and between-sites variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. The inferences on the unknown parameters are done using Markov Chain Monte Carlo methods from the posterior distribution. The goal is to find reliable estimates of μ sensitive to uncertainties. During initial trials, we observed that the τ (= 1/s²) parameter of the CAR prior controls the μ estimation. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum likelihood model to be highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
Maximum a posteriori decoder for digital communications
NASA Technical Reports Server (NTRS)
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
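A toy sketch of the final decision step described here: correlate the incoming samples against each hypothesized signal's mean phase and pick the hypothesis with the largest statistic. The MAP phase-estimation stage of the invention is omitted, and the correlation-magnitude statistic and all names are assumptions for illustration:

```python
import numpy as np

def map_decode(samples, hypotheses):
    """samples: complex baseband data; hypotheses: dict name -> mean-phase array."""
    stats = {name: np.abs(np.vdot(np.exp(1j * phase), samples))
             for name, phase in hypotheses.items()}
    return max(stats, key=stats.get), stats   # most likely transmitted signal

# Two hypothesized phase-coded signals; transmit 's1' with a little noise
rng = np.random.default_rng(0)
phases = {'s0': np.zeros(64), 's1': np.full(64, np.pi / 2)}
tx = np.exp(1j * phases['s1']) + 0.1 * rng.normal(size=64)
print(map_decode(tx, phases)[0])   # -> 's1'
```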
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
Mark D. Nelson; Ronald E. McRoberts; Greg C. Liknes; Geoffrey R. Holden
2005-01-01
Landsat Thematic Mapper (TM) satellite imagery and Forest Inventory and Analysis (FIA) plot data were used to construct forest/nonforest maps of Mapping Zone 41, National Land Cover Dataset 2000 (NLCD 2000). Stratification approaches resulting from Maximum Likelihood, Fuzzy Convolution, Logistic Regression, and k-Nearest Neighbors classification/prediction methods were...
Improving estimates of genetic maps: a meta-analysis-based approach.
Stewart, William C L
2007-07-01
Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
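A minimal sketch of the combination step, under the standard meta-analysis assumption that each independent study reports a position (or interval-length) estimate and its variance for the same marker interval; the inverse-variance weighted mean is the efficient combined estimate:

```python
import numpy as np

def combine_map_estimates(estimates, variances):
    """Inverse-variance weighted meta-analytic combination."""
    w = 1.0 / np.asarray(variances)                    # weight = inverse variance
    est = np.sum(w * np.asarray(estimates)) / np.sum(w)
    var = 1.0 / np.sum(w)                              # variance of combined estimate
    return est, var

# Two independent estimates of an inter-marker distance (cM)
print(combine_map_estimates([10.2, 11.0], [0.64, 0.36]))
```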
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
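The same matched/unmatched comparison is straightforward today; a compact sketch on synthetic pair-matched data, assuming statsmodels and its Logit and ConditionalLogit estimators:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)
n_pairs = 100
g = np.repeat(np.arange(n_pairs), 2)    # matched case-control pair identifiers
x = rng.normal(size=2 * n_pairs)        # exposure covariate
y = np.tile([1, 0], n_pairs)            # one case, one control per pair

unmatched = sm.Logit(y, sm.add_constant(x)).fit(disp=False)      # unconditional ML
matched = ConditionalLogit(y, x.reshape(-1, 1), groups=g).fit()  # conditional ML
print(np.exp(unmatched.params[1]), np.exp(matched.params[0]))    # odds ratios
```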
An improved non-blind image deblurring method based on FoEs
NASA Astrophysics Data System (ADS)
Zhu, Qidan; Sun, Lei
2013-03-01
Traditional non-blind image deblurring algorithms always use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can reduce ripples effectively, in contrast to maximum likelihood (ML); however, they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP objective with MAP plus a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.
NASA Technical Reports Server (NTRS)
Kelly, D. A.; Fermelia, A.; Lee, G. K. F.
1990-01-01
An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
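The profile-likelihood idea is easy to sketch with SciPy, assuming its stats.boxcox_llf and stats.boxcox helpers: scan the Box-Cox log-likelihood over the transformation factor λ and transform the phenotype at the maximizing value before QTL analysis. The skewed toy phenotype below is an assumption for illustration:

```python
import numpy as np
from scipy import stats

# Skewed, strictly positive toy phenotype
y = stats.loggamma.rvs(5, size=500, random_state=0) + 5

lambdas = np.linspace(-2, 2, 401)
llf = [stats.boxcox_llf(lmb, y) for lmb in lambdas]  # profile log-likelihood
lam_hat = lambdas[int(np.argmax(llf))]               # ML estimate of the factor
y_transformed = stats.boxcox(y, lmbda=lam_hat)       # phenotype for QTL analysis
print(round(lam_hat, 2))
```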
Integrated Efforts for Analysis of Geophysical Measurements and Models.
1997-09-26
This contract supported investigations of integrated applications of physics, ephemerides… Report sections cover ionospheric regions and GPS data validations, and PL-SCINDA visualization and analysis techniques (view controls; map selection)…and IR data, about cloudy pixels. Clustering and maximum likelihood classification algorithms categorize up to four cloud layers into stratiform or
Applying six classifiers to airborne hyperspectral imagery for detecting giant reed
USDA-ARS's Scientific Manuscript database
This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed reliability of the algorithm. Main results. Computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
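The core of maximum likelihood threshold hunting is easy to sketch: fit a cumulative Gaussian response curve to (stimulus intensity, MEP success) trials and report its midpoint as the threshold. This simplified sketch assumes a fixed relative spread and omits the adaptive choice of the next stimulus intensity used in real threshold-hunting procedures; all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def ml_threshold(intensities, successes, rel_spread=0.07):
    """Maximum likelihood threshold from binary MEP outcomes."""
    x = np.asarray(intensities, float)
    r = np.asarray(successes, bool)

    def nll(t):  # negative log-likelihood of threshold t
        p = norm.cdf(x, loc=t, scale=rel_spread * t)   # P(MEP | intensity)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(np.log(p[r]).sum() + np.log(1 - p[~r]).sum())

    return minimize_scalar(nll, bounds=(x.min(), x.max()), method='bounded').x

# Six trials: stimulus intensity (% output) and whether an MEP was evoked
print(ml_threshold([30, 35, 40, 45, 50, 55], [0, 0, 1, 0, 1, 1]))
```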
ERIC Educational Resources Information Center
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo
2012-01-01
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key
Batchelder, William H.
2014-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
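A 1D toy version of the estimator's structure, not the authors' solver: minimize a quadratic data-misfit term plus a smoothed total-variation penalty by gradient descent. The step size, smoothing parameter, and the synthetic "intrusion" profile are all assumptions:

```python
import numpy as np

def tv_map_estimate(obs, lam=1.0, eps=1e-3, lr=0.02, iters=3000):
    """argmin_u 0.5*||u - obs||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)."""
    u = obs.copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)      # derivative of the smoothed |.|
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= w                 # each u[i] enters two difference terms
        tv_grad[1:] += w
        u -= lr * ((u - obs) + lam * tv_grad)
    return u

# Piecewise-constant truth (the 'intrusion') observed with noise
rng = np.random.default_rng(2)
truth = np.r_[np.ones(40), 3 * np.ones(20), np.ones(40)]
obs = truth + 0.3 * rng.normal(size=100)
print(np.abs(tv_map_estimate(obs) - truth).mean())   # mean reconstruction error
```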
PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budavári, Tamás; Basu, Amitabh
2016-10-01
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
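The mapping to an assignment problem in miniature, assuming SciPy's linear_sum_assignment as the Hungarian-algorithm solver: maximizing the total association log-likelihood over one-to-one matches is the same as minimizing its negative. The likelihood matrix below is made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# log-likelihoods of associating detection i (rows) with candidate j (columns)
loglike = np.log(np.array([[0.70, 0.25, 0.05],
                           [0.60, 0.35, 0.05],
                           [0.10, 0.20, 0.70]]))

rows, cols = linear_sum_assignment(-loglike)   # Hungarian algorithm, maximizing
print(list(zip(rows, cols)), loglike[rows, cols].sum())
```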
ILP-based maximum likelihood genome scaffolding
2014-01-01
Background Interest in de novo genome assembly has been renewed in the past decade due to rapid advances in high-throughput sequencing (HTS) technologies which generate relatively short reads resulting in highly fragmented assemblies consisting of contigs. Additional long-range linkage information is typically used to orient, order, and link contigs into larger structures referred to as scaffolds. Due to library preparation artifacts and erroneous mapping of reads originating from repeats, scaffolding remains a challenging problem. In this paper, we provide a scalable scaffolding algorithm (SILP2) employing a maximum likelihood model capturing read mapping uncertainty and/or non-uniformity of contig coverage which is solved using integer linear programming. A Non-Serial Dynamic Programming (NSDP) paradigm is applied to render our algorithm useful in the processing of larger mammalian genomes. To compare scaffolding tools, we employ novel quantitative metrics in addition to the extant metrics in the field. We have also expanded the set of experiments to include scaffolding of low-complexity metagenomic samples. Results SILP2 achieves better scalability through a more efficient NSDP algorithm than the previous release of SILP. The results show that SILP2 compares favorably to previous methods OPERA and MIP in both scalability and accuracy for scaffolding single genomes of up to human size, and significantly outperforms them on scaffolding low-complexity metagenomic samples. Conclusions Equipped with NSDP, SILP2 is able to scaffold large mammalian genomes, resulting in the longest and most accurate scaffolds. The ILP formulation for the maximum likelihood model is shown to be flexible enough to handle metagenomic samples. PMID:25253180
Mapping benthic macroalgal communities in the coastal zone using CHRIS-PROBA mode 2 images
NASA Astrophysics Data System (ADS)
Casal, G.; Kutser, T.; Domínguez-Gómez, J. A.; Sánchez-Carnero, N.; Freire, J.
2011-09-01
The ecological importance of benthic macroalgal communities in coastal ecosystems has been recognised worldwide, and the application of remote sensing to study these communities presents certain advantages with respect to in situ methods. The present study used three CHRIS-PROBA images to analyse the distribution of macroalgal communities in the Seno de Corcubión (NW Spain). The use of this sensor represents a challenge given that its design, build and deployment programme is intended to follow the "faster, better, cheaper" principle. To assess the application of this sensor to macroalgal mapping, two types of classifications were carried out: Maximum Likelihood and Spectral Angle Mapper (SAM). The Maximum Likelihood classifier showed positive results, reaching overall accuracy percentages higher than 90% and kappa coefficients higher than 0.80 for the bottom classes shallow submerged sand, deep submerged sand, macroalgae less than 5 m, and macroalgae between 5 and 10 m depth. The differentiation among macroalgal groups using SAM classifications showed positive results for green seaweeds, although the differentiation between brown and red algae was not clear in the study area.
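A minimal spectral angle mapper sketch: each pixel spectrum is assigned to the class whose reference spectrum makes the smallest angle with it, subject to a maximum-angle threshold. The reference spectra, band count, and threshold are illustrative assumptions:

```python
import numpy as np

def sam_classify(pixels, references, max_angle=0.10):
    """pixels: (n, bands); references: dict class -> (bands,) spectrum."""
    names = list(references)
    R = np.stack([references[c] / np.linalg.norm(references[c]) for c in names])
    P = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    angles = np.arccos(np.clip(P @ R.T, -1.0, 1.0))   # (n, n_classes) in radians
    best = angles.argmin(axis=1)
    return np.where(angles.min(axis=1) <= max_angle,
                    np.array(names, dtype=object)[best], None)  # None = unclassified

refs = {'green_algae': np.array([0.05, 0.12, 0.04, 0.30]),
        'sand':        np.array([0.20, 0.25, 0.28, 0.35])}
print(sam_classify(np.array([[0.06, 0.13, 0.05, 0.31]]), refs))
```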
NASA Astrophysics Data System (ADS)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
NASA Technical Reports Server (NTRS)
Hannah, J. W.; Thomas, G. L.; Esparza, F.
1975-01-01
A land use map of Orange County, Florida was prepared from EREP photography while LANDSAT and EREP multispectral scanner data were used to provide more detailed information on Orlando and its suburbs. The generalized maps were prepared by tracing the patterns on an overlay, using an enlarging viewer. Digital analysis of the multispectral scanner data was basically the maximum likelihood classification method with training sample input and computer printer mapping of the results. Urban features delineated by the maps are discussed. It is concluded that computer classification, accompanied by human interpretation and manual simplification can produce land use maps which are useful on a regional, county, and city basis.
Mapping soil types from multispectral scanner data.
NASA Technical Reports Server (NTRS)
Kristof, S. J.; Zachary, A. L.
1971-01-01
Multispectral remote sensing and computer-implemented pattern recognition techniques were used for automatic 'mapping' of soil types. This approach involves subjective selection of a set of reference samples from a gray-level display of spectral variations which was generated by a computer. Each resolution element is then classified using a maximum likelihood ratio. Output is a computer printout on which the researcher assigns a different symbol to each class. Four soil test areas in Indiana were experimentally examined using this approach, and partially successful results were obtained.
Terrain Classification Using Multi-Wavelength Lidar Data
2015-09-01
Figure 9. Pseudo-NDVI of three layers within the vertical structure of the forest. (Top) First return from the LiDAR instrument, including the ground… in NDVI throughout the vertical canopy. Figure 10. Optech Titan operating wavelengths… LiDAR: Light Detection and Ranging; LMS: LiDAR Mapping Suite; ML: Maximum Likelihood; NIR: Near Infrared; N-D VIS: n-Dimensional Visualizer; NDVI: Normalized Difference…
Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.
Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia
2017-04-01
Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts of flood risk assessment. This study develops an integrated framework for estimating the spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), geographic information systems, and remote sensing. The north part of Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inference, and spatial mapping. It is superior to an existing spatial naive Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
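The coupling of entropy weights with naive Bayes can be illustrated with the sketch below. This is a simplified reading of the approach under assumed discretized indices, not the authors' exact WNB formulation; all names are illustrative.

    import numpy as np

    def entropy_weights(X):
        """Entropy-method weights for a (n_cells, n_indices) matrix of positive index values:
        indices whose values are spread less evenly across cells carry more weight."""
        n = X.shape[0]
        P = X / X.sum(axis=0)
        H = -(P * np.log(np.clip(P, 1e-12, None))).sum(axis=0) / np.log(n)
        w = 1.0 - H
        return w / w.sum()

    def wnb_log_posterior(xbins, w, prior, cond):
        """Weighted naive Bayes score of one cell; cond[c][j][v] = P(bin v of index j | class c)."""
        return {c: np.log(p) + sum(w[j] * np.log(cond[c][j][xbins[j]])
                                   for j in range(len(xbins)))
                for c, p in prior.items()}

    print(entropy_weights(np.random.default_rng(0).uniform(0.1, 1.0, (100, 3))))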
Pascazio, Vito; Schirinzi, Gilda
2002-01-01
In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
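The two ingredients named above, the Poisson likelihood and the entropy criterion, can be combined into a single objective as in the following deliberately simplified sketch; the linear "forward" operator standing in for the map-to-image projection and the weight alpha are assumptions of this illustration, not the paper's implementation.

    import numpy as np

    def poisson_loglik(counts, mean):
        """Poisson log-likelihood of observed counts (additive constants dropped)."""
        mean = np.clip(mean, 1e-12, None)
        return np.sum(counts * np.log(mean) - mean)

    def entropy(p):
        """Shannon entropy of the (normalized) map; MEM favors the least informative map."""
        p = np.clip(p / p.sum(), 1e-12, None)
        return -np.sum(p * np.log(p))

    def mem_objective(counts, vmap, forward, alpha):
        """Objective maximized by a MEM-style method: entropy plus data likelihood."""
        return alpha * entropy(vmap) + poisson_loglik(counts, forward @ vmap)

    # toy usage with a random 2-pixel "projection" of a 3-bin map
    rng = np.random.default_rng(0)
    F = rng.uniform(size=(2, 3))
    print(mem_objective(np.array([3.0, 1.0]), np.array([1.0, 2.0, 0.5]), F, alpha=0.1))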
Correlation of diffusion and perfusion MRI with Ki-67 in high-grade meningiomas.
Ginat, Daniel T; Mangla, Rajiv; Yeaney, Gabrielle; Wang, Henry Z
2010-12-01
Atypical and anaplastic meningiomas have a greater likelihood of recurrence than benign meningiomas. The risk for recurrence is often estimated using the Ki-67 labeling index. The purpose of this study was to determine the correlation between Ki-67 and regional cerebral blood volume (rCBV) and between Ki-67 and apparent diffusion coefficient (ADC) in atypical and anaplastic meningiomas. A retrospective review of the advanced imaging and immunohistochemical characteristics of atypical and anaplastic meningiomas was performed. The relative minimum ADC, relative maximum rCBV, and specimen Ki-67 index were measured. Pearson's correlation was used to compare these parameters. There were 23 cases with available ADC maps and 20 cases with available rCBV maps. The average Ki-67 among the cases with ADC maps and rCBV maps was 17.6% (range, 5-38%) and 16.7% (range, 3-38%), respectively. The mean minimum ADC ratio was 0.91 (SD, 0.26) and the mean maximum rCBV ratio was 22.5 (SD, 7.9). There was a significant positive correlation between maximum rCBV and Ki-67 (Pearson's correlation, 0.69; p = 0.00038). However, there was no significant correlation between minimum ADC and Ki-67 (Pearson's correlation, -0.051; p = 0.70). Maximum rCBV correlated significantly with Ki-67 in high-grade meningiomas.
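Pearson's correlation and its p-value, the statistic used above, are a one-line computation in SciPy; the numbers below are synthetic stand-ins, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    ki67 = rng.uniform(3, 38, 20)             # hypothetical Ki-67 values (%)
    rcbv = 0.5 * ki67 + rng.normal(0, 3, 20)  # hypothetical maximum rCBV ratios
    r, p = stats.pearsonr(ki67, rcbv)
    print(f"Pearson r = {r:.2f}, p = {p:.2g}")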
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
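A minimal sketch of the idea, with assumed numbers: under map-category-stratified sampling, the within-stratum proportions are maximum likelihood estimates, and the known map marginals convert them into joint cell probabilities and an overall accuracy estimate.

    import numpy as np

    # confusion counts from map-category-stratified sampling (rows: map class, cols: reference class)
    n = np.array([[45, 5],
                  [8, 42]])
    pi = np.array([0.7, 0.3])                            # known map-category marginal proportions

    p_ref_given_map = n / n.sum(axis=1, keepdims=True)   # within-stratum ML estimates
    cell_probs = pi[:, None] * p_ref_given_map           # joint cell probabilities
    overall_accuracy = np.trace(cell_probs)
    print(cell_probs, overall_accuracy)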
Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H
2003-11-01
The ant subfamily Formicinae is a large assemblage (2458 species; J. Nat. Hist. 29 (1995) 1037) that includes species which weave leaf nests together with larval silk and species in which the metapleural gland, the ancestrally defining ant character, has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10,000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.
Duels where both marksmen 'home' or 'zero in' on one another are here considered, and the effect of this on the win probability is determined. It is...leads to win probabilities that can be straightforwardly evaluated. Maximum-likelihood estimation of the hit probability and homing from field data is outlined. The solutions of the duels are displayed as contour maps. (Author)
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1984-01-01
The spatial, geometric, and radiometric qualities of LANDSAT 4 thematic mapper (TM) and multispectral scanner (MSS) data were evaluated by interpreting, through visual and computer means, film and digital products for selected agricultural and forest cover types in California. Multispectral analyses employing Bayesian maximum likelihood, discrete relaxation, and unsupervised clustering algorithms were used to compare the usefulness of TM and MSS data for discriminating individual cover types. Some of the significant results are as follows: (1) for maximizing the interpretability of agricultural and forest resources, TM color composites should contain spectral bands in the visible, near-reflectance infrared, and middle-reflectance infrared regions, namely TM 4 and TM 5, and must contain TM 4 in all cases even at the expense of excluding TM 5; (2) using enlarged TM film products, planimetric accuracy of mapped points was within 91 meters (RMSE east) and 117 meters (RMSE north); (3) using TM digital products, planimetric accuracy of mapped points was within 12.0 meters (RMSE east) and 13.7 meters (RMSE north); and (4) applying a contextual classification algorithm to TM data provided classification accuracies competitive with Bayesian maximum likelihood.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a_lm are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means of suppressing intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for 40 Gb/s upgrades over existing 10 Gb/s infrastructure.
Mapping quantitative trait loci for binary trait in the F2:3 design.
Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang
2008-12-01
In the analysis of inheritance of quantitative traits with low heritability, an F(2:3) design that genotypes plants in F(2) and phenotypes plants in F(2:3) progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F(2:3) design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F(2:3) design. The fundamental idea was: the F(2) plants were genotyped, all phenotypic values of each F(2:3) progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates for the effects and locations of BTL with high statistical power, even at low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F(2:3) design is more efficient than the classical F(2) design, even though only a single progeny is collected from each F(2:3) family. With the maximum likelihood approaches under the penetrance and liability models developed in this study, we can map binary traits as we do quantitative traits in the F(2:3) design.
2013-03-01
[Abstract not fully recovered; the extracted front matter includes an acronym list: ... intermediate frequency; LFM: linear frequency modulation; MAP: maximum a posteriori; MATLAB: matrix laboratory; ML: maximum likelihood; OFDM: orthogonal frequency...] Recovered fragments indicate the work concerns spread spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations: "Feature analysis would be a good research thrust to...determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or..."
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
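For the linear-Gaussian case discussed above, the minimization solved per sample by randomized maximum likelihood has a closed form, and the resulting draws are exact posterior samples; the sketch below (illustrative names, not the paper's code) makes that mapping explicit.

    import numpy as np

    def rml_sample(G, d_obs, m_pr, Cm, Cd, rng):
        """One randomized-maximum-likelihood draw for a linear-Gaussian inverse problem:
        perturb a prior sample and the data, then solve the quadratic minimization exactly."""
        m_p = rng.multivariate_normal(m_pr, Cm)          # prior draw
        d_p = rng.multivariate_normal(d_obs, Cd)         # perturbed observation
        K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)  # Kalman-type gain
        return m_p + K @ (d_p - G @ m_p)

    rng = np.random.default_rng(0)
    G = np.array([[1.0, 0.5]])
    draws = np.array([rml_sample(G, np.array([1.0]), np.zeros(2), np.eye(2), 0.1 * np.eye(1), rng)
                      for _ in range(2000)])
    print(draws.mean(axis=0))   # approaches the exact posterior mean in this linear case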
NASA Technical Reports Server (NTRS)
Spruce, Joe
2001-01-01
Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.
CarthaGene: multipopulation integrated genetic and radiation hybrid mapping.
de Givry, Simon; Bouchez, Martin; Chabrier, Patrick; Milan, Denis; Schiex, Thomas
2005-04-15
CarthaGene is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CarthaGene performs multipoint maximum likelihood estimations with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CarthaGene can be used as a C++ library, through a shell command and a graphical interface. XML output for companion tools is integrated. The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with Open Source). tschiex@toulouse.inra.fr.
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C_{l=2}^{TT} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.
Planning applications in East Central Florida
NASA Technical Reports Server (NTRS)
Hannah, J. W. (Principal Investigator); Thomas, G. L.; Esparza, F.; Millard, J. J.
1974-01-01
The author has identified the following significant results. This is a study of applications of ERTS data to planning problems, especially as applicable to East Central Florida. The primary method has been computer analysis of digital data, with visual analysis of images serving to supplement the digital analysis. The principal method of analysis was supervised maximum likelihood classification, supplemented by density slicing and mapping of ratios of band intensities. Land-use maps have been prepared for several urban and non-urban sectors. Thematic maps have been found to be a useful form of the land-use maps. Change-monitoring has been found to be an appropriate and useful application. Mapping of marsh regions has been found effective and useful in this region. Local planners have participated in selecting training samples and in the checking and interpretation of results.
Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang
2017-12-01
In this paper, the bi-orthogonal symbol mapping and detection scheme is investigated in a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code is exploited as the signature sequence, whose out-of-phase autocorrelation is zero. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed scheme using bi-orthogonal symbol mapping and detection can be developed. The transmitted binary data bits are mapped into corresponding bi-orthogonal symbols, where the orthogonal matrix code and its complement are utilized. In the receiver, the received bi-orthogonal data symbol is fed into the maximum likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme can greatly enlarge the Euclidean distance; hence, the system performance can be drastically improved.
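The symbol-level idea of bi-orthogonal mapping with maximum likelihood detection can be illustrated generically with a Hadamard codebook; the paper's construction uses carrier-hopping prime codes in an optical system, so this sketch only conveys the principle, not the actual signaling.

    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(8)                  # 8 mutually orthogonal bipolar codewords
    book = np.vstack([H, -H])        # bi-orthogonal codebook: codewords and their complements

    def ml_detect(r):
        """ML detection in white noise: correlate with every codeword, pick the maximum."""
        return int(np.argmax(book @ r))

    rng = np.random.default_rng(0)
    sym = 11                         # 4 information bits -> one of 16 symbols
    r = book[sym] + rng.normal(0, 0.5, 8)
    print(ml_detect(r) == sym)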
Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia
2015-04-01
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares showed to be the most appropriate estimation method for variogram fitting. The kriged maps show clearly the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring if a geostatistical analysis of the data is in mind. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
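A weighted-least-squares Matérn variogram fit of the kind compared above can be sketched as follows; the lags, semivariances, and pair counts are synthetic, the smoothness is fixed, the nugget is omitted, and the pair-count weighting is a simplification of the usual WLS criterion.

    import numpy as np
    from scipy.special import kv, gamma
    from scipy.optimize import curve_fit

    def matern_variogram(h, sigma2, phi, nu=1.5):
        """Matérn variogram with fixed smoothness nu (nugget omitted)."""
        u = np.asarray(h, dtype=float) / phi
        return sigma2 * (1.0 - (2.0 ** (1 - nu) / gamma(nu)) * u ** nu * kv(nu, u))

    # synthetic sample variogram: lag distances, empirical semivariances, pair counts
    lags = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    gam = np.array([0.8, 1.5, 2.4, 2.9, 3.0])
    npairs = np.array([200, 180, 150, 120, 80])

    popt, _ = curve_fit(matern_variogram, lags, gam, p0=[3.0, 200.0],
                        sigma=1.0 / np.sqrt(npairs))   # more pairs -> heavier weight
    print(popt)   # fitted sill and range parameters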
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available, and it can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
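The batch EM iteration underlying fragment assignment is compact; the following self-contained sketch (a toy compatibility matrix, not eXpress-D's distributed Spark implementation) shows the E- and M-steps.

    import numpy as np

    def em_fragment_assignment(A, n_iter=100):
        """Batch EM for probabilistic assignment of ambiguously mapped fragments.
        A[i, j] = likelihood that fragment i came from target j (0 if incompatible)."""
        n, m = A.shape
        theta = np.full(m, 1.0 / m)                  # target abundances
        for _ in range(n_iter):
            R = A * theta                            # E-step: unnormalized responsibilities
            R /= R.sum(axis=1, keepdims=True)
            theta = R.sum(axis=0) / n                # M-step: re-estimate abundances
        return theta, R

    A = np.array([[1.0, 1.0, 0.0],   # fragment compatible with targets 0 and 1
                  [0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 0.0]])
    theta, R = em_fragment_assignment(A)
    print(theta.round(3))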
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the methods' variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
Tang, Jian.; Chen, Yuwei.; Jaakkola, Anttoni.; Liu, Jinbing.; Hyyppä, Juha.; Hyyppä, Hannu.
2014-01-01
Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). While there are critical implementation problems, such as the ability to estimate the position by sensing the unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, further development work is necessary. Unfortunately, most of the existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application in large areas and over lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) an exhaustive brute-force search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the global optimum position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching that can offer a quick and stable positioning solution with high accuracy, so it can be part of a large-area localization/mapping application. The NAVIS platform can reach an updating rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment. Therefore, it can be utilized in a real-time application. PMID:24999715
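The core scoring step of likelihood-map scan matching, independent of the paper's specific IMLE refinements, can be sketched as below; the pose search here is a plain exhaustive loop, with the ICP preprocessing and multi-resolution patch layers omitted, and all names illustrative.

    import numpy as np

    def scan_score(like_map, scan_xy, pose, res, origin):
        """Sum of grid-map likelihoods at scan endpoints transformed by pose (x, y, theta)."""
        x, y, th = pose
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        pts = scan_xy @ R.T + np.array([x, y])
        idx = np.floor((pts - origin) / res).astype(int)
        h, w = like_map.shape
        ok = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
        return like_map[idx[ok, 1], idx[ok, 0]].sum()

    def brute_force_match(like_map, scan_xy, poses, res, origin):
        """Exhaustive search over candidate poses; coarse-to-fine map layers would shrink 'poses'."""
        scores = [scan_score(like_map, scan_xy, p, res, origin) for p in poses]
        return poses[int(np.argmax(scores))]

    # toy usage: a 10x10 likelihood grid and two candidate poses
    grid = np.random.default_rng(0).uniform(size=(10, 10))
    scan = np.array([[1.0, 0.0], [2.0, 0.0]])
    print(brute_force_match(grid, scan, [(0.0, 0.0, 0.0), (1.0, 1.0, 0.1)], 1.0, np.zeros(2)))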
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria
2012-01-01
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential network delay settings, providing more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...
MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D.T.
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
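A generic steepest-ascent iteration of the kind referred to might look like the sketch below (deflections and step-size control omitted); the toy gradient is the score of a unit-variance normal mean, whose maximum likelihood estimate is the sample mean.

    import numpy as np

    def steepest_ascent_mle(grad_loglik, theta0, step=1e-3, n_iter=5000):
        """Generic steepest-ascent iteration toward a maximum likelihood estimate."""
        theta = np.asarray(theta0, dtype=float)
        for _ in range(n_iter):
            theta = theta + step * grad_loglik(theta)
        return theta

    # toy: for a unit-variance normal, the score is sum(x - mu) and the MLE is the sample mean
    x = np.array([1.2, 0.8, 1.1, 0.9])
    print(steepest_ascent_mle(lambda mu: np.sum(x - mu), [0.0]))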
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
MAP Estimators for Piecewise Continuous Inversion
2016-08-08
M. M. Dunlop and A. M. Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. We study the inverse problem of estimating a field u_a from data comprising a finite set of nonlinear functionals of u_a... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP...
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
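Maximum likelihood fitting of a single skew-normal component, the building block of the mixture above, is directly available in SciPy; a minimal demonstration on synthetic data (not the intercross data of the study):

    import numpy as np
    from scipy.stats import skewnorm

    rng = np.random.default_rng(0)
    x = skewnorm.rvs(a=5, loc=10, scale=2, size=400, random_state=rng)  # synthetic skewed data
    a_hat, loc_hat, scale_hat = skewnorm.fit(x)  # ML estimates of shape, location, scale
    print(a_hat, loc_hat, scale_hat)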
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
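Numerically, the approximation reads as follows (all numbers hypothetical, chosen only to show the arithmetic):

    # illustrative numbers: hypothetical block length, error-event weight and block error rate
    N, d_H = 32, 8
    P_s = 1e-4
    P_b = (d_H / N) * P_s   # conventional high-SNR approximation discussed above
    print(P_b)              # 2.5e-05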
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
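The likelihood ratio test that this nesting enables is standard; given fitted log-likelihoods of the restricted (standard Gumbel) model and the flexible semi-nonparametric model, it reduces to a chi-square tail probability (all numbers hypothetical):

    from scipy.stats import chi2

    ll_gumbel, ll_snp, extra = -1523.4, -1510.9, 4  # hypothetical log-likelihoods, extra parameters
    LR = 2 * (ll_snp - ll_gumbel)                   # likelihood ratio statistic
    print(LR, chi2.sf(LR, df=extra))                # small p-value rejects the Gumbel assumption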
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation yields an asymptotically unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
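A two-component mixture fit by maximum likelihood is typically computed with the EM algorithm; a self-contained sketch on synthetic data (Gaussian components assumed for illustration, not the paper's rubber-price series):

    import numpy as np
    from scipy.stats import norm

    def em_two_gauss(x, n_iter=200):
        """EM iterations for a two-component Gaussian mixture fitted by maximum likelihood."""
        mu = np.percentile(x, [25, 75]).astype(float)
        sd = np.array([x.std(), x.std()])
        w = np.array([0.5, 0.5])
        for _ in range(n_iter):
            pdf = np.stack([wk * norm.pdf(x, m, s) for wk, m, s in zip(w, mu, sd)])
            r = pdf / pdf.sum(axis=0)                       # E-step: responsibilities
            nk = r.sum(axis=1)
            w, mu = nk / len(x), (r @ x) / nk               # M-step: weights and means
            sd = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        return w, mu, sd

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
    print(em_two_gauss(x))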
Spatial Statistics of Large Astronomical Databases: An Algorithmic Approach
NASA Technical Reports Server (NTRS)
Szapudi, Istvan
2004-01-01
In this AISRP, we have demonstrated that the correlation function (i) can be calculated for MAP in minutes (about 45 minutes for Planck) on a modest 500 MHz workstation, and (ii) the corresponding method, although theoretically suboptimal, produces nearly optimal results for realistic noise and cut sky. This trillion-fold improvement in speed over the standard maximum likelihood technique opens up tremendous new possibilities, which will be pursued in the follow-up.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
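Maximum likelihood fitting of the Generalized Pareto distribution to threshold excesses, as used for the partial duration series, takes a few lines in SciPy; the excesses below are synthetic, and the return-level step is simplified (it ignores the exceedance rate).

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    excesses = genpareto.rvs(c=0.1, scale=12.0, size=500, random_state=rng)  # synthetic excesses

    c_hat, _, scale_hat = genpareto.fit(excesses, floc=0)  # ML fit, location fixed at threshold
    T = 100                                                # return period in mean-exceedance units
    print(c_hat, scale_hat, genpareto.ppf(1 - 1 / T, c_hat, loc=0, scale=scale_hat))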
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Social Background and School Continuation Decisions. Discussion Papers No. 462.
ERIC Educational Resources Information Center
Mare, Robert D.
In this paper, logistic response models of the effects of parental socioeconomic characteristics and family structure on the probability of making selected school transitions for white American males are estimated by maximum likelihood. Transition levels examined include: (1) completion of elementary school; (2) attendance at high school; (3)…
ERIC Educational Resources Information Center
Coughlin, Kevin B.
2013-01-01
This study is intended to provide researchers with empirically derived guidelines for conducting factor analytic studies in research contexts that include dichotomous and continuous levels of measurement. This study is based on the hypotheses that ordinary least squares (OLS) factor analysis will yield more accurate parameter estimates than…
Fast estimation of diffusion tensors under Rician noise by the EM algorithm.
Liu, Jia; Gasbarra, Dario; Railavo, Juha
2016-01-15
Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomical, structural, and orientational information about the fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation-maximization (EM) algorithm. By using data augmentation, we are able to transform the non-linear regression problem into the generalized linear modeling framework, dramatically reducing the computational cost. The Fisher-scoring method is used to achieve fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying priors. We describe how numerically close the estimators of the model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.
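To make the Rician-noise MLE idea concrete, here is a hedged sketch of an EM-style fixed-point iteration for a single underlying signal amplitude A with a known noise level sigma: the E-step uses the Bessel-function ratio I1/I0, and the scaled Bessel functions i0e/i1e keep the evaluation numerically stable. This is a toy scalar version under invented parameters, not the paper's tensor-regression algorithm.

```python
import numpy as np
from scipy.special import i0e, i1e

rng = np.random.default_rng(2)
# Synthetic Rician magnitudes: |(A + n_re) + i*n_im| with Gaussian noise.
A_true, sigma = 3.0, 1.0
m = np.abs(A_true + rng.normal(scale=sigma, size=2000)
           + 1j * rng.normal(scale=sigma, size=2000))

A = m.mean()  # crude initial guess (upward biased under Rician noise)
for _ in range(100):
    z = A * m / sigma**2
    r = i1e(z) / i0e(z)        # E-step: E[cos(phase) | m], stable via scaled Bessels
    A_new = np.mean(m * r)     # M-step: update the amplitude estimate
    if abs(A_new - A) < 1e-10:
        break
    A = A_new
print(f"ML amplitude estimate: {A:.4f} (true {A_true})")
```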
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
Use of LANDSAT imagery for wildlife habitat mapping in northeast and east central Alaska
NASA Technical Reports Server (NTRS)
Lent, P. C. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Two scenes were analyzed by applying an iterative cluster analysis to a 2% random data sample and then using the resulting clusters as a training set basis for maximum likelihood classification. Twenty-six and twenty-seven categorical classes, respectively, resulted from this process. The majority of classes in each case were quite specific vegetation types; each of these types has specific value as moose habitat.
The use of LANDSAT data to monitor the urban growth of Sao Paulo Metropolitan area
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Niero, M.; Lombardo, M. A.; Foresti, C.
1982-01-01
Urban growth from 1977 to 1979 of the region between the Billings and Guarapiranga reservoirs was mapped and the problematic urban areas were identified using several LANDSAT products. Visual and automatic interpretation techniques were applied to the data. Computer compatible tapes of LANDSAT multispectral scanner data were analyzed through the maximum likelihood Gaussian algorithm. The feasibility of monitoring fast urban growth by remote sensing techniques for efficient urban planning and control is demonstrated.
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
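The numerical issue described, likelihood terms that grow exponentially with the precision parameter, can be illustrated with a hedged toy example: Fisher-type likelihoods involve a sinh(kappa) normalization, and factoring out e^kappa analytically gives a log-likelihood term that is stable for arbitrarily large kappa. The snippet below shows the generic stability trick, not the Arason-Levi marginal-likelihood code.

```python
import numpy as np

def log_sinh_stable(kappa):
    # log(sinh(k)) = k + log(1 - exp(-2k)) - log(2): the e^k factor is
    # cancelled analytically, so no overflow occurs for large kappa.
    return kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)

kappa = 800.0                       # np.sinh(800) overflows a double
naive = np.log(np.sinh(kappa))     # -> inf with an overflow warning
stable = log_sinh_stable(kappa)    # correct finite value, ~799.307
print(naive, stable)
```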
NASA Astrophysics Data System (ADS)
Malatesta, Luca; Attorre, Fabio; Altobelli, Alfredo; Adeeb, Ahmed; De Sanctis, Michele; Taleb, Nadim M.; Scholte, Paul T.; Vitale, Marcello
2013-01-01
Socotra Island (Yemen), a global biodiversity hotspot, is characterized by high geomorphological and biological diversity. In this study, we present a high-resolution vegetation map of the island based on combining vegetation analysis and classification with remote sensing. Two different image classification approaches were tested to assess the most accurate one in mapping the vegetation mosaic of Socotra. Spectral signatures of the vegetation classes were obtained through a Gaussian mixture distribution model, and a sequential maximum a posteriori (SMAP) classification was applied to account for the heterogeneity and the complex spatial pattern of the arid vegetation. This approach was compared to the traditional maximum likelihood (ML) classification. Satellite data were represented by a RapidEye image with 5 m pixel resolution and five spectral bands. Classified vegetation relevés were used to obtain the training and evaluation sets for the main plant communities. Postclassification sorting was performed to adjust the classification through various rule-based operations. Twenty-eight classes were mapped, and SMAP, with an accuracy of 87%, proved to be more effective than ML (accuracy: 66%). The resulting map will represent an important instrument for the elaboration of conservation strategies and the sustainable use of natural resources in the island.
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to the programs as they currently exist on the IBM 360/67 at LARS, Purdue, is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.
McCoy, Airlie J
2002-10-01
Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
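As a hedged, minimal illustration of the Gaussianizing step, SciPy can select the maximum-likelihood Box-Cox parameter for a one-dimensional sample; the multivariate generalizations discussed in the paper go beyond this. The sample below is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=0.7, size=5000)  # skewed, positive sample

# Box-Cox transform with lambda chosen by maximizing the log-likelihood.
transformed, lam = stats.boxcox(sample)
print(f"ML Box-Cox lambda: {lam:.3f}")

# A posteriori quality check: a test of Gaussianity on the transformed sample.
stat, pvalue = stats.normaltest(transformed)
print(f"normality test p-value after transformation: {pvalue:.3f}")
```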
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
A Regional Analysis of Non-Methane Hydrocarbons And Meteorology of The Rural Southeast United States
1996-01-01
Z_t is an ARIMA time series; this gives a typical regression model, except that it allows for autocorrelation in the error term Z_t. In this work, regression models with ARMA errors were fitted by maximum likelihood (SAS ARIMA procedure; fragments of the statistical output from applying the 1992 regression model to 1993 ozone data are elided here) at each of the sites, and the effect of synoptic meteorology on high ozone was shown by examining NOAA daily weather maps and climatic data.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
NASA Technical Reports Server (NTRS)
Laubenthal, N. A.; Bertsch, D.; Lal, N.; Etienne, A.; Mcdonald, L.; Mattox, J.; Sreekumar, P.; Nolan, P.; Fierro, J.
1992-01-01
The Energetic Gamma Ray Telescope Experiment (EGRET) on the Compton Gamma Ray Observatory has been in orbit for more than a year and is being used to map the full sky for gamma rays in a wide energy range from 30 to 20,000 MeV. Already these measurements have resulted in a wide range of exciting new information on quasars, pulsars, galactic sources, and diffuse gamma ray emission. The central part of the analysis is done with sky maps that typically cover an 80 x 80 degree section of the sky for an exposure time of several days. Specific software developed for this program generates the counts, exposure, and intensity maps. The analysis is done on a network of UNIX based workstations and takes full advantage of a custom-built user interface called X-dialog. The maps that are generated are stored in the FITS format for a collection of energies. These, along with similar diffuse emission background maps generated from a model calculation, serve as input to a maximum likelihood program that produces maps of likelihood with optional contours that are used to evaluate regions for sources. Likelihood also evaluates the background corrected intensity at each location for each energy interval from which spectra can be generated. Being in a standard FITS format permits all of the maps to be easily accessed by the full complement of tools available in several commercial astronomical analysis systems. In the EGRET case, IDL is used to produce graphics plots in two and three dimensions and to quickly implement any special evaluation that might be desired. Other custom-built software, such as the spectral and pulsar analyses, take advantage of the XView toolkit for display and Postscript output for the color hard copy. This poster paper outlines the data flow and provides examples of the user interfaces and output products. It stresses the advantages that are derived from the integration of the specific instrument-unique software and powerful commercial tools for graphics and statistical evaluation. This approach has several proven advantages including flexibility, a minimum of development effort, ease of use, and portability.
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1973-01-01
The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurement of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona, were recognized by the computer with high accuracy. Then, the spatial features were combined with spectral features, and using the maximum likelihood criterion, the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month but vary substantially between seasons.
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
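For readers unfamiliar with MLSE, the hedged sketch below runs a generic Viterbi search over a toy memory-one channel: each trellis branch is scored by the squared error between the observation and the model's predicted output, and the most likely symbol sequence is recovered by backtracking. It is a textbook illustration under invented channel parameters, not the paper's chirp-based transition model.

```python
import numpy as np
from itertools import product

SYMBOLS = np.array([-3.0, -1.0, 1.0, 3.0])     # PAM-4 levels
H = np.array([1.0, 0.4])                       # toy 2-tap channel with memory one

rng = np.random.default_rng(4)
tx = rng.choice(SYMBOLS, size=200)
rx = np.convolve(tx, H)[:len(tx)] + rng.normal(scale=0.3, size=len(tx))

# Viterbi MLSE: state = previous symbol; branch metric = squared error.
n_states = len(SYMBOLS)
cost = np.full(n_states, np.inf)
cost[0] = 0.0
back = np.zeros((len(tx), n_states), dtype=int)
for t in range(len(tx)):
    new_cost = np.full(n_states, np.inf)
    for prev, cur in product(range(n_states), range(n_states)):
        pred = H[0] * SYMBOLS[cur] + (H[1] * SYMBOLS[prev] if t > 0 else 0.0)
        c = cost[prev] + (rx[t] - pred) ** 2
        if c < new_cost[cur]:
            new_cost[cur], back[t, cur] = c, prev
    cost = new_cost

# Backtrack the maximum-likelihood symbol path.
path = [int(np.argmin(cost))]
for t in range(len(tx) - 1, 0, -1):
    path.append(back[t, path[-1]])
detected = SYMBOLS[path[::-1]]
print("symbol error rate:", np.mean(detected != tx))
```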
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance represents the accuracy generated by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating the signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
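A hedged, simplified sketch of the idea: with known (or correctly decided) BPSK symbols d_t, the joint ML estimate of the signal amplitude is the data-aided correlation and the ML noise power is the residual variance. The parameters below are invented, and the report's actual combiner-weight formulation is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(5)
A_true, noise_var = 1.0, 0.5
d = rng.choice([-1.0, 1.0], size=10000)            # BPSK symbols
y = A_true * d + rng.normal(scale=np.sqrt(noise_var), size=d.size)

# Joint maximum likelihood estimates (data-aided):
A_hat = np.mean(y * d)                             # signal amplitude
var_hat = np.mean((y - A_hat * d) ** 2)            # noise power
snr_hat = A_hat**2 / var_hat
print(f"SNR estimate: {snr_hat:.3f} (true {A_true**2 / noise_var:.3f})")
```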
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine, using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
NASA Astrophysics Data System (ADS)
Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.
2013-12-01
Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood-based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the JOINT spectral variance of topography and gravity, in which the INITIAL loading by topography retains the Matérn form but the FINAL topography and gravity are the result of flexural compensation. In our modeling, we pay explicit attention to finite-field spectral estimation effects (and their remedy via tapering), and to the implementation of statistical tests (for anisotropy, for initial-loading process correlation, to ascertain the proper density contrasts and interface depth in a two-layer model), robustness assessment and uncertainty quantification, as well as to algorithmic intricacies related to low-dimensional but poorly scaled maximum-likelihood inversions. We conclude that Venusian geomorphic terrains are well described by their 2-D topographic and gravity (cross-)power spectra, and the spectral properties of distinct geologic provinces on Venus are worth quantifying via maximum-likelihood-based methods under idealized three-parameter Matérn distributions. Analysis of fitted parameters and the fitted-data residuals reveals natural variability in the (sub)surface properties on Venus, as well as some directional anisotropy. Geologic regions tend to cluster according to terrain type in our parameter space, which we analyze to confirm their shared geologic histories and utilize for guidance in ongoing mapping efforts of Venus and other terrestrial bodies.
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify, and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking, and classification problems. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
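Under independence assumptions, fusing per-sensor evidence into a likelihood map reduces to summing per-sensor log-likelihood grids over a common spatial raster. The sketch below is a minimal, hypothetical illustration with three range-only sensors on a 2-D grid; the sensor model, positions, and measurements are all invented.

```python
import numpy as np

# Common spatial grid over the surveillance area.
xs, ys = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))

def range_loglik(sensor_pos, measured_range, sigma=3.0):
    """Log-likelihood of a target at each grid cell given one range measurement."""
    d = np.hypot(xs - sensor_pos[0], ys - sensor_pos[1])
    return -0.5 * ((d - measured_range) / sigma) ** 2

# Three sensors report ranges to the same (unknown) target near (60, 40);
# summing log-likelihoods fuses the evidence on the shared grid.
loglik_map = (range_loglik((10.0, 10.0), measured_range=58.3)
              + range_loglik((90.0, 20.0), measured_range=36.1)
              + range_loglik((50.0, 90.0), measured_range=51.0))

iy, ix = np.unravel_index(np.argmax(loglik_map), loglik_map.shape)
print(f"most likely target cell: x={xs[iy, ix]:.1f}, y={ys[iy, ix]:.1f}")
```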
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn great attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
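A hedged, minimal sketch of fitting a two-component normal mixture by maximum likelihood with scikit-learn's EM-based GaussianMixture; the data here are synthetic, not the stock market and rubber price series analyzed in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Synthetic one-dimensional data drawn from two normal components.
data = np.concatenate([rng.normal(-2.0, 0.8, 300),
                       rng.normal(1.5, 1.2, 700)]).reshape(-1, 1)

# EM-based maximum likelihood fit of a two-component normal mixture.
gm = GaussianMixture(n_components=2, random_state=0).fit(data)
print("weights:", gm.weights_.round(3))
print("means:  ", gm.means_.ravel().round(3))
print("stds:   ", np.sqrt(gm.covariances_).ravel().round(3))
```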
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
Fernández-Esparrach, Glòria; Bernal, Jorge; López-Cerón, Maria; Córdova, Henry; Sánchez-Montes, Cristina; Rodríguez de Miguel, Cristina; Sánchez, Francisco Javier
2016-09-01
Polyp miss-rate is a drawback of colonoscopy that increases significantly for small polyps. We explored the efficacy of an automatic computer-vision method for polyp detection. Our method relies on a model that defines polyp boundaries as valleys of image intensity. Valley information is integrated into energy maps that represent the likelihood of the presence of a polyp. In 24 videos containing polyps from routine colonoscopies, all polyps were detected in at least one frame. The mean of the maximum values on the energy map was higher for frames with polyps than without (P < 0.001). Performance improved in high quality frames (AUC = 0.79 [95 %CI 0.70 - 0.87] vs. 0.75 [95 %CI 0.66 - 0.83]). With 3.75 set as the maximum threshold value, sensitivity and specificity for the detection of polyps were 70.4 % (95 %CI 60.3 % - 80.8 %) and 72.4 % (95 %CI 61.6 % - 84.6 %), respectively. Energy maps performed well for colonic polyp detection, indicating their potential applicability in clinical practice. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
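The invariance claim is easy to demonstrate numerically: a non-singular affine transform y = Ax + b shifts every class's Gaussian discriminant by the same constant log|det A|, so maximum likelihood class decisions are unchanged. Below is a hedged toy check with synthetic two-class data; it illustrates the general property rather than the paper's AVIRIS experiments.

```python
import numpy as np

rng = np.random.default_rng(7)

def gaussian_ml_labels(train_a, train_b, test):
    """Assign each test vector to the Gaussian ML class (equal priors)."""
    params = [(c.mean(axis=0), np.cov(c.T)) for c in (train_a, train_b)]
    labels = []
    for x in test:
        scores = [-0.5 * (np.log(np.linalg.det(S))
                          + (x - m) @ np.linalg.solve(S, x - m)) for m, S in params]
        labels.append(int(np.argmax(scores)))
    return np.array(labels)

# Two synthetic 3-band "spectral" classes and some test pixels.
a = rng.multivariate_normal([1, 2, 3], np.eye(3) * 0.5, 200)
b = rng.multivariate_normal([2, 1, 4], np.eye(3) * 0.8, 200)
test = rng.multivariate_normal([1.5, 1.5, 3.5], np.eye(3), 100)

# A non-singular affine transformation (e.g., a radiance-to-reflectance mapping).
A = np.array([[2.0, 0.1, 0.0], [0.0, 1.5, 0.2], [0.3, 0.0, 1.0]])
bvec = np.array([0.5, -1.0, 2.0])
transform = lambda X: X @ A.T + bvec

same = np.array_equal(gaussian_ml_labels(a, b, test),
                      gaussian_ml_labels(transform(a), transform(b), transform(test)))
print("identical ML classifications after affine transform:", same)
```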
2013-01-01
Fragmentary abstract: Cramer-Rao bound (CRB) results are presented for estimation from coherently detected fields, e.g., coherent Doppler lidar, and show how the best-case mean-square error scales; the remainder of the record consists of reference-list fragments (pseudo-random frequency-modulation continuous-wave coherent lidar; atmospheric phase error in coherent laser radar) and is elided here.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.
1979-01-01
The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
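The weighting scheme described, atlas priors combined with an intensity-based likelihood to form posterior tissue probabilities that weight per-class attenuation coefficients, is essentially a per-voxel application of Bayes' rule. The sketch below is a hedged toy version for a handful of voxels; the class mu values are approximate 511-keV linear attenuation coefficients and everything else is invented for illustration.

```python
import numpy as np

# Tissue classes: air, soft tissue, bone.
MU = np.array([0.0, 0.0096, 0.0151])   # approx. linear atten. coeff. (1/mm) at 511 keV

# Atlas prior probability of each class at each voxel (rows sum to 1).
prior = np.array([[0.80, 0.15, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.05, 0.25, 0.70]])

# MR intensity-based likelihood of each voxel's intensity under each class.
likelihood = np.array([[0.7, 0.2, 0.1],
                       [0.2, 0.6, 0.2],
                       [0.1, 0.3, 0.6]])

# Posterior class probabilities via Bayes' rule, then a continuous-valued mu-map
# as the posterior-weighted average of the class attenuation coefficients.
posterior = prior * likelihood
posterior /= posterior.sum(axis=1, keepdims=True)
mu_map = posterior @ MU
print(mu_map)
```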
Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference
1990-11-01
Technical Report 1373, November 1990. The report compares the Cramer-Rao bound, MUSIC, and Maximum Likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation with signals impinging on a 5-element uniform linear array (ULA), under varying source correlation magnitude |p| (e.g., 0.50, 1.00) and SNR (e.g., 20 dB); the remainder of the record is front-matter and table-of-contents fragments, elided here.
NASA/BLM APT, phase 2. Volume 2: Technology demonstration. [Arizona
NASA Technical Reports Server (NTRS)
1981-01-01
Techniques described include: (1) steps in the preprocessing of LANDSAT data; (2) the training of a classifier; (3) maximum likelihood classification and precision; (4) geometric correction; (5) class description; (6) digitizing; (7) digital terrain data; (8) an overview of sample design; (9) allocation and selection of primary sample units; (10) interpretation of secondary sample units; (11) data collection ground plots; (12) data reductions; (13) analysis for productivity estimation and map verification; (14) cost analysis; and (15) LANDSAT digital products. The evaluation of the pre-inventory planning for P.J. is included.
Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques
NASA Technical Reports Server (NTRS)
Messmore, J.; Copeland, G. E.; Levy, G. F.
1975-01-01
This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
Surface mapping of spike potential fields: experienced EEGers vs. computerized analysis.
Koszer, S; Moshé, S L; Legatt, A D; Shinnar, S; Goldensohn, E S
1996-03-01
An EEG epileptiform spike focus recorded with scalp electrodes is clinically localized by visual estimation of the point of maximal voltage and the distribution of its surrounding voltages. We compared such estimated voltage maps, drawn by experienced electroencephalographers (EEGers), with a computerized spline interpolation technique employed in the commercially available software package FOCUS. Twenty-two spikes were recorded from 15 patients during long-term continuous EEG monitoring. Maps of voltage distribution from the 28 electrodes surrounding the points of maximum change in slope (the spike maximum) were constructed by the EEGer. The same points of maximum spike and voltage distributions at the 29 electrodes were mapped by computerized spline interpolation, and a comparison between the two methods was made. The findings indicate that the computerized spline mapping techniques employed in FOCUS construct voltage maps with maxima and distributions similar to those of the maps created by experienced EEGers. The dynamics of spike activity, including correlations, are better visualized using the computerized technique than by manual interpretation alone. Its use as a technique for spike localization is accurate and adds information of potential clinical value.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
NASA Astrophysics Data System (ADS)
De Clippele, L. H.; Gafeira, J.; Robert, K.; Hennige, S.; Lavaleye, M. S.; Duineveld, G. C. A.; Huvenne, V. A. I.; Roberts, J. M.
2017-03-01
Cold-water corals form substantial biogenic habitats on continental shelves and in deep-sea areas with topographic highs, such as banks and seamounts. In the Atlantic, many reef and mound complexes are engineered by Lophelia pertusa, the dominant framework-forming coral. In this study, a variety of mapping approaches were used at a range of scales to map the distribution of both cold-water coral habitats and individual coral colonies at the Mingulay Reef Complex (west Scotland). The new ArcGIS-based British Geological Survey (BGS) seabed mapping toolbox semi-automatically delineated over 500 Lophelia reef `mini-mounds' from bathymetry data with 2-m resolution. The morphometric and acoustic characteristics of the mini-mounds were also automatically quantified and captured using this toolbox. Coral presence data were derived from high-definition remotely operated vehicle (ROV) records and high-resolution microbathymetry collected by a ROV-mounted multibeam echosounder. With a resolution of 0.35 × 0.35 m, the microbathymetry covers 0.6 km2 in the centre of the study area and allowed identification of individual live coral colonies in acoustic data for the first time. Maximum water depth, maximum rugosity, mean rugosity, bathymetric positioning index and maximum current speed were identified as the environmental variables that contributed most to the prediction of live coral presence. These variables were used to create a predictive map of the likelihood of presence of live cold-water coral colonies in the area of the Mingulay Reef Complex covered by the 2-m resolution data set. Predictive maps of live corals across the reef will be especially valuable for future long-term monitoring surveys, including those needed to understand the impacts of global climate change. This is the first study using the newly developed BGS seabed mapping toolbox and an ROV-based microbathymetric grid to explore the environmental variables that control coral growth on cold-water coral reefs.
NASA Astrophysics Data System (ADS)
Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi
2017-12-01
We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess the pixel land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training images dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
NASA Astrophysics Data System (ADS)
Rokni Deilmai, B.; Ahmad, B. Bin; Zabihi, H.
2014-06-01
Mapping is essential for the analysis of land use and land cover, which influence many environmental processes and properties. When creating land cover maps, it is important to minimize error, as errors will propagate into later analyses based on those maps. The reliability of land cover maps derived from remotely sensed data depends on an accurate classification. In this study, we analyzed multispectral data using two different classifiers, a Maximum Likelihood Classifier (MLC) and a Support Vector Machine (SVM). To pursue this aim, Landsat Thematic Mapper data and identical field-based training sample datasets in Johor, Malaysia were used for each classification method, resulting in five land cover classes: forest, oil palm, urban area, water, and rubber. The classification results indicate that SVM was more accurate than MLC. With a demonstrated capability to produce reliable cover results, SVM methods should be especially useful for land cover classification.
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
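The step-size result in these two abstracts can be illustrated with a hedged toy experiment: treat the EM update for the mixing proportion of a two-component normal mixture (known component densities) as a fixed-point map M, and over-relax it as p <- p + s(M(p) - p). Step sizes s in (0, 2) converge locally, and for poorly separated components some s > 1 converge noticeably faster than plain EM (s = 1), consistent with the separation-dependent optimum described. All numbers below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
# Mixture of two known, poorly separated normals; only the proportion p is unknown.
p_true = 0.3
x = np.where(rng.random(5000) < p_true,
             rng.normal(-0.5, 1.0, 5000), rng.normal(0.5, 1.0, 5000))
f0, f1 = norm.pdf(x, -0.5, 1.0), norm.pdf(x, 0.5, 1.0)

def em_map(p):
    # E-step responsibilities, then M-step update of the mixing proportion.
    w = p * f0 / (p * f0 + (1.0 - p) * f1)
    return w.mean()

for s in (0.5, 1.0, 1.5):             # step sizes in (0, 2)
    p, iters = 0.9, 0
    while abs(em_map(p) - p) > 1e-10 and iters < 5000:
        p = p + s * (em_map(p) - p)   # generalized steepest-ascent step
        iters += 1
    print(f"s={s}: p={p:.4f} in {iters} iterations")
```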
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Pike, R.J.; Graymer, R.W.; Roberts, Sebastian; Kalman, N.B.; Sobieszczyk, Steven
2001-01-01
Map data that predict the varying likelihood of landsliding can help public agencies make informed decisions on land use and zoning. This map, prepared in a geographic information system from a statistical model, estimates the relative likelihood of local slopes to fail by two processes common to an area of diverse geology, terrain, and land use centered on metropolitan Oakland. The model combines the following spatial data: (1) 120 bedrock and surficial geologic-map units, (2) ground slope calculated from a 30-m digital elevation model, (3) an inventory of 6,714 old landslide deposits (not distinguished by age or type of movement and excluding debris flows), and (4) the locations of 1,192 post-1970 landslides that damaged the built environment. The resulting index of likelihood, or susceptibility, plotted as a 1:50,000-scale map, is computed as a continuous variable over a large area (872 km²) at a comparatively fine (30 m) resolution. This new model complements landslide inventories by estimating susceptibility between existing landslide deposits, and improves upon prior susceptibility maps by quantifying the degree of susceptibility within those deposits. Susceptibility is defined for each geologic-map unit as the spatial frequency (areal percentage) of terrain occupied by old landslide deposits, adjusted locally by steepness of the topography. Susceptibility of terrain between the old landslide deposits is read directly from a slope histogram for each geologic-map unit, as the percentage (0.00 to 0.90) of 30-m cells in each one-degree slope interval that coincides with the deposits. Susceptibility within landslide deposits (0.00 to 1.33) is this same percentage raised by a multiplier (1.33) derived from the comparative frequency of recent failures within and outside the old deposits. Positive results from two evaluations of the model encourage its extension to the 10-county San Francisco Bay region and elsewhere. A similar map could be prepared for any area where the three basic constituents, a geologic map, a landslide inventory, and a slope map, are available in digital form. Added predictive power of the new susceptibility model may reside in attributes that remain to be explored, among them seismic shaking, distance to nearest road, and terrain elevation, aspect, relief, and curvature.
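The susceptibility definition in this abstract, the percentage of cells per one-degree slope interval within each geologic unit that coincide with old landslide deposits, multiplied inside mapped deposits, can be captured in a few lines. The sketch below is a hedged, schematic version with random stand-in rasters; the 1.33 multiplier follows the abstract, while the array names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
# Stand-in 30-m rasters: geologic unit ID, slope (degrees), landslide-deposit mask.
units = rng.integers(0, 5, size=(400, 400))
slope = rng.uniform(0.0, 45.0, size=(400, 400))
in_deposit = rng.random((400, 400)) < 0.1

susceptibility = np.zeros_like(slope)
slope_bin = slope.astype(int)                 # one-degree slope intervals
for u in np.unique(units):
    for s in np.unique(slope_bin[units == u]):
        cells = (units == u) & (slope_bin == s)
        # Spatial frequency: fraction of this unit/slope bin inside old deposits.
        susceptibility[cells] = in_deposit[cells].mean()

# Raise susceptibility within mapped deposits by the 1.33 multiplier.
susceptibility[in_deposit] *= 1.33
print("susceptibility range:", susceptibility.min(), susceptibility.max())
```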
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
NASA Technical Reports Server (NTRS)
Schmahl, Edward J.; Kundu, Mukul R.
1998-01-01
We have continued our previous efforts in studies of Fourier imaging methods applied to hard X-ray flares. We have performed physical and theoretical analysis of rotating collimator grids submitted to GSFC (Goddard Space Flight Center) for the High Energy Solar Spectroscopic Imager (HESSI). We have produced simulation algorithms which are currently being used to test imaging software and hardware for HESSI. We have developed Maximum-Entropy, Maximum-Likelihood, and "CLEAN" methods for reconstructing HESSI images from count-rate profiles. This work is expected to continue through the launch of HESSI in July 2000. Section 1 shows a poster presentation, "Image Reconstruction from HESSI Photon Lists," given at the Solar Physics Division Meeting, June 1998; Section 2 shows the text and viewgraphs prepared for "Imaging Simulations" at HESSI's Preliminary Design Review on July 30, 1998.
Identification and Mapping of Tree Species in Urban Areas Using WORLDVIEW-2 Imagery
NASA Astrophysics Data System (ADS)
Mustafa, Y. T.; Habeeb, H. N.; Stein, A.; Sulaiman, F. Y.
2015-10-01
Monitoring and mapping of urban trees are essential to provide urban forestry authorities with timely and consistent information. Modern techniques increasingly facilitate these tasks, but require the development of semi-automatic tree detection and classification methods. In this article, we propose an approach to delineate and map the crowns of 15 tree species in the city of Duhok, Kurdistan Region of Iraq, using WorldView-2 (WV-2) imagery. A tree crown object is identified first and is subsequently delineated as an image object (IO) using vegetation indices and texture measurements. Next, three classification methods, Maximum Likelihood, Neural Network, and Support Vector Machine, were used to classify IOs using selected IO features. The best results are obtained with Support Vector Machine classification, which gives the best map of urban tree species in Duhok. The overall accuracy was between 60.93% and 88.92%, and the κ-coefficient was between 0.57 and 0.75. We conclude that the fifteen tree species were identified and mapped with satisfactory accuracy in the urban areas of this study.
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
An integrated Landsat/ancillary data classification of desert rangeland
NASA Technical Reports Server (NTRS)
Price, K. P.; Ridd, M. K.; Merola, J. A.
1985-01-01
Range inventorying methods using Landsat MSS data, coupled with ancillary data, were examined. The study area encompassed nearly 20,000 acres in Rush Valley, UT. The vegetation is predominately desert shrub and annual grasses, with some annual forbs. Three Landsat scenes were evaluated using a Kauth-Thomas brightness/greenness data transformation (May, June, and August dates). The data was classified using a four-band maximum-likelihood classifier. A print map was taken into the field to determine the relationship between print symbols and vegetation. It was determined that classification confusion could be greatly reduced by incorporating geomorphic units and soil texture (coarse vs fine) into the classification. Spectral data, geomorphic units, and soil texture were combined in a GIS format to produce a final vegetation map identifying 12 vegetation types.
Fitting power-laws in empirical data with estimators that work for all exponents
Hanel, Rudolf; Corominas-Murtra, Bernat; Liu, Bo; Thurner, Stefan
2017-01-01
Most standard methods based on maximum likelihood (ML) estimates of power-law exponents can only be reliably used to identify exponents smaller than minus one. The argument that power laws are otherwise not normalizable depends on the underlying sample space the data are drawn from, and is true only for sample spaces that are unbounded from above. Power laws obtained from bounded sample spaces (as is the case for practically all data-related problems) are always free of such limitations, and maximum likelihood estimates can be obtained for arbitrary powers without restrictions. Here we first derive the appropriate ML estimator for arbitrary exponents of power-law distributions on bounded discrete sample spaces. We then show that an almost identical estimator also works perfectly for continuous data. We implement this ML estimator and compare its performance with previous approaches. We present a general recipe for how to use these estimators and provide the associated computer codes. PMID:28245249
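As a sketch of the bounded-support estimator the abstract describes: for a discrete power law p(x) proportional to x^(-alpha) on {xmin, ..., xmax}, the normalization is finite for any alpha, so the log-likelihood can be maximized without sign restrictions on the exponent. The code below is illustrative, not the authors' released implementation:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_bounded_powerlaw(data, xmin, xmax):
        # Negative log-likelihood: alpha * sum(ln x_i) + n * ln Z(alpha),
        # with Z(alpha) = sum_{x=xmin}^{xmax} x**(-alpha), finite for any alpha.
        support = np.arange(xmin, xmax + 1, dtype=float)
        sum_log = np.log(data).sum()
        n = len(data)

        def neg_loglike(alpha):
            return alpha * sum_log + n * np.log(np.sum(support ** (-alpha)))

        return minimize_scalar(neg_loglike, bounds=(-10, 10), method="bounded").x

    # Example: data drawn from a bounded "power law" with negative exponent.
    rng = np.random.default_rng(0)
    support = np.arange(1, 101)
    p = support ** 1.5                      # p(x) ~ x**(-alpha) with alpha = -1.5
    p = p / p.sum()
    data = rng.choice(support, size=5000, p=p)
    print(fit_bounded_powerlaw(data, 1, 100))   # approximately -1.5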
Estimating Interaction Effects With Incomplete Predictor Variables
Enders, Craig K.; Baraldi, Amanda N.; Cham, Heining
2014-01-01
The existing missing data literature does not provide a clear prescription for estimating interaction effects with missing data, particularly when the interaction involves a pair of continuous variables. In this article, we describe maximum likelihood and multiple imputation procedures for this common analysis problem. We outline 3 latent variable model specifications for interaction analyses with missing data. These models apply procedures from the latent variable interaction literature to analyses with a single indicator per construct (e.g., a regression analysis with scale scores). We also discuss multiple imputation for interaction effects, emphasizing an approach that applies standard imputation procedures to the product of 2 raw score predictors. We thoroughly describe the process of probing interaction effects with maximum likelihood and multiple imputation. For both missing data handling techniques, we outline centering and transformation strategies that researchers can implement in popular software packages, and we use a series of real data analyses to illustrate these methods. Finally, we use computer simulations to evaluate the performance of the proposed techniques. PMID:24707955
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution, in order to determine which method is best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and Bayes estimation under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with an R program, and the results are displayed in tables to facilitate the comparison.
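For the Rayleigh density f(x) = (x/theta) exp(-x^2/(2*theta)) with theta = sigma^2, the ML estimator is theta_hat = sum(x_i^2)/(2n), and under Jeffreys' prior p(theta) proportional to 1/theta the posterior is Inverse-Gamma(n, sum(x_i^2)/2). The sketch below compares bias and MSE by simulation; it uses the posterior mean (squared-error loss) as the Bayes estimator, an illustrative simplification of the loss functions named in the abstract, and is written in Python rather than R:

    import numpy as np

    rng = np.random.default_rng(1)

    def mle(x):
        # ML estimator of theta = sigma**2 for the Rayleigh distribution.
        return np.sum(x**2) / (2 * len(x))

    def bayes_posterior_mean(x):
        # Mean of the Inverse-Gamma(n, sum(x**2)/2) posterior under Jeffreys' prior.
        n = len(x)
        return np.sum(x**2) / (2 * (n - 1))

    theta_true = 4.0
    est_ml, est_bayes = [], []
    for _ in range(2000):
        x = rng.rayleigh(scale=np.sqrt(theta_true), size=20)
        est_ml.append(mle(x))
        est_bayes.append(bayes_posterior_mean(x))

    for name, est in (("ML", est_ml), ("Bayes", est_bayes)):
        est = np.asarray(est)
        print(name, "bias:", est.mean() - theta_true,
              "MSE:", ((est - theta_true)**2).mean())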
Closed-loop carrier phase synchronization techniques motivated by likelihood functions
NASA Technical Reports Server (NTRS)
Tsou, H.; Hinedi, S.; Simon, M.
1994-01-01
This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to an arbitrarily large number of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
Evaluation of two methods for using MR information in PET reconstruction
NASA Astrophysics Data System (ADS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-02-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods of introducing this information were evaluated, and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries; CV values are 10% lower with the Bowsher prior than with boundaries. Both methods performed better, in terms of MSE and CV, than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms were again shown to be effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, still has to be assessed.
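A minimal sketch of the Bowsher idea on a 2D grid follows: each voxel is coupled only to the B neighbours whose MR intensities are most similar, so no explicit segmentation is needed. The neighbourhood size and the quadratic penalty below are illustrative assumptions, not the paper's exact formulation:

    import numpy as np

    OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    def bowsher_neighbours(mr, B=3):
        # For every voxel, keep the B spatial neighbours with the most similar
        # MR intensity; these define the edges the prior penalizes.
        nx, ny = mr.shape
        neigh = {}
        for i in range(nx):
            for j in range(ny):
                cands = [(abs(mr[i, j] - mr[i + di, j + dj]), (i + di, j + dj))
                         for di, dj in OFFSETS
                         if 0 <= i + di < nx and 0 <= j + dj < ny]
                cands.sort(key=lambda t: t[0])   # most MR-similar first
                neigh[(i, j)] = [pos for _, pos in cands[:B]]
        return neigh

    def bowsher_penalty(pet, neigh):
        # Quadratic roughness penalty restricted to the Bowsher edges.
        return sum((pet[v] - pet[w]) ** 2 for v in neigh for w in neigh[v])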
Low-complexity approximations to maximum likelihood MPSK modulation classification
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2004-01-01
We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper, the performance of accumulate-repeat-accumulate codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.
NASA Technical Reports Server (NTRS)
Thadani, S. G.
1977-01-01
The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity
NASA Astrophysics Data System (ADS)
Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md
2017-08-01
This paper describes an alternative way of estimating the design speed, or the maximum speed at which a vehicle can drive safely on a road, using curvature information from Bezier curve fitting on a map. We tested the approach on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves that satisfy curvature continuity between joined curves when mapping the road. By differentiating the quintic Bezier curve, the curvature was calculated and the design speed derived. A higher-degree Bezier curve is used here because it gives users more freedom to control the shape of the curve than a lower-degree one.
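A sketch of the curvature-to-speed computation: the derivative of a degree-n Bezier curve is a degree-(n-1) Bezier curve with control points n*(P_{i+1} - P_i), the curvature follows as kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), and a speed limit can be read off from the lateral acceleration. The flat-road formula v = sqrt(mu*g/kappa) and the friction coefficient mu are illustrative assumptions, not the paper's exact design-speed model:

    import numpy as np

    def bezier(ctrl, t):
        # Evaluate a Bezier curve at parameter t by de Casteljau's algorithm.
        pts = np.asarray(ctrl, dtype=float)
        while len(pts) > 1:
            pts = (1 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    def derivative_ctrl(ctrl):
        # Control points of the derivative curve: n * (P_{i+1} - P_i).
        pts = np.asarray(ctrl, dtype=float)
        return (len(pts) - 1) * (pts[1:] - pts[:-1])

    def max_safe_speed(ctrl, t, mu=0.35, g=9.81):
        d1 = derivative_ctrl(ctrl)
        d2 = derivative_ctrl(d1)
        x1, y1 = bezier(d1, t)
        x2, y2 = bezier(d2, t)
        kappa = abs(x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5
        return np.sqrt(mu * g / max(kappa, 1e-12))   # m/s if coordinates are in metres

    # Example: a gentle curve described by six quintic control points.
    ctrl = [(0, 0), (20, 0), (40, 10), (60, 20), (80, 30), (100, 30)]
    print(round(max_safe_speed(ctrl, 0.5), 1))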
A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits
Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling
2007-01-01
Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated by a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and the test of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and compare its advantage in separating multiple linked QTL as compared to functional mapping. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
A Real-World Study of Switching From Allopurinol to Febuxostat in a Health Plan Database.
Altan, Aylin; Shiozawa, Aki; Bancroft, Tim; Singh, Jasvinder A
2015-12-01
The objective of this study was to assess the real-world comparative effectiveness of continuing on allopurinol versus switching to febuxostat. In a retrospective claims data study of enrollees in health plans affiliated with Optum, we evaluated patients from February 1, 2009, to May 31, 2012, with a gout diagnosis, a pharmacy claim for allopurinol or febuxostat, and at least 1 serum uric acid (SUA) result available during the follow-up period. Univariate and multivariable-adjusted analyses (controlling for patient demographics and clinical factors) assessed the likelihood of SUA lowering and achievement of target SUA of less than 6.0 mg/dL or less than 5.0 mg/dL in allopurinol continuers versus febuxostat switchers. The final study population included 748 subjects who switched to febuxostat from allopurinol and 4795 continuing users of allopurinol. The most common doses of allopurinol were 300 mg/d or less in 95% of allopurinol continuers and 93% of febuxostat switchers (prior to switching); the most common dose of febuxostat was 40 mg/d, in 77% of febuxostat switchers (after switching). Compared with allopurinol continuers, febuxostat switchers had greater (1) mean preindex SUA, 8.0 mg/dL versus 6.6 mg/dL (P < 0.001); (2) likelihood of postindex SUA of less than 6.0 mg/dL, 62.2% versus 58.7% (P = 0.072); (3) likelihood of postindex SUA of less than 5.0 mg/dL, 38.9% versus 29.6% (P < 0.001); and (4) decrease in SUA, 1.8 (SD, 2.2) mg/dL versus 0.4 (SD, 1.7) mg/dL (P < 0.001). In multivariable-adjusted analyses, compared with allopurinol continuers, febuxostat switchers had significantly higher likelihood of achieving SUA of less than 6.0 mg/dL (40% higher) and SUA of less than 5.0 mg/dL (83% higher). In this "real-world" setting, many patients with gout not surprisingly were not treated with maximum permitted doses of allopurinol. Patients switched to febuxostat were more likely to achieve target SUA levels than those who continued on generally stable doses of allopurinol.
Schelle, E; Rawlins, B G; Lark, R M; Webster, R; Staton, I; McLeod, C W
2008-09-01
We investigated the use of metals accumulated on tree bark for mapping their deposition across metropolitan Sheffield by sampling 642 trees of three common species. Mean concentrations of metals were generally an order of magnitude greater than in samples from a remote uncontaminated site. We found trivially small differences among tree species with respect to metal concentrations on bark, and in subsequent statistical analyses did not discriminate between them. We mapped the concentrations of As, Cd and Ni by lognormal universal kriging using parameters estimated by residual maximum likelihood (REML). The concentrations of Ni and Cd were greatest close to a large steel works, their probable source, and declined markedly within 500 m of it and from there more gradually over several kilometres. Arsenic was much more evenly distributed, probably as a result of locally mined coal burned in domestic fires for many years. Tree bark seems to integrate airborne pollution over time, and our findings show that sampling and analysing it are cost-effective means of mapping and identifying sources.
The Atacama Cosmology Telescope (ACT): Beam Profiles and First SZ Cluster Maps
NASA Technical Reports Server (NTRS)
Hincks, A. D.; Acquaviva, V.; Ade, P. A.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.; Bond, J. R.; Brown, B.;
2010-01-01
The Atacama Cosmology Telescope (ACT) is currently observing the cosmic microwave background with arcminute resolution at 148 GHz, 218 GHz, and 277 GHz. In this paper, we present ACT's first results. Data have been analyzed using a maximum-likelihood map-making method which uses B-splines to model and remove the atmospheric signal. It has been used to make high-precision beam maps from which we determine the experiment's window functions. This beam information directly impacts all subsequent analyses of the data. We also used the method to map a sample of galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, and show five clusters previously detected with X-ray or SZ observations. We provide integrated Compton-y measurements for each cluster. Of particular interest is our detection of the z = 0.44 component of A3128 and our current non-detection of the low-redshift part, providing strong evidence that the more distant cluster is more massive, as suggested by X-ray measurements. This is a compelling example of the redshift-independent mass selection of the SZ effect.
Mapping proteins in the presence of paralogs using units of coevolution
2013-01-01
Background: We study the problem of mapping proteins between two protein families in the presence of paralogs. This problem occurs as a difficult subproblem in coevolution-based computational approaches for protein-protein interaction prediction. Results: Similar to prior approaches, our method is based on the idea that coevolution implies equal rates of sequence evolution among the interacting proteins, and we provide a first attempt to quantify this notion in a formal statistical manner. We call the units that are central to this quantification scheme the units of coevolution. A unit consists of two mapped protein pairs and its score quantifies the coevolution of the pairs. This quantification allows us to provide a maximum likelihood formulation of the paralog mapping problem and to cast it into a binary quadratic programming formulation. Conclusion: CUPID, our software tool based on a Lagrangian relaxation of this formulation, makes it, for the first time, possible to compute state-of-the-art quality pairings in a few minutes of runtime. In summary, we suggest a novel alternative to the earlier available approaches, which is statistically sound and computationally feasible. PMID:24564758
Rouppe van der Voort, J N; van Eck, H J; van Zandvoort, P M; Overmars, H; Helder, J; Bakker, J
1999-07-01
A mapping strategy is described for the construction of a linkage map of a non-inbred species in which individual offspring genotypes are not amenable to marker analysis. After one extra generation of random mating, the segregating progeny was propagated, and bulked populations of offspring were analyzed. Although the resulting population structure is different from that of commonly used mapping populations, we show that the maximum likelihood formula for a normal F2 is applicable for the estimation of recombination. This "pseudo-F2" mapping strategy, in combination with the development of an AFLP assay for single cysts, facilitated the construction of a linkage map for the potato cyst nematode Globodera rostochiensis. Using 12 pre-selected AFLP primer combinations, a total of 66 segregating markers were identified, 62 of which were mapped to nine linkage groups. These 62 AFLP markers are randomly distributed and cover about 65% of the genome. An estimate of the physical size of the Globodera genome was obtained from comparisons of the number of AFLP fragments obtained with the values for Caenorhabditis elegans. The methodology presented here resulted in the first genomic map for a cyst nematode. The low value of the kilobase/centimorgan (kb/cM) ratio for the Globodera genome will facilitate map-based cloning of genes that mediate the interaction between the nematode and its host plant.
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
NASA Technical Reports Server (NTRS)
Switzer, Eric Ryan; Watts, Duncan J.
2016-01-01
The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate the same (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results, but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
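The two estimation strategies can be compared directly on a toy Gaussian regression, using the closed-form CRPS of a normal distribution, CRPS(N(mu, sigma^2), y) = sigma * (z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)) with z = (y - mu)/sigma. The sketch below is illustrative, not the study's code:

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(2)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=500)

    def gaussian_crps(y, mu, sigma):
        # Closed-form CRPS of a Gaussian predictive distribution.
        z = (y - mu) / sigma
        return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                        + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

    def objective(params, score):
        a, b, log_s = params
        mu, sigma = a + b * x, np.exp(log_s)
        if score == "ml":
            return -np.sum(stats.norm.logpdf(y, mu, sigma))
        return np.sum(gaussian_crps(y, mu, sigma))

    # Fit mu = a + b*x with constant sigma under each optimization score.
    for score in ("ml", "crps"):
        res = optimize.minimize(objective, x0=[0.0, 1.0, 0.0], args=(score,))
        a, b, log_s = res.x
        print(score, "->", round(a, 3), round(b, 3), round(np.exp(log_s), 3))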
Functional mapping of quantitative trait loci associated with rice tillering.
Liu, G F; Li, M; Wen, J; Du, Y; Zhang, Y-M
2010-10-01
Several biologically significant parameters that are related to rice tillering are closely associated with rice grain yield. Although identification of the genes that control rice tillering and therefore influence crop yield would be valuable for rice production management and genetic improvement, these genes remain largely unidentified. In this study, we carried out functional mapping of quantitative trait loci (QTLs) for rice tillering in 129 doubled haploid lines, which were derived from a cross between IR64 and Azucena. We measured the average number of tillers in each plot at seven developmental stages and fit the growth trajectory of rice tillering with the Wang-Lan-Ding mathematical model. Four biologically meaningful parameters in this model (the potential maximum for tiller number, K; the optimum tiller time, t0; and the increased rate, r, or the reduced rate, c, at the time of deviation from t0) were our defined variables for multi-marker joint analysis under the framework of penalized maximum likelihood, as well as composite interval mapping. We detected a total of 27 QTLs that accounted for 2.49-8.54% of the total phenotypic variance. Nine common QTLs across multi-marker joint analysis and composite interval mapping showed high stability, while one QTL was environment-specific and three were epistatic. We also identified several genomic segments that are associated with multiple traits. Our results describe the genetic basis of rice tiller development, enable further marker-assisted selection in rice cultivar development, and provide useful information for rice production management.
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1974-01-01
The author has identified the following significant results. A diffraction pattern analysis of MSS images led to the development of spatial signatures for farm land, urban areas and mountains. Four spatial features are employed to describe the spatial characteristics of image cells in the digital data. Three spectral features are combined with the spatial features to form a seven dimensional vector describing each cell. Then, the classification of the feature vectors is accomplished by using the maximum likelihood criterion. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month, but vary substantially between seasons. Three ERTS-1 images from the Phoenix, Arizona area were processed, and recognition rates between 85% and 100% were obtained for the terrain classes of desert, farms, mountains, and urban areas. To eliminate the need for training data, a new clustering algorithm has been developed. Seven ERTS-1 images from four test sites have been processed through the clustering algorithm, and high recognition rates have been achieved for all terrain classes.
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
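Maximum-likelihood convolutional decoding is conventionally realized with the Viterbi algorithm; on a binary symmetric channel the ML codeword is the one at minimum Hamming distance from the received sequence. Below is a minimal hard-decision sketch for the classic rate-1/2, constraint-length-3 code with generators (7, 5) octal; this textbook code is used purely for illustration, since the article's actual decoder parameters are not given here:

    K = 3                                  # constraint length
    G = (0b111, 0b101)                     # generator taps (7, 5) in octal

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state   # newest bit in the high position
            out.extend(bin(reg & g).count("1") & 1 for g in G)
            state = reg >> 1               # shift: drop the oldest bit
        return out

    def viterbi(rx):
        # Trellis search for the path at minimum Hamming distance from rx.
        n_states = 1 << (K - 1)
        INF = float("inf")
        metric = [0.0] + [INF] * (n_states - 1)   # encoder starts in state 0
        paths = [[] for _ in range(n_states)]
        for i in range(0, len(rx), 2):
            new_metric = [INF] * n_states
            new_paths = [None] * n_states
            for s in range(n_states):
                if metric[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << (K - 1)) | s
                    out = [bin(reg & g).count("1") & 1 for g in G]
                    ns = reg >> 1
                    m = metric[s] + sum(o != r for o, r in zip(out, rx[i:i + 2]))
                    if m < new_metric[ns]:
                        new_metric[ns] = m
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(n_states), key=metric.__getitem__)]

    msg = [1, 0, 1, 1, 0, 0, 1]
    rx = encode(msg)
    rx[3] ^= 1                             # inject a single channel bit error
    print(viterbi(rx) == msg)              # True: the error is corrected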
PAMLX: a graphical user interface for PAML.
Xu, Bo; Yang, Ziheng
2013-12-01
This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Maximum likelihood phase-retrieval algorithm: applications.
Nahrstedt, D A; Southwell, W H
1984-12-01
The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.
Simultaneous maximum a posteriori longitudinal PET image reconstruction
NASA Astrophysics Data System (ADS)
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
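A toy numpy sketch of the one-step-late MAP-EM update with a pairwise quadratic coupling between two longitudinal images is given below; the system matrix, the penalty form, and the coupling strength are illustrative assumptions rather than the paper's implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy geometry: 40 detector bins viewing 20 image voxels.
    A = rng.uniform(0.0, 1.0, size=(40, 20))
    truth1 = rng.uniform(1.0, 5.0, size=20)
    truth2 = truth1.copy()
    truth2[5:8] *= 2.0                      # a lesion that changed between scans
    y1 = rng.poisson(A @ truth1)
    y2 = rng.poisson(A @ truth2)
    sens = A.sum(axis=0)                    # sensitivity image

    def osl_map_em(lam, y, other, beta, n_iter=5):
        # One-step-late MAP-EM: the penalty gradient is evaluated at the current
        # estimate; here U = (beta/2) * ||lam - other||^2 couples the two scans.
        for _ in range(n_iter):
            proj = np.maximum(A @ lam, 1e-12)
            ratio = A.T @ (y / proj)
            grad_u = beta * (lam - other)
            lam = lam * ratio / np.maximum(sens + grad_u, 1e-12)
        return lam

    lam1 = np.ones(20)
    lam2 = np.ones(20)
    for _ in range(20):                     # alternate updates of the two scans
        lam1 = osl_map_em(lam1, y1, lam2, beta=0.5)
        lam2 = osl_map_em(lam2, y2, lam1, beta=0.5)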
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
A low-cost drone based application for identifying and mapping of coastal fish nursery grounds
NASA Astrophysics Data System (ADS)
Ventura, Daniele; Bruno, Michele; Jona Lasinio, Giovanna; Belluscio, Andrea; Ardizzone, Giandomenico
2016-03-01
Acquiring seabed, landform, or other topographic data in the field of marine ecology has a pivotal role in defining and mapping key marine habitats. However, obtaining this kind of data at a high level of detail for very shallow and inaccessible marine habitats has often been challenging and time consuming, and spatial and temporal coverage often has to be compromised to make the monitoring routine more cost effective. Nowadays, emerging technologies can overcome many of these constraints. Here we describe a recent development in remote sensing based on a small unmanned aerial vehicle (UAV) that produces very fine scale maps of fish nursery areas. This technology is simple to use, inexpensive, and timely in producing aerial photographs of marine areas. Technical details regarding aerial photo acquisition (drone and camera settings) and the post-processing workflow (3D model generation with a Structure-from-Motion algorithm and photo-stitching) are given. Finally, by applying modern semi-automatic image analysis and classification algorithms (Maximum Likelihood, ECHO, and Object-Based Image Analysis), we compared the results of three thematic maps of a nursery area for juvenile sparid fishes, highlighting the potential of this method in mapping and monitoring coastal marine habitats.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
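The Poisson likelihood mentioned above is the natural statistic when histogram bins contain few detected pulsars. A minimal sketch follows, with a purely illustrative two-parameter toy model of binned period counts:

    import numpy as np
    from scipy.special import gammaln

    def poisson_loglike(n_obs, m_model):
        # Binned Poisson log-likelihood: sum_i [n_i ln m_i - m_i - ln n_i!].
        m = np.maximum(m_model, 1e-30)
        return np.sum(n_obs * np.log(m) - m - gammaln(n_obs + 1.0))

    # Toy binned period distribution of "detected" pulsars.
    bins = np.linspace(0.0, 2.0, 11)
    centres = 0.5 * (bins[:-1] + bins[1:])
    n_obs = np.array([12, 9, 7, 6, 4, 3, 2, 2, 1, 0])

    def model_counts(norm, tau):
        return norm * np.exp(-centres / tau)

    # Grid search for the maximum likelihood over the two model parameters.
    grid = [(norm, tau, poisson_loglike(n_obs, model_counts(norm, tau)))
            for norm in np.linspace(5.0, 25.0, 81)
            for tau in np.linspace(0.1, 2.0, 77)]
    norm, tau, lnl = max(grid, key=lambda t: t[2])
    print(f"best norm={norm:.2f} tau={tau:.2f} lnL={lnl:.2f}")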
Wu, Yufeng
2012-03-01
Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard, C.W. III; Berg, D.J.; Meeker, T.C.
1993-05-01
The authors describe a high-resolution radiation hybrid (RH) map of the distal short arm of human chromosome 11 containing the Beckwith-Wiedemann gene and the associated embryonal tumor disease loci. Thirteen human 11p15 genes and 17 new anonymous probes were mapped by a statistical analysis of the cosegregation of markers in 102 rodent-human radiation hybrids retaining fragments of human chromosome 11. The 17 anonymous probes were generated from lambda phage containing human 11p15.5 inserts, by using ALU-PCR. A comprehensive map of all 30 loci and a framework map of nine clusters of loci ordered at odds of 1,000:1 were constructed by a multipoint maximum-likelihood approach using the computer program RHMAP. This RH map localizes one new gene to chromosome 11p15 (WEE1), provides more precise order information for several 11p15 genes (CTSD, H19, HPX, ST5, RNH, and SMPD1), confirms previous map orders for other 11p15 genes (CALCA, PTH, HBBC, TH, HRAS, and DRD4), and maps 17 new anonymous probes within the 11p15.5 region. This RH map should prove useful in better defining the positions of the Beckwith-Wiedemann and associated embryonal tumor disease-gene loci. 41 refs., 1 fig., 2 tabs.
Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.
2016-01-01
Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
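The thresholding and hole-filling step that turns the DL-CNN likelihood map into an initial contour can be sketched with standard tools; the threshold value and the largest-component heuristic below are illustrative assumptions, not the authors' exact settings:

    import numpy as np
    from scipy import ndimage

    def initial_bladder_mask(likelihood_map, threshold=0.5):
        # Threshold the voxelwise likelihood map, fill interior holes, and keep
        # the largest connected component as the initial contour for level sets.
        mask = likelihood_map >= threshold
        mask = ndimage.binary_fill_holes(mask)
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return labels == (1 + int(np.argmax(sizes)))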
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released a software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian based which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model, however we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation), to be reported in the literature. It therefore serves an important purpose to act as an unrestricted platform to compare different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R-package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
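The ML-versus-REML bias the paper addresses can be seen in the simplest possible setting; this sketch (an assumption-laden toy, not the paper's GLMM) simulates i.i.d. normal samples and compares the ML variance estimate (divide by n) with the REML-style estimate (divide by n - 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_var, reps = 10, 4.0, 20000
x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
print(f"ML estimate of variance: {(ss / n).mean():.3f}")         # ~3.6, biased low
print(f"REML-style estimate:     {(ss / (n - 1)).mean():.3f}")   # ~4.0, unbiased
```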
Aghanim, N.; Ashdown, M.; Aumont, J.; ...
2016-12-12
This study describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 ± 0.009. Finally, in a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battye, R.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Challinor, A.; Chiang, H. C.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; Ghosh, T.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Huang, Z.; Ilić, S.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knox, L.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Langer, M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Levrier, F.; Liguori, M.; Lilje, P. B.; López-Caniego, M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Meinhold, P. R.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Mottet, S.; Naselsky, P.; Natoli, P.; Oxborrow, C. A.; Pagano, L.; Paoletti, D.; Partridge, B.; Patanchon, G.; Patrizii, L.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Plaszczynski, S.; Polastri, L.; Polenta, G.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirri, G.; Sunyaev, R.; Suur-Uski, A.-S.; Tauber, J. A.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vibert, L.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; White, M.; Zacchei, A.; Zonca, A.
2016-12-01
This paper describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 ± 0.009. In a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.
On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro
2005-01-01
Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…
Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.
2003-01-01
The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.
ERIC Educational Resources Information Center
Mayberry, Paul W.
A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…
The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.
ERIC Educational Resources Information Center
Baldwin, Beatrice
The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1982-01-01
An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we derive the Cramér-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
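The Kasai autocorrelation estimator that the paper benchmarks against is compact enough to state directly; this sketch is the generic textbook form (not the authors' code) and assumes complex-valued samples z acquired at a fixed interval dt:

```python
import numpy as np

def kasai_frequency(z: np.ndarray, dt: float) -> float:
    """Lag-one autocorrelation (Kasai) Doppler frequency estimate from a
    vector of complex samples acquired at interval dt (seconds)."""
    r1 = np.sum(z[1:] * np.conj(z[:-1]))   # lag-one autocorrelation
    return float(np.angle(r1)) / (2.0 * np.pi * dt)
```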
Dziak, John J.; Bray, Bethany C.; Zhang, Jieting; Zhang, Minqiang; Lanza, Stephanie T.
2016-01-01
Several approaches are available for estimating the relationship of latent class membership to distal outcomes in latent profile analysis (LPA). A three-step approach is commonly used, but has problems with estimation bias and confidence interval coverage. Proposed improvements include the correction method of Bolck, Croon, and Hagenaars (BCH; 2004), Vermunt’s (2010) maximum likelihood (ML) approach, and the inclusive three-step approach of Bray, Lanza, & Tan (2015). These methods have been studied in the related case of latent class analysis (LCA) with categorical indicators, but not as well studied for LPA with continuous indicators. We investigated the performance of these approaches in LPA with normally distributed indicators, under different conditions of distal outcome distribution, class measurement quality, relative latent class size, and strength of association between latent class and the distal outcome. The modified BCH implemented in Latent GOLD had excellent performance. The maximum likelihood and inclusive approaches were not robust to violations of distributional assumptions. These findings broadly agree with and extend the results presented by Bakk and Vermunt (2016) in the context of LCA with categorical indicators. PMID:28630602
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
NASA Astrophysics Data System (ADS)
Dunkley, J.; Spergel, D. N.; Komatsu, E.; Hinshaw, G.; Larson, D.; Nolta, M. R.; Odegard, N.; Page, L.; Bennett, C. L.; Gold, B.; Hill, R. S.; Jarosik, N.; Weiland, J. L.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.; Wright, E. L.
2009-08-01
We describe a sampling method to estimate the polarized cosmic microwave background (CMB) signal from observed maps of the sky. We use a Metropolis-within-Gibbs algorithm to estimate the polarized CMB map, containing Q and U Stokes parameters at each pixel, and its covariance matrix. These can be used as inputs for cosmological analyses. The polarized sky signal is parameterized as the sum of three components: CMB, synchrotron emission, and thermal dust emission. The polarized Galactic components are modeled with spatially varying power-law spectral indices for the synchrotron, and a fixed power law for the dust, and their component maps are estimated as by-products. We apply the method to simulated low-resolution maps with pixels of side 7.2 deg, using diagonal and full noise realizations drawn from the WMAP noise matrices. The CMB maps are recovered with goodness of fit consistent with errors. Computing the likelihood of the E-mode power in the maps as a function of optical depth to reionization, τ, for fixed temperature anisotropy power, we recover τ = 0.091 ± 0.019 for a simulation with input τ = 0.1, and mean τ = 0.098 averaged over 10 simulations. A "null" simulation with no polarized CMB signal has maximum likelihood consistent with τ = 0. The method is applied to the five-year WMAP data, using the K, Ka, Q, and V channels. We find τ = 0.090 ± 0.019, compared to τ = 0.086 ± 0.016 from the template-cleaned maps used in the primary WMAP analysis. The synchrotron spectral index, β, averaged over high signal-to-noise pixels with standard deviation σ(β) < 0.25, but excluding ~6% of the sky masked in the Galactic plane, is -3.03 ± 0.04. This estimate does not vary significantly with Galactic latitude, although it includes an informative prior. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
1989-09-01
GRASS continues its development with several key objectives as a guide. The programmer should be aware of... They do NOT assume anything about byte ordering in the CPU. This means that the value is stored using as many bytes as required by an integer on the... maximum likelihood classifier to produce a landcover map. Derived cell files can be the results of image classification procedures such as clustering.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Elad, M; Feuer, A
1997-01-01
The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model to serve for long-term forecasting on timescales of years to decades for the European region.
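The maximum likelihood fit of Gutenberg-Richter activity rates mentioned above reduces, in its classical completeness-agnostic form, to the Aki (1965) estimator; the sketch below shows only that form and omits the paper's spatial and temporal completeness handling:

```python
import numpy as np

def aki_b_value(magnitudes: np.ndarray, m_min: float) -> float:
    """Aki (1965) maximum likelihood b-value of the Gutenberg-Richter law,
    using events at or above the completeness magnitude m_min."""
    m = magnitudes[magnitudes >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)
```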
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
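For comparison with the A* search described above, exhaustive maximum-likelihood soft-decision decoding of a small block code is easy to write down; this sketch assumes BPSK over AWGN (so ML reduces to maximizing correlation) and uses a (7,4) Hamming generator matrix as the example code:

```python
import numpy as np
from itertools import product

def ml_decode(received: np.ndarray, G: np.ndarray) -> np.ndarray:
    """Exhaustive soft-decision ML decoding: enumerate all 2^k codewords and
    pick the one whose BPSK image (0 -> +1, 1 -> -1) best correlates with
    the received real-valued sequence."""
    k = G.shape[0]
    best, best_metric = None, -np.inf
    for msg in product((0, 1), repeat=k):
        codeword = np.mod(np.array(msg) @ G, 2)
        symbols = 1.0 - 2.0 * codeword
        metric = float(received @ symbols)   # ML metric for AWGN/BPSK
        if metric > best_metric:
            best, best_metric = codeword, metric
    return best

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # (7,4) Hamming code, systematic form
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
```

The A* approach in the paper reaches the same codeword without enumerating all 2^k candidates, which is what makes length-64 codes feasible.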
Mugleston, Joseph D; Song, Hojun; Whiting, Michael F
2013-12-01
The phylogenetic relationships of Tettigoniidae (katydids and bush-crickets) were inferred using molecular sequence data. Six genes (18S rDNA, 28S rDNA, Cytochrome Oxidase II, Histone 3, Tubulin Alpha I, and Wingless) were sequenced for 135 ingroup taxa representing 16 of the 19 extant katydid subfamilies. Five subfamilies (Tettigoniinae, Pseudophyllinae, Mecopodinae, Meconematinae, and Listroscelidinae) were found to be paraphyletic under various tree reconstruction methods (Maximum Likelihood, Bayesian Inference, and Maximum Parsimony). Seven subfamilies - Conocephalinae, Hetrodinae, Hexacentrinae, Saginae, Phaneropterinae, Phyllophorinae, and Lipotactinae - were each recovered as well-supported monophyletic groups. We mapped the small and exposed thoracic auditory spiracle (a defining character of the subfamily Pseudophyllinae) and found it to be homoplasious. We also found the leaf-like wings of katydids have been derived independently in at least six lineages. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georges, M.; Nielsen, D.; Mackinnon, M.
1995-02-01
We have exploited "progeny testing" to map quantitative trait loci (QTL) underlying the genetic variation of milk production in a selected dairy cattle population. A total of 1,518 sires, with progeny tests based on the milking performances of >150,000 daughters jointly, was genotyped for 159 autosomal microsatellites bracketing 1645 centimorgan or approximately two thirds of the bovine genome. Using a maximum likelihood multilocus linkage analysis accounting for variance heterogeneity of the phenotypes, we identified five chromosomes giving very strong evidence (LOD score ≥ 3) for the presence of a QTL controlling milk production: chromosomes 1, 6, 9, 10 and 20. These findings demonstrate that loci with considerable effects on milk production are still segregating in highly selected populations and pave the way toward marker-assisted selection in dairy cattle breeding. 44 refs., 4 figs., 3 tabs.
Potts glass reflection of the decoding threshold for qudit quantum error correcting codes
NASA Astrophysics Data System (ADS)
Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.
We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions—baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes perform a balanced protection against the fold error (large distortion) and weak distortion and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in literature. Compared to the conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise (SNR) range.
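The underlying chaotic dynamic is the standard baker's map on the unit square; a minimal sketch of one iteration (the generic textbook form, not the paper's mirrored or single-input variants) is:

```python
def bakers_map(x: float, y: float) -> tuple[float, float]:
    """One iteration of the baker's map: stretch the unit square
    horizontally by 2, cut it in half, and stack the two halves."""
    if x < 0.5:
        return 2.0 * x, 0.5 * y
    return 2.0 * x - 1.0, 0.5 * (y + 1.0)
```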
An evaluation of percentile and maximum likelihood estimators of Weibull parameters
Stanley J. Zarnoch; Tommy R. Dell
1985-01-01
Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
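With modern tooling, the maximum likelihood fit of a three-parameter Weibull is a one-liner; this sketch uses scipy.stats.weibull_min (an assumption; the study itself used the FITTER routine) with purely illustrative parameter values:

```python
from scipy import stats

# shape c, location (threshold), and scale are illustrative values
sample = stats.weibull_min.rvs(c=1.8, loc=2.0, scale=10.0, size=200,
                               random_state=1)
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(sample)  # three-parameter MLE
print(c_hat, loc_hat, scale_hat)
```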
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Unclassified Publications of Lincoln Laboratory, 1 January - 31 December 1990. Volume 16
1990-12-31
[Keyword-index excerpt; most of the listing is garbled. Legible entries include: Hopped Communication Systems with Nonuniform Hopping Distributions (Apr. 1990, ADA223419); Bistatic Radar Cross Section, Fenn, A.J. (2 May 1990); MAXIMUM LIKELIHOOD ALGORITHM (JA-6241, JA-6467); MAXIMUM LIKELIHOOD ESTIMATOR (JA-6476, MS-8466).]
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimation) for the fitting of nonlinear regression functions in 2006. In Jae Myung [13] provided a good conceptual guide to maximum likelihood estimation in his work “Tutorial on maximum likelihood estimation”.
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
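The cost-function view of maximum likelihood described above can be made concrete with a deliberately tiny example; under an assumed Gaussian model with known noise level, the negative log-likelihood reduces to a sum of squared residuals, minimized numerically (the model, names, and numbers are all illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, t, y, sigma=0.1):
    """Gaussian NLL (up to a constant) for a toy model y = a * exp(b * t);
    theta = (a, b). Minimizing this is the ML estimation step."""
    a, b = theta
    residuals = y - a * np.exp(b * t)
    return 0.5 * np.sum((residuals / sigma) ** 2)

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
fit = minimize(neg_log_likelihood, x0=np.array([1.0, 0.0]), args=(t, y))
print(fit.x)   # estimates of (a, b), near (2.0, -1.5)
```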
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.
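The exact symbol-to-bit likelihood mapping that such algorithms approximate can be written directly; this generic sketch (not the innovation itself) computes per-bit log-likelihood ratios for an arbitrary constellation over AWGN using a log-sum-exp over the symbols sharing each bit value:

```python
import numpy as np

def bit_llrs(y: complex, symbols: np.ndarray, bit_table: np.ndarray,
             noise_var: float) -> np.ndarray:
    """Exact per-bit LLRs for one received sample y. `symbols` holds the
    complex constellation points; `bit_table[s, i]` is bit i of symbol s."""
    log_like = -np.abs(y - symbols) ** 2 / noise_var   # per-symbol log-likelihood
    llrs = np.empty(bit_table.shape[1])
    for i in range(bit_table.shape[1]):
        llrs[i] = (np.logaddexp.reduce(log_like[bit_table[:, i] == 0])
                   - np.logaddexp.reduce(log_like[bit_table[:, i] == 1]))
    return llrs

# Gray-mapped QPSK example
symbols = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
bit_table = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(bit_llrs(0.9 + 0.8j, symbols, bit_table, noise_var=0.5))
```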
Contamination of food products with Mycobacterium avium paratuberculosis: a systematic review.
Eltholth, M M; Marsh, V R; Van Winden, S; Guitian, F J
2009-10-01
Although a causal link between Mycobacterium avium subspecies paratuberculosis (MAP) and Crohn's disease has not been proved, previous studies suggest that the potential routes of human exposure to MAP should be investigated. We conducted a systematic review of literature concerning the likelihood of contamination of food products with MAP and the likely changes in the quantity of MAP in dairy and meat products along their respective production chains. Relevant data were extracted from 65 research papers and synthesized qualitatively. Although estimates of the prevalence of Johne's disease are scarce, particularly for non-dairy herds, the available data suggest that the likelihood of contamination of raw milk with MAP in most studied regions is substantial. The presence of MAP in raw and pasteurized milk has been the subject of several studies which show that pasteurized milk is not always MAP-free and that the effectiveness of pasteurization in inactivating MAP depends on the initial concentration of the agent in raw milk. The most recent studies indicated that beef can be contaminated with MAP via dissemination of the pathogen in the tissues of infected animals. Currently available data suggests that the likelihood of dairy and meat products being contaminated with MAP on retail sale should not be ignored.
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
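Point (1) above, folding prior probabilities into maximum likelihood classification, amounts to adding log-priors to the per-class log-densities; a minimal sketch under assumed Gaussian class models (all values illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_with_priors(x, means, covs, priors):
    """Maximum a posteriori classification: argmax over classes of
    log p(x | class) + log prior(class)."""
    scores = [multivariate_normal.logpdf(x, mean=m, cov=C) + np.log(p)
              for m, C, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

# two illustrative spectral classes with collateral-data priors 0.8 / 0.2
means = [np.array([50.0, 60.0]), np.array([55.0, 58.0])]
covs = [np.eye(2) * 9.0, np.eye(2) * 9.0]
print(classify_with_priors(np.array([53.0, 59.0]), means, covs, [0.8, 0.2]))
```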
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared among themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases are also discussed. A simulation study is used to evaluate the methods developed in this study.
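The two-stage least squares step used in the comparison has a compact closed form; this generic sketch (illustrative, not the paper's aquifer-specific implementation) regresses the regressors on the instruments and then runs the second-stage regression on the fitted values:

```python
import numpy as np

def two_stage_least_squares(X: np.ndarray, Z: np.ndarray, y: np.ndarray) -> np.ndarray:
    """2SLS estimator: the first stage projects X onto the instrument space
    spanned by Z; the second stage runs OLS of y on the projected regressors."""
    first_stage = np.linalg.lstsq(Z, X, rcond=None)[0]
    X_hat = Z @ first_stage                              # fitted regressors
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]      # second-stage OLS
    return beta
```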
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
NASA Astrophysics Data System (ADS)
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique of LBS is map building for unknown environments, a technique also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV) with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion unavoidably exists to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects such as walking pedestrians, which often exist in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
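The map-learning strategy of raising or lowering each grid cell's occupancy after a scan match is conventionally implemented as a clamped log-odds update; a minimal sketch, with the sensor-model probabilities and clamp values as assumptions:

```python
import numpy as np

def update_log_odds(cell_log_odds: float, hit: bool,
                    p_hit: float = 0.7, p_miss: float = 0.4,
                    clamp: float = 4.0) -> float:
    """Add the inverse-sensor-model log-odds for a hit or a miss; clamping
    keeps cells revisable, which helps in dynamic scenes with pedestrians."""
    p = p_hit if hit else p_miss
    cell_log_odds += np.log(p / (1.0 - p))
    return float(np.clip(cell_log_odds, -clamp, clamp))
```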
2013-03-21
instruments where frequency estimates are calculated from coherently detected fields, e.g., coherent Doppler LIDAR. Our CRB results reveal that the best...
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
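For the single (unmixed) zero-truncated Poisson case, the MLE and the associated Horvitz-Thompson population-size estimate are short enough to sketch; the root-finding bounds and the example counts are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def ztp_lambda_mle(counts: np.ndarray) -> float:
    """MLE for zero-truncated Poisson: solve
    lam / (1 - exp(-lam)) = sample mean of the positive counts."""
    m = counts.mean()
    return brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - m, 1e-6, 1e3)

def horvitz_thompson_size(counts: np.ndarray) -> float:
    """Population size estimate N = n / P(observed), where
    P(observed) = 1 - exp(-lambda-hat)."""
    lam = ztp_lambda_mle(counts)
    return counts.size / (1.0 - np.exp(-lam))

counts = np.array([1, 1, 2, 1, 3, 1, 2, 4])   # capture counts of observed units
print(horvitz_thompson_size(counts))
```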
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
12-mode OFDM transmission using reduced-complexity maximum likelihood detection.
Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold
2015-02-01
We report 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM transmission over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss.
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
NASA Astrophysics Data System (ADS)
Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.
2017-10-01
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ~21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
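The contrast between the two approaches can be sketched numerically. Assuming the idealized azimuthal density p(phi) = (1 + mu*cos(2*(phi - phi0))) / (2*pi), an unbinned MLM fits mu and phi0 directly from the event angles rather than from a binned histogram; the sampler, event count, and true parameter values below are invented, and a real instrument would fold in its nonideal response:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def sample(n, mu=0.3, phi0=0.5):      # rejection-sample toy event angles
        out = []
        while len(out) < n:
            phi = rng.uniform(0, 2 * np.pi)
            if rng.uniform() < (1 + mu * np.cos(2 * (phi - phi0))) / (1 + mu):
                out.append(phi)
        return np.array(out)

    phi = sample(5000)

    def nll(params):                      # unbinned negative log-likelihood
        mu, phi0 = params
        return -np.sum(np.log((1 + mu * np.cos(2 * (phi - phi0))) / (2 * np.pi)))

    fit = minimize(nll, x0=[0.1, 0.0], bounds=[(0.0, 0.99), (-np.pi, np.pi)])
    print(fit.x)    # estimated modulation amplitude and polarization angle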
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
High-confidence coding and noncoding transcriptome maps
2017-01-01
The advent of high-throughput RNA sequencing (RNA-seq) has led to the discovery of unprecedentedly immense transcriptomes encoded by eukaryotic genomes. However, the transcriptome maps are still incomplete, partly because they were mostly reconstructed from RNA-seq reads that lack orientation (known as unstranded reads) and certain boundary information. Methods that expand the usability of unstranded RNA-seq data by predetermining the orientation of the reads and precisely determining the boundaries of assembled transcripts could significantly benefit the quality of the resulting transcriptome maps. Here, we present a high-performing transcriptome assembly pipeline, called CAFE, that significantly improves assemblies built from stranded and/or unstranded RNA-seq data by orienting unstranded reads using maximum likelihood estimation and by integrating information about transcription start sites and cleavage and polyadenylation sites. Applying large-scale transcriptomic data comprising 230 billion RNA-seq reads from the ENCODE, Human BodyMap 2.0, The Cancer Genome Atlas, and GTEx projects, CAFE enabled us to predict the directions of about 220 billion unstranded reads, which led to the construction of more accurate transcriptome maps, comparable to the manually curated map, and a comprehensive lncRNA catalog that includes thousands of novel lncRNAs. Our pipeline should help not only to build comprehensive, precise transcriptome maps from complex genomes but also to expand the universe of noncoding genomes. PMID:28396519
NASA Astrophysics Data System (ADS)
Juniati, E.; Arrofiqoh, E. N.
2017-09-01
Information extraction from remote sensing data, especially land cover, can be obtained by digital classification. In practice, some analysts are more comfortable using visual interpretation to retrieve land cover information; however, visual interpretation is highly influenced by the subjectivity and knowledge of the interpreter, and it takes considerable time. Digital classification can be done in several ways, depending on the chosen mapping approach and the assumptions made about the data distribution. This study compared several classification methods across data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 scale for Landsat, 1:25,000 scale for SPOT and 1:5,000 scale for orthophotos, but using visual interpretation to retrieve the information. Maximum likelihood classification (MLC), a pixel-based, parametric approach, was applied to these data, as was an artificial neural network classifier, a pixel-based, non-parametric approach. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification used on Indonesian topographic maps. The classification was applied to each data source and is expected to recognize land cover patterns and to assess the consistency of the land cover map produced from each data source. Furthermore, the study analyses the benefits and limitations of each method.
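For reference, a minimal pixel-based Gaussian maximum likelihood classifier of the kind compared above can be sketched as follows; the two "bands", class names, and training statistics are invented for illustration:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(1)
    train = {                                  # hypothetical training pixels
        "water":  rng.normal([0.05, 0.02], 0.01, (100, 2)),
        "forest": rng.normal([0.04, 0.30], 0.02, (100, 2)),
        "urban":  rng.normal([0.20, 0.25], 0.03, (100, 2)),
    }
    # one Gaussian (mean vector, covariance matrix) per class
    models = {c: multivariate_normal(x.mean(0), np.cov(x.T))
              for c, x in train.items()}

    def classify(pixels):
        # assign each pixel to the class with the highest log-likelihood
        logp = np.column_stack([m.logpdf(pixels) for m in models.values()])
        labels = list(models)
        return [labels[i] for i in logp.argmax(axis=1)]

    print(classify(np.array([[0.05, 0.02], [0.18, 0.27]])))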
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects the image, from the possible set of feasible images, which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques such as functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
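A toy numerical sketch of this formulation, with an invented linear forward model A, data b, and weights standing in for the MEG lead-field and noise level: the image is chosen to trade a Gaussian likelihood (noise-rejection) term against a negative-entropy penalty, mirroring the MAP objective:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    A = rng.normal(size=(20, 50))                # toy forward model
    x_true = np.abs(rng.normal(size=50))         # nonnegative source image
    b = A @ x_true + 0.1 * rng.normal(size=20)   # noisy measurements
    sigma2, lam = 0.01, 0.1                      # noise variance, entropy weight

    def objective(x):
        resid = A @ x - b
        # Gaussian likelihood term plus negative entropy (x log x) penalty
        return resid @ resid / (2 * sigma2) + lam * np.sum(x * np.log(x))

    res = minimize(objective, np.full(50, 0.5),
                   bounds=[(1e-8, None)] * 50, method="L-BFGS-B")
    print(res.x.round(2))                        # maximum entropy / MAP image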
Talbot, S. S.; Shasby, M.B.; Bailey, T.N.
1985-01-01
A Landsat-based vegetation map was prepared for Kenai National Wildlife Refuge and adjacent lands, 2 million and 2.5 million acres respectively. The refuge lies within the middle boreal subzone of south central Alaska. Seven major classes and sixteen subclasses were recognized: forest (closed needleleaf, needleleaf woodland, mixed); deciduous scrub (lowland and montane, subalpine); dwarf scrub (dwarf shrub tundra, lichen tundra, dwarf shrub and lichen tundra, dwarf shrub peatland, string bog/wetlands); herbaceous (graminoid meadows and marshes); scarcely vegetated areas; water (clear, moderately turbid, highly turbid); and glaciers. The methodology employed a cluster-block technique. Sample areas were described based on a combination of helicopter-ground survey, aerial photo interpretation, and digital Landsat data. Major steps in the Landsat analysis involved: preprocessing (geometric correction), spectral class labeling of sample areas, derivation of statistical parameters for spectral classes, preliminary classification of the entire study area using a maximum-likelihood algorithm, and final classification using ancillary information such as digital elevation data. The vegetation map (scale 1:250,000) was a pioneering effort, since there were no intermediate-scale maps of the area. Representative of distinctive regional patterns, the map was suitable for use in comprehensive conservation planning and wildlife management.
Input-output mapping reconstruction of spike trains at dorsal horn evoked by manual acupuncture
NASA Astrophysics Data System (ADS)
Wei, Xile; Shi, Dingtian; Yu, Haitao; Deng, Bin; Lu, Meili; Han, Chunxiao; Wang, Jiang
2016-12-01
In this study, a generalized linear model (GLM) is used to reconstruct the mapping from acupuncture stimulation to spike trains, driven by action potential data. The electrical signals were recorded in the spinal dorsal horn after manual acupuncture (MA) manipulations at different frequencies applied at the “Zusanli” point of experimental rats. The maximum-likelihood method is adopted to estimate the parameters of the GLM and the quantified value of the assumed model input. By validating the accuracy of the firings generated from the established GLM, it is found that the input-output mapping of spike trains evoked by acupuncture can be successfully reconstructed for different frequencies. Furthermore, comparing the performance of several GLMs based on distinct inputs suggests that an input in the form of a half-sine with noise can well describe the generator potential induced by the mechanical action of acupuncture. In particular, the comparison of how well five selected inputs reproduce the experimental spikes is in accordance with the phenomenon found in Hodgkin-Huxley (H-H) model simulations, which indicates that the mapping from the noisy half-sine input to the experimental spikes reflects the real encoding scheme to some extent. These studies provide new insight into the coding processes and information transfer of acupuncture.
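The core fitting step can be sketched with a simulated stand-in for the recordings: a Poisson GLM is fitted by maximum likelihood to map a noisy half-sine "generator potential" to spike counts. The waveform, rates, and coefficients below are invented:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 200)
    stimulus = np.clip(np.sin(2 * np.pi * 2 * t), 0, None)   # half-sine input
    stimulus += 0.05 * rng.normal(size=t.size)               # plus noise
    X = sm.add_constant(stimulus)

    true_rate = np.exp(-1.0 + 2.0 * stimulus)   # spikes per bin (log link)
    spikes = rng.poisson(true_rate)             # simulated spike counts

    model = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
    print(model.params)    # ML estimates of baseline and stimulus gain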
NASA Astrophysics Data System (ADS)
Said, Yahia A.; Petropoulos, George; Srivastava, Prashant K.
2014-05-01
Information on burned area estimates is of key importance in environmental and ecological studies as well as in fire management, including damage assessment and planning of post-fire recovery of affected areas. Earth Observation (EO) today provides the most efficient way of obtaining such information in a rapid, consistent and cost-effective manner. The present study aimed at exploring the effect of topographic correction on burnt area delineation in conditions characteristic of a Mediterranean environment, using ASTER high resolution multispectral remotely sensed imagery. A further objective was to investigate the potential added value of including the shortwave infrared (SWIR) bands in improving the retrieval of burned area cartography from the ASTER data. In particular, the capabilities of the Maximum Likelihood (ML), Support Vector Machines (SVMs) and Object-Based Image Analysis (OBIA) classification techniques were examined. A typical Mediterranean site in Greece, where a fire event occurred during the summer of 2007 and for which post-fire ASTER imagery was acquired, served as the case study. Our results indicated that the combination of topographic correction (ortho-rectification) with the inclusion of the SWIR bands returned the most accurate results in terms of burnt area mapping. In terms of image processing methods, OBIA showed the best results and was found to be the most promising approach for burned area mapping, with the least absolute difference from the validation polygon, followed by SVM and ML. All in all, our study provides an important contribution to understanding the capability of high resolution imagery such as that from the ASTER sensor, and corroborates the usefulness of topographic correction as an image processing step when delineating burnt areas from such data. It also provides further evidence that EO technology can offer an effective practical tool for assessing the extent of ecosystem destruction from wildfires, providing extremely useful information for co-ordinating efforts for the recovery of fire-affected ecosystems after wildfire. Keywords: Remote Sensing, ASTER, Burned area mapping, Maximum Likelihood, Support Vector Machines, Object-based image analysis, Greece
A Real-World Study of Switching From Allopurinol to Febuxostat in a Health Plan Database
Altan, Aylin; Shiozawa, Aki; Bancroft, Tim; Singh, Jasvinder A.
2015-01-01
Objective The objective of this study was to assess the real-world comparative effectiveness of continuing on allopurinol versus switching to febuxostat. Methods In a retrospective claims data study of enrollees in health plans affiliated with Optum, we evaluated patients from February 1, 2009, to May 31, 2012, with a gout diagnosis, a pharmacy claim for allopurinol or febuxostat, and at least 1 serum uric acid (SUA) result available during the follow-up period. Univariate and multivariable-adjusted analyses (controlling for patient demographics and clinical factors) assessed the likelihood of SUA lowering and achievement of target SUA of less than 6.0 mg/dL or less than 5.0 mg/dL in allopurinol continuers versus febuxostat switchers. Results The final study population included 748 subjects who switched to febuxostat from allopurinol and 4795 continuing users of allopurinol. The most common doses of allopurinol were 300 mg/d or less in 95% of allopurinol continuers and 93% of febuxostat switchers (prior to switching); the most common dose of febuxostat was 40 mg/d, in 77% of febuxostat switchers (after switching). Compared with allopurinol continuers, febuxostat switchers had greater (1) mean preindex SUA, 8.0 mg/dL versus 6.6 mg/dL (P < 0.001); (2) likelihood of postindex SUA of less than 6.0 mg/dL, 62.2% versus 58.7% (P = 0.072); (3) likelihood of postindex SUA of less than 5.0 mg/dL, 38.9% versus 29.6% (P < 0.001); and (4) decrease in SUA, 1.8 (SD, 2.2) mg/dL versus 0.4 (SD, 1.7) mg/dL (P < 0.001). In multivariable-adjusted analyses, compared with allopurinol continuers, febuxostat switchers had significantly higher likelihood of achieving SUA of less than 6.0 mg/dL (40% higher) and SUA of less than 5.0 mg/dL (83% higher). Conclusions In this “real-world” setting, many patients with gout not surprisingly were not treated with maximum permitted doses of allopurinol. Patients switched to febuxostat were more likely to achieve target SUA levels than those who continued on generally stable doses of allopurinol. PMID:26580304
Anatomy of the Dead Sea transform: Does it reflect continuous changes in plate motion?
ten Brink, Uri S.; Rybakov, M.; Al-Zoubi, A. S.; Hassouneh, M.; Frieslander, U.; Batayneh, A.T.; Goldschmidt, V.; Daoud, M.N.; Rotstein, Y.; Hall, J.K.
1999-01-01
A new gravity map of the southern half of the Dead Sea transform offers the first regional view of the anatomy of this plate boundary. Interpreted together with auxiliary seismic and well data, the map reveals a string of subsurface basins of widely varying size, shape, and depth along the plate boundary and relatively short (25-55 km) and discontinuous fault segments. We argue that this structure is a result of continuous small changes in relative plate motion. However, several segments must have ruptured simultaneously to produce the inferred maximum magnitude of historical earthquakes.
Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J
2015-09-03
RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
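The flavor of likelihood-based abundance estimation with multi-mapped reads can be conveyed by a minimal EM iteration over a toy compatibility matrix; EMSAR's actual joint Poisson model over mappability-based segments is considerably more elaborate:

    import numpy as np

    # compat[r, t] = 1 if read r maps to transcript t (toy data)
    compat = np.array([[1, 1, 0],
                       [1, 0, 0],
                       [0, 1, 1],
                       [1, 1, 1]], dtype=float)
    theta = np.full(3, 1 / 3)                      # initial abundances

    for _ in range(100):
        # E-step: fractional assignment of each read to each transcript
        resp = compat * theta
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: abundances proportional to expected read counts
        theta = resp.sum(axis=0) / resp.sum()

    print(theta.round(3))    # ML transcript abundance estimates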
Object based technique for delineating and mapping 15 tree species using VHR WorldView-2 imagery
NASA Astrophysics Data System (ADS)
Mustafa, Yaseen T.; Habeeb, Hindav N.
2014-10-01
Monitoring and analyzing forests and trees are required tasks for managing and establishing a good plan for forest sustainability. To achieve such tasks, information and data on the trees must be collected. The fastest and relatively low-cost technique is satellite remote sensing. In this study, we propose an approach to identify and map 15 tree species in the Mangish sub-district, Kurdistan Region, Iraq. Image-objects (IOs) were used as the tree species mapping unit. This was achieved using the shadow index, normalized difference vegetation index and texture measurements. Four classification methods (Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper) were used to classify IOs using selected IO features derived from WorldView-2 imagery. Results showed that overall accuracy increased by 5-8% using the Neural Network method compared with the other methods, with a Kappa coefficient of 69%. This technique gives reasonable results for various tree species classifications by applying the Neural Network method with IO techniques to WorldView-2 imagery.
Vegetation mapping of Nowitna National Wildlife Refuge, Alaska using Landsat MSS digital data
Talbot, S. S.; Markon, Carl J.
1986-01-01
A Landsat-derived vegetation map was prepared for Nowitna National Wildlife Refuge. The refuge lies within the middle boreal subzone of north central Alaska. Seven major vegetation classes and sixteen subclasses were recognized: forest (closed needleleaf, open needleleaf, needleleaf woodland, mixed, and broadleaf); broadleaf scrub (lowland, alluvial, subalpine); dwarf scrub (prostrate dwarf shrub tundra, dwarf shrub-graminoid tussock peatland); herbaceous (graminoid bog, marsh and meadow); scarcely vegetated areas (scarcely vegetated scree and floodplain); water (clear, turbid); and other areas (mountain shadow). The methodology employed a cluster-block technique. Sample areas were described based on a combination of helicopter-ground survey, aerial photointerpretation, and digital Landsat data. Major steps in the Landsat analysis involved preprocessing (geometric correction), derivation of statistical parameters for spectral classes, spectral class labeling of sample areas, preliminary classification of the entire study area using a maximum-likelihood algorithm, and final classification utilizing ancillary information such as digital elevation data. The final product is a 1:250,000-scale vegetation map representative of distinctive regional patterns and suitable for use in comprehensive conservation planning.
Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred
2011-11-01
Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using a Markov random field based super resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and it locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas and the method is applied in QuickBird images to produce a 0.6 m tree crown map in a city of The Netherlands. The SRM output shows an identification rate of 66% and commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines and SRM at nominal resolution (2.4 m) approaches.
Mapping Quantitative Field Resistance Against Apple Scab in a 'Fiesta' x 'Discovery' Progeny.
Liebhard, R; Koller, B; Patocchi, A; Kellerhals, M; Pfammatter, W; Jermini, M; Gessler, C
2003-04-01
Breeding of resistant apple cultivars (Malus x domestica) as a disease management strategy relies on the knowledge and understanding of the underlying genetics. The availability of molecular markers and genetic linkage maps enables the detection and the analysis of major resistance genes as well as of quantitative trait loci (QTL) contributing to the resistance of a genotype. Such a genetic linkage map was constructed, based on a segregating population of the cross between apple cvs. Fiesta (syn. Red Pippin) and Discovery. The progeny was observed for 3 years at three different sites in Switzerland and field resistance against apple scab (Venturia inaequalis) was assessed. Only a weak correlation was detected between leaf scab and fruit scab. A QTL analysis was performed, based on the genetic linkage map consisting of 804 molecular markers and covering all 17 chromosomes of apple. With the maximum likelihood-based interval mapping method, eight genomic regions were identified, six conferring resistance against leaf scab and two conferring fruit scab resistance. Although cv. Discovery showed a much stronger resistance against scab in the field, most QTL identified were attributed to the more susceptible parent 'Fiesta'. This indicated a high degree of homozygosity at the scab resistance loci in 'Discovery', preventing their detection in the progeny due to the lack of segregation.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The performance of the DSN telemetry system with convolutionally coded data, using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network, is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. Results for both one-way and two-way radio losses are included.
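The ML decoding principle behind such a decoder can be sketched for a toy rate-1/2, constraint-length-3 code (generators 7 and 5 octal); the operational MCD handled longer constraint lengths and soft decisions:

    def encode(bits, state=0):
        # shift each input bit into a 3-bit register, emit two parity bits
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0b111
            out += [bin(state & 0b111).count("1") & 1,   # generator 111
                    bin(state & 0b101).count("1") & 1]   # generator 101
        return out

    def viterbi(received):
        # path: 2-bit trellis state -> (decoded bits, Hamming path metric)
        path = {0: ([], 0)}
        for i in range(0, len(received), 2):
            r, new = received[i:i + 2], {}
            for s, (bits, metric) in path.items():
                for b in (0, 1):
                    full = ((s << 1) | b) & 0b111
                    expect = [bin(full & 0b111).count("1") & 1,
                              bin(full & 0b101).count("1") & 1]
                    m = metric + sum(x != y for x, y in zip(r, expect))
                    ns = full & 0b11
                    if ns not in new or m < new[ns][1]:   # keep survivor
                        new[ns] = (bits + [b], m)
            path = new
        return min(path.values(), key=lambda v: v[1])[0]

    msg = [1, 0, 1, 1, 0, 0, 1]
    coded = encode(msg)
    coded[3] ^= 1                        # inject one channel error
    print(viterbi(coded) == msg)         # ML decoding recovers the message

Hard decisions and Hamming path metrics keep the sketch minimal; a soft-decision metric is the natural extension toward the operational decoder.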
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1982-01-01
A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, some tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the complete mortality data are not used in the moment estimation method. Maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation; some extensions to double decrement are also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
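For the constant-force case the single-decrement MLE is simple enough to sketch directly: with D observed deaths and total observed exposure E, the MLE of the force of mortality is mu = D/E and the one-year death probability is q = 1 - exp(-mu). The exposure data below are invented:

    import numpy as np

    # years each life was observed during the age interval, and whether it died
    time_observed = np.array([1.0, 1.0, 0.4, 1.0, 0.7, 1.0, 0.2, 1.0])
    died = np.array([0, 0, 1, 0, 1, 0, 1, 0], dtype=bool)

    mu_hat = died.sum() / time_observed.sum()   # MLE of the constant force
    q_hat = 1 - np.exp(-mu_hat)                 # one-year death probability
    print(mu_hat, q_hat)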
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
Elevation maps of the San Francisco Bay region, California, a digital database
Graham, Scott E.; Pike, Richard J.
1998-01-01
PREFACE: Topography, the configuration of the land surface, plays a major role in various natural processes that have helped shape the ten-county San Francisco Bay region and continue to affect its development. Such processes include a dangerous type of landslide, the debris flow (Ellen and others, 1997), as well as other modes of slope failure that damage property but rarely threaten life directly: slumping, translational sliding, and earthflow (Wentworth and others, 1997). Different types of topographic information at both local and regional scales are helpful in assessing the likelihood of slope failure and mapping the extent of its past activity, as well as in addressing other issues in hazard mitigation and land-use policy. The most useful information is quantitative.
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
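One simple way to see how a weakly informative Gaussian prior stabilizes logistic regression coefficients is to maximize the log posterior (a MAP estimate) rather than the likelihood alone. The simulated features below merely stand in for predictors such as original DRG, coder, and day of coding; the study's actual models were fully Bayesian:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 3))                    # toy audit features
    logits = X @ np.array([0.8, -0.5, 0.0]) - 1.0
    y = rng.uniform(size=500) < 1 / (1 + np.exp(-logits))
    Xd = np.column_stack([np.ones(500), X])          # add intercept
    prior_sd = 2.5                                   # weakly informative prior

    def neg_log_posterior(beta):
        eta = Xd @ beta
        loglik = np.sum(y * eta - np.logaddexp(0, eta))   # Bernoulli log-lik
        logprior = -np.sum(beta**2) / (2 * prior_sd**2)   # Gaussian prior
        return -(loglik + logprior)

    beta_map = minimize(neg_log_posterior, np.zeros(4)).x
    print(beta_map.round(3))    # shrunken coefficient estimates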
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.
Han, Qiyang; Wellner, Jon A
2016-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature of the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related to the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Xiao, Yong; Cheng, Xinyi; Li, Deng; Wang, Liwei
2016-02-01
For the continuous-crystal positron emission tomography (PET) detector built in our lab, a maximum likelihood algorithm adapted for implementation on a field programmable gate array (FPGA) is proposed to estimate the three-dimensional (3D) coordinates of the interaction position from the single-end detected scintillation light response. The row-sum and column-sum readout scheme organizes the 64 photomultiplier (PMT) channels into eight row signals and eight column signals to be read out for X- and Y-coordinate estimation independently. From reference events irradiated at a known oblique angle, the probability density function (PDF) for each depth-of-interaction (DOI) segment is generated; with these PDFs, reference events from perpendicular irradiation are assigned to DOI segments to generate the PDFs for X and Y estimation in each DOI layer. Evaluated on experimental data, the algorithm achieves an average X resolution of 1.69 mm along the central X-axis and a DOI resolution of 3.70 mm over the whole thickness (0-10 mm) of the crystal. The performance improvements from 2D estimation to the 3D algorithm are also presented. Benefiting from the abundant resources of the FPGA and a hierarchical storage arrangement, the whole algorithm can be implemented in a mid-scale FPGA. With a parallel, pipelined structure, the 3D position estimator on the FPGA achieves a processing throughput of 15 M events/s, which is sufficient for real-time PET imaging.
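Reduced to one dimension, table-based ML positioning amounts to evaluating a log-likelihood against precomputed response PDFs for every candidate position and taking the argmax. The Gaussian light-response model and geometry below are fabricated; on the real detector the PDFs come from reference irradiations:

    import numpy as np

    rng = np.random.default_rng(5)
    positions = np.linspace(0, 50, 101)        # candidate X positions (mm)
    channels = np.arange(8)                    # 8 row-sum signals

    # hypothetical mean light response: Gaussian spot around each position
    mean_resp = np.exp(-0.5 * ((channels * 6.25)[None, :]
                               - positions[:, None])**2 / 8.0**2)
    mean_resp /= mean_resp.sum(axis=1, keepdims=True)

    def ml_position(signal):
        # multinomial log-likelihood of the signal under each candidate
        # position (terms independent of position are dropped)
        logl = (signal[None, :] * np.log(mean_resp)).sum(axis=1)
        return positions[np.argmax(logl)]

    event = rng.poisson(2000 * mean_resp[40])  # simulated event near 20 mm
    print(ml_position(event.astype(float)))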
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
Arai, Satoru; Gu, Se Hun; Baek, Luck Ju; Tabara, Kenji; Bennett, Shannon; Oh, Hong-Shik; Takada, Nobuhiro; Kang, Hae Ji; Tanaka-Taya, Keiko; Morikawa, Shigeru; Okabe, Nobuhiko; Yanagihara, Richard; Song, Jin-Won
2012-01-01
Spurred by the recent isolation of a novel hantavirus, named Imjin virus (MJNV), from the Ussuri white-toothed shrew (Crocidura lasiura), targeted trapping was conducted for the phylogenetically related Asian lesser white-toothed shrew (Crocidura shantungensis). Pair-wise alignment and comparison of the S, M and L segments of a newfound hantavirus, designated Jeju virus (JJUV), indicated remarkably low nucleotide and amino acid sequence similarity with MJNV. Phylogenetic analyses, using maximum likelihood and Bayesian methods, showed divergent ancestral lineages for JJUV and MJNV, despite the close phylogenetic relationship of their reservoir soricid hosts. Also, no evidence of host switching was apparent in tanglegrams, generated by TreeMap 2.0β. PMID:22230701
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
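Alongside the analytical formula, maximum Pc can always be approximated by Monte Carlo simulation of the ideal observer. The sketch below does this for a 3-interval forced-choice task with uniform (non-Gaussian) noise, where the ML decision reduces to choosing the interval with the largest observation (valid here because the shifted uniform family has a monotone likelihood ratio); the distributions and shift are invented:

    import numpy as np

    rng = np.random.default_rng(6)
    m, n, d = 3, 200_000, 1.0            # intervals, trials, signal shift

    noise = rng.uniform(-2, 2, size=(n, m - 1))   # non-Gaussian noise
    signal = d + rng.uniform(-2, 2, size=n)       # signal interval

    # a trial is correct when the signal sample beats every noise sample
    pc_max = np.mean(signal > noise.max(axis=1))
    print(pc_max)    # Monte Carlo estimate of the maximum proportion correct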
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
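A simulation of one member of this family, a biased-coin rule in which a toxic response moves the dose down one level and a non-toxic response moves it up with probability Gamma/(1 - Gamma), illustrates how assignments pile up near the target quantile; the logistic dose-toxicity curve and target below are invented:

    import numpy as np

    rng = np.random.default_rng(7)
    doses = np.arange(10)
    ptox = 1 / (1 + np.exp(-(doses - 6)))   # hypothetical toxicity curve
    gamma = 0.25                            # target quantile
    bias = gamma / (1 - gamma)              # escalation probability

    level, visits = 3, np.zeros(10)
    for _ in range(2000):
        visits[level] += 1
        if rng.uniform() < ptox[level]:     # toxic response: step down
            level = max(level - 1, 0)
        elif rng.uniform() < bias:          # non-toxic: step up w.p. bias
            level = min(level + 1, 9)

    # modal assigned dose vs the dose whose toxicity is closest to gamma
    print(visits.argmax(), np.abs(ptox - gamma).argmin())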
Construction Theory and Noise Analysis Method of Global CGCS2000 Coordinate Frame
NASA Astrophysics Data System (ADS)
Jiang, Z.; Wang, F.; Bai, J.; Li, Z.
2018-04-01
The definition, renewal and maintenance of a geodetic datum has long been a topic of international interest. In recent years, many countries have been studying and implementing the modernization and renewal of their local geodetic reference coordinate frames. Based on precise results from 15 years of continuous observation by the state CORS (continuously operating reference system) network and from the mainland GNSS (Global Navigation Satellite System) network between 1999 and 2007, this paper studies the construction of the mathematical model of the Global CGCS2000 frame, mainly analyzing the theory and algorithm of the two-step method for Global CGCS2000 Coordinate Frame formulation. Finally, the noise characteristics of the coordinate time series are estimated quantitatively under the maximum likelihood estimation criterion.
Efficient crop type mapping based on remote sensing in the Central Valley, California
NASA Astrophysics Data System (ADS)
Zhong, Liheng
Most agricultural systems in California's Central Valley are purposely flexible and intentionally designed to meet the demands of dynamic markets. Agricultural land use is also impacted by climate change and urban development. As a result, crops change annually and semiannually, which makes estimating agricultural water use difficult, especially given the existing method by which agricultural land use is identified and mapped. A minor portion of agricultural land is surveyed annually for land-use type, and every 5 to 8 years the entire valley is completely evaluated. So far no effort has been made to effectively and efficiently identify specific crop types on an annual basis in this area. The potential of satellite imagery to map agricultural land cover and estimate water usage in the Central Valley is explored, with efforts made to minimize the cost and reduce the time of production during the mapping process. The land use change analysis shows that a remote sensing based mapping method is the only means to map the frequent changes of major crop types. The traditional maximum likelihood classification approach is first utilized to map crop types to test the classification capacity of existing algorithms. High accuracy is achieved with sufficient ground truth data for training, and crop maps of moderate quality can be produced in a timely fashion to facilitate a near-real-time water use estimate. However, the large set of ground truth data required by this method results in high data collection costs, which are difficult to reduce because a trained classification algorithm is not transferable between different years or different regions. A phenology-based classification (PBC) approach is therefore developed which extracts phenological metrics from annual vegetation index profiles and identifies crop types from these metrics using decision trees. Compared with traditional maximum likelihood classification, this phenology-based approach shows great advantages when the size of the training set is limited by ground truth availability. Once developed, the classifier can be applied to different years and a vast area with only a few adjustments for local agricultural and annual weather conditions. 250 m MODIS imagery is utilized as the main input to the PBC algorithm and displays promising capacity for crop identification in several counties in the Central Valley. A time series of Landsat TM/ETM+ images at 30 m resolution is necessary for crop mapping in counties with smaller land parcels, although the processing time is longer. Spectral characteristics are also employed to identify crops in PBC; spectral signatures are associated with phenological stages instead of imaging dates, which greatly increases the stability of the classifier performance and overcomes the problem of over-fitting. Moderate accuracies are achieved by PBC, with confusion mostly within the same crop categories. Based on a quantitative analysis, misclassification in PBC has a very small impact on the accuracy of the agricultural water use estimate. The cost of the entire PBC procedure is kept very low, which will enable its use in routine annual crop mapping in the Central Valley.
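The PBC idea, reducing each pixel's annual vegetation-index profile to a few phenological metrics and classifying those with a decision tree, can be sketched on simulated profiles; the toy NDVI curves, metrics, and crop labels below stand in for the MODIS/Landsat inputs:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(10)
    days = np.arange(0, 365, 16)                 # 16-day compositing period

    def profile(peak_day, amplitude):            # toy annual NDVI curve
        return 0.2 + amplitude * np.exp(-0.5 * ((days - peak_day) / 40.0)**2)

    def metrics(ndvi):                           # phenological metrics
        return [ndvi.max(), days[ndvi.argmax()], (ndvi > 0.5).sum()]

    X, y = [], []
    for crop, (peak, amp) in {"corn": (200, 0.6), "wheat": (120, 0.5),
                              "alfalfa": (160, 0.4)}.items():
        for _ in range(100):
            ndvi = profile(peak + rng.normal(0, 10),
                           amp + rng.normal(0, 0.05))
            X.append(metrics(ndvi))
            y.append(crop)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(tree.predict([metrics(profile(205, 0.62))]))   # likely "corn"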
Coolen, Bram F; Poot, Dirk H J; Liem, Madieke I; Smits, Loek P; Gao, Shan; Kotek, Gyula; Klein, Stefan; Nederveen, Aart J
2016-03-01
A novel three-dimensional (3D) T1 and T2 mapping protocol for the carotid artery is presented. A 3D black-blood imaging sequence was adapted to allow carotid T1 and T2 mapping using multiple flip angles and echo time (TE) preparation times. B1 mapping was performed to correct for spatially varying deviations from the nominal flip angle. The protocol was optimized using simulations and phantom experiments. In vivo scans were performed on six healthy volunteers in two sessions, and on a patient with advanced atherosclerosis. Compensation for patient motion was achieved by 3D registration of the inter/intrasession scans. Subsequently, T1 and T2 maps were obtained by maximum likelihood estimation. Simulations and phantom experiments showed that the bias in T1 and T2 estimation was < 10% within the range of physiological values. In vivo T1 and T2 values for the carotid vessel wall were 844 ± 96 and 39 ± 5 ms, with good repeatability across scans. Patient data revealed altered T1 and T2 values in regions of atherosclerotic plaque. 3D T1 and T2 mapping of the carotid artery is thus feasible using variable flip angle and variable TE preparation acquisitions. We foresee application of this technique for plaque characterization and for monitoring plaque progression in atherosclerotic patients. © 2015 Wiley Periodicals, Inc.
On the Existence and Uniqueness of JML Estimates for the Partial Credit Model
ERIC Educational Resources Information Center
Bertoli-Barsotti, Lucio
2005-01-01
A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
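The overall shape of such an algorithm, alternating an unregularized gradient (ML-style) update with a shrinkage step, can be sketched with soft-thresholding standing in for the paper's inverse quadratic and inverse cubic shrinkage functions; the sparse image, forward matrix, and weights are invented:

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.normal(size=(30, 60))            # toy forward model
    x_true = np.zeros(60)
    x_true[[5, 17, 42]] = [3.0, -2.0, 4.0]   # sparse "scattering density"
    b = A @ x_true + 0.05 * rng.normal(size=30)

    x = np.zeros(60)
    step = 1 / np.linalg.norm(A, 2)**2       # step from the spectral norm
    lam = 0.1                                # regularization weight
    for _ in range(500):
        grad = A.T @ (A @ x - b)             # unregularized (ML-style) update
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)   # shrinkage

    print(np.flatnonzero(np.abs(x) > 0.5))   # recovers the support {5, 17, 42}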
Molecular surface mesh generation by filtering electron density map.
Giard, Joachim; Macq, Benoît
2010-01-01
Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the Van der Waals surface or the solvent excluded surface can be useful for several applications. We propose a fast and parameterizable algorithm producing good visual quality meshes representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is the result of the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
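A compact version of this pipeline, with invented atom centers, Gaussian width, cutoff frequency, and isolevel, can be assembled from NumPy FFTs and scikit-image's marching cubes:

    import numpy as np
    from skimage.measure import marching_cubes

    grid = np.indices((64, 64, 64)).astype(float)       # voxel coordinates
    atoms = [(32, 32, 28), (32, 32, 36), (28, 32, 32)]  # toy atom centers

    # density map: maximum of Gaussians placed around the atom centers
    density = np.max([np.exp(-((grid[0] - x)**2 + (grid[1] - y)**2
                               + (grid[2] - z)**2) / 18.0)
                      for x, y, z in atoms], axis=0)

    # ideal low-pass filter applied in the Fourier domain
    freq = np.fft.fftfreq(64)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    keep = np.sqrt(kx**2 + ky**2 + kz**2) < 0.15        # cutoff frequency
    smooth = np.real(np.fft.ifftn(np.fft.fftn(density) * keep))

    # isosurface the filtered map to obtain the surface mesh
    verts, faces, normals, values = marching_cubes(smooth, level=0.2)
    print(len(verts), len(faces))

The cutoff frequency controls the smoothing/detail trade-off, which is what makes the algorithm parameterizable.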
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using the single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
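Although the NDMMF score equations themselves are not reproduced here, the mechanics of Newton-Raphson ML estimation can be shown on a small stand-in problem: solving the profile score equation for the shape parameter of a gamma distribution from simulated data:

    import numpy as np
    from scipy.special import digamma, polygamma

    rng = np.random.default_rng(9)
    x = rng.gamma(shape=2.5, scale=1.3, size=1000)

    s = np.log(x.mean()) - np.log(x).mean()   # sufficient statistic
    k = 1.0                                   # starting value
    for _ in range(25):
        g = np.log(k) - digamma(k) - s        # score-equation residual
        gprime = 1 / k - polygamma(1, k)      # derivative of the residual
        k -= g / gprime                       # Newton-Raphson update

    print(k, x.mean() / k)                    # MLE of shape and scale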
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
Subramaniam, Dhananjay Radhakrishnan; Stoddard, William A; Mortensen, Kristian H; Ringgaard, Steffen; Trolle, Christian; Gravholt, Claus H; Gutmark, Ephraim J; Mylavarapu, Goutham; Backeljauw, Philippe F; Gutmark-Little, Iris
2017-02-24
Severity of thoracic aortic disease in Turner syndrome (TS) patients is currently described through measures of aorta size and geometry at discrete locations. The objective of this study is to develop an improved measurement tool that quantifies changes in size and geometry over time, continuously along the length of the thoracic aorta. Cardiovascular magnetic resonance (CMR) scans for 15 TS patients [41 ± 9 years (mean age ± standard deviation (SD))] were acquired over a 10-year period and compared with ten healthy gender- and age-matched controls. Three-dimensional aortic geometries were reconstructed, smoothed and clipped, followed by identification of centerlines and planes normal to the centerlines. Geometric variables, including maximum diameter and cross-sectional area, were evaluated continuously along the thoracic aorta. Distance maps were computed for TS and compared to the corresponding maps for controls, to highlight any asymmetry and dimensional differences between diseased and normal aortae. Furthermore, a registration scheme was proposed to estimate localized changes in aorta geometry between visits. The estimated maximum diameter from the continuous method was then compared with corresponding manual measurements at 7 discrete locations for each visit and for changes between visits. Manual measures at the seven positions and the corresponding continuous measurements of maximum diameter for all visits considered correlated highly (R-value = 0.77, P < 0.01). There was good agreement between the manual and continuous measurement methods for visit-to-visit changes in maximum diameter. The continuous method was less sensitive to inter-user variability [0.2 ± 2.3 mm (mean difference in diameters ± SD)] and to the choice of smoothing software [0.3 ± 1.3 mm]. Aortic diameters were larger in TS than in controls in the ascending [TS: 13.4 ± 2.1 mm (mean distance ± SD), Controls: 12.6 ± 1 mm] and descending [TS: 10.2 ± 1.3 mm (mean distance ± SD), Controls: 9.5 ± 0.9 mm] thoracic aorta, as observed from the distance maps. An automated methodology is presented that enables rapid and precise three-dimensional measurement of thoracic aortic geometry, which can serve as an improved tool to define disease severity and monitor disease progression. ClinicalTrials.gov Identifier: NCT01678274. Registered 08.30.2012.
NASA Astrophysics Data System (ADS)
Becker, Brian L.
Great Lakes wetlands are increasingly being recognized as vital ecosystem components that provide valuable functions such as sediment retention, wildlife habitat, and nutrient removal. Aerial photography has traditionally provided a cost-effective means to inventory and monitor coastal wetlands, but is limited by its broad spectral sensitivity and non-digital format. Airborne sensor advancements have now made the acquisition of digital imagery with high spatial and spectral resolution a reality. In this investigation, we selected two Lake Huron coastal wetlands, each from a distinct eco-region, over which digital airborne imagery (AISA or CASI-II) was acquired. The 1-meter images contain approximately twenty 10-nanometer-wide spectral bands strategically located throughout the visible and near-infrared. The 4-meter hyperspectral imagery contains 48 contiguous bands across the visible and short-wavelength near-infrared. Extensive in-situ reflectance spectra (SE-590) and sub-meter GPS locations were acquired for the dominant botanical and substrate classes field-delineated at each location. Normalized in-situ spectral signatures were subjected to Principal Components and 2nd Derivative analyses in order to identify the most botanically informative image bands. Three image-based investigations were implemented in order to evaluate the ability of three classification algorithms (ISODATA, Spectral Angle Mapper and Maximum-Likelihood) to differentiate botanical regions-of-interest. Two additional investigations were completed in order to assess classification changes associated with the independent manipulation of both spatial and spectral resolution. Of the three algorithms tested, the Maximum-Likelihood classifier best differentiated (89%) the regions-of-interest in both study sites. Covariance-based PCA rotation consistently enhanced the performance of the Maximum-Likelihood classifier. Seven non-overlapping bands (425.4, 514.9, 560.1, 685.5, 731.5, 812.3 and 916.7 nanometers) were identified as the best-performing bands with respect to classification performance. A spatial resolution of 2 meters or less was determined to be most appropriate in Great Lakes coastal wetland environments. This research represents the first step in evaluating the effectiveness of applying high-resolution, narrow-band imagery to the detailed mapping of coastal wetlands in the Great Lakes region.
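For readers unfamiliar with the Maximum-Likelihood classifier used here, a minimal per-pixel Gaussian version is sketched below, assuming equal class priors and per-class multivariate normal spectra; all names are illustrative.

```python
import numpy as np

def fit_ml_classifier(pixels, labels):
    """Fit a mean vector and covariance matrix per class from training pixels."""
    stats = {}
    for c in np.unique(labels):
        Xc = pixels[labels == c]
        stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return stats

def classify(pixels, stats):
    """Assign each pixel to the class with the highest Gaussian log-density."""
    classes = sorted(stats)
    scores = []
    for c in classes:
        mu, cov = stats[c]
        d = pixels - mu
        _, logdet = np.linalg.slogdet(cov)
        maha = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)
        scores.append(-0.5 * (logdet + maha))   # log-likelihood up to a constant
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]
```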
The Rule of Three for Prizes in Science and the Bold Triptychs of Francis Bacon.
Goldstein, Joseph L
2016-09-22
For many scientific awards, such as Nobels and Laskers, the maximum number of recipients is three. This Rule of Three forces selection committees to make difficult decisions that increase the likelihood of singling out those individuals who open a new field and continue to lead it. The Rule of Three is reminiscent of art's three-panel triptych, a form that the modern master Francis Bacon used to distill complex stories in a bold way. Copyright © 2016 Elsevier Inc. All rights reserved.
A Fast and Scalable Radiation Hybrid Map Construction and Integration Strategy
Agarwala, Richa; Applegate, David L.; Maglott, Donna; Schuler, Gregory D.; Schäffer, Alejandro A.
2000-01-01
This paper describes a fast and scalable strategy for constructing a radiation hybrid (RH) map from data on different RH panels. The maps on each panel are then integrated to produce a single RH map for the genome. Recurring problems in using maps from several sources are that the maps use different markers, the maps do not place the overlapping markers in same order, and the objective functions for map quality are incomparable. We use methods from combinatorial optimization to develop a strategy that addresses these issues. We show that by the standard objective functions of obligate chromosome breaks and maximum likelihood, software for the traveling salesman problem produces RH maps with better quality much more quickly than using software specifically tailored for RH mapping. We use known algorithms for the longest common subsequence problem as part of our map integration strategy. We demonstrate our methods by reconstructing and integrating maps for markers typed on the Genebridge 4 (GB4) and the Stanford G3 panels publicly available from the RH database. We compare map quality of our integrated map with published maps for GB4 panel and G3 panel by considering whether markers occur in the same order on a map and in DNA sequence contigs submitted to GenBank. We find that all of the maps are inconsistent with the sequence data for at least 50% of the contigs, but our integrated maps are more consistent. The map integration strategy not only scales to multiple RH maps but also to any maps that have comparable criteria for measuring map quality. Our software improves on current technology for doing RH mapping in areas of computation time and algorithms for considering a large number of markers for mapping. The essential impediments to producing dense high-quality RH maps are data quality and panel size, not computation. PMID:10720576
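The longest common subsequence computation used in the map integration step can be sketched with the standard dynamic program; the marker orders below are toy inputs, not data from the paper.

```python
def longest_common_subsequence(order_a, order_b):
    """Markers shared by two maps, in an order consistent with both."""
    n, m = len(order_a), len(order_b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            L[i + 1][j + 1] = (L[i][j] + 1 if order_a[i] == order_b[j]
                               else max(L[i][j + 1], L[i + 1][j]))
    # backtrack to recover one maximal marker sequence consistent with both maps
    seq, i, j = [], n, m
    while i and j:
        if order_a[i - 1] == order_b[j - 1]:
            seq.append(order_a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return seq[::-1]

# prints one length-5 subsequence consistent with both toy marker orders
print(longest_common_subsequence(list("ABCDEF"), list("ABDCEF")))
```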
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Statistical Bias in Maximum Likelihood Estimators of Item Parameters.
1982-04-01
The report examines statistical bias in maximum likelihood estimators of item parameters. (The remainder of the scanned report documentation page is illegible.)
ERIC Educational Resources Information Center
Beauducel, Andre; Herzberg, Philipp Yorck
2006-01-01
This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy-dispersive spectra without introducing reconstruction artifacts that would impact the interpretation of results.
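A minimal sketch of the kind of objective described, assuming a known system matrix A and a simple squared-difference continuity penalty over a (rows, cols, energies) grid; the paper's actual penalty and optimizer may differ.

```python
import numpy as np

def penalized_poisson_nll(x, A, counts, lam, shape):
    """Poisson negative log-likelihood plus a quadratic penalty that
    encourages local continuity across spatial and spectral neighbors."""
    mu = A @ x + 1e-12                           # expected photon counts
    nll = np.sum(mu - counts * np.log(mu))       # Poisson data term (up to a constant)
    img = x.reshape(shape)                       # (rows, cols, energies)
    penalty = sum(np.sum(np.diff(img, axis=ax) ** 2) for ax in range(3))
    return nll + lam * penalty
```

In practice an objective of this form would be handed to a gradient-based optimizer such as scipy.optimize.minimize, with a non-negativity constraint on x.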
Di Gaspero, G; Cipriani, G; Adam-Blondon, A-F; Testolin, R
2007-05-01
Genetic maps functionally oriented towards disease resistance have been constructed in grapevine by simultaneous maximum-likelihood estimation of linkage among 502 markers, including microsatellites and resistance gene analogs (RGAs). Mapping material consisted of two pseudo-testcrosses, 'Chardonnay' x 'Bianca' and 'Cabernet Sauvignon' x '20/3', where the seed parents were Vitis vinifera genotypes and the male parents were Vitis hybrids carrying resistance to mildew diseases. Individual maps included 320-364 markers each. The simultaneous use of two mapping crosses made with two pairs of distantly related parents allowed as much as 91% of the markers tested to be mapped. The integrated map included 420 Simple Sequence Repeat (SSR) markers that identified 536 SSR loci and 82 RGA markers that identified 173 RGA loci. This map consisted of 19 linkage groups (LGs) corresponding to the grape haploid chromosome number, had a total length of 1,676 cM and a mean distance between adjacent loci of 3.6 cM. Single-locus SSR markers were randomly distributed over the map (CD = 1.12). RGA markers were found in 18 of the 19 LGs, but most of them (83%) were clustered on seven LGs, namely groups 3, 7, 9, 12, 13, 18 and 19. Several RGA clusters mapped to chromosomal regions where phenotypic traits of resistance to fungal diseases such as downy mildew and powdery mildew, bacterial diseases such as Pierce's disease, and pests such as dagger and root-knot nematode had previously been mapped in different segregating populations. The high number of RGA markers integrated into this new map will help find markers linked to genetic determinants of different pest and disease resistances in grape.
Phylogenetic Relationships of American Willows (Salix L., Salicaceae)
Lauron-Moreau, Aurélien; Pitre, Frédéric E.; Argus, George W.; Labrecque, Michel; Brouillet, Luc
2015-01-01
Salix L. is the largest genus in the family Salicaceae (450 species). Several classifications have been published, but taxonomic subdivision has been under continuous revision. Our goal is to establish the phylogenetic structure of the genus using molecular data on all American willows, using three DNA markers. This complete phylogeny of American willows allows us to propose a biogeographic framework for the evolution of the genus. Material was obtained for the 122 native and introduced willow species of America. Sequences were obtained from the ITS (ribosomal nuclear DNA) and two plastid regions, matK and rbcL. Phylogenetic analyses (parsimony, maximum likelihood, Bayesian inference) were performed on the data. Geographic distribution was mapped onto the tree. The species tree provides strong support for a division of the genus into two subgenera, Salix and Vetrix. Subgenus Salix comprises temperate species from the Americas and Asia, and their disjunction may result from Tertiary events. Subgenus Vetrix is composed of boreo-arctic species of the Northern Hemisphere and their radiation may coincide with the Quaternary glaciations. Sixteen species have ambiguous positions; genetic diversity is lower in subg. Vetrix. A molecular phylogeny of all species of American willows has been inferred. It needs to be tested and further resolved using other molecular data. Nonetheless, the genus clearly has two clades that have distinct biogeographic patterns. PMID:25880993
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift-based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region-growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
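A toy illustration of the residual-comparison idea, assuming Gaussian noise of known sigma on the per-pixel fitting residuals of the two candidate signal models; the published algorithm combines this with field-map smoothness and is considerably more elaborate.

```python
import numpy as np

def fat_likelihood_map(res_water, res_fat, sigma):
    """Per-pixel probability that the fat-dominant signal model fits better,
    given residual norms from fitting each of the two candidate models."""
    ll_w = -res_water ** 2 / (2 * sigma ** 2)    # log-likelihood, water-dominant model
    ll_f = -res_fat ** 2 / (2 * sigma ** 2)      # log-likelihood, fat-dominant model
    return 1.0 / (1.0 + np.exp(ll_w - ll_f))     # softmax over the two hypotheses
```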
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
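As a sketch of the proximal-gradient machinery invoked here, the following ISTA loop solves a toy l1-regularized least-squares problem; the thesis applies the same template to its regularized variational MLE and score matching objectives, with different smooth losses.

```python
import numpy as np

def ista(grad_smooth, x0, lam, step, n_iter=500):
    """Proximal gradient (ISTA): a gradient step on the smooth loss,
    then soft-thresholding, the proximal map of the l1 regularizer."""
    x = x0.copy()
    for _ in range(n_iter):
        z = x - step * grad_smooth(x)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# toy usage: sparse least squares  min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz constant of the gradient
x_hat = ista(lambda x: A.T @ (A @ x - b), np.zeros(20), lam=0.5, step=step)
```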
Exact likelihood evaluations and foreground marginalization in low resolution WMAP data
NASA Astrophysics Data System (ADS)
Slosar, Anže; Seljak, Uroš; Makarov, Alexey
2004-06-01
The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favorite cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK². On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK², the probability of having the true value above 1200 μK² (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving α_s = -0.022 ± 0.033 for the combined case. The new analysis prefers about a 1σ lower value of Ω_m, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.
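A minimal sketch of such an exact pixel-space likelihood evaluation, with foreground templates marginalized by inflating their variance in the covariance matrix; the `big` constant and all names are illustrative choices, not the authors' code.

```python
import numpy as np

def map_loglike(data, cov_theory, cov_noise, templates=None, big=1e6):
    """Exact Gaussian log-likelihood of a low-resolution sky map.
    Each foreground template t is marginalized by adding big * t t^T to
    the covariance, which effectively projects that mode out."""
    C = cov_theory + cov_noise
    if templates is not None:
        for t in templates:
            C = C + big * np.outer(t, t)
    _, logdet = np.linalg.slogdet(C)
    chi2 = data @ np.linalg.solve(C, data)       # d^T C^{-1} d without forming C^{-1}
    return -0.5 * (chi2 + logdet + data.size * np.log(2 * np.pi))
```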
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Combining Radar and Optical Data for Forest Disturbance Studies
NASA Technical Reports Server (NTRS)
Ranson, K. Jon; Smith, David E. (Technical Monitor)
2002-01-01
Disturbance is an important factor in determining the carbon balance and succession of forests. Until the early 1990s, researchers focused on using optical or thermal sensors to detect and map forest disturbances from wildfires, logging or insect outbreaks. As part of a NASA Siberian mapping project, a study evaluated the capability of three different radar sensors (ERS, JERS and Radarsat) and an optical sensor (Landsat 7) to detect fire scars, logging and insect damage in the boreal forest. This paper describes the data sets and techniques used to evaluate the use of remote sensing to detect disturbance in central Siberian forests. Using images from each sensor individually and in combination, an assessment of the utility of these sensors was developed. Transformed Divergence analysis and maximum likelihood classification revealed that Landsat data was the single best data type for this purpose. However, the combined use of the three radar sensors and the optical sensor did improve discrimination of these disturbances.
NASA Technical Reports Server (NTRS)
Butera, M. K. (Principal Investigator)
1978-01-01
The author has identified the following significant results. A technique was used to determine the optimum time for classifying marsh vegetation from computer-processed LANDSAT MSS data. The technique depended on the analysis of data derived from supervised pattern recognition by maximum likelihood theory. A dispersion index, created by the ratio of separability among the class spectral means to variability within the classes, defined the optimum classification time. Data compared from seven LANDSAT passes acquired over the same area of Louisiana marsh indicated that June and September were optimum marsh mapping times to collectively classify Baccharis halimifolia, Spartina patens, Spartina alterniflora, Juncus roemericanus, and Distichlis spicata. The same technique was used to determine the optimum classification time for individual species. April appeared to be the best month to map Juncus roemericanus; May, Spartina alterniflora; June, Baccharis halimifolia; and September, Spartina patens and Distichlis spicata. This information is important, for instance, when a single species is recognized to indicate a particular environmental condition.
Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain
Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu
2014-01-01
In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high dimensional fields, with each voxel exhibiting a vector-valued feature comprising mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific vector feature. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved based on the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, strong correlation was found between the volumes of the automated and the manual parcellations. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images: an average of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486
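The per-structure feature distributions above are Gaussian mixtures fit by EM; a minimal one-dimensional version of the standard EM recursion is sketched below (initialization and component count are illustrative, not the paper's settings).

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """EM for a 1-D Gaussian mixture: E-step responsibilities,
    M-step weighted mean/variance/weight updates."""
    rng = np.random.default_rng(0)
    mu, var, w = rng.choice(x, k), np.full(k, x.var()), np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / x.size
    return w, mu, var
```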
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study. PMID:20376286
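A sketch of one way to combine Firth's modified score equations with a ridge penalty, assuming the standard leverage-based form of the Firth adjustment; the authors' exact double-penalized estimator may differ in detail.

```python
import numpy as np

def double_penalized_logistic(X, y, ridge=0.1, n_iter=50, tol=1e-8):
    """Logistic regression with Firth's bias-reducing adjustment plus a
    ridge term, fit by Newton iterations on the modified score equations."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1.0 - mu)
        info = X.T @ (X * W[:, None]) + ridge * np.eye(p)
        # Firth adjustment uses the leverages of the weighted design matrix
        XW = X * np.sqrt(W)[:, None]
        h = np.diag(XW @ np.linalg.solve(X.T @ (X * W[:, None]), XW.T))
        score = X.T @ (y - mu + h * (0.5 - mu)) - ridge * beta
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```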
NASA Astrophysics Data System (ADS)
Simmonds, Sara E.; Chou, Vincent; Cheng, Samantha H.; Rachmawati, Rita; Calumpong, Hilconida P.; Ngurah Mahardika, G.; Barber, Paul H.
2018-06-01
We studied how host-associations and geography shape the genetic structure of sister species of marine snails Coralliophila radula (A. Adams, 1853) and C. violacea (Kiener, 1836). These obligate ectoparasites prey upon corals and are sympatric throughout much of their ranges in coral reefs of the tropical and subtropical Indo-Pacific. We tested for population genetic structure of snails in relation to geography and their host corals using mtDNA (COI) sequences in minimum spanning trees and AMOVAs. We also examined the evolutionary relationships of their Porites host coral species using maximum likelihood trees of RAD-seq (restriction site-associated DNA sequencing) loci mapped to a reference transcriptome. A maximum likelihood tree of host corals revealed three distinct clades. Coralliophila radula showed a pronounced genetic break across the Sunda Shelf (Φ_CT = 0.735) but exhibited no genetic structure with respect to host. C. violacea exhibited significant geographic structure (Φ_CT = 0.427), with divergence among Hawaiian populations, the Coral Triangle and the Indian Ocean. Notably, C. violacea showed evidence of ecological divergence; two lineages were associated with different groups of host coral species, one widespread and found at all sites, and the other restricted to the Coral Triangle. Sympatric populations of C. violacea found on different suites of coral species were highly divergent (Φ_CT = 0.561, d = 5.13%), suggesting that symbiotic relationships may contribute to lineage diversification in the Coral Triangle.
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
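A minimal sketch of MAP stimulus decoding for a one-neuron exponential-Poisson encoding model with an independent Gaussian prior; because the log posterior is concave, plain gradient ascent suffices. The model, constants, and data are illustrative, not those of the paper.

```python
import numpy as np

def map_decode(spikes, k, prior_var=1.0, lr=0.05, n_iter=2000):
    """MAP stimulus estimate under rate_t = exp(k * x_t), Poisson spiking,
    and an independent Gaussian prior x_t ~ N(0, prior_var)."""
    x = np.zeros_like(spikes, dtype=float)
    for _ in range(n_iter):
        rate = np.exp(k * x)
        grad = k * (spikes - rate) - x / prior_var   # gradient of the log posterior
        x += lr * grad
    return x

# toy usage: recover a slowly varying stimulus from Poisson spike counts
rng = np.random.default_rng(2)
x_true = np.sin(np.linspace(0, 4 * np.pi, 200))
spikes = rng.poisson(np.exp(1.5 * x_true))
x_map = map_decode(spikes, k=1.5)
```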
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
ERIC Educational Resources Information Center
Kelderman, Henk
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
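For reference, the classical iterative proportional fitting cycle for the independence log-linear model of a two-way table is sketched below; the minimal-sufficient-statistic variants described above generalize this idea to larger models.

```python
import numpy as np

def ipf_independence(table, n_iter=50):
    """Iterative proportional fitting of the independence log-linear model:
    alternately rescale the fitted table to match observed row and column margins."""
    fitted = np.ones_like(table, dtype=float)
    row, col = table.sum(axis=1), table.sum(axis=0)
    for _ in range(n_iter):
        fitted *= (row / fitted.sum(axis=1))[:, None]   # match row margins
        fitted *= col / fitted.sum(axis=0)              # match column margins
    return fitted

obs = np.array([[10., 20.], [30., 40.]])
print(ipf_independence(obs))   # converges to the ML fit under independence
```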
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas
NASA Astrophysics Data System (ADS)
Pawłuszek, K.; Borkowski, A.; Tarolli, P.
2017-05-01
Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed-forward neural network (FFNN) and maximum likelihood classification (ML), was applied in this study. This additionally allowed the impact of the classification method on the selection of DEM resolution to be determined. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing confusion matrices. The results suggest that the finest DEM scale is not always the best fit, although working at 1 m DEM resolution on the micro-topography scale can show different results. The best performance was found using a 5 m DEM resolution for FFNN and a 1 m DEM resolution for ML classification.
NASA Technical Reports Server (NTRS)
Thomson, F. J.; Polcyn, F. C.; Bryan, M. L.; Sattinger, I. J.; Malila, W. A.; Nalepka, R. F.; Wezernak, C. T.; Horvath, R.; Vincent, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Depth mappings for a portion of Lake Michigan and at the Little Bahama Bank test site have been verified by use of navigation charts and on-site visits. A thirteen-category recognition map of Yellowstone Park has been prepared. Model calculations of atmospheric effects for various altitudes have been prepared. Radar, SLAR, and ERTS-1 data for flooded areas of Monroe County, Michigan are being studied. Water bodies can be reliably recognized and mapped using maximum likelihood processing of ERTS-1 digital data. Wetland mapping has been accomplished by slicing of a single band and/or ratio processing of two bands for a single observation date. Both analog and digital processing have been used to map the Lake Ontario basin using ERTS-1 data. Operating characteristic curves were developed for the proportion estimation algorithm to determine its performance in the measurement of surface water area. The signal in band MSS-5 was related to the sediment content of waters by a modelling approach and by relating surface measurements of water to processed ERTS data. Radiance anomalies in ERTS-1 data could be associated with the presence of oil on water in San Francisco Bay, but the anomalies were of the same order as those caused by variations in sediment concentration and tidal flushing.
NASA Technical Reports Server (NTRS)
Morrison, D. B. (Editor); Scherer, D. J.
1977-01-01
Papers are presented on a variety of techniques for the machine processing of remotely sensed data. Consideration is given to preprocessing methods such as the correction of Landsat data for the effects of haze, sun angle, and reflectance and to the maximum likelihood estimation of signature transformation algorithm. Several applications of machine processing to agriculture are identified. Various types of processing systems are discussed such as ground-data processing/support systems for sensor systems and the transfer of remotely sensed data to operational systems. The application of machine processing to hydrology, geology, and land-use mapping is outlined. Data analysis is considered with reference to several types of classification methods and systems.
Lung nodule malignancy prediction using multi-task convolutional neural network
NASA Astrophysics Data System (ADS)
Li, Xiuli; Kao, Yueying; Shen, Wei; Li, Xiang; Xie, Guotong
2017-03-01
In this paper, we investigated the problem of diagnostic lung nodule malignancy prediction using thoracic Computed Tomography (CT) screening. Unlike most existing studies, which classify nodules into the two types benign and malignant, we interpreted nodule malignancy prediction as a regression problem and predicted a continuous malignancy level. We proposed a joint multi-task learning algorithm using a Convolutional Neural Network (CNN) to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. We trained a CNN regression model to predict nodule malignancy, and designed a multi-task learning mechanism to simultaneously share knowledge among 9 different nodule characteristics (Subtlety, Calcification, Sphericity, Margin, Lobulation, Spiculation, Texture, Diameter and Malignancy), improving the final prediction. Each CNN generates characteristic-specific feature representations, and multi-task learning is then applied to these features to predict the corresponding likelihood for each characteristic. We evaluated the proposed method on 2620 nodule CT scans from the LIDC-IDRI dataset with a 5-fold cross-validation strategy. The multi-task CNN regression achieved a regression RMSE of 0.830 and a mapped classification accuracy of 83.03%, compared with an RMSE of 0.894 and a mapped classification accuracy of 74.9% for single-task regression. Experiments show that the proposed method predicts lung nodule malignancy likelihood effectively and outperforms state-of-the-art methods. The learning framework could easily be applied to other anomaly likelihood prediction problems, such as skin cancer and breast cancer, and demonstrates the potential of our method to assist radiologists in nodule staging assessment and individual therapeutic planning.
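A minimal PyTorch sketch of the shared-trunk, multi-head design implied here; the layer sizes, loss, and dummy data are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskNoduleNet(nn.Module):
    """Shared convolutional trunk with one regression head per nodule
    characteristic; training the heads jointly shares knowledge across tasks."""
    def __init__(self, n_tasks=9):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.trunk(x)                                   # shared features
        return torch.cat([head(z) for head in self.heads], dim=1)

# joint loss: sum of per-task regression losses on a dummy batch
model = MultiTaskNoduleNet()
x = torch.randn(4, 1, 64, 64)       # 4 nodule patches
targets = torch.rand(4, 9)          # 9 characteristic scores each
loss = nn.functional.mse_loss(model(x), targets)
loss.backward()
```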
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
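A minimal sketch of the profile-likelihood idea on a toy normal model: fix the focal parameter on a grid, maximize over the nuisance parameter at each grid point, and read the ML estimate off the profile. All names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def profile_loglik(x, mu_grid):
    """Profile log-likelihood for the mean of a normal sample:
    for each fixed mu, maximize over the nuisance parameter sigma."""
    prof = []
    for mu in mu_grid:
        nll = lambda s: 0.5 * x.size * np.log(2 * np.pi * s[0] ** 2) \
                        + np.sum((x - mu) ** 2) / (2 * s[0] ** 2)
        res = minimize(nll, x0=[x.std() + 1e-3], bounds=[(1e-6, None)])
        prof.append(-res.fun)
    return np.array(prof)

x = np.random.default_rng(3).normal(5.0, 2.0, size=100)
grid = np.linspace(4.0, 6.0, 41)
mu_hat = grid[np.argmax(profile_loglik(x, grid))]   # ML estimate of mu
```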
Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET
NASA Astrophysics Data System (ADS)
Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.
2018-06-01
A challenge in producing quantitative positron emission tomography (PET) images is providing an accurate and patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations have made extraction of the attenuation map a real challenge. Except for a constant factor, the activity and attenuation maps can be determined from emission data on a TOF-PET system by the maximum likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation using a mixture-model prior based on the attenuation map histogram. This novel prior enforces non-negativity, and its hyperparameters can be estimated in a mixture-decomposition step from the current estimate of the attenuation map. The proposed method can also help solve the scaling problem and is capable of assigning predefined regional attenuation coefficients, with some degree of confidence, to the attenuation map, similar to segmentation-based attenuation correction approaches. The performance of the algorithm is studied with numerical and Monte Carlo simulations and a phantom experiment, and was compared with the MLAA algorithm with and without the smoothing prior. The results demonstrate that the proposed algorithm is capable of producing cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data on PET/MR and can be integrated with other methods.
NASA Astrophysics Data System (ADS)
Lee, H.; Sheen, D.; Kim, S.
2013-12-01
The b-value in the Gutenberg-Richter relation is an important parameter widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte Carlo method. One is the least-squares method (LSM), which minimizes the observation error. The others are based on the maximum likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) method for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) method for interval magnitudes and a finite maximum magnitude. A synthetic parent population of an earthquake catalog of a million events, with magnitudes from 2.0 to 7.0 at intervals of 0.1, was generated for the Monte Carlo simulation. Samples, whose number was increased from 25 to 1000, were extracted from the parent population randomly. The resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each group of samples of the same size. As expected, the more samples were used, the more stable the b-value obtained. However, for a small number of events, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. Utsu's (1965) method was found to give the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude can be critical for estimating the correct b-value with Utsu's (1965) and Page's (1968) methods if magnitudes are binned into intervals. Therefore, we applied Utsu's (1965) method to estimate the b-value from two instrumental earthquake catalogs of events that occurred around the southern part of the Korean Peninsula from 1978 to 2011. With a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and Kim (2012) are estimated to be 0.72 and 0.74, respectively.
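Utsu's (1965) estimator has a simple closed form; a sketch follows, with the usual half-bin correction offered as an optional argument for interval magnitudes. The synthetic catalog in the usage example is illustrative.

```python
import numpy as np

def b_value_utsu(mags, m_min, bin_width=0.0):
    """Utsu's maximum-likelihood b-value: b = log10(e) / (mean(M) - Mc).
    For magnitudes binned into intervals, shift the cutoff by half a bin."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - bin_width / 2.0))

# toy usage: exponential magnitudes above the cutoff give b close to 1
rng = np.random.default_rng(4)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=500)
print(b_value_utsu(mags, m_min=2.0))   # approximately 1.0
```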
Talbot, Stephen S.; Markon, Carl J.
1988-01-01
A Landsat-derived vegetation map was prepared for Innoko National Wildlife Refuge. The refuge lies within the northern boreal subzone of northwestern central Alaska. Six major vegetation classes and 21 subclasses were recognized: forest (closed needleleaf, open needleleaf, needleleaf woodland, mixed, and broadleaf); broadleaf scrub (lowland, upland burn regeneration, subalpine); dwarf scrub (prostrate dwarf shrub tundra, erect dwarf shrub heath, dwarf shrub-graminoid peatland, dwarf shrub-graminoid tussock peatland, dwarf shrub raised bog with scattered trees, dwarf shrub-graminoid marsh); herbaceous (graminoid bog, graminoid marsh, graminoid tussock-dwarf shrub peatland); scarcely vegetated areas (scarcely vegetated scree and floodplain); and water (clear, sedimented). The methodology employed a cluster-block technique. Sample areas were described based on a combination of helicopter-ground survey, aerial photo-interpretation, and digital Landsat data. Major steps in the Landsat analysis involved preprocessing (geometric correction), derivation of statistical parameters for spectral classes, spectral class labeling of sample areas, preliminary classification of the entire study area using a maximum-likelihood algorithm, and final classification utilizing ancillary information such as digital elevation data. The final product is a 1:250,000-scale vegetation map representative of distinctive regional patterns and suitable for use in comprehensive conservation planning.
Inventory and mapping of flood inundation using interactive digital image analysis techniques
Rohde, Wayne G.; Nelson, Charles A.; Taranik, J.V.
1979-01-01
LANDSAT digital data and color infrared photographs were used in a multiphase sampling scheme to estimate the area of agricultural land affected by a flood. The LANDSAT data were classified with a maximum likelihood algorithm. Stratification of the LANDSAT data prior to classification greatly reduced misclassification errors. The classification results were used to prepare a map overlay showing the areal extent of flooding. These data also provided the statistics required to estimate sample size in a two-phase sampling scheme, and provided quick, accurate estimates of flooded area for the first phase. The measurements made in the second phase, based on ground data and photo-interpretation, were used with two-phase sampling statistics to estimate the area of agricultural land affected by flooding. These results show that LANDSAT digital data can be used to prepare map overlays showing the extent of flooding on agricultural land and, with two-phase sampling procedures, can provide acreage estimates with sampling errors of about 5 percent. This procedure provides a technique for rapidly assessing the areal extent of flooding on agricultural land and would provide a basis for designing a sampling framework to estimate the impact of flooding on crop production.
Mapping of epistatic quantitative trait loci in four-way crosses.
He, Xiao-Hong; Qin, Hongde; Hu, Zhongli; Zhang, Tianzhen; Zhang, Yuan-Ming
2011-01-01
Four-way crosses (4WC) involving four different inbred lines often appear in plant and animal commercial breeding programs. Direct mapping of quantitative trait loci (QTL) in these commercial populations is both economical and practical. However, the existing statistical methods for mapping QTL in a 4WC population are built on the single-QTL genetic model. This simple genetic model fails to take into account QTL interactions, which play an important role in the genetic architecture of complex traits. In this paper, therefore, we attempted to develop a statistical method to detect epistatic QTL in 4WC population. Conditional probabilities of QTL genotypes, computed by the multi-point single locus method, were used to sample the genotypes of all putative QTL in the entire genome. The sampled genotypes were used to construct the design matrix for QTL effects. All QTL effects, including main and epistatic effects, were simultaneously estimated by the penalized maximum likelihood method. The proposed method was confirmed by a series of Monte Carlo simulation studies and real data analysis of cotton. The new method will provide novel tools for the genetic dissection of complex traits, construction of QTL networks, and analysis of heterosis.
Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot
2015-05-01
Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α < 0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features.
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on the forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; and a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
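A minimal sketch of the maximum-likelihood side of such an analysis on synthetic data: the closed-form log-normal MLE plus the implied exceedance rate. The event counts, threshold, and record length below are illustrative, not the paper's catalogue.

```python
import numpy as np
from math import erf, sqrt, log

def lognormal_mle(x):
    """Closed-form maximum-likelihood fit of a log-normal distribution."""
    logs = np.log(x)
    return logs.mean(), logs.std()              # mu-hat, sigma-hat

def exceedance_rate(threshold, mu, sigma, events_per_century):
    """Expected storms per century with intensity above the threshold."""
    z = (log(threshold) - mu) / sigma
    tail = 0.5 * (1.0 - erf(z / sqrt(2.0)))     # P(X > threshold)
    return events_per_century * tail

# toy usage with synthetic -Dst maxima (nT) over a 56-year record
rng = np.random.default_rng(5)
storms = rng.lognormal(mean=5.0, sigma=0.6, size=300)
mu, sigma = lognormal_mle(storms)
print(exceedance_rate(850.0, mu, sigma, events_per_century=len(storms) / 0.56))
```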
Wide-cross whole-genome radiation hybrid mapping of cotton (Gossypium hirsutum L.).
Gao, Wenxiang; Chen, Z Jeffrey; Yu, John Z; Raska, Dwaine; Kohel, Russell J; Womack, James E; Stelly, David M
2004-01-01
We report the development and characterization of a "wide-cross whole-genome radiation hybrid" (WWRH) panel from cotton (Gossypium hirsutum L.). Chromosomes were segmented by gamma-irradiation of G. hirsutum (n = 26) pollen, and segmented chromosomes were rescued after in vivo fertilization of G. barbadense egg cells (n = 26). A 5-krad gamma-ray WWRH mapping panel (N = 93) was constructed and genotyped at 102 SSR loci. SSR marker retention frequencies were higher than those for animal systems and marker retention patterns were informative. Using the program RHMAP, 52 of 102 SSR markers were mapped into 16 syntenic groups. Linkage group 9 (LG 9) SSR markers BNL0625 and BNL2805 had been colocalized by linkage analysis, but their order was resolved by differential retention among WWRH plants. Two linkage groups, LG 13 and LG 9, were combined into one syntenic group, and the chromosome 1 linkage group marker BNL4053 was reassigned to chromosome 9. Analyses of cytogenetic stocks supported synteny of LG 9 and LG 13 and localized them to the short arm of chromosome 17. They also supported reassignment of marker BNL4053 to the long arm of chromosome 9. A WWRH map of the syntenic group composed of linkage groups 9 and 13 was constructed by maximum-likelihood analysis under the general retention model. The results demonstrate not only the feasibility of WWRH panel construction and mapping, but also complementarity to traditional linkage mapping and cytogenetic methods. PMID:15280245
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model
NASA Astrophysics Data System (ADS)
Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel
2011-03-01
This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is presented, and the generalized ML texture tracking method is extended to the merging of data from multiple sensors. Results on displacement estimation for the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.
40 CFR 1065.510 - Engine mapping.
Code of Federal Regulations, 2012 CFR
2012-07-01
... expected maximum power. Continue the warm-up until the engine coolant, block, or head absolute temperature... torque of zero on the engine's primary output shaft, and allow the engine to govern the speed. Measure... values. (ii) For engines without a low-speed governor, operate the engine at warm idle speed and zero...
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, and report the differences in cosmological parameters relative to those evaluated by the full public WMAP likelihood package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The central credible values of the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index nS.
NASA Astrophysics Data System (ADS)
Langlois, Dominic; Cousineau, Denis; Thivierge, J. P.
2014-01-01
The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.
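For readers who want the basic mechanics, the closed-form maximum likelihood estimator for a power-law exponent is nearly a one-liner. The sketch below is a generic illustration following the standard estimators of Clauset, Shalizi and Newman (2009), not the paper's extended estimators for bounded or censored data; the synthetic sample and xmin are assumptions.

```python
# Hedged sketch: closed-form MLE for a power-law exponent, continuous case,
# plus the common discrete approximation (illustrative; not the paper's code).
import numpy as np

def alpha_mle_continuous(x, xmin):
    """MLE exponent for a continuous power law p(x) ~ x^(-alpha), x >= xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

def alpha_mle_discrete(x, xmin):
    """Approximate MLE for discrete data (accurate for xmin of about 6 or more)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / (xmin - 0.5)))

# Quick check on synthetic continuous data with true alpha = 2.5, xmin = 1:
rng = np.random.default_rng(0)
u = rng.uniform(size=50_000)
samples = (1.0 - u) ** (-1.0 / (2.5 - 1.0))  # inverse-CDF sampling
print(alpha_mle_continuous(samples, xmin=1.0))  # close to 2.5
```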
The early maximum likelihood estimation model of audiovisual integration in speech perception.
Andersen, Tobias S
2015-05-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters (a 5-subpopulation mixed-Weibull distribution). Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start from several initial guesses of the parameter set.
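A minimal sketch of the EM idea for a Weibull mixture may help. The version below handles only complete (uncensored) data with two subpopulations and uses a numerical M-step; it is a simplified, assumption-laden stand-in for the paper's algorithm rather than a reproduction of it.

```python
# Hedged sketch: EM for a two-component Weibull mixture on complete failure
# times. The paper's algorithm additionally handles censored and
# nonpostmortem data; none of that is reproduced here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def em_weibull_mixture(t, n_iter=50):
    w = np.array([0.5, 0.5])                    # mixing weights
    shapes = np.array([1.0, 2.0])               # initial shape guesses
    scales = np.array([np.median(t), 2.0 * np.median(t)])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        dens = np.stack([w[k] * weibull_min.pdf(t, shapes[k], scale=scales[k])
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weights in closed form; shape/scale by weighted numerical ML.
        w = r.mean(axis=1)
        for k in range(2):
            def nll(params, rk=r[k]):
                shape, scale = np.exp(params)   # log-parameterization keeps both > 0
                return -np.sum(rk * weibull_min.logpdf(t, shape, scale=scale))
            res = minimize(nll, x0=np.log([shapes[k], scales[k]]), method="Nelder-Mead")
            shapes[k], scales[k] = np.exp(res.x)
    return w, shapes, scales

t = np.concatenate([weibull_min.rvs(0.8, scale=50, size=300, random_state=2),
                    weibull_min.rvs(3.0, scale=200, size=300, random_state=3)])
print(em_weibull_mixture(t))
```

Running it from several starting points, as the abstract recommends, guards against the multiple local maxima noted above.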
Slope maps of the San Francisco Bay region, California a digital database
Graham, Scott E.; Pike, Richard J.
1998-01-01
Topography, the configuration of the land surface, plays a major role in various natural processes that have helped shape the ten-county San Francisco Bay region and continue to affect its development. Such processes include a dangerous type of landslide, the debris flow (Ellen and others, 1997), as well as other modes of slope failure that damage property but rarely threaten life directly: slumping, translational sliding, and earthflow (Wentworth and others, 1997). Different types of topographic information at both local and regional scales are helpful in assessing the likelihood of slope failure and in mapping the extent of its past activity, as well as in addressing other issues in hazard mitigation and land-use policy. The most useful information is quantitative. This report provides detailed digital data and plottable map files that depict in detail the most important single measure of ground-surface form for the Bay region, slope angle. We computed slope data for the entire region and each of its constituent counties from a new set of 35,000,000 digital elevations assembled from 200 local contour maps.
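The core computation behind such slope maps, deriving slope angle from gridded elevations, can be sketched in a few lines. The snippet below uses finite-difference gradients on a synthetic DEM, with a 30-m cell size assumed for illustration; it does not reproduce the report's processing pipeline.

```python
# Hedged sketch: slope angle from a gridded DEM with square cells.
import numpy as np

def slope_degrees(dem, cell_size=30.0):
    """Slope angle (degrees) from an elevation grid; cell_size in metres."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # finite-difference gradients
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Synthetic ramp: elevation rises linearly in both directions.
dem = np.add.outer(np.linspace(0, 300, 100), np.linspace(0, 150, 100))
print(slope_degrees(dem).round(1)[:2, :2])
```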
8D likelihood effective Higgs couplings extraction framework in h → 4ℓ
Chen, Yi; Di Marco, Emanuele; Lykken, Joe; ...
2015-01-23
We present an overview of a comprehensive analysis framework aimed at performing direct extraction of all possible effective Higgs couplings to neutral electroweak gauge bosons in the decay to electrons and muons, the so-called 'golden channel'. Our framework is based primarily on a maximum likelihood method constructed from analytic expressions of the fully differential cross sections for h → 4l and for the dominant irreducible qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included by an explicit convolution of these analytic expressions with the appropriate transfer function over all center-of-mass variables. Utilizing the full set of observables, we construct an unbinned detector-level likelihood which is continuous in the effective couplings. We consider possible ZZ, Zγ, and γγ couplings simultaneously, allowing for general CP odd/even admixtures. A broad overview is given of how the convolution is performed and we discuss the principles and theoretical basis of the framework. This framework can be used in a variety of ways to study Higgs couplings in the golden channel using data obtained at the LHC and other future colliders.
NASA Astrophysics Data System (ADS)
Shupe, Scott Marshall
2000-10-01
Vegetation mapping in arid regions facilitates ecological studies and land management, and provides a record to which future land changes can be compared. Accurate and representative mapping of desert vegetation requires a sound field sampling program and a methodology to transform the data collected into a representative classification system. Time and cost constraints require that a remote sensing approach be used if such a classification system is to be applied on a regional scale. However, desert vegetation may be sparse and thus difficult to sense at typical satellite resolutions, especially given the problem of soil reflectance. This study was designed to address these concerns by conducting vegetation mapping research using field and satellite data from the US Army Yuma Proving Ground (USYPG) in Southwest Arizona. Line and belt transect data from the Army's Land Condition Trend Analysis (LCTA) Program were transformed into relative cover and relative density classification schemes using cluster analysis. Ordination analysis of the same data produced two- and three-dimensional graphs on which the homogeneity of each vegetation class could be examined. It was found that the use of correspondence analysis (CA), detrended correspondence analysis (DCA), and non-metric multidimensional scaling (NMS) ordination methods was superior to the use of any single ordination method for helping to clarify between-class and within-class relationships in vegetation composition. Analysis of these between-class and within-class relationships was of key importance in examining how well relative cover and relative density schemes characterize the USYPG vegetation. Using these two classification schemes as reference data, maximum likelihood and artificial neural net classifications were then performed on a coregistered dataset consisting of a summer Landsat Thematic Mapper (TM) image, one spring and one summer ERS-1 microwave image, and elevation, slope, and aspect layers. Classifications using a combination of ERS-1 imagery and elevation, slope, and aspect data were superior to classifications carried out using Landsat TM data alone. In all classification iterations it was consistently found that the highest classification accuracy was obtained by using a combination of Landsat TM, ERS-1, and elevation, slope, and aspect data. Maximum likelihood classification accuracy was found to be higher than artificial neural net classification in all cases.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and in epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
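To make the two-stage idea concrete, here is a hedged sketch of stage-wise maximum likelihood for one specific choice, a Clayton copula with exponential margins; the copula family, the margins, and the synthetic data are assumptions for illustration, while the thesis itself treats broader families and censored data.

```python
# Hedged sketch of two-stage ML for a copula model: stage 1 fits the margins,
# stage 2 fixes them and maximizes the copula likelihood over the dependence
# parameter. Clayton copula with exponential margins, purely illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n, theta_true = 2000, 2.0

# Simulate Clayton-dependent uniforms by conditional inversion, then transform
# to exponential margins with rates 0.5 and 1.5.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** -theta_true * (w ** (-theta_true / (1 + theta_true)) - 1) + 1) ** (-1 / theta_true)
x, y = -np.log(1 - u) / 0.5, -np.log(1 - v) / 1.5

# Stage 1: marginal MLEs (exponential rate = 1/mean), then transform to uniforms.
rate_x, rate_y = 1 / x.mean(), 1 / y.mean()
uh, vh = 1 - np.exp(-rate_x * x), 1 - np.exp(-rate_y * y)

# Stage 2: maximize the Clayton copula log-likelihood in theta with margins fixed.
def neg_loglik(theta):
    s = uh ** -theta + vh ** -theta - 1
    return -np.sum(np.log(1 + theta)
                   - (theta + 1) * (np.log(uh) + np.log(vh))
                   - (2 + 1 / theta) * np.log(s))

res = minimize_scalar(neg_loglik, bounds=(0.05, 20), method="bounded")
print("theta_hat =", res.x)  # close to the true value of 2
```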
Interval mapping of high growth (hg), a major locus that increases weight gain in mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horvat, S.; Medrano, J.F.
1995-04-01
The high growth locus (hg) causes a major increase in weight gain and body size in mice. As a first step to map-based cloning of hg, we developed a genetic map of the hg-containing region using interval mapping of 403 F2 from a C57BL/6J-hghg x CAST/EiJ cross. The maximum likelihood position of hg was at the chromosome 10 marker D10Mit41 (LOD = 24.8) in the F2 females and 1.5 cM distal to D10Mit41 (LOD = 9.56) in the F2 males, with corresponding LOD 2 support intervals of 3.7 and 5.4 cM, respectively. The peak LOD scores were significantly higher than the estimated empirical threshold LOD values. The localization of hg by interval mapping was supported by a test cross of F2 mice recombinant between the LOD 2 support interval and the flanking marker. The interval mapping and test-cross indicate that hg is not allelic with candidate genes Igf1 or decorin (Dcn), a gene that was mapped close to hg in this study. The hg inheritance was recessive in females, although we could not reject recessive or additive inheritance in males. Possible causes for sex differences in peak LOD scores and for the distortion of transmission ratios observed in F2 males are discussed. The genetic map of the hg region will facilitate further fine mapping and cloning of hg, and allow searches for a homologous quantitative trait locus affecting growth in humans and domestic animals. 48 refs., 3 figs., 3 tabs.
Laba, M.; Downs, R.; Smith, S.; Welsh, S.; Neider, C.; White, S.; Richmond, M.; Philpot, W.; Baveye, P.
2008-01-01
The National Estuarine Research Reserve (NERR) program is a nationally coordinated research and monitoring program that identifies and tracks changes in ecological resources of representative estuarine ecosystems and coastal watersheds. In recent years, attention has focused on using high spatial and spectral resolution satellite imagery to map and monitor wetland plant communities in the NERRs, particularly invasive plant species. The utility of this technology for that purpose has yet to be assessed in detail. To that end, imagery from a high spatial resolution satellite, QuickBird, was used to map plant communities and monitor invasive plants within the Hudson River NERR (HRNERR). The HRNERR contains four diverse tidal wetlands (Stockport Flats, Tivoli Bays, Iona Island, and Piermont), each with unique water chemistry (i.e., brackish, oligotrophic and fresh) and, consequently, unique assemblages of plant communities, including three invasive plants (Trapa natans, Phragmites australis, and Lythrum salicaria). A maximum-likelihood classification was used to produce 20-class land cover maps for each of the four marshes within the HRNERR. Conventional contingency tables and a fuzzy set analysis served as a basis for an accuracy assessment of these maps. The overall accuracies, as assessed by the contingency tables, were 73.6%, 68.4%, 67.9%, and 64.9% for Tivoli Bays, Stockport Flats, Piermont, and Iona Island, respectively. Fuzzy assessment tables led to higher estimates of map accuracies of 83%, 75%, 76%, and 76%, respectively. In general, the open water/tidal channel class was the most accurately mapped class and Scirpus sp. was the least accurately mapped. These encouraging accuracies suggest that high-resolution satellite imagery offers significant potential for the mapping of invasive plant species in estuarine environments. © 2007 Elsevier Inc. All rights reserved.
de Klerk, Helen M; Gilbertson, Jason; Lück-Vogel, Melanie; Kemp, Jaco; Munch, Zahn
2016-11-01
Traditionally, to map environmental features using remote sensing, practitioners will use training data to develop models on various satellite data sets using a number of classification approaches and use test data to select a single 'best performer' from which the final map is made. We use a combination of an omission/commission plot to evaluate various results and compile a probability map based on consistently strong performing models across a range of standard accuracy measures. We suggest that this easy-to-use approach can be applied in any study using remote sensing to map natural features for management action. We demonstrate this approach using optical remote sensing products of different spatial and spectral resolution to map the endemic and threatened flora of quartz patches in the Knersvlakte, South Africa. Quartz patches can be mapped using either SPOT 5 (used due to its relatively fine spatial resolution) or Landsat8 imagery (used because it is freely accessible and has higher spectral resolution). Of the variety of classification algorithms available, we tested maximum likelihood and support vector machine, and applied these to raw spectral data, the first three PCA summaries of the data, and the standard normalised difference vegetation index. We found that there is no 'one size fits all' solution to the choice of a 'best fit' model (i.e. combination of classification algorithm or data sets), which is in agreement with the literature that classifier performance will vary with data properties. We feel this lends support to our suggestion that rather than the identification of a 'single best' model and a map based on this result alone, a probability map based on the range of consistently top performing models provides a rigorous solution to environmental mapping. Copyright © 2016 Elsevier Ltd. All rights reserved.
Maximum-Likelihood Parameter Estimation of a Generalized Gumbel Distribution
1989-03-24
[OCR-garbled FORTRAN listing: subroutines for computing the roots of g(·), the scale parameter b, and related quantities (including routine ROOTG); the source code is not recoverable from the scan.]
Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy
2016-03-05
Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets ... observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to ...
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark A. J.; Beusen, Arthur H. W.; Beck, Hylke E.; King, Henry; Schipper, Aafke M.
2018-03-01
Streamflow data are highly relevant for a variety of socio-economic as well as ecological analyses and applications, but a high-resolution global streamflow dataset has been lacking. We created FLO1K, a consistent streamflow dataset at a resolution of 30 arc seconds (~1 km) with global coverage. FLO1K comprises mean, maximum and minimum annual flow for each year in the period 1960-2015, provided as spatially continuous gridded layers. We mapped streamflow by means of artificial neural network (ANN) regression. An ensemble of ANNs was fitted on monthly streamflow observations from 6600 monitoring stations worldwide; minimum and maximum annual flows represent the lowest and highest mean monthly flows for a given year. As covariates we used the upstream-catchment physiography (area, surface slope, elevation) and year-specific climatic variables (precipitation, temperature, potential evapotranspiration, aridity index and seasonality indices). Comparing the maps with independent data indicated good agreement (R2 values up to 91%). FLO1K delivers essential data for freshwater ecology and water resources analyses at a global scale and high spatial resolution.
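A minimal sketch of the ANN regression step, with synthetic stations and assumed covariate names standing in for the paper's data, might look as follows; the real FLO1K work fits an ensemble of networks on observed records.

```python
# Hedged sketch: ANN regression of annual streamflow on catchment and climate
# covariates. All data are synthetic; covariate names mirror the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.lognormal(6, 1.5, n),   # upstream area (km^2)
    rng.uniform(0, 30, n),      # mean surface slope (deg)
    rng.uniform(0, 3000, n),    # elevation (m)
    rng.gamma(4, 200, n),       # annual precipitation (mm)
])
# Synthetic "truth": flow scales with area and precipitation, plus noise.
y = np.log1p(X[:, 0] * X[:, 3] * 1e-4) + rng.normal(0, 0.2, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                   random_state=0))
model.fit(X[:4000], y[:4000])
print("R^2 on held-out stations:", model.score(X[4000:], y[4000:]))
```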
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' law and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator's failure to actuate Pump A increases his or her chance of actuating Pump B; the initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the second event is less probable than the first, the maximum dependence is less than 1, as defined by Bayes' law. As such, alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
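The probability bounds at issue are easy to compute directly. The sketch below evaluates the Fréchet-style bounds on P(B|A) for given marginal probabilities, illustrating why "complete dependence" equal to 1 can be unreachable for rare events; it is a generic illustration, not the paper's equations or look-up table.

```python
# Hedged sketch: for events A and B, P(B|A) is bounded by the maximum and
# minimum possible overlap of the two events allowed by probability theory.
def dependence_bounds(p_a, p_b):
    """Return (max negative dependence, independence, max dependence) for P(B|A)."""
    p_max = min(p_a, p_b) / p_a              # events overlap as much as possible
    p_min = max(0.0, p_a + p_b - 1.0) / p_a  # events overlap as little as possible
    return p_min, p_b, p_max

# Rare human failure events: the minimum overlap is 0, and when P(B) < P(A)
# the maximum of P(B|A) is P(B)/P(A) < 1, i.e. "complete dependence" = 1
# is unreachable.
print(dependence_bounds(p_a=1e-2, p_b=1e-3))  # (0.0, 0.001, 0.1)
```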
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
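As a point of reference, for a known reference image in additive white Gaussian noise, the batch maximum-likelihood estimate of a cyclic integer shift is the peak of the cross-correlation. The sketch below demonstrates this with FFTs on synthetic images; it is an assumption-based batch analogue of, not a substitute for, the paper's closed-loop tracker.

```python
# Hedged sketch: under additive white Gaussian noise, ML estimation of a rigid
# (cyclic, integer-pixel) translation reduces to locating the peak of the
# cross-correlation between the received image and the reference.
import numpy as np

def ml_shift(reference, received):
    """Integer-pixel ML shift estimate via FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(received) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peak indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))

rng = np.random.default_rng(5)
ref = rng.normal(size=(64, 64))
rx = np.roll(ref, shift=(5, -3), axis=(0, 1)) + 0.3 * rng.normal(size=ref.shape)
print(ml_shift(ref, rx))  # recovers (5, -3)
```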
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics; the equations and their parameter ranges are published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
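A toy version of such a model is easy to set up. The sketch below fits a logistic regression of a binary summer-drought indicator on log winter flow using synthetic data; the threshold and coefficients are assumed purely for illustration and differ from the station-specific equations published in the report.

```python
# Hedged sketch: logistic regression relating the probability of summer
# streamflow falling below a drought threshold to the preceding winter flow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n_years = 80
winter_flow = rng.lognormal(mean=3.0, sigma=0.5, size=n_years)  # Nov-Feb mean flow
# Synthetic truth: dry winters raise the chance of a summer drought.
p_drought = 1 / (1 + np.exp(2.0 * (np.log(winter_flow) - 3.0) + 0.5))
summer_drought = rng.uniform(size=n_years) < p_drought          # 1 = flow below threshold

model = LogisticRegression().fit(np.log(winter_flow).reshape(-1, 1), summer_drought)
print("P(summer drought | winter flow = 15):",
      model.predict_proba(np.log([[15.0]]))[0, 1])
```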
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
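Of the algorithms named above, the maximum likelihood iteration for Poisson noise (the Richardson-Lucy scheme) is compact enough to sketch. The version below is a generic Python illustration under a Poisson noise assumption, not the IDL implementation in DeConv_Tool.

```python
# Hedged sketch: Richardson-Lucy iteration, the classic maximum-likelihood
# (Poisson) deconvolution scheme, on a synthetic blurred field of point sources.
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(observed, psf, n_iter=50):
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve(estimate, psf, mode="wrap")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against divide-by-zero
        estimate = estimate * convolve(ratio, psf_flipped, mode="wrap")
    return estimate

rng = np.random.default_rng(8)
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 45] = 100.0          # two point sources
g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                                # normalized Gaussian PSF
observed = convolve(truth, psf, mode="wrap") + rng.poisson(0.1, truth.shape)
# The restored image peaks near a true point source.
print(np.unravel_index(richardson_lucy(observed, psf).argmax(), truth.shape))
```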
F-8C adaptive flight control laws
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.
1977-01-01
Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft applications. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effect on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.
2013-10-15
Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
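The value of counting non-arrival events can be seen in a toy calculation: with Poisson photon statistics, the fraction of empty laser shots pins down the mean photon rate, and the binomial MLE uses exactly that fraction. The sketch below illustrates only this principle on synthetic counts; the paper's estimator is built for the full Thomson-scattering analysis, and the rate and shot numbers here are assumptions.

```python
# Hedged sketch: why non-arrival events matter in photon counting.
import numpy as np

rng = np.random.default_rng(13)
lam_true, n_shots = 0.35, 10_000              # mean detected photons per shot
counts = rng.poisson(lam_true, n_shots)

# A PMT-style counter that only flags "photon seen / not seen" per shot
# undercounts multi-photon arrivals when used directly as a rate estimate:
naive = np.mean(counts > 0)                   # biased low as an estimate of lam

# MLE including non-arrival events: P(zero photons) = exp(-lam), so the
# binomial MLE is lam_hat = -log(fraction of empty shots).
lam_hat = -np.log(np.mean(counts == 0))
print(f"naive {naive:.3f} vs MLE {lam_hat:.3f} (true {lam_true})")
```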
Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin
2014-01-01
Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874
NASA Technical Reports Server (NTRS)
Paradella, W. R. (Principal Investigator); Vitorello, I.; Monteiro, M. D.
1984-01-01
Enhancement techniques and thematic classifications were applied to the metasediments of the Bambui Super Group (Upper Proterozoic) in the region of Serra do Ramalho, SW of the state of Bahia. Linear contrast stretch, band ratios with contrast stretch, and color composites allowed lithological discrimination. The effects of human activities and of vegetation cover mask and limit, in several ways, lithological discrimination with digital MSS data. Principal component images and color composites of linear contrast stretches of these products show lithological discrimination through tonal gradations. This set of products allows the delineation of several metasedimentary sequences at a level beyond reconnaissance mapping. Supervised (maximum likelihood classifier) and unsupervised (K-Means classifier) classification of the limestone sequence, host to fluorite mineralization, shows satisfactory results.
NASA Technical Reports Server (NTRS)
Sung, Q. C.; Miller, L. D.
1977-01-01
Three methods were tested for collecting the training sets needed to establish the spectral signatures of the land uses/land covers sought, owing to the difficulty of retrospectively collecting representative ground control data. Computer preprocessing techniques applied to the digital images to improve the final classification results were geometric correction, spectral band (image) ratioing, and statistical cleaning of the representative training sets. A minimal level of statistical verification was performed based on comparisons between the airphoto estimates and the classification results. The verification provided further support for the selection of MSS bands 5 and 7. It also indicated that the maximum likelihood ratioing technique achieves classification results more consistent with the airphoto estimates than does stepwise discriminant analysis.
Detecting Water Bodies in Landsat 8 OLI Image Using Deep Learning
NASA Astrophysics Data System (ADS)
Jiang, W.; He, G.; Long, T.; Ni, Y.
2018-04-01
Water body identification is critical to studies of climate change, water resources, ecosystem services and the hydrological cycle. The multi-layer perceptron (MLP) is a popular and classic method in the deep learning framework for target detection and image classification. Therefore, this study adopts this method to identify water bodies in Landsat 8 imagery. To compare classification performance, the maximum likelihood and water index methods are employed for each study area. The classification results are evaluated using accuracy indices and local comparisons. The evaluation shows that the MLP achieves better performance than the other two methods; moreover, thin water bodies can also be clearly identified by the MLP. The proposed method has potential application in mapping global-scale surface water with multi-source medium-to-high resolution satellite data.
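A bare-bones version of the MLP classification step could look like the sketch below, where the reflectances, the NDWI-style labeling rule, and the network size are all assumptions for illustration rather than the study's actual training data.

```python
# Hedged sketch: a multi-layer perceptron classifying pixels as water/non-water
# from band reflectances. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n = 6000
green = rng.uniform(0.02, 0.3, n)
nir = rng.uniform(0.01, 0.5, n)
ndwi = (green - nir) / (green + nir)            # water index, used to label the toy data
is_water = ndwi + rng.normal(0, 0.05, n) > 0.1  # noisy "ground truth"

X = np.column_stack([green, nir])
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X[:5000], is_water[:5000])
print("held-out accuracy:", clf.score(X[5000:], is_water[5000:]))
```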
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
Inferring Phylogenetic Networks Using PhyloNet.
Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay
2018-07-01
PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
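The empirical likelihood ratio idea reduces to comparing two frequency distributions. The sketch below scores cells by the ratio of a predictor's distribution inside mapped landslides to its distribution elsewhere; the synthetic slope angles and bin choices are assumptions, and with a single predictor the conditional-independence issue does not arise.

```python
# Hedged sketch: empirical likelihood ratio as a relative landslide hazard
# score, using one predictor (slope angle) and histogram density estimates.
import numpy as np

rng = np.random.default_rng(21)
slope_landslide = rng.normal(25, 6, 500)    # slope angles at landslide cells
slope_stable = rng.normal(12, 7, 5000)      # slope angles at non-landslide cells

bins = np.linspace(0, 50, 26)
f_ls, _ = np.histogram(slope_landslide, bins=bins, density=True)
f_st, _ = np.histogram(slope_stable, bins=bins, density=True)
likelihood_ratio = f_ls / np.maximum(f_st, 1e-9)

def hazard_score(slope):
    """Relative hazard for cells with the given slope angles."""
    idx = np.clip(np.digitize(slope, bins) - 1, 0, len(likelihood_ratio) - 1)
    return likelihood_ratio[idx]

print(hazard_score(np.array([5.0, 20.0, 30.0])))  # increases with slope here
```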
Soil Salinity Mapping in Everglades National Park Using Remote Sensing Techniques
NASA Astrophysics Data System (ADS)
Su, H.; Khadim, F. K.; Blankenship, J.; Sobhan, K.
2017-12-01
The South Florida Everglades is a vast subtropical wetland with a globally unique hydrology and ecology, and it is designated as an International Biosphere Reserve and a Wetland of International Importance. Everglades National Park (ENP) is a hydro-ecologically enriched wetland with varying salinity contents, which is a concern for terrestrial ecosystem balance and sustainability. As such, in this study, time series soil salinity mapping was carried out for the ENP area. The mapping first entailed a maximum likelihood classification of seven land cover classes for the ENP area—namely mangrove forest, mangrove scrub, low-density forest, sawgrass, prairies and marshes, barren lands with woodland hammock, and water—for the years 1996, 2000, 2006, 2010 and 2015. The classifications for 1996-2010 yielded accuracies of 82%-94%, and the 2015 classification was supported through ground truthing. Afterwards, electric conductivity (EC) tolerance thresholds for each vegetation class were established, which yielded soil salinity maps comprising four soil salinity classes: the non- (EC = 0-2 dS/m), low- (EC = 2-4 dS/m), moderate- (EC = 4-8 dS/m) and high-saline (EC > 8 dS/m) areas. The soil salinity maps visualized the spatial distribution of soil salinity with no significant temporal variations. The innovative approach of "land cover identification to salinity estimation" used in the study is pragmatic and application oriented, and the study outcomes are also useful, considering the diversifying ecological context of the ENP area.
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except maximum likelihood multilevel modelling with Gauss-Hermite quadrature (ML GH 'xtlogit' in Stata), gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
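To make the comparison concrete, here is a hedged sketch in Python (statsmodels) rather than Stata of two of the compared approaches: a maximum likelihood random-intercept multilevel model and GEE with an exchangeable working correlation, fitted to a toy clustered dataset. The variable names, cluster sizes, and effects are invented.

```python
# Hedged sketch: ML multilevel model vs GEE on small clusters (sizes 1-3).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_fam = 120
fam = np.repeat(np.arange(n_fam), rng.integers(1, 4, n_fam))  # clusters of 1-3
u = rng.normal(0, 0.5, n_fam)[fam]                            # family effect
df = pd.DataFrame({"family": fam, "gest_age": rng.normal(30, 2, fam.size)})
df["lung_func"] = 1.0 + 0.2 * df["gest_age"] + u + rng.normal(0, 1, fam.size)

# Maximum likelihood multilevel model (random intercept per family)
ml = smf.mixedlm("lung_func ~ gest_age", df, groups=df["family"]).fit(reml=False)

# Generalised estimating equations with exchangeable working correlation
gee = smf.gee("lung_func ~ gest_age", groups="family", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(ml.params["gest_age"], gee.params["gest_age"])
```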
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological consideration, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
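The computational point is easy to demonstrate. The continuous-time log-likelihood of an inhomogeneous Poisson process, sum_i log lambda(t_i) - integral_0^T lambda(t) dt, can be evaluated with q Gauss-Legendre nodes instead of n time bins. A minimal sketch, assuming an illustrative log-linear intensity (not the paper's models or data):

```python
# Hedged sketch: point-process log-likelihood via Gauss-Legendre quadrature.
import numpy as np

def loglik(theta, spikes, T, q=60):
    """Inhomogeneous Poisson log-likelihood, lam(t) = exp(theta0 + theta1*t)."""
    lam = lambda t: np.exp(theta[0] + theta[1] * t)
    x, w = np.polynomial.legendre.leggauss(q)    # nodes/weights on [-1, 1]
    t = 0.5 * T * (x + 1.0)                      # rescaled to [0, T]
    integral = 0.5 * T * np.dot(w, lam(t))
    return np.sum(np.log(lam(spikes))) - integral

rng = np.random.default_rng(2)
spikes = np.sort(rng.uniform(0, 10, 50))         # fake spike times on [0, 10] s
print(loglik(np.array([1.0, 0.05]), spikes, T=10.0))
```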
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
The aim is to develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. The proposed regression estimators are based on features of the Beta distribution. First, both a single-equation and a two-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
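A minimal sketch of the single-equation ingredient, a Beta regression fitted by maximum likelihood with a logit mean link; the two-part, quasi-likelihood, and Bayesian variants the paper compares are omitted, and the data are simulated:

```python
# Hedged sketch: Beta regression MLE (mean via logit link, precision phi).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def negloglik(params, X, y):
    beta, logphi = params[:-1], params[-1]
    mu, phi = expit(X @ beta), np.exp(logphi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(a + b) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
mu = expit(X @ np.array([0.5, 1.0]))
y = rng.beta(mu * 20, (1 - mu) * 20)          # true precision phi = 20
fit = minimize(negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
print(fit.x[:2], np.exp(fit.x[2]))            # coefficient estimates and phi
```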
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
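For orientation, the sketch below runs PCA on an inter-structure correlation matrix computed from an aligned toy ensemble. It uses the plain sample covariance rather than the paper's likelihood-based estimator, so it illustrates the pipeline rather than the method's accuracy claims; the ensemble is synthetic.

```python
# Hedged sketch: principal modes of positional correlation in an ensemble.
import numpy as np

rng = np.random.default_rng(4)
ensemble = rng.normal(size=(25, 40, 3))            # 25 models, 40 atoms, xyz
# inject a correlated "breathing" mode that grows along the chain
ensemble += rng.normal(size=(25, 1, 1)) * np.linspace(0, 1, 40)[None, :, None]

disp = ensemble - ensemble.mean(axis=0)            # displacements from mean
flat = disp.reshape(25, -1)
cov = flat.T @ flat / (25 - 1)
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)                        # correlation matrix

vals, vecs = np.linalg.eigh(corr)                  # principal modes
print("top mode explains", vals[-1] / vals.sum())
```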
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
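The Poisson maximum-likelihood core of such algorithms is the Richardson-Lucy iteration; the sketch below runs a joint multi-frame version on synthetic frames. The paper's frame selection step, PSF estimation model, and regularization are omitted, and the PSF here is a made-up uniform blur.

```python
# Hedged sketch: multi-frame Richardson-Lucy (Poisson MLE) deconvolution.
import numpy as np
from scipy.signal import fftconvolve

def multiframe_rl(frames, psfs, n_iter=50):
    """Joint RL update averaged over frames; each PSF sums to 1."""
    est = np.full_like(frames[0], frames[0].mean())
    for _ in range(n_iter):
        ratio_sum = np.zeros_like(est)
        for y, h in zip(frames, psfs):
            blurred = fftconvolve(est, h, mode="same")
            ratio = y / np.maximum(blurred, 1e-12)
            ratio_sum += fftconvolve(ratio, h[::-1, ::-1], mode="same")
        est *= ratio_sum / len(frames)
    return est

rng = np.random.default_rng(5)
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 100.0
h = np.ones((5, 5)) / 25.0                         # toy PSF per frame
frames = [rng.poisson(fftconvolve(truth, h, mode="same").clip(0)).astype(float)
          for _ in range(3)]
print(multiframe_rl(frames, [h, h, h]).max())
```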
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
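The basic operation is easy to sketch: precompute mean detector responses over a grid of candidate interaction positions, then pick the position that maximizes the Poisson log-likelihood of the observed PMT signals. The mean detector response function below is a made-up Gaussian falloff, not a calibrated response.

```python
# Hedged sketch: ML position estimation from PMT signals (toy response model).
import numpy as np

pmt_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 4 PMTs

def mdrf(pos):
    """Mean PMT signals for an interaction at pos (invented model)."""
    d2 = ((pmt_xy - pos) ** 2).sum(axis=1)
    return 200.0 * np.exp(-d2 / 0.5)

grid = np.mgrid[0:1:64j, 0:1:64j].reshape(2, -1).T   # candidate positions
means = np.array([mdrf(p) for p in grid])

def ml_position(signals):
    """Poisson log-likelihood over the grid; return the argmax position."""
    ll = (signals * np.log(means) - means).sum(axis=1)
    return grid[np.argmax(ll)]

rng = np.random.default_rng(6)
event = rng.poisson(mdrf(np.array([0.3, 0.7])))
print(ml_position(event))
```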
2013-01-01
Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion in R. idaeus, which may help to identify deleterious alleles that are the basis of inbreeding depression in the species. PMID:23324311
Species distribution modelling for conservation of an endangered endemic orchid
Wang, Hsiao-Hsuan; Wonkka, Carissa L.; Treglia, Michael L.; Grant, William E.; Smeins, Fred E.; Rogers, William E.
2015-01-01
Concerns regarding the long-term viability of threatened and endangered plant species are increasingly warranted given the potential impacts of climate change and habitat fragmentation on unstable and isolated populations. Orchidaceae is the largest and most diverse family of flowering plants, but it is currently facing unprecedented risks of extinction. Despite substantial conservation emphasis on rare orchids, populations continue to decline. Spiranthes parksii (Navasota ladies' tresses) is a federally and state-listed endangered terrestrial orchid endemic to central Texas. Hence, we aimed to identify potential factors influencing the distribution of the species, quantify the relative importance of each factor and determine suitable habitat for future surveys and targeted conservation efforts. We analysed several geo-referenced variables describing climatic conditions and landscape features to identify potential factors influencing the likelihood of occurrence of S. parksii using boosted regression trees. Our model classified 97 % of the cells correctly with regard to species presence and absence, and indicated that probability of existence was correlated with climatic conditions and landscape features. The most influential variables were mean annual precipitation, mean elevation, mean annual minimum temperature and mean annual maximum temperature. The most likely suitable range for S. parksii was the eastern portions of Leon and Madison Counties, the southern portion of Brazos County, a portion of northern Grimes County and along the borders between Burleson and Washington Counties. Our model can assist in the development of an integrated conservation strategy through: (i) focussing future survey and research efforts on areas with a high likelihood of occurrence, (ii) aiding in selection of areas for conservation and restoration and (iii) framing future research questions including those necessary for predicting responses to climate change. Our model could also incorporate new information on S. parksii as it becomes available to improve prediction accuracy, and our methodology could be adapted to develop distribution maps for other rare species of conservation concern. PMID:25900746
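As a rough illustration of the modelling step, the sketch below fits a boosted-tree occurrence classifier with scikit-learn on fabricated presence/absence cells, using two predictors named after the study's most influential variables; it is not the study's model or data.

```python
# Hedged sketch: boosted regression trees for species occurrence (toy data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(12)
n = 2000
X = np.column_stack([rng.uniform(800, 1200, n),     # mean annual precip (mm)
                     rng.uniform(50, 150, n)])      # mean elevation (m)
# presence more likely in a wet, mid-elevation band (invented rule)
logit = 0.02 * (X[:, 0] - 1000) - 0.001 * (X[:, 1] - 100) ** 2
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 max_depth=3).fit(X, y)
print(dict(zip(["precip", "elev"], brt.feature_importances_)))
```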
A ground truth based comparative study on clustering of gene expression data.
Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue
2008-05-01
Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.
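Two of the compared families are easy to reproduce in miniature: K-means and a finite normal mixture fitted by maximum likelihood (EM), run on a toy expression-like matrix and scored against the known sample labels. The sketch uses scikit-learn stand-ins, not VISDA.

```python
# Hedged sketch: K-means vs ML normal-mixture clustering on fake samples.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(13)
centers = rng.normal(0, 3, (3, 20))                 # 3 phenotypes, 20 genes
labels = np.repeat([0, 1, 2], 30)
X = centers[labels] + rng.normal(0, 1, (90, 20))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
gm = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
print(adjusted_rand_score(labels, km), adjusted_rand_score(labels, gm))
```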
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
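A hedged sketch of the flavor of such a procedure: maintain a log-likelihood grid over psychometric-function parameters, update it after each simulated trial, and place the next trial at the current maximum-likelihood threshold. This simplifies the UML procedure considerably (no lapse-rate dimension and no sweet-point stimulus placement), and all numbers are invented.

```python
# Hedged sketch: grid-based ML tracking of a 2AFC logistic psychometric function.
import numpy as np
from scipy.special import expit

alphas = np.linspace(-10, 10, 81)        # candidate thresholds
betas = np.geomspace(0.1, 10, 41)        # candidate slopes
A, B = np.meshgrid(alphas, betas, indexing="ij")
loglik = np.zeros_like(A)

def p_correct(x, a, b):
    return 0.5 + 0.5 * expit(b * (x - a))    # 2AFC logistic, no lapses

rng = np.random.default_rng(14)
x = 0.0                                  # first stimulus level
for _ in range(100):
    resp = rng.uniform() < p_correct(x, 2.0, 1.5)   # simulated listener
    p = p_correct(x, A, B)
    loglik += np.log(p if resp else 1 - p)
    i, j = np.unravel_index(np.argmax(loglik), loglik.shape)
    x = alphas[i]                        # next trial at the ML threshold
print("threshold, slope estimates:", alphas[i], betas[j])
```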
A maximum likelihood convolutional decoder model vs experimental data comparison
NASA Technical Reports Server (NTRS)
Chen, R. Y.
1979-01-01
This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found through TAP.
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
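The fitting step reduces to the standard continuous power-law maximum-likelihood estimator scanned over lower cutoffs; on synthetic power-law data the estimate is nearly cutoff-independent, which is the kind of signature being examined. The data below are simulated, not crackling-noise measurements.

```python
# Hedged sketch: ML power-law exponent vs lower cutoff (Clauset-style).
import numpy as np

def ml_exponent(x, xmin):
    """MLE of the continuous power-law exponent for the tail x >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

rng = np.random.default_rng(7)
x = (1 - rng.uniform(size=20000)) ** (-1 / 0.7)    # exponent 1.7 power law
for xmin in [1.0, 2.0, 5.0, 10.0]:
    print(xmin, ml_exponent(x, xmin))               # near-constant exponent
```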
Levinson, Douglas F; Evgrafov, Oleg V; Knowles, James A; Potash, James B; Weissman, Myrna M; Scheftner, William A; Depaulo, J Raymond; Crowe, Raymond R; Murphy-Eberenz, Kathleen; Marta, Diana H; McInnis, Melvin G; Adams, Philip; Gladis, Madeline; Miller, Erin B; Thomas, Jo; Holmans, Peter
2007-02-01
The authors studied a dense map of single nucleotide polymorphism (SNP) DNA markers on chromosome 15q25-q26 to maximize the informativeness of genetic linkage analyses in a region where they previously reported suggestive evidence for linkage of recurrent early-onset major depressive disorder. In 631 European-ancestry families with multiple cases of recurrent early-onset major depressive disorder, 88 SNPs were genotyped, and multipoint allele-sharing linkage analyses were carried out. Marker-marker linkage disequilibrium was minimized, and a simulation study with founder haplotypes from these families suggested that linkage scores were not inflated by linkage disequilibrium. The dense SNP map increased the information content of the analysis from around 0.7 to over 0.9. The maximum evidence for linkage was the Kong and Cox likelihood ratio score statistic Z_LR = 4.69 at 109.8 cM. The exact p value was below the genomewide significance threshold. By contrast, in the genome scan with microsatellite markers at 9 cM spacing, the maximum Z_LR for European-ancestry families was 3.43 (106.53 cM). It was estimated that the linked locus or loci in this region might account for a 20% or less populationwide increase in risk to siblings of cases. This region has produced modestly positive evidence for linkage to depression and related traits in other studies. These results suggest that DNA sequence variations in one or more genes in the 15q25-q26 region can increase susceptibility to major depression and that efforts are warranted to identify these genes.
Intravenous lipid emulsion alters the hemodynamic response to epinephrine in a rat model.
Carreiro, Stephanie; Blum, Jared; Jay, Gregory; Hack, Jason B
2013-09-01
Intravenous lipid emulsion (ILE) is an adjunctive antidote used in selected critically ill poisoned patients. These patients may also require administration of advanced cardiac life support (ACLS) drugs. Limited data are available to describe interactions of ILE with standard ACLS drugs, specifically epinephrine. Twenty rats with intra-arterial and intravenous access were sedated with isoflurane and split into ILE or normal saline (NS) pretreatment groups. All received epinephrine 15 μg/kg intravenously (IV). Continuous mean arterial pressure (MAP) and heart rate (HR) were monitored until both indices returned to baseline. Standardized t tests were used to compare peak MAP, time to peak MAP, maximum change in HR, time to maximum change in HR, and time to return to baseline MAP/HR. There was a significant difference (p = 0.023) in time to peak MAP in the ILE group (54 s, 95% CI 44-64) versus the NS group (40 s, 95% CI 32-48) and a significant difference (p = 0.004) in time to return to baseline MAP in the ILE group (171 s, 95% CI 148-194) versus the NS group (130 s, 95% CI 113-147). There were no significant differences in the peak change in MAP, peak change in HR, time to minimum HR, or time to return to baseline HR between groups. ILE-pretreated rats had a significant difference in MAP response to epinephrine; ILE delayed the peak effect and prolonged the duration of effect of epinephrine on MAP, but did not alter the peak increase in MAP or the HR response.
Peters, Ralph S; Meusemann, Karen; Petersen, Malte; Mayer, Christoph; Wilbrandt, Jeanne; Ziesmann, Tanja; Donath, Alexander; Kjer, Karl M; Aspöck, Ulrike; Aspöck, Horst; Aberer, Andre; Stamatakis, Alexandros; Friedrich, Frank; Hünefeld, Frank; Niehuis, Oliver; Beutel, Rolf G; Misof, Bernhard
2014-03-20
Despite considerable progress in systematics, a comprehensive scenario of the evolution of phenotypic characters in the mega-diverse Holometabola based on a solid phylogenetic hypothesis was still missing. We addressed this issue by de novo sequencing transcriptome libraries of representatives of all orders of holometabolan insects (13 species in total) and by using a previously published extensive morphological dataset. We tested competing phylogenetic hypotheses by analyzing various specifically designed sets of amino acid sequence data, using maximum likelihood (ML) based tree inference and Four-cluster Likelihood Mapping (FcLM). By maximum parsimony-based mapping of the morphological data on the phylogenetic relationships we traced evolutionary transformations at the phenotypic level and reconstructed the groundplan of Holometabola and of selected subgroups. In our analysis of the amino acid sequence data of 1,343 single-copy orthologous genes, Hymenoptera are placed as sister group to all remaining holometabolan orders, i.e., to a clade Aparaglossata, comprising two monophyletic subunits Mecopterida (Amphiesmenoptera + Antliophora) and Neuropteroidea (Neuropterida + Coleopterida). The monophyly of Coleopterida (Coleoptera and Strepsiptera) remains ambiguous in the analyses of the transcriptome data, but appears likely based on the morphological data. Highly supported relationships within Neuropterida and Antliophora are Raphidioptera + (Neuroptera + monophyletic Megaloptera), and Diptera + (Siphonaptera + Mecoptera). ML tree inference and FcLM yielded largely congruent results. However, FcLM, which was applied here for the first time to large phylogenomic supermatrices, displayed additional signal in the datasets that was not identified in the ML trees. Our phylogenetic results imply that an orthognathous larva belongs to the groundplan of Holometabola, with compound eyes and well-developed thoracic legs, externally feeding on plants or fungi. Ancestral larvae of Aparaglossata were prognathous, equipped with single larval eyes (stemmata), and possibly agile and predacious. Ancestral holometabolan adults likely resembled in their morphology the groundplan of adult neopteran insects. Within Aparaglossata, the adult's flight apparatus and ovipositor underwent strong modifications. We show that the combination of well-resolved phylogenies obtained by phylogenomic analyses and well-documented extensive morphological datasets is an appropriate basis for reconstructing complex morphological transformations and for the inference of evolutionary histories.
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
Wildlife tradeoffs based on landscape models of habitat
Loehle, C.; Mitchell, M.S.
2000-01-01
It is becoming increasingly clear that the spatial structure of landscapes affects the habitat choices and abundance of wildlife. In contrast to wildlife management based on preservation of critical habitat features such as nest sites on a beach or mast trees, it has not been obvious how to incorporate spatial structure into management plans. We present techniques to accomplish this goal. We used multiscale logistic regression models developed previously for neotropical migrant bird species habitat use in South Carolina (USA) as a basis for these techniques. Based on these models we used a spatial optimization technique to generate optimal maps (probability of occurrence, P = 1.0) for each of seven species. To emulate management of a forest for maximum species diversity, we defined the objective function of the algorithm as the sum of probabilities over the seven species, resulting in a complex map that allowed all seven species to coexist. The map that allowed for coexistence is not obvious, must be computed algorithmically, and would be difficult to realize using rules of thumb for habitat management. To assess how management of a forest for a single species of interest might affect other species, we analyzed tradeoffs by gradually increasing the weighting on a single species in the objective function over a series of simulations. We found that as habitat was increasingly modified to favor that species, the probability of presence for two of the other species was driven to zero. This shows that whereas it is not possible to simultaneously maximize the likelihood of presence for multiple species with divergent habitat preferences, compromise solutions are possible at less than maximal likelihood in many cases. Our approach suggests that efficiency of habitat management for species diversity can be maximized for even small landscapes by incorporating spatial context. The methods we present are suitable for wildlife management, endangered species conservation, and nature reserve design.
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
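A stripped-down consensus-style sketch of the idea: alternate a likelihood-only update (here a least-squares solve) with a prior-only update (here a soft threshold for an l1 sparsity prior), coupled through a dual variable. The paper's Kalman-smoother averaging, group-sparsity priors, and point-process likelihoods are not shown.

```python
# Hedged sketch: ADMM for MAP with Gaussian likelihood and l1 prior.
import numpy as np

def admm_map(A, y, lam=0.5, rho=1.0, n_iter=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))    # likelihood step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # prior
        u = u + x - z                                    # dual update
    return z

rng = np.random.default_rng(8)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20); x_true[[2, 7, 15]] = [3.0, -2.0, 1.5]
y = A @ x_true + rng.normal(0, 0.1, 100)
print(np.round(admm_map(A, y), 2))        # recovers the three nonzeros
```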
MAP Reconstruction for Fourier Rebinned TOF-PET Data
Bai, Bing; Lin, Yanguang; Zhu, Wentao; Ren, Ran; Li, Quanzheng; Dahlbom, Magnus; DiFilippo, Frank; Leahy, Richard M.
2014-01-01
Time-of-flight (TOF) information improves signal to noise ratio in Positron Emission Tomography (PET). Computation cost in processing TOF-PET sinograms is substantially higher than for nonTOF data because the data in each line of response is divided among multiple time of flight bins. This additional cost has motivated research into methods for rebinning TOF data into lower dimensional representations that exploit redundancies inherent in TOF data. We have previously developed approximate Fourier methods that rebin TOF data into either 3D nonTOF or 2D nonTOF formats. We refer to these methods respectively as FORET-3D and FORET-2D. Here we describe maximum a posteriori (MAP) estimators for use with FORET rebinned data. We first derive approximate expressions for the variance of the rebinned data. We then use these results to rescale the data so that the variance and mean are approximately equal allowing us to use the Poisson likelihood model for MAP reconstruction. MAP reconstruction from these rebinned data uses a system matrix in which the detector response model accounts for the effects of rebinning. Using these methods we compare performance of FORET-2D and 3D with TOF and nonTOF reconstructions using phantom and clinical data. Our phantom results show a small loss in contrast recovery at matched noise levels using FORET compared to reconstruction from the original TOF data. Clinical examples show FORET images that are qualitatively similar to those obtained from the original TOF-PET data but a small increase in variance at matched resolution. Reconstruction time is reduced by a factor of 5 and 30 using FORET3D+MAP and FORET2D+MAP respectively compared to 3D TOF MAP, which makes these methods attractive for clinical applications. PMID:24504374
Modulation and coding for satellite and space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.
1990-01-01
Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional code, a VLSI maximum-likelihood decoder, utilizing parallel processing techniques, which is being developed to decode convolutional codes of constraint length 15 and a code rate as low as 1/6 is discussed. A VLSI high-speed 8-b Reed-Solomon decoder which is being developed for advanced tracking and data relay satellite (ATDRS) applications is discussed. A 300-Mb/s modem with continuous phase modulation (CPM) and codings which is being developed for ATDRS is discussed. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.
Assessing Measles Transmission in the United States Following a Large Outbreak in California
Blumberg, Seth; Worden, Lee; Enanoria, Wayne; Ackley, Sarah; Deiner, Michael; Liu, Fengchen; Gao, Daozhou; Lietman, Thomas; Porco, Travis
2015-01-01
The recent increase in measles cases in California may raise questions regarding the continuing success of measles control. To determine whether the dynamics of measles are qualitatively different in comparison to previous years, we assess whether the 2014-2015 measles outbreak associated with an Anaheim theme park is consistent with subcriticality by calculating maximum-likelihood estimates for the effective reproduction number given this year’s outbreak, using the Galton-Watson branching process model. We find that the dynamics after the initial transmission event are consistent with prior transmission, but do not exclude the possibility that the effective reproduction number has increased. PMID:26052471
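Under a Galton-Watson model with Poisson offspring, the maximum-likelihood effective reproduction number from a set of completed outbreak sizes has a simple closed form: every case except each chain's index case arose from transmission, so the estimate is transmissions over cases. The sizes below are illustrative, not the California data.

```python
# Hedged sketch: ML effective reproduction number from final outbreak sizes.
import numpy as np

def r_hat(outbreak_sizes):
    y = np.asarray(outbreak_sizes, dtype=float)
    return (y - 1).sum() / y.sum()     # transmissions / cases

sizes = [1, 1, 2, 1, 5, 1, 1, 3, 1, 131]   # many small chains, one large
print(r_hat(sizes))                # < 1 is consistent with subcriticality
```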
Cao, Y; Adachi, J; Yano, T; Hasegawa, M
1994-07-01
Graur et al.'s (1991) hypothesis that the guinea pig-like rodents have an evolutionary origin within mammals that is separate from that of other rodents (the rodent-polyphyly hypothesis) was reexamined by the maximum-likelihood method for protein phylogeny, as well as by the maximum-parsimony and neighbor-joining methods. The overall evidence does not support Graur et al.'s hypothesis, which radically contradicts the traditional view of rodent monophyly. This work demonstrates that we must be careful in choosing a proper method for phylogenetic inference and that an argument based on a small data set (with respect to the length of the sequence and especially the number of species) may be unstable.
Global Ionosphere Perturbations Monitored by the Worldwide GPS Network
NASA Technical Reports Server (NTRS)
Ho, C. M.; Manucci, A. T.; Lindqwister, U. J.; Pi, X.
1996-01-01
For the first time, measurements from the Global Positioning System (GPS) worldwide network are employed to study the global ionospheric total electron content (TEC) changes during a magnetic storm (November 26, 1994). These measurements are obtained from more than 60 world-wide GPS stations which continuously receive dual-frequency signals. Based on the delays of the signals, we have generated high resolution global ionospheric maps (GIM) of TEC at 15-minute intervals. Using a differential method comparing storm time maps with quiet time maps, we find that significant TEC increases (the positive effect) are the major feature in the winter hemisphere during this storm (the maximum percent change relative to quiet times is about 150 percent).
Task Performance with List-Mode Data
NASA Astrophysics Data System (ADS)
Caucci, Luca
This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
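For contrast with the recursive approach, here is the textbook Viterbi maximum-likelihood decoder for the small rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 (octal), using hard decisions and a Hamming-distance metric; the message and channel error are illustrative.

```python
# Hedged sketch: hard-decision Viterbi ML decoding, rate-1/2, K = 3.
G = [0b111, 0b101]                        # generator polynomials 7 and 5

def encode(bits):
    """Convolutional encoder with a 3-bit register, zero initial state."""
    reg, out = 0, []
    for b in bits:
        reg = ((reg << 1) | b) & 0b111
        out += [bin(reg & g).count("1") & 1 for g in G]
    return out

def viterbi(received):
    """ML decoding by minimising Hamming distance over the 4-state trellis."""
    INF = 10 ** 9
    metric, paths = [0, INF, INF, INF], [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] >= INF:
                continue
            for b in (0, 1):
                reg = (s << 1) | b                  # 3-bit register contents
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                ns = reg & 0b11                     # next state: last 2 bits
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0]
rx = encode(msg)
rx[3] ^= 1                                # flip one channel bit
print(viterbi(rx) == msg)                 # True: ML decoding corrects it
```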
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
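The method's essence fits in a few lines: minimize the negative Poisson log-likelihood of the counts under the fit function (equivalent to the Cash statistic up to a constant) instead of χ². The model below, a Gaussian line on a flat background, and the data are synthetic.

```python
# Hedged sketch: Poisson maximum-likelihood fit to low-count data.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-5, 5, 40)

def model(p):
    amp, bg = p
    return bg + amp * np.exp(-0.5 * x ** 2)

def neg_poisson_loglik(p, counts):
    mu = np.maximum(model(p), 1e-12)
    return np.sum(mu - counts * np.log(mu))   # drop the data-only log(k!) term

rng = np.random.default_rng(9)
counts = rng.poisson(model([5.0, 1.0]))       # a few counts per bin
fit = minimize(neg_poisson_loglik, x0=[1.0, 0.5], args=(counts,),
               method="Nelder-Mead")
print(fit.x)                                  # amplitude and background
```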
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling in historic height data also. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
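For the continuous case, the point estimate itself is simple. A sketch, assuming a balanced one-way random-effects layout, estimates the within-subject coefficient of variation as the within-subject standard deviation over the grand mean; the paper's variance-stabilizing transformation and confidence intervals are not shown, and the data are simulated.

```python
# Hedged sketch: within-subject coefficient of variation from repeat measures.
import numpy as np

rng = np.random.default_rng(10)
n_subj, n_rep = 30, 3
subj_means = rng.normal(100, 10, n_subj)
data = subj_means[:, None] + rng.normal(0, 5, (n_subj, n_rep))

within_var = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() \
             / (n_subj * (n_rep - 1))
wscv = np.sqrt(within_var) / data.mean()
print(wscv)         # true value is 5 / 100 = 0.05
```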
NASA Astrophysics Data System (ADS)
Yang, Zhongyu
This thesis describes the design, experimental performance, and theoretical simulation of a novel time-of-flight analyzer that was integrated into a high resolution electron energy loss spectrometer (TOF-HREELS). First we examined the use of an interleaved comb chopper for chopping a continuous electron beam. Both static and dynamic behaviors were simulated theoretically and measured experimentally, with very good agreement. The finite penetration of the field beyond the plane of the chopper leads to non-ideal chopper response, which is characterized in terms of an "energy corruption" effect and a lead or lag in the time at which the beam responds to the chopper potential. Second we considered the recovery of spectra from pseudo-random binary sequence (PRBS) modulated TOF-HREELS data. The effects of the Poisson noise distribution and the non-ideal behavior of the "interleaved comb" chopper were simulated. We showed, for the first time, that maximum likelihood methods can be combined with PRBS modulation to achieve resolution enhancement, while properly accounting for the Poisson noise distribution and artifacts introduced by the chopper. Our results indicate that meV resolution, similar to that of modern high resolution electron energy loss spectrometers, can be achieved with a dramatic performance advantage over conventional, serial detection analyzers. To demonstrate the capabilities of the TOF-HREELS instrument, we made measurements on a highly oriented thin film polytetrafluoroethylene (PTFE) sample. We demonstrated that the TOF-HREELS can achieve a throughput advantage of a factor of 85 compared to the conventional HREELS instrument. Comparisons were made between the experimental results and theoretical simulations. We discuss various factors which affect inversion of PRBS modulated Time of Flight (TOF) data with the Lucy algorithm. Using simulations, we conclude that the convolution assumption was good under the conditions of our experiment. The chopper rise time, Poisson noise, and artifacts of the chopper response are evaluated. Finally, we conclude that the maximum likelihood algorithms are able to gain a multiplex advantage in PRBS modulation, despite the Poisson noise in the detector.
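The recovery problem can be miniaturized: under a circular-convolution assumption, Lucy (Richardson-Lucy) iteration recovers a time-of-flight spectrum from PRBS-modulated Poisson counts. The sequence below is a random 0/1 stand-in for a true PRBS, and the spectrum and count levels are synthetic.

```python
# Hedged sketch: Lucy deconvolution of PRBS-modulated TOF counts.
import numpy as np

def circ(seq, spec):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(seq) * np.fft.fft(spec)))

def lucy(counts, seq, n_iter=200):
    est = np.full_like(counts, counts.mean())
    seq_rev = np.roll(seq[::-1], 1)            # adjoint (correlation) kernel
    for _ in range(n_iter):
        pred = np.maximum(circ(seq, est), 1e-12)
        est *= circ(seq_rev, counts / pred) / seq.sum()
    return est

rng = np.random.default_rng(15)
n = 127
seq = (rng.uniform(size=n) < 0.5).astype(float)     # stand-in for a PRBS
spec = 50 * np.exp(-0.5 * ((np.arange(n) - 40) / 4.0) ** 2)
counts = rng.poisson(np.maximum(circ(seq, spec), 0)).astype(float)
print(np.argmax(lucy(counts, seq)))                 # near the true peak at 40
```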
Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe
2013-08-01
Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to estimate individual parameters precisely by standard non-linear regression, but information can often be gained from between-subjects variability. This makes mixed-effects models a natural choice for estimating population parameters. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations), devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetic simulations and compare it with the FOCE and MCMC algorithms. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
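The MAP idea at the core of this entry is simply maximizing log-likelihood plus log-prior. The toy below is not NIMROD: it fits a closed-form one-compartment PK curve (dose, error variance, and priors are all assumptions chosen for illustration) rather than an ODE system with random effects.

    import numpy as np
    from scipy.optimize import minimize

    t = np.array([0.5, 1, 2, 4, 8])
    y = np.array([8.2, 6.9, 5.1, 2.7, 0.9])        # synthetic concentrations

    def neg_log_posterior(theta):
        log_ke, log_v = theta
        pred = (10.0 / np.exp(log_v)) * np.exp(-np.exp(log_ke) * t)  # dose = 10
        loglik = -0.5 * np.sum((y - pred) ** 2) / 0.25               # known error var
        logprior = -0.5 * (log_ke + 1.0) ** 2 - 0.5 * log_v ** 2     # assumed priors
        return -(loglik + logprior)

    map_fit = minimize(neg_log_posterior, x0=[-1.0, 0.0], method="BFGS")
    print("MAP estimate (log ke, log V):", map_fit.x.round(3))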
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the new proposed model outperforms ML in cases of small datasets.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
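The fitting step described here is straightforward to sketch: a maximum-likelihood log-normal fit to storm maxima, extrapolated to an exceedance rate for a Carrington-class threshold. The sample values and the assumed 10-year span below are invented stand-ins, not the 1957-2012 Dst record, and the bootstrap confidence-limit step is omitted.

    import numpy as np
    from scipy import stats

    dst_maxima = np.array([120, 150, 95, 210, 340, 180, 260, 130, 480, 160.])
    shape, loc, scale = stats.lognorm.fit(dst_maxima, floc=0)   # ML fit, origin fixed
    p_exceed = stats.lognorm.sf(850, shape, loc=loc, scale=scale)
    events_per_year = len(dst_maxima) / 10.0                    # assumed 10-yr record
    print(f"storms with -Dst > 850 nT: {100 * events_per_year * p_exceed:.2f} per century")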
Patterns of vegetation in the Owens Valley, California
NASA Technical Reports Server (NTRS)
Ustin, S. L.; Rock, B. N.; Woodward, R. A.
1986-01-01
Spectral characteristics of semi-arid shrub communities were examined using Airborne Imaging Spectrometer (AIS) data collected in the tree mode on 23 May 1985. Mesic sites with relatively high vegetation density and distinct zonation patterns exhibited greater spectral signature variations than sites with more xeric shrub communities. Spectral signature patterns were not directly related to vegetation density or physiognomy, although spatial maps derived from an 8-channel maximum likelihood classification were supported by photo-interpreted surface features. In AIS data, the principal detected effect of shrub vegetation on the alluvial fans is to lower reflectance across the spectrum. These results are similar to those reported during a period of minimal physiological activity in autumn, indicating that shadows cast by vegetation canopies are an important element of soil-vegetation interaction under conditions of relatively low canopy cover.
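The maximum likelihood classification used here (and in several later entries) is conventionally the Gaussian per-class version: fit a multivariate normal to training pixels for each class and assign each pixel to the class with the highest log-likelihood. A minimal sketch with synthetic two-band data (class names and values are illustrative assumptions):

    import numpy as np
    from scipy.stats import multivariate_normal

    def train(training_pixels):
        """training_pixels: {class_name: (n_pixels, n_bands) array}."""
        return {c: multivariate_normal(x.mean(axis=0), np.cov(x, rowvar=False))
                for c, x in training_pixels.items()}

    def classify(models, pixels):
        names = list(models)
        scores = np.column_stack([models[c].logpdf(pixels) for c in names])
        return np.asarray(names)[scores.argmax(axis=1)]

    rng = np.random.default_rng(0)
    train_px = {"shrub": rng.normal([30, 60], 5, (100, 2)),
                "bare soil": rng.normal([70, 40], 5, (100, 2))}
    models = train(train_px)
    print(classify(models, np.array([[32.0, 58.0], [68.0, 42.0]])))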
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengel, S.K.; Morrison, D.B.
1985-01-01
Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, land cover, hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best coding-gain performance in an LSI-efficient implementation was the optimal constraint-length-five, rate-one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design trade-off study are summarized, and the functional and physical MCD chip characteristics are presented.
Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification
NASA Technical Reports Server (NTRS)
Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables to which the item is subjected. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
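A hedged sketch of the core fitting step: a maximum-likelihood fit of a smallest-extreme-value (minimum-type Gumbel) distribution to log-lifetimes, using scipy's gumbel_l. Censoring, competing failure modes, and stress covariates, which are the substance of the paper, are omitted here; the data are simulated.

    import numpy as np
    from scipy import stats

    log_lifetimes = stats.gumbel_l.rvs(loc=5.0, scale=0.5, size=200, random_state=1)
    loc_hat, scale_hat = stats.gumbel_l.fit(log_lifetimes)   # ML estimates
    print(f"location = {loc_hat:.3f}, scale = {scale_hat:.3f}")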
Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.
McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J
2018-04-01
Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configurations are given. PMID:24671826
Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro
2010-03-01
We describe a successful introduction of maximum-likelihood sequence estimation (MLSE) into digital coherent receivers, together with finite-impulse-response (FIR) filters, in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
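To make the MLSE criterion concrete: for a known ISI channel with Gaussian noise, the maximum-likelihood sequence is the symbol sequence whose channel replica is closest to the received samples in squared error. The toy below uses exhaustive search over short BPSK blocks rather than the paper's Viterbi trellis (which finds the same optimum efficiently); the channel taps and noise level are illustrative assumptions.

    import itertools
    import numpy as np

    def mlse(received, taps):
        """Brute-force ML sequence estimate for short BPSK blocks."""
        n = len(received)
        best, best_metric = None, np.inf
        for candidate in itertools.product([-1.0, 1.0], repeat=n):
            replica = np.convolve(candidate, taps)[:n]
            metric = np.sum((received - replica) ** 2)  # Gaussian log-likelihood metric
            if metric < best_metric:
                best, best_metric = candidate, metric
        return np.array(best)

    taps = np.array([1.0, 0.4])                  # assumed 2-tap ISI channel
    symbols = np.array([1, -1, -1, 1, 1, -1.0])
    rx = np.convolve(symbols, taps)[:len(symbols)] + 0.1 * np.random.randn(len(symbols))
    print("recovered:", mlse(rx, taps))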
F-8C adaptive flight control extensions. [for maximum likelihood estimation]
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
NASA Technical Reports Server (NTRS)
Battin, R. H.; Croopnick, S. R.; Edwards, J. A.
1977-01-01
The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.
A 3D approximate maximum likelihood localization solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-09-23
A robust three-dimensional solver was needed to estimate, accurately and efficiently, the time sequence of locations of fish tagged with acoustic transmitters and of vocalizing marine mammals, in sufficient detail to assess the performance of dam-passage design alternatives and to support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
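A minimal sketch of the approach: with Gaussian errors on the time-difference-of-arrival (TDOA) measurements, maximizing the likelihood reduces to minimizing the squared misfit between measured and predicted TDOAs over candidate source positions. The hydrophone geometry and the 1500 m/s sound speed below are assumed values, and this omits the multi-array bookkeeping of the actual solver.

    import numpy as np
    from scipy.optimize import minimize

    C = 1500.0                                   # assumed sound speed in water, m/s
    hydrophones = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.]])

    def predicted_tdoa(pos):
        t = np.linalg.norm(hydrophones - pos, axis=1) / C
        return t[1:] - t[0]                      # differences relative to hydrophone 0

    true_pos = np.array([3.0, 4.0, 2.0])
    measured = predicted_tdoa(true_pos) + 1e-6 * np.random.randn(3)

    fit = minimize(lambda p: np.sum((predicted_tdoa(p) - measured) ** 2),
                   x0=np.array([1.0, 1.0, 1.0]))
    print("estimated source position:", fit.x.round(2))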
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
NASA Astrophysics Data System (ADS)
Higgins, C. T.; Clinkenbeard, J. P.; Churchill, R. K.
2006-12-01
Naturally occurring asbestos (NOA) is a term applied to the geologic occurrence of six types of silicate minerals that have asbestiform habit. These include the serpentine mineral chrysotile and the amphibole minerals actinolite, amosite, anthophyllite, crocidolite, and tremolite; all are classified as known human carcinogens. NOA, which is likely to be present in at least 50 of the 58 counties of California, is most commonly associated with serpentinite, but has been identified in other geologic settings as well. Because of health concerns, knowledge of where NOA may be present is important to regulatory agencies and the public. To improve this knowledge, the California Geological Survey (CGS) has prepared NOA maps of Placer County and eastern Sacramento County; both counties contain geologic settings where NOA has been observed. The maps are based primarily on geologic information compiled and interpreted from existing geologic and soils maps and on limited fieldwork. The system of map units is modified from an earlier one developed by the CGS for an NOA map of nearby western El Dorado County. In the current system, the counties are subdivided into different areas based on relative likelihood for the presence of NOA. Three types of areas are defined as most likely, moderately likely, and least likely to contain NOA. A fourth type is defined as areas of faulting and shearing; these geologic structures may locally increase the likelihood for the presence of NOA within or adjacent to areas most likely or moderately likely to contain NOA. The maps do not indicate if NOA is present or absent in bedrock or soils at any particular location. Local air pollution control districts are using the maps to help determine where to minimize generation of and exposure to dust that may contain NOA. The maps and accompanying reports can be viewed at http://www.consrv.ca.gov/cgs/ under Hazardous Minerals.
The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.
ERIC Educational Resources Information Center
Blackwood, Larry G.; Bradley, Edwin L.
1989-01-01
Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
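The beta-binomial likelihood referred to above is easy to maximize numerically. A hedged sketch, with invented plot counts and the common (alpha, beta) parameterization via scipy's betabinom; the asymptotic likelihood-ratio tests of the paper are not shown.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import betabinom

    n = 10                                        # trees examined per plot (assumed)
    counts = np.array([0, 1, 0, 3, 7, 2, 0, 9, 1, 4])  # infected trees per plot

    def nll(params):
        a, b = np.exp(params)                     # keep alpha, beta positive
        return -betabinom.logpmf(counts, n, a, b).sum()

    fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    alpha, beta = np.exp(fit.x)
    print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")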
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
Pettinger, L.R.
1982-01-01
This paper documents the procedures, results, and final products of a digital analysis of Landsat data used to produce a vegetation and landcover map of the Blackfoot River watershed in southeastern Idaho. Resource classes were identified at two levels of detail: generalized Level I classes (for example, forest land and wetland) and detailed Levels II and III classes (for example, conifer forest, aspen, wet meadow, and riparian hardwoods). Training set statistics were derived using a modified clustering approach. Environmental stratification that separated uplands from lowlands improved discrimination between resource classes having similar spectral signatures. Digital classification was performed using a maximum likelihood algorithm. Classification accuracy was determined on a single-pixel basis from a random sample of 25-pixel blocks. These blocks were transferred to small-scale color-infrared aerial photographs, and the image area corresponding to each pixel was interpreted. Classification accuracy, expressed as percent agreement of digital classification and photo-interpretation results, was 83.0 ± 2.1 percent (0.95 probability level) for generalized (Level I) classes and 52.2 ± 2.8 percent (0.95 probability level) for detailed (Levels II and III) classes. After the classified images were geometrically corrected, two types of maps were produced of Level I and Levels II and III resource classes: color-coded maps at a 1:250,000 scale, and flatbed-plotter overlays at a 1:24,000 scale. The overlays are more useful because of their larger scale, familiar format to users, and compatibility with other types of topographic and thematic maps of the same scale.
Predictive model of outcome of targeted nodal assessment in colorectal cancer.
Nissan, Aviram; Protic, Mladjan; Bilchik, Anton; Eberhardt, John; Peoples, George E; Stojadinovic, Alexander
2010-02-01
Improvement in staging accuracy is the principal aim of targeted nodal assessment in colorectal carcinoma. Technical factors independently predictive of false negative (FN) sentinel lymph node (SLN) mapping should be identified to facilitate operative decision making. The goals were to define independent predictors of FN SLN mapping and to develop a predictive model that could support surgical decisions. Data were analyzed from 2 completed prospective clinical trials involving 278 patients with colorectal carcinoma undergoing SLN mapping. The clinical outcome of interest was FN SLN(s), defined as one(s) with no apparent tumor cells in the presence of non-SLN metastases. To assess the independent predictive effect of a covariate for a nominal response (FN SLN), a logistic regression model was constructed and parameters estimated using maximum likelihood. A probabilistic Bayesian model was also trained and cross-validated using 10-fold train-and-test sets to predict FN SLN mapping. The area under the curve (AUC) from receiver operating characteristic curves of these predictions was calculated to determine the predictive value of the model. Number of SLNs (<3; P = 0.03) and tumor-replaced nodes (P < 0.01) independently predicted FN SLN. Cross-validation of the model created with Bayesian network analysis effectively predicted FN SLN (AUC = 0.84-0.86). The positive and negative predictive values of the model are 83% and 97%, respectively. This study supports a minimum threshold of 3 nodes for targeted nodal assessment in colorectal cancer, and establishes a sufficient basis to conclude that SLN mapping and biopsy cannot be justified in the presence of clinically apparent tumor-replaced nodes.
Land use/land cover mapping using multi-scale texture processing of high resolution data
NASA Astrophysics Data System (ADS)
Wong, S. N.; Sarker, M. L. R.
2014-02-01
Land use/land cover (LULC) maps are useful for many purposes, and remote sensing techniques have long been used for LULC mapping with different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. the Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and of all bands together, using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed better performance (overall accuracy = 90.10%) for LULC mapping, whereas a single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement of the overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. All three classifiers showed high accuracy when using the different texture approaches, but SVM generally showed higher accuracy (90.10%) compared to MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
NASA Astrophysics Data System (ADS)
Khuluse-Makhanya, Sibusisiwe; Stein, Alfred; Breytenbach, André; Gxumisa, Athi; Dudeni-Tlhone, Nontembeko; Debba, Pravesh
2017-10-01
In urban areas the deterioration of air quality as a result of fugitive dust receives less attention than the more prominent traffic and industrial emissions. We assessed whether fugitive dust emission sources in the neighbourhood of an air quality monitor are predictors of ambient PM10 concentrations on days characterized by strong local winds. An ensemble maximum likelihood method is developed for land cover mapping in the vicinity of an air quality station using SPOT 6 multi-spectral images. The ensemble maximum likelihood classifier is developed through multiple training iterations for improved accuracy of the bare soil class. Five primary land cover classes are considered, namely built-up areas, vegetation, bare soil, water and 'mixed bare soil' which denotes areas where soil is mixed with either vegetation or synthetic materials. Preliminary validation of the ensemble classifier for the bare soil class results in an accuracy range of 65-98%. Final validation of all classes results in an overall accuracy of 78%. Next, cluster analysis and a varying intercepts regression model are used to assess the statistical association between land cover, a fugitive dust emissions proxy and observed PM10. We found that land cover patterns in the neighbourhood of an air quality station are significant predictors of observed average PM10 concentrations on days when wind speeds are conducive for dust emissions. This study concludes that in the absence of an emissions inventory for ambient particulate matter, PM10 emitted from dust reservoirs can be statistically accounted for by land cover characteristics. This supports the use of land cover data for improved prediction of PM10 at locations without air quality monitoring stations.
DeChaine, Eric G.; Anderson, Stacy A.; McNew, Jennifer M.; Wendling, Barry M.
2013-01-01
Arctic-alpine plants in the genus Saxifraga L. (Saxifragaceae Juss.) provide an excellent system for investigating the process of diversification in northern regions. Yet sect. Trachyphyllum (Gaud.) Koch, which comprises about 8 to 26 species, has still not been explored by molecular systematists, even though taxonomists concur that the section needs to be thoroughly re-examined. Our goals were to use chloroplast trnL-F and nuclear ITS DNA sequence data to circumscribe the section phylogenetically, test models of geographically-based population divergence, and assess the utility of morphological characters in estimating evolutionary relationships. To do so, we sequenced both genetic markers for 19 taxa within the section. The phylogenetic inferences of sect. Trachyphyllum using maximum likelihood and Bayesian analyses showed that the section is polyphyletic, with S. aspera L. and S. bryoides L. falling outside the main clade. In addition, the analyses supported several taxonomic re-classifications to prior names. We used two approaches to test biogeographic hypotheses: i) a coalescent approach in Mesquite to test the fit of our reconstructed gene trees to geographically-based models of population divergence and ii) a maximum likelihood inference in Lagrange. These tests uncovered strong support for an origin of the clade in the Southern Rocky Mountains of North America followed by dispersal and divergence episodes across refugia. Finally, we adopted a stochastic character mapping approach in SIMMAP to investigate the utility of morphological characters in estimating evolutionary relationships among taxa. We found that few morphological characters were phylogenetically informative and many were misleading. Our molecular analyses provide a foundation for the diversity and evolutionary relationships within sect. Trachyphyllum and hypotheses for better understanding the patterns and processes of divergence in this section, other saxifrages, and plants inhabiting the North Pacific Rim. PMID:23922810
Autosomal dominant retinitis pigmentosa: No evidence for nonallelic genetic heterogeneity on 3q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar-Singh, R.; He Wang; Humphries, P.
1993-02-01
Since the initial report of linkage of autosomal dominant retinitis pigmentosa (adRP) to the long arm of chromosome 3, several mutations in the gene encoding rhodopsin, which also maps to 3q, have been reported in adRP pedigrees. However, there has been some discussion as to the possibility of a second adRP locus on 3q. This suggestion has important diagnostic and research implications and must raise doubts about the usefulness of linked markers for reliable diagnosis of RP patients. In order to address this issue the authors have performed an admixture test (A-test) on 10 D3S47-linked adRP pedigrees and have found a likelihood ratio of heterogeneity versus homogeneity of 4.90. They performed a second A-test, combining the data from all families with known rhodopsin mutations. In this test they obtained a reduced likelihood ratio of heterogeneity versus homogeneity, of 1.0. On the basis of these statistical analyses they have found no significant support for two adRP loci on chromosome 3q. Furthermore, using 40 CEPH families, they have localized the rhodopsin gene to the D3S47-D3S20 interval, with a maximum lod score (Z_m) of 20, and have found that the order qter-D3S47-rhodopsin-D3S20-cen is significantly more likely than any other order. In addition, they have mapped (Z_m = 30) the microsatellite marker D3S621 relative to other loci in this region of the genome. 27 refs., 3 figs., 3 tabs.
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES.
The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
Analysis of Multispectral Time Series for Supporting Forest Management Plans
NASA Astrophysics Data System (ADS)
Simoniello, T.; Carone, M. T.; Costantini, G.; Frattegiani, M.; Lanfredi, M.; Macchiato, M.
2010-05-01
Adequate forest management requires specific plans based on updated and detailed mapping. Multispectral satellite time series have been widely applied to forest monitoring and studies at different scales thanks to their capability of providing synoptic information on some basic parameters descriptive of vegetation distribution and status. As an inexpensive tool for supporting forest management plans in an operative context, we tested the use of Landsat-TM/ETM time series (1987-2006) in the high Agri Valley (Southern Italy) for planning field surveys as well as for the integration of existing cartography. As a preliminary activity, to make all scenes radiometrically consistent, the no-change regression normalization was applied to the time series; then all the available data concerning forest maps, municipal boundaries, water basins, rivers, and roads were overlaid in a GIS environment. From the 2006 image we elaborated the NDVI map and analyzed the distribution for each land cover class. To separate the physiological variability and identify the anomalous areas, a threshold on the distributions was applied. To label the non-homogeneous areas, a multitemporal analysis was performed by separating heterogeneity due to cover changes from that linked to basic-unit mapping and classification-labelling aggregations. A map of priority areas was then produced to support the field survey plan. To analyze the territorial evolution, historical land cover maps were elaborated by adopting a hybrid classification approach based on a preliminary segmentation, the identification of training areas, and a subsequent maximum likelihood categorization. Such an analysis was fundamental for the general assessment of the territorial dynamics and in particular for the evaluation of the efficacy of past intervention activities.
NASA Astrophysics Data System (ADS)
Petropoulos, George P.; Kontoes, Charalambos C.; Keramitsoglou, Iphigenia
2012-08-01
In this study, the potential of the EO-1 Advanced Land Imager (ALI) radiometer for land cover and especially burnt area mapping from a single image analysis is investigated. Co-orbital imagery from the Landsat Thematic Mapper (TM) was also utilised for comparison purposes. Both images were acquired shortly after the suppression of a fire that occurred during the summer of 2009 north-east of Athens, the capital of Greece. The Maximum Likelihood (ML), Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) classifiers were parameterised and subsequently applied to the acquired satellite datasets. Evaluation of the land use/cover mapping accuracy was based on the error matrix statistics. Also, the McNemar test was used to evaluate the statistical significance of the differences between the approaches tested. Derived burnt area estimates were validated against the operationally deployed Services and Applications For Emergency Response (SAFER) Burnt Scar Mapping service. All classifiers applied to either ALI or TM imagery proved flexible enough to map land cover and also to extract the burnt area from other land surface types. The highest total classification accuracy and burnt area detection capability was returned by the application of SVMs to ALI data. This was due to the SVMs' ability to identify an optimal separating hyperplane for best class separation, which was able to better exploit ALI's advanced technological characteristics in comparison to those of the TM sensor. This study is, to our knowledge, the first of its kind, effectively demonstrating the benefits of applying SVMs to ALI data, and further implying that ALI technology may prove highly valuable in mapping burnt areas and land use/cover if it is incorporated into the Landsat 8 mission, planned to be launched in the coming years.
Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties
NASA Astrophysics Data System (ADS)
Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms of spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly two times compared with a weighted least square and filtered backprojection reconstruction of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminishes enthusiasm for exploring future application of the mixture penalty.
NASA Technical Reports Server (NTRS)
Spruce, Joseph P.; Ross, Kenton W.; Graham, William D.
2006-01-01
Hurricane Katrina inflicted widespread damage to vegetation in southwestern coastal Mississippi upon landfall on August 29, 2005. Storm damage to surface vegetation types at the NASA John C. Stennis Space Center (SSC) was mapped and quantified using IKONOS data originally acquired on September 2, 2005, and later obtained via a Department of Defense ClearView contract. NASA SSC management required an assessment of the hurricane's impact on the 125,000-acre buffer zone used to mitigate rocket engine testing noise and vibration impacts and to manage forestry and fire risk. This study employed ERDAS IMAGINE software to apply traditional classification techniques to the IKONOS data. Spectral signatures were collected from multiple ISODATA classifications of subset areas across the entire region and then appended to a master file representative of major targeted cover type conditions. The master file was subsequently used with the IKONOS data and with a maximum likelihood algorithm to produce a supervised classification, later refined using GIS-based editing. The final results enabled mapped, quantitative areal estimates of hurricane-induced damage according to general surface cover type. The IKONOS classification accuracy was assessed using higher resolution aerial imagery and field survey data. In-situ data and GIS analysis indicate that the results compare well to FEMA maps of flooding extent. The IKONOS classification also mapped open areas with woody storm debris. The detection of such storm damage categories is potentially useful for government officials responsible for hurricane disaster mitigation.
Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers
Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.
2012-01-01
Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, and comparable estimation in the presence of low capture heterogeneity with a large sample size, when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559
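To illustrate the style of likelihood involved (though not the paper's heterogeneity models), the sketch below profiles the classical equal-catchability (M0) capture-recapture likelihood over abundance N, with the capture probability set to its conditional MLE at each N. Survey counts are invented.

    import numpy as np
    from scipy.special import gammaln

    def log_lik(N, T, n_caps, distinct):
        """M0 log-likelihood: N animals, T surveys, n_caps total captures,
        `distinct` unique genotypes observed; p profiled at n_caps/(N*T)."""
        if N < distinct:
            return -np.inf
        p = n_caps / (N * T)
        return (gammaln(N + 1) - gammaln(N - distinct + 1)
                + n_caps * np.log(p) + (N * T - n_caps) * np.log1p(-p))

    Ns = np.arange(600, 2000)
    ll = np.array([log_lik(N, T=3, n_caps=900, distinct=550) for N in Ns])
    print("MLE of abundance N:", Ns[ll.argmax()])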
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; the measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focussed on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms for the environmental application of tomography requires the use of a battery of test concentration data that models reality and tests the limits of the algorithms before field implementation.
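Of the four algorithms compared, MLEM has the simplest closed-form update: a multiplicative correction of each pixel by the back-projected ratio of measured to predicted path integrals. A minimal sketch, with a tiny synthetic system matrix standing in for the open-path beam geometry:

    import numpy as np

    def mlem(A, y, n_iter=100):
        """A: (n_beams, n_pixels) path-length matrix; y: beam measurements."""
        x = np.ones(A.shape[1])                  # flat initial concentration map
        col_sum = A.sum(axis=0)
        for _ in range(n_iter):
            proj = A @ x                         # predicted path integrals
            x *= (A.T @ (y / np.maximum(proj, 1e-12))) / col_sum
        return x

    A = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])  # 3 beams, 3 pixels
    x_true = np.array([2.0, 0.5, 1.0])
    print(mlem(A, A @ x_true).round(3))          # recovers x_true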
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
Kamneva, Olga K; Rosenberg, Noah A
2017-01-01
Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378
Free energy reconstruction from steered dynamics without post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
Master teachers' responses to twenty literacy and science/mathematics practices in deaf education.
Easterbrooks, Susan R; Stephenson, Brenda; Mertens, Donna
2006-01-01
Under a grant to improve outcomes for students who are deaf or hard of hearing awarded to the Association of College Educators--Deaf/Hard of Hearing, a team identified content that all teachers of students who are deaf and hard of hearing must understand and be able to teach. Also identified were 20 practices associated with content standards (10 each, literacy and science/mathematics). Thirty-seven master teachers identified by grant agents rated the practices on a Likert-type scale indicating the maximum benefit of each practice and maximum likelihood that they would use the practice, yielding a likelihood-impact analysis. The teachers showed strong agreement on the benefits and likelihood of use of the rated practices. Concerns about implementation of many of the practices related to time constraints and mixed-ability classrooms were themes of the reviews. Actions for teacher preparation programs were recommended.
Maximum-likelihood estimation of parameterized wavefronts from multifocal data
Sakamoto, Julia A.; Barrett, Harrison H.
2012-01-01
A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
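A toy version of the estimation strategy just described: maximize a likelihood riddled with local extrema by simulated annealing. The 1-D "wavefront" with two Zernike-like coefficients and the oscillatory irradiance model below are purely illustrative assumptions, not the paper's optical model.

    import numpy as np
    from scipy.optimize import dual_annealing

    x = np.linspace(-1, 1, 200)
    true_c = np.array([1.3, -0.8])

    def irradiance(c):
        phase = c[0] * x + c[1] * (2 * x**2 - 1)       # tilt + defocus-like terms
        return 1 + np.cos(4 * np.pi * phase + 0.5)     # oscillatory: many local extrema

    data = irradiance(true_c) + 0.05 * np.random.randn(x.size)
    nll = lambda c: np.sum((data - irradiance(c)) ** 2)  # Gaussian negative log-likelihood
    fit = dual_annealing(nll, bounds=[(-2, 2), (-2, 2)], seed=0)
    print("estimated coefficients:", fit.x.round(2))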
Finite-size analysis of continuous-variable measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Xueying; Zhang, Yichen; Zhao, Yijia; Wang, Xiangyu; Yu, Song; Guo, Hong
2017-10-01
We study the impact of the finite-size effect on the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, mainly considering the finite-size effect on the parameter estimation procedure. The central-limit theorem and maximum likelihood estimation theorem are used to estimate the parameters. We also analyze the relationship between the number of exchanged signals and the optimal modulation variance in the protocol. It is proved that when Charlie's position is close to Bob, the CV-MDI QKD protocol has the farthest transmission distance in the finite-size scenario. Finally, we discuss the impact of finite-size effects related to the practical detection in the CV-MDI QKD protocol. The overall results indicate that the finite-size effect has a great influence on the secret-key rate of the CV-MDI QKD protocol and should not be ignored.
Regional Estimates of Drought-Induced Tree Canopy Loss across Texas
NASA Astrophysics Data System (ADS)
Schwantes, A.; Swenson, J. J.; González-Roglich, M.; Johnson, D. M.; Domec, J. C.; Jackson, R. B.
2015-12-01
The severe drought of 2011 killed millions of trees across the state of Texas. Drought-induced tree mortality can have significant impacts on carbon cycling, regional biophysics, and community composition. We quantified canopy cover loss across the state using remotely sensed imagery from before and after the drought at multiple scales. First, we classified ~200 orthophotos (1-m spatial resolution) from the National Agriculture Imagery Program, using a supervised maximum likelihood classification. Area of canopy cover loss in these classifications was highly correlated (R2 = 0.8) with ground estimates of canopy cover loss, measured in 74 plots across 15 different sites in Texas. These 1-m orthophoto classifications were then used to calibrate and validate coarser scale (30-m) Landsat imagery to create wall-to-wall tree canopy cover loss maps across the state of Texas. We quantified percent dead and live canopy within each pixel of Landsat to create continuous maps of dead and live tree cover, using two approaches: (1) a zero-inflated beta distribution model and (2) a random forest algorithm. Widespread canopy loss occurred across all the major natural systems of Texas, with the Edwards Plateau region most affected. In this region, on average, 10% of the forested area was lost due to the 2011 drought. We also identified climatic thresholds that controlled the spatial distribution of tree canopy loss across the state. Surprisingly, however, there were many local hot spots of canopy loss, suggesting that climatic factors alone could not explain the spatial patterns of canopy loss; other factors related to soil, landscape, management, and stand density also likely played a role. As increases in extreme droughts are predicted to occur with climate change, it will become important to define methods that can detect associated drought-induced tree mortality across large regions. These maps could then be used (1) to quantify impacts on carbon cycling and regional biophysics, (2) to better understand the spatiotemporal dynamics of tree mortality, and (3) to calibrate and/or validate mortality algorithms in regional models.
Errors in Seismic Hazard Assessment are Creating Huge Human Losses
NASA Astrophysics Data System (ADS)
Bela, J.
2015-12-01
The current practice of representing earthquake hazards to the public based upon their perceived likelihood or probability of occurrence is proven now by the global record of actual earthquakes to be not only erroneous and unreliable, but also too deadly! Earthquake occurrence is sporadic, and therefore assumptions of earthquake frequency and return period are not only misleading, but also categorically false. More than 700,000 people have now lost their lives (2000-2011), wherein 11 of the World's Deadliest Earthquakes have occurred in locations where probability-based seismic hazard assessments had predicted only low seismic hazard. Unless seismic hazard assessment and the setting of minimum earthquake design safety standards for buildings and bridges are based on a more realistic deterministic recognition of "what can happen" rather than on what mathematical models suggest is "most likely to happen," such future huge human losses can only be expected to continue! The actual earthquake events that did occur were at or near the maximum potential-size event that either already had occurred in the past or were geologically known to be possible. Haiti's M7 earthquake, 2010 (with > 222,000 fatalities) meant the dead could not even be buried with dignity. Japan's catastrophic Tohoku earthquake, 2011, a M9 megathrust earthquake, unleashed a tsunami that not only obliterated coastal communities along the northern Japanese coast, but also claimed > 20,000 lives. This tsunami flooded nuclear reactors at Fukushima, causing 4 explosions and 3 reactors to melt down. But while this history of huge human losses due to erroneous and misleading seismic hazard estimates, despite its wrenching pain, cannot be unlived, if faced with courage and a more realistic deterministic estimate of "what is possible," it need not be lived again. An objective testing of the results of global probability-based seismic hazard maps against real occurrences has never been done by the GSHAP team, even though the obvious inadequacy of the GSHAP map could have been established in the course of a simple check before the project's completion. The doctrine of "PSHA exceptionalism" that created the maps can only be expunged by carefully examining the facts . . . which unfortunately include huge human losses!
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.
Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R
2018-05-26
Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population, and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study, designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution gains importance in survival analysis from its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model, where misspecified or omitted covariates are described by an unobservable random variable. Although the distribution of the frailty is generally assumed to be continuous, it is appropriate in some circumstances to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived, and maximum likelihood estimation procedures for the parameters are studied. The fit of the models to an earthquake data set from Turkey is then examined.
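For the Lindley density f(x; θ) = θ²/(θ + 1) · (1 + x) · e^(−θx), the likelihood equation reduces to a quadratic in θ, so the MLE has a closed form. A short sketch, with the simulation setup invented for illustration:

import numpy as np

def lindley_mle(x):
    # d/dθ log-likelihood = 0 reduces to  xbar·θ² + (xbar − 1)·θ − 2 = 0
    xbar = x.mean()
    return (-(xbar - 1.0) + np.sqrt((xbar - 1.0) ** 2 + 8.0 * xbar)) / (2.0 * xbar)

# Lindley(θ) is the mixture Exp(θ) w.p. θ/(θ+1) and Gamma(2, θ) w.p. 1/(θ+1)
rng = np.random.default_rng(0)
theta, n = 1.7, 5000
take_exp = rng.random(n) < theta / (theta + 1.0)
x = np.where(take_exp,
             rng.exponential(1.0 / theta, n),
             rng.gamma(2.0, 1.0 / theta, n))
print(lindley_mle(x))   # close to 1.7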
The Scattered Kuiper Belt Objects
NASA Astrophysics Data System (ADS)
Trujillo, C. A.; Jewitt, D. C.; Luu, J. X.
1999-09-01
We describe a continuing survey of the Kuiper Belt conducted at the 3.6-m Canada-France-Hawaii Telescope on Mauna Kea, Hawaii. The survey employs a 12288 x 8192 pixel CCD mosaic to image the sky to red magnitude 24. All detected objects are targeted for systematic follow-up observations, allowing us to determine their orbital characteristics. Three new members of the rare Scattered Kuiper Belt Object class have been identified, bringing the known population of such objects to four. The SKBOs are thought to have been scattered outward by Neptune and are a potential source of the short-period comets. Using a maximum likelihood method, we place observational constraints on the total number and mass of the SKBOs.
DSN telemetry system performance using a maximum likelihood convolutional decoder
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Kemp, R. P.
1977-01-01
Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7, and special test software. The test results confirm the superiority of rate 1/3 over rate 1/2. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.
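Maximum likelihood decoding of convolutional codes is conventionally realized with the Viterbi algorithm. The sketch below uses a toy rate-1/2, constraint-length-3 code with hard decisions, rather than the constraint-length-7 soft-decision decoder tested here:

G = (0b111, 0b101)          # generator polynomials (rate 1/2, K = 3)
K = 3
NSTATES = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]   # parity per tap set
        state = reg >> 1
    return out

def viterbi(received):
    # Hard-decision ML decoding: minimize Hamming distance over trellis paths
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                sym = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(u != v for u, v in zip(r, sym))
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(NSTATES), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                   # flip one channel bit
print(viterbi(rx) == msg)    # single error corrected -> True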
Soft decoding a self-dual (48, 24; 12) code
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code, which promises a 7.7-dB theoretical coding gain under maximum likelihood.
Effects of time-shifted data on flight determined stability and control derivatives
NASA Technical Reports Server (NTRS)
Steers, S. T.; Iliff, K. W.
1975-01-01
Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
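The comparison amounts to a nearest-mean rule versus a Gaussian class-conditional likelihood rule. A minimal sketch on synthetic two-class data (the simulated distributions are illustrative, not the crop spectra used in the study):

import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian classes with unequal covariances
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 2.0]], 500)
X1 = rng.multivariate_normal([2, 1], [[2.0, -0.4], [-0.4, 0.5]], 500)
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 500)

means = [Xc.mean(axis=0) for Xc in (X0, X1)]
covs = [np.cov(Xc.T) for Xc in (X0, X1)]

# Minimum distance: assign each sample to the nearest class mean (Euclidean)
d = np.stack([np.sum((X - m) ** 2, axis=1) for m in means])
md_pred = np.argmin(d, axis=0)

# Gaussian maximum likelihood: maximize the class-conditional log-density
def loglik(Xs, m, S):
    diff = Xs - m
    return -0.5 * (np.log(np.linalg.det(S))
                   + np.sum(diff @ np.linalg.inv(S) * diff, axis=1))

ll = np.stack([loglik(X, m, S) for m, S in zip(means, covs)])
ml_pred = np.argmax(ll, axis=0)

print("min-distance accuracy:  ", np.mean(md_pred == y))
print("max-likelihood accuracy:", np.mean(ml_pred == y))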
Maximum likelihood conjoint measurement of lightness and chroma.
Rogers, Marie; Knoblauch, Kenneth; Franklin, Anna
2016-03-01
Color varies along dimensions of lightness, hue, and chroma. We used maximum likelihood conjoint measurement to investigate how lightness and chroma influence color judgments. Observers judged lightness and chroma of stimuli that varied in both dimensions in a paired-comparison task. We modeled how changes in one dimension influenced judgment of the other. An additive model best fit the data in all conditions except for judgment of red chroma where there was a small but significant interaction. Lightness negatively contributed to perception of chroma for red, blue, and green hues but not for yellow. The method permits quantification of lightness and chroma contributions to color appearance.
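In maximum likelihood conjoint measurement, perceptual scale values for each dimension are chosen to maximize the likelihood of the observed paired-comparison responses under an additive decision model with Gaussian noise. A sketch under invented scales and noise level (the stimulus design is not the paper's):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
L = 5                                   # levels per dimension
true_light = np.linspace(0.0, 2.0, L)   # invented "ground truth" scales
true_chroma = np.linspace(0.0, 1.0, L)

# Each trial: stimulus 1 = (l1, c1), stimulus 2 = (l2, c2); the observer
# picks the stimulus with the higher internal response plus Gaussian noise
trials = rng.integers(0, L, (2000, 4))
l1, c1, l2, c2 = trials.T
signal = (true_chroma[c2] + true_light[l2]) - (true_chroma[c1] + true_light[l1])
resp = signal + rng.normal(0, 0.3, len(signal)) > 0

def nll(params):
    # First level of each scale pinned at 0 for identifiability
    psi_l = np.concatenate(([0.0], params[:L - 1]))
    psi_c = np.concatenate(([0.0], params[L - 1:]))
    d = (psi_c[c2] + psi_l[l2]) - (psi_c[c1] + psi_l[l1])
    p = norm.cdf(np.where(resp, d, -d))       # probability of observed choice
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

fit = minimize(nll, np.zeros(2 * (L - 1)), method="BFGS")
print(fit.x)    # recovers the scales up to the (unmodeled) noise SD factor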
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives: Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model-fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods: Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results: Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions: The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
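The two diagnostics trade accuracy for cost: ECD refits the model with each observation deleted, while EIF approximates that change from quantities available at the full-sample fit. A generic one-parameter illustration with an exponential-rate MLE (far simpler than the QTL variance-components setting of the paper):

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)
x[0] = 25.0                              # plant one influential outlier
n = len(x)
lam = 1.0 / x.mean()                     # full-sample MLE of exponential rate

# ECD: exact refit of the MLE with each observation deleted in turn
ecd = (n - 1) / (x.sum() - x) - lam

# EIF-style one-step approximation from per-observation scores and the Hessian
score = 1.0 / lam - x                    # d/d(lam) of each log-density at lam
hessian = -n / lam**2                    # d2/d(lam)2 of the full log-likelihood
eif = score / hessian                    # approx. change in MLE per deletion

print("flagged observation:", int(np.argmax(np.abs(ecd))))   # index 0
print("ECD[0] = %.5f, EIF[0] = %.5f" % (ecd[0], eif[0]))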
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of the bias in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
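The weighting enters by multiplying each observation's log-likelihood contribution by the inverse of its selection probability, in the spirit of a Horvitz-Thompson pseudo-likelihood. A sketch for a normal model under size-biased selection (the sampling scheme and parameter values are invented):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
# Population of log-concentrations; higher values are oversampled
z = rng.normal(1.0, 0.8, 100_000)
pi = np.clip(0.001 * np.exp(0.9 * z), 0.0, 1.0)   # selection probabilities
sel = rng.random(z.size) < pi
zs, ws = z[sel], 1.0 / pi[sel]                    # sample + inverse-prob. weights

def nll(params, weights):
    mu, log_sd = params
    return -np.sum(weights * norm.logpdf(zs, mu, np.exp(log_sd)))

unweighted = minimize(nll, [0.0, 0.0], args=(np.ones(zs.size),)).x
weighted = minimize(nll, [0.0, 0.0], args=(ws,)).x
print("unweighted:", unweighted[0], np.exp(unweighted[1]))  # biased upward
print("weighted:  ", weighted[0], np.exp(weighted[1]))      # near (1.0, 0.8)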
Stamatakis, Alexandros
2006-11-01
RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets of 4000 or more taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date, containing 25,057 taxa (1463 bp) and 2182 taxa (51,089 bp), respectively. icwww.epfl.ch/~stamatak
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
NASA Technical Reports Server (NTRS)
Zhuang, Xin
1990-01-01
LANDSAT Thematic Mapper (TM) data for March 23, 1987, with accompanying ground truth data for the study area in Miami County, IN, were used to determine crop residue type and class. Principal components and spectral ratioing transformations were applied to the LANDSAT TM data. A geographic information system (GIS) layer of land ownership was added to each original image as the eighth band of data in an attempt to improve classification. Maximum likelihood, minimum distance, and neural network classifiers were used to classify the original, transformed, and GIS-enhanced remotely sensed data. Crop residues could be separated from one another and from bare soil and other biomass. Two types of crop residue and four classes were identified from each LANDSAT TM image. The maximum likelihood classifier performed the best classification for each original image without need of any transformation. The neural network classifier was able to improve the classification by incorporating a GIS layer of land ownership as an eighth band of data. The maximum likelihood classifier was unable to consider this eighth band of data, and thus its results could not be improved by its consideration.
Mapping Quantitative Traits in Unselected Families: Algorithms and Examples
Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David
2009-01-01
Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
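The contrast between ground-state (ML) decoding and finite-temperature maximum-entropy decoding can be reproduced on an exhaustively enumerable toy Ising model; the couplings, fields, and temperature below are invented for illustration:

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8
truth = rng.choice([-1, 1], n)                       # transmitted bits
# Couplings that favour the true word, plus noisy local fields
J = np.triu(rng.uniform(0.1, 0.4, (n, n)) * np.outer(truth, truth), 1)
h = 0.8 * truth + rng.normal(0, 1.0, n)              # noisy received evidence

states = np.array(list(itertools.product([-1, 1], repeat=n)))
E = -np.einsum('si,ij,sj->s', states, J, states) - states @ h

ml_bits = states[np.argmin(E)]                       # ML decode: ground state

T = 1.0                                              # maxent decode at finite T
p = np.exp(-(E - E.min()) / T)
p /= p.sum()
maxent_bits = np.sign(p @ states)                    # sign of Boltzmann marginals

print("ML bit errors:    ", int(np.sum(ml_bits != truth)))
print("maxent bit errors:", int(np.sum(maxent_bits != truth)))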
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L
2018-01-01
We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.
Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M
2016-02-01
Two-part random effects models (Olsen and Schafer [1]; Tooze et al. [2]) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong [3]). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
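The two-part likelihood factors into a logistic component for zero versus positive days and a conditional density for the positive amounts. A minimal cross-sectional sketch with a lognormal positive part and no random effects (unlike the longitudinal models compared in the paper):

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
drink = rng.random(n) < 0.4                       # 40% drinking days
y = np.where(drink, rng.lognormal(1.2, 0.7, n), 0.0)

def nll(params):
    logit_p, mu, log_sd = params
    p = expit(logit_p)
    pos = y > 0
    safe_y = np.where(pos, y, 1.0)                # avoid log(0) on zero days
    ll = np.sum(~pos * np.log1p(-p))              # zero part
    ll += np.sum(pos * (np.log(p)                 # positive part: lognormal
                        + norm.logpdf(np.log(safe_y), mu, np.exp(log_sd))
                        - np.log(safe_y)))
    return -ll

fit = minimize(nll, [0.0, 0.0, 0.0]).x
print(expit(fit[0]), fit[1], np.exp(fit[2]))      # near 0.4, 1.2, 0.7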
A case study for the integration of predictive mineral potential maps
NASA Astrophysics Data System (ADS)
Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye
2014-09-01
This study elaborates mineral potential maps using various models and verifies their accuracy for epithermal gold-silver (Au-Ag) deposits in a Geographic Information System (GIS) environment, assuming that all deposits share a common genesis. The maps of potential Au-Ag deposits were produced from geological data in the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: (1) identification of spatial relationships, (2) quantification of such relationships, and (3) combination of multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and were then recombined using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), whereas the accuracy was 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. The mineral potential map can provide useful information for mineral resource development.
Phylogenetically marking the limits of the genus Fusarium for post-Article 59 usage
USDA-ARS?s Scientific Manuscript database
Fusarium (Hypocreales, Nectriaceae) is one of the most important and systematically challenging groups of mycotoxigenic, plant pathogenic, and human pathogenic fungi. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial nucleotide sequences of genes encod...
Predicting protein-protein interactions from protein domains using a set cover approach.
Huang, Chengbang; Morcos, Faruck; Kanaan, Simon P; Wuchty, Stefan; Chen, Danny Z; Izaguirre, Jesús A
2007-01-01
One goal of contemporary proteome research is the elucidation of cellular protein interactions. Based on currently available protein-protein interaction and domain data, we introduce a novel method, Maximum Specificity Set Cover (MSSC), for the prediction of protein-protein interactions. In our approach, we map the relationship between interactions of proteins and their corresponding domain architectures to a generalized weighted set cover problem. The application of a greedy algorithm provides sets of domain interactions which explain the presence of protein interactions to the largest degree of specificity. Utilizing domain and protein interaction data of S. cerevisiae, MSSC enables prediction of previously unknown protein interactions, links that are well supported by a high tendency of coexpression and functional homogeneity of the corresponding proteins. Focusing on concrete examples, we show that MSSC reliably predicts protein interactions in well-studied molecular systems, such as the 26S proteasome and RNA polymerase II of S. cerevisiae. We also show that the quality of the predictions is comparable to the Maximum Likelihood Estimation while MSSC is faster. This new algorithm and all data sets used are accessible through a Web portal at http://ppi.cse.nd.edu.
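A greedy weighted set cover is simple to sketch; the covered-per-weight scoring below is the generic heuristic, not necessarily the exact MSSC specificity objective, and the toy data are invented:

def greedy_set_cover(universe, sets, weight):
    """Greedily cover `universe` (protein interactions) with domain-pair
    `sets`, preferring sets covering many new elements per unit weight."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & sets[s]) / weight[s])
        if not uncovered & sets[best]:
            break                        # remaining elements are uncoverable
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy example: domain pairs "explaining" observed protein interactions
interactions = {"p1-p2", "p1-p3", "p2-p4", "p3-p4"}
domain_pairs = {"dA-dB": {"p1-p2", "p1-p3"},
                "dB-dC": {"p2-p4"},
                "dA-dC": {"p1-p3", "p3-p4"},
                "dC-dC": {"p2-p4", "p3-p4"}}
weights = {d: 1.0 for d in domain_pairs}
print(greedy_set_cover(interactions, domain_pairs, weights))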
AUV SLAM and Experiments Using a Mechanical Scanning Forward-Looking Sonar
He, Bo; Liang, Yan; Feng, Xiao; Nian, Rui; Yan, Tianhong; Li, Minghui; Zhang, Shujing
2012-01-01
Navigation technology is one of the most important challenges in the applications of autonomous underwater vehicles (AUVs) which navigate in the complex undersea environment. The ability of localizing a robot and accurately mapping its surroundings simultaneously, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite of truly autonomous robots. In this paper, a modified-FastSLAM algorithm is proposed and used in the navigation for our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified-FastSLAM implements the update relying on the on-board sensors of C-Ranger. On the other hand, the algorithm employs the data association which combines the single particle maximum likelihood method with modified negative evidence method, and uses the rank-based resampling to overcome the particle depletion problem. In order to verify the feasibility of the proposed methods, both simulation experiments and sea trials for C-Ranger are conducted. The experimental results show the modified-FastSLAM employed for the navigation of the C-Ranger AUV is much more effective and accurate compared with the traditional methods. PMID:23012549
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furlong, R.A.; Goudie, D.R.; Carter, N.P.
1993-06-01
Meiotic recombination in flow-sorted single sperm was used to analyze four highly polymorphic microsatellite markers on the long arm of chromosome 9. The microsatellites comprised three tightly linked markers: 9CMP1 (D9S109), 9CMP2 (D9S127), and D9S53, which map to 9q31, and a reference marker, ASS, which is located in 9q34.1. Haplotypes of single sperm were assessed by using PCR in a single-step multiplex reaction to amplify each locus. Recombinant haplotypes were identified by their relative infrequency and were analyzed using THREELOC, a maximum-likelihood-analysis program, and an adaptation of CRI-MAP. The most likely order of these markers was cen-D9S109-D9S127-D9S53-ASS-tel, with D9S109, D9S127, and D9S53 being separated by a genetic distance of approximately 3%. The order of the latter three markers did not, however, achieve statistical significance using the THREELOC program. 21 refs., 2 figs., 4 tabs.
NASA Astrophysics Data System (ADS)
Saran, Sameer; Sterk, Geert; Kumar, Suresh
2007-10-01
Land use/cover is an important watershed surface characteristic that affects surface runoff and erosion. Many of the available hydrological models divide the watershed into Hydrological Response Units (HRUs), which are spatial units with expected similar hydrological behaviour. The division into HRUs requires good-quality spatial data on land use/cover. This paper presents different approaches to attaining an optimal land use/cover map based on remote sensing imagery for a Himalayan watershed in northern India. First, digital classifications using a maximum likelihood classifier (MLC) and a decision tree classifier were applied. The results obtained from the decision tree were better, and improved further after post-classification sorting. But the obtained land use/cover map was still not sufficient for the delineation of HRUs, since the agricultural land use/cover class did not discriminate between the two major crops in the area, i.e. paddy and maize. Therefore we adopted a visual classification approach using optical data alone and also fused with ENVISAT ASAR data. This second step, with a detailed classification system, resulted in better classification accuracy within the 'agricultural land' class, which will be further combined with topography and soil type to derive HRUs for physically-based hydrological modelling.
Hühn, M
1995-05-01
Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
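The flavor of such a bound can be reproduced with an expected-lod calculation: the per-individual expected lod at recombination fraction theta, divided into the lod threshold, gives a lower bound on sample size. The genotype frequencies below assume the recessive F2 sub-population with a codominant marker, as an illustration rather than the paper's exact expressions:

import numpy as np

def min_sample_size(theta, threshold=3.0):
    # Marker genotype frequencies among recessive F2 individuals
    p_linked = np.array([(1 - theta) ** 2, 2 * theta * (1 - theta), theta ** 2])
    p_null = np.array([0.25, 0.5, 0.25])      # no linkage (theta = 0.5)
    # Expected lod contributed by one individual
    elod = np.sum(p_linked * np.log10(p_linked / p_null))
    return int(np.ceil(threshold / elod))

for theta in (0.05, 0.10, 0.20, 0.30):
    print(theta, min_sample_size(theta))      # required n grows with theta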
Neural networks for satellite remote sensing and robotic sensor interpretation
NASA Astrophysics Data System (ADS)
Martens, Siegfried
Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
Probabilistic tsunami hazard assessment at Seaside, Oregon, for near-and far-field seismic sources
Gonzalez, F.I.; Geist, E.L.; Jaffe, B.; Kanoglu, U.; Mofjeld, H.; Synolakis, C.E.; Titov, V.V.; Areas, D.; Bellomo, D.; Carlton, D.; Horning, T.; Johnson, J.; Newman, J.; Parsons, T.; Peters, R.; Peterson, C.; Priest, G.; Venturato, A.; Weber, J.; Wong, F.; Yalciner, A.
2009-01-01
The first probabilistic tsunami flooding maps have been developed. The methodology, called probabilistic tsunami hazard assessment (PTHA), integrates tsunami inundation modeling with methods of probabilistic seismic hazard assessment (PSHA). Application of the methodology to Seaside, Oregon, has yielded estimates of the spatial distribution of 100- and 500-year maximum tsunami amplitudes, i.e., amplitudes with 1% and 0.2% annual probability of exceedance. The 100-year tsunami is generated most frequently by far-field sources in the Alaska-Aleutian Subduction Zone and is characterized by maximum amplitudes that do not exceed 4 m, with an inland extent of less than 500 m. In contrast, the 500-year tsunami is dominated by local sources in the Cascadia Subduction Zone and is characterized by maximum amplitudes in excess of 10 m and an inland extent of more than 1 km. The primary sources of uncertainty in these results include those associated with interevent time estimates, modeling of background sea level, and accounting for temporal changes in bathymetry and topography. Nonetheless, PTHA represents an important contribution to tsunami hazard assessment techniques; viewed in the broader context of risk analysis, PTHA provides a method for quantifying estimates of the likelihood and severity of the tsunami hazard, which can then be combined with vulnerability and exposure to yield estimates of tsunami risk. Copyright 2009 by the American Geophysical Union.
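The core probabilistic step combines per-source recurrence rates with conditional exceedance probabilities into an annual exceedance-rate curve, from which the 100- and 500-year amplitudes are read off. A schematic sketch with invented source rates and amplitude distributions:

import numpy as np
from scipy.stats import lognorm

amps = np.linspace(0.01, 15, 800)                 # amplitude grid (m)
# Illustrative sources: (annual rate, median site amplitude m, log-sigma)
sources = [(1 / 50, 1.5, 0.5),                    # far-field: frequent, small
           (1 / 500, 10.0, 0.4)]                  # near-field: rare, large

# Annual rate of exceeding each amplitude, summed over Poissonian sources
rates = sum(nu * lognorm.sf(amps, s, scale=med) for nu, med, s in sources)

for T in (100, 500):
    # Amplitude whose annual exceedance rate is 1/T (the 1% and 0.2% levels)
    a_T = amps[np.searchsorted(-rates, -1.0 / T)]
    print(f"{T}-year maximum amplitude ~ {a_T:.1f} m")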
Weak data do not make a free lunch, only a cheap meal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Zhipu; Rajashankar, Kanagalaghatta; Dauter, Zbigniew
2014-01-17
Four data sets were processed at resolutions significantly exceeding the criteria traditionally used for estimating the diffraction data resolution limit. The analysis of these data and the corresponding model-quality indicators suggests that the criteria of resolution limits widely adopted in the past may be somewhat conservative. Various parameters, such as R_merge and I/σ(I), optical resolution and the correlation coefficients CC1/2 and CC*, can be used for judging the internal data quality, whereas the reliability factors R and R_free, as well as the maximum-likelihood target values and real-space map correlation coefficients, can be used to estimate the agreement between the data and the refined model. However, none of these criteria provide a reliable estimate of the data resolution cutoff limit. The analysis suggests that extension of the maximum resolution by about 0.2 Å beyond the currently adopted limit where the I/σ(I) value drops to 2.0 does not degrade the quality of the refined structural models, but may sometimes be advantageous. Such an extension may be particularly beneficial for significantly anisotropic diffraction. Extension of the maximum resolution at the stage of data collection and structure refinement is cheap in terms of the required effort and is definitely more advisable than accepting a too conservative resolution cutoff, which is unfortunately quite frequent among the crystal structures deposited in the Protein Data Bank.
NASA Astrophysics Data System (ADS)
Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye
2016-06-01
This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. Istanbul, a metropolitan city of Turkey with a population of around 14 million and diverse landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30-m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a comparable base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the accuracy assessment, results were compared to find the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.
Evaluation of Aster Images for Characterization and Mapping of Amethyst Mining Residues
NASA Astrophysics Data System (ADS)
Markoski, P. R.; Rolim, S. B. A.
2012-07-01
The objective of this work was to evaluate the potential of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images, subsystems VNIR (Visible and Near Infrared) and SWIR (Short Wave Infrared), for discrimination and mapping of amethyst mining residues (basalt) in the Ametista do Sul region, Rio Grande do Sul State, Brazil. This region provides most of the world's amethyst production. The basalt is extracted during the mining process and deposited outside the mine. As a result, mounds of residues (basalt) build up. These mounds are often much smaller than the ASTER pixel size (VNIR - 15 meters and SWIR - 30 meters). Thus, the pixel composition becomes a mixture of various materials, hampering identification and mapping. To address this problem, the multispectral Maximum Likelihood algorithm (MaxVer) and the hyperspectral Spectral Angle Mapper (SAM) technique were used in this work. Images from the ASTER VNIR and SWIR subsystems were used to perform the classifications. The SAM technique produced better results than the MaxVer algorithm. The main error found with both techniques was confusion between the "shadow" and "mining residues/basalt" classes. With the SAM technique the confusion decreased, because it employed the basalt spectral curve as a reference, while the multispectral technique employed groups of pixels that could be spectrally mixed with other targets. The results showed that in tropical terrains such as the study area, ASTER data can be effective for the characterization of mining residues.
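The SAM rule assigns each pixel to the reference spectrum subtending the smallest angle, which makes it insensitive to overall brightness differences such as shading; this is why it confuses shadow with basalt less often. A minimal sketch with invented spectra:

import numpy as np

def sam_classify(image, references):
    """image: (rows, cols, bands); references: (n_classes, bands).
    Returns per-pixel index of the reference with the smallest spectral angle."""
    pix = image.reshape(-1, image.shape[-1]).astype(float)
    pix /= np.linalg.norm(pix, axis=1, keepdims=True)
    ref = references / np.linalg.norm(references, axis=1, keepdims=True)
    angles = np.arccos(np.clip(pix @ ref.T, -1.0, 1.0))
    return np.argmin(angles, axis=1).reshape(image.shape[:2])

# Toy scene: two classes with random per-pixel brightness (shading) factors
rng = np.random.default_rng(0)
basalt = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.20])
soil = np.array([0.20, 0.28, 0.33, 0.36, 0.38, 0.40])
scene = np.where(rng.random((50, 50, 1)) < 0.3,
                 basalt * rng.uniform(0.5, 1.0, (50, 50, 1)),
                 soil * rng.uniform(0.5, 1.0, (50, 50, 1)))
labels = sam_classify(scene, np.stack([basalt, soil]))
print(np.bincount(labels.ravel()))   # brightness scaling leaves angles unchanged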