Science.gov

Sample records for maximum likelihood reconstruction

  1. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as good as a guaranteed maximum likelihood solution.

  2. A dual formulation of a penalized maximum likelihood x-ray CT reconstruction problem

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Taguchi, Katsuyuki; Gullberg, Grant T.; Tsui, Benjamin M. W.

    2009-02-01

    This work studies the dual formulation of a penalized maximum likelihood reconstruction problem in x-ray CT. The primal objective function is a Poisson log-likelihood combined with a weighted cross-entropy penalty term. The dual formulation of the primal optimization problem is then derived and the optimization procedure outlined. The dual formulation better exploits the structure of the problem, which translates to faster convergence of iterative reconstruction algorithms. A gradient descent algorithm is implemented for solving the dual problem and its performance is compared with the filtered back-projection algorithm, and with the primal formulation optimized by using surrogate functions. The 3D XCAT phantom and an analytical x-ray CT simulator are used to generate noise-free and noisy CT projection data sets with monochromatic and polychromatic x-ray spectra. The reconstructed images from the dual formulation delineate the internal structures at early iterations better than the primal formulation using surrogate functions. However, the body contour is slower to converge in the dual than in the primal formulation. The dual formulation demonstrates a better noise-resolution tradeoff near the internal organs than the primal formulation. Since the surrogate functions in general can provide a diagonal approximation of the Hessian matrix of the objective function, a further convergence speed-up may be achieved by deriving the surrogate function of the dual objective function.
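
    As a rough illustration of the primal problem described above, the sketch below evaluates a Poisson transmission log-likelihood combined with a weighted cross-entropy penalty. The variable names (b for blank-scan counts, A for the system matrix, r and w for the prior image and weights, beta for the penalty strength) and the exact penalty form are assumptions for illustration, not the paper's notation.

      import numpy as np

      def primal_objective(mu, A, y, b, r, w, beta):
          # Expected transmission counts via Beer-Lambert: ybar_i = b_i * exp(-[A mu]_i)
          ybar = b * np.exp(-(A @ mu))
          # Poisson log-likelihood (up to a constant independent of mu)
          loglike = np.sum(y * np.log(ybar) - ybar)
          # Weighted cross-entropy penalty against a prior image r (assumed form)
          eps = 1e-12
          penalty = np.sum(w * mu * np.log((mu + eps) / (r + eps)))
          # Negative penalized log-likelihood, to be minimized
          return -(loglike - beta * penalty)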

  3. Application of maximum likelihood reconstruction of subaperture data for measurement of large flat mirrors

    SciTech Connect

    Su Peng; Burge, James H.; Parks, Robert E.

    2010-01-01

    Interferometers accurately measure the difference between two wavefronts, one from a reference surface and the other from an unknown surface. If the reference surface is near perfect or is accurately known from some other test, then the shape of the unknown surface can be determined. We investigate the case where neither the reference surface nor the surface under test is well known. By making multiple shear measurements where both surfaces are translated and/or rotated, we obtain sufficient information to reconstruct the figure of both surfaces with a maximum likelihood reconstruction method. The method is demonstrated for the measurement of a 1.6 m flat mirror to 2 nm rms, using a smaller reference mirror that had significant figure error.

  4. A New Maximum-likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales

    NASA Astrophysics Data System (ADS)

    Ahlers, M.; BenZvi, S. Y.; Desiati, P.; Díaz–Vélez, J. C.; Fiorino, D. W.; Westerhoff, S.

    2016-05-01

    The arrival directions of TeV–PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic-ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic-ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic-ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.
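
    A stripped-down sketch of the alternating fit described above is given below: counts n[t, p] in sidereal time bin t and local pixel p are modeled as N_t * A_p * I_s, where s = sky_index[t, p] is the sky pixel seen by that local pixel at that time, and the relative intensity I and detector acceptance A are updated in turn. Normalization conventions, exposure weighting and smoothing are omitted, and all names are illustrative rather than the paper's notation.

      import numpy as np

      def fit_anisotropy(n, sky_index, n_iter=20):
          # n[t, p]: counts in time bin t and local pixel p
          # sky_index[t, p]: integer sky pixel observed by local pixel p at time t
          P = n.shape[1]
          I = np.ones(sky_index.max() + 1)      # relative intensity per sky pixel
          A = np.ones(P)                        # relative acceptance per local pixel
          for _ in range(n_iter):
              # per-time-bin normalization N_t given current A and I
              N = n.sum(axis=1) / np.maximum((A[None, :] * I[sky_index]).sum(axis=1), 1e-12)
              # ML update of the sky intensity: observed / expected counts per sky pixel
              num = np.zeros_like(I)
              den = np.zeros_like(I)
              np.add.at(num, sky_index, n)
              np.add.at(den, sky_index, N[:, None] * A[None, :])
              I = num / np.maximum(den, 1e-12)
              # ML update of the detector acceptance per local pixel
              A = n.sum(axis=0) / np.maximum((N[:, None] * I[sky_index]).sum(axis=0), 1e-12)
              A /= A.mean()                     # fix the arbitrary overall scale
          return I, A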

  5. Dose reduction in digital breast tomosynthesis using a penalized maximum likelihood reconstruction

    NASA Astrophysics Data System (ADS)

    Das, Mini; Gifford, Howard; O'Connor, Michael; Glick, Stephen J.

    2009-02-01

    Digital breast tomosynthesis (DBT) is a 3D imaging modality with limited angle projection data. The ability of tomosynthesis systems to accurately detect smaller microcalcifications is debatable. This is because of the higher noise in the projection data (lower average dose per projection), which is then propagated through the reconstructed image. Reconstruction methods that minimize the propagation of quantum noise have the potential to improve microcalcification detectability using DBT. In this paper we show that penalized maximum likelihood (PML) reconstruction in DBT yields images with an improved resolution/noise tradeoff as compared to conventional filtered backprojection (FBP). The signal-to-noise ratio (SNR) using PML was observed to be higher than that obtained using the standard FBP algorithm. Our results indicate that for microcalcifications, reconstructions obtained with the PML algorithm at a mean glandular dose (MGD) of 1.5 mGy yielded better SNR than those obtained with FBP using a 4 mGy total dose. Thus the total dose could perhaps be reduced to one-third or lower with the same microcalcification detectability if PML reconstruction is used instead of FBP. The visibility of low-contrast masses at various contrast levels was studied using a contrast-detail phantom in a breast-shaped structure with average breast density. Images generated at various dose levels indicate that the visibility of low-contrast masses in PML reconstructions is significantly better than in those generated using FBP. SNR measurements in the low-contrast study did not appear to correlate with the visual subjective analysis of the reconstructions, indicating that SNR is not a good figure of merit for this task.

  6. Evaluation and optimization of the maximum-likelihood approach for image reconstruction in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Jerebko, Anna K.; Mertelmeier, Thomas

    2010-04-01

    Digital Breast Tomosynthesis (DBT) suffers from incomplete data and poor quantum statistics limited by the total dose absorbed in the breast. Hence, statistical reconstruction assuming the photon statistics to follow a Poisson distribution may have some advantages. This study investigates state-of-the-art iterative maximum likelihood (ML) statistical reconstruction algorithms for DBT and compares the results with simple backprojection (BP), filtered backprojection (FBP), and iFBP (FBP with a filter derived from iterative reconstruction). The gradient-ascent and convex optimization variants of the transmission ML algorithm are evaluated with phantom and clinical data. Convergence speed is very similar for both iterative statistical algorithms, and after approximately 5 iterations all significant details are well displayed, although we notice increasing noise. We found empirically that a relaxation factor between 0.25 and 0.5 provides the optimal trade-off between noise and contrast. The ML-convex algorithm gives smoother results than the ML-gradient algorithm. The low-contrast CNR of the ML algorithms is between the CNR for simple backprojection (highest) and FBP (lowest). The spatial resolution of the iterative statistical and iFBP algorithms is similar to that of FBP, but the quantitative density representation better resembles conventional mammograms. The iFBP algorithm provides the benefits of statistical iterative reconstruction techniques and requires much shorter computation time.

  7. Bias reduction for low-statistics PET: maximum likelihood reconstruction with a modified Poisson distribution.

    PubMed

    Van Slambrouck, Katrien; Stute, Simon; Comtat, Claude; Sibomana, Merence; van Velden, Floris H P; Boellaard, Ronald; Nuyts, Johan

    2015-01-01

    Positron emission tomography data are typically reconstructed with maximum likelihood expectation maximization (MLEM). However, MLEM suffers from positive bias due to the non-negativity constraint. This is particularly problematic for tracer kinetic modeling. Two reconstruction methods with bias reduction properties that do not use strict Poisson optimization are presented and compared to each other, to filtered backprojection (FBP), and to MLEM. The first method is an extension of NEGML, where the Poisson distribution is replaced by a Gaussian distribution for low count data points. The transition point between the Gaussian and the Poisson regime is a parameter of the model. The second method, AML, is a simplification of ABML. ABML has a lower and an upper bound for the reconstructed image, whereas AML has the upper bound set to infinity. AML uses a negative lower bound to obtain bias reduction properties. Different choices of the lower bound are studied. The parameter of both algorithms determines the effectiveness of the bias reduction and should be chosen large enough to ensure bias-free images. This means that both algorithms become more similar to least squares algorithms, which turned out to be necessary to obtain bias-free reconstructions. This comes at the cost of increased variance. Nevertheless, NEGML and AML have lower variance than FBP. Furthermore, randoms handling has a large influence on the bias. Reconstruction with smoothed randoms results in lower bias compared to reconstruction with unsmoothed randoms or randoms-precorrected data. However, NEGML and AML both yield bias-free images for large values of their parameter. PMID:25137726
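
    The core idea of the NEGML-style modification described above can be sketched as a per-bin objective that keeps the Poisson term where the expected count is above a transition point psi and switches to a Gaussian term (with variance psi) below it, so that small or negative expected values are handled gracefully. This is a schematic reading of the abstract, not the authors' exact formulation; all names are illustrative.

      import numpy as np

      def hybrid_neg_loglike(ybar, y, psi):
          # Poisson branch: ybar - y*log(ybar), used where ybar exceeds the transition point
          poisson = ybar - y * np.log(np.maximum(ybar, 1e-12))
          # Gaussian branch with variance psi, used for low expected counts (allows ybar < 0)
          gauss = (y - ybar) ** 2 / (2.0 * psi)
          return np.where(np.asarray(ybar) > psi, poisson, gauss)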

  8. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    PubMed Central

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980

  9. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    NASA Astrophysics Data System (ADS)

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  10. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980
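
    A minimal sketch of one Poisson-likelihood gradient step with a simple truncation rule is shown below, using a dense matrix A in place of the pupil-masked Fourier crops of the real FPM forward model. The truncation rule and all names are assumptions for illustration; the paper's truncated Wirtinger gradient uses its own criteria.

      import numpy as np

      def poisson_wirtinger_step(z, A, y, step, trunc=5.0):
          # z: complex estimate of the object spectrum; y: measured intensities (counts)
          Az = A @ z
          I = np.abs(Az) ** 2 + 1e-12
          # Wirtinger gradient of sum_i (I_i - y_i * log I_i) with respect to z
          weights = 1.0 - y / I
          # drop measurements whose mismatch is extreme (simple stand-in for truncation)
          keep = np.abs(y - I) <= trunc * np.median(np.abs(y - I))
          grad = A.conj().T @ (weights * keep * Az)
          return z - step * grad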

  11. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    NASA Astrophysics Data System (ADS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-04-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
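
    For reference, the plain MLEM update that WL-MLEM builds on is sketched below; the wobbled sampling and LSF-based deconvolution of the method above enter through the system matrix and are not modeled here. All names are illustrative.

      import numpy as np

      def mlem(A, y, n_iter=50):
          # Standard MLEM update: x <- x / (A^T 1) * A^T (y / (A x))
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0]) + 1e-12   # sensitivity image
          for _ in range(n_iter):
              proj = A @ x + 1e-12                   # forward projection of current estimate
              x = x / sens * (A.T @ (y / proj))      # multiplicative update
          return x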

  12. Penalized maximum likelihood reconstruction for improved microcalcification detection in breast tomosynthesis.

    PubMed

    Das, Mini; Gifford, Howard C; O'Connor, J Michael; Glick, Stephen J

    2011-04-01

    We examined the application of an iterative penalized maximum likelihood (PML) reconstruction method for improved detectability of microcalcifications (MCs) in digital breast tomosynthesis (DBT). Localized receiver operating characteristic (LROC) psychophysical studies with human observers and 2-D image slices were conducted to evaluate the performance of this reconstruction method and to compare its performance against the commonly used Feldkamp FBP algorithm. DBT projections were generated using rigorous computer simulations that included accurate modeling of the noise and detector blur. Acquisition dose levels of 0.7, 1.0, and 1.5 mGy in a 5-cm-thick compressed breast were tested. The defined task was to localize and detect MC clusters consisting of seven MCs. The individual MC diameter was 150 μm. Compressed-breast phantoms derived from CT images of actual mastectomy specimens provided realistic background structures for the detection task. Four observers each read 98 test images for each combination of reconstruction method and acquisition dose. All observers performed better with the PML images than with the FBP images. With the acquisition dose of 0.7 mGy, the average areas under the LROC curve (A(L)) for the PML and FBP algorithms were 0.69 and 0.43, respectively. For the 1.0-mGy dose, the values of A(L) were 0.93 (PML) and 0.7 (FBP), while the 1.5-mGy dose resulted in areas of 1.0 and 0.9, respectively, for the PML and FBP algorithms. A 2-D analysis of variance applied to the individual observer areas showed statistically significant differences (at a significance level of 0.05) between the reconstruction strategies at all three dose levels. There were no significant differences in performance between the observers at any of the dose levels. PMID:21041158

  13. Maximum likelihood approach for the adaptive optics point spread function reconstruction

    NASA Astrophysics Data System (ADS)

    Exposito, J.; Gratadour, Damien; Rousset, Gérard; Clénet, Yann; Mugnier, Laurent; Gendron, Éric

    2014-08-01

    This paper is dedicated to a new PSF reconstruction method based on a maximum likelihood (ML) approach that also uses the telemetry data of the AO system (see Exposito et al. 2013). This approach allows a joint estimation of the covariance matrix of the mirror modes of the residual phase, the noise variance and the Fried parameter r0. In this method, an estimate of the covariance between the parallel residual phase and the orthogonal phase is required. We developed a recursive approach taking into account the temporal effect of the AO loop, so that this covariance depends only on r0, the wind speed and some of the parameters of the system (the gain of the loop, the interaction matrix and the command matrix). With this estimation, the high-bandwidth hypothesis is no longer required to reconstruct the PSF with good accuracy. We present the validation of the method and the results on numerical simulations (on a SCAO system) and show that our ML method allows an accurate estimation of the PSF in the case of a Shack-Hartmann (SH) wavefront sensor (WFS).

  14. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E.

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2% which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.

  15. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    NASA Astrophysics Data System (ADS)

    Stsepankou, D.; Arns, A.; Ng, S. K.; Zygmanski, P.; Hesser, J.

    2012-10-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For the quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Errors in the projection matrix parameters of up to 1° deviation in projection angle are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method. However, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system.

  16. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization.

    PubMed

    Stsepankou, D; Arns, A; Ng, S K; Zygmanski, P; Hesser, J

    2012-10-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For the quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Errors in the projection matrix parameters of up to 1° deviation in projection angle are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method. However, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system. PMID:22964760
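
    The total variation regularizer referred to above can be sketched for a 2-D image with forward differences as below; how the paper couples it to the Poisson log-likelihood (weighting, smoothing of the square root) may differ from this simple form.

      import numpy as np

      def total_variation(img, eps=1e-8):
          # Isotropic TV with forward differences; eps smooths the square root at zero
          dx = np.diff(img, axis=1, append=img[:, -1:])
          dy = np.diff(img, axis=0, append=img[-1:, :])
          return np.sum(np.sqrt(dx ** 2 + dy ** 2 + eps))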

  17. Maximum likelihood reconstruction in fully 3D PET via the SAGE algorithm

    SciTech Connect

    Ollinger, J.M.; Goggin, A.S.

    1996-12-31

    The SAGE and ordered subsets algorithms have been proposed as fast methods to compute penalized maximum likelihood estimates in PET. We have implemented both for use in fully 3D PET and completed a preliminary evaluation. The technique used to compute the transition matrix is fully described. The evaluation suggests that the ordered subsets algorithm converges much faster than SAGE, but that it stops short of the optimal solution.

  18. L.U.St: a tool for approximated maximum likelihood supertree reconstruction

    PubMed Central

    2014-01-01

    Background: Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Results: Here, we present the L.U.St package, a Python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree, given a set of input trees, performs heuristic searches to look for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning sites test allowing ranking of a collection of a priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). Conclusion: This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has Python installed. Availability: Bitbucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Contact: Davide.Pisani@bristol.ac.uk. PMID:24925766

  19. Maximum likelihood estimation of missing data applied to flow reconstruction around NACA profiles

    NASA Astrophysics Data System (ADS)

    Leroux, R.; Chatellier, L.; David, L.

    2015-10-01

    In this paper, we investigate maximum likelihood estimation of missing data in fluid flow time series. The maximum likelihood estimation is obtained with the expectation-maximization (EM) algorithm applied to the linear and quadratic proper orthogonal decomposition (POD) Galerkin reduced-order models (ROMs) for various sub-samplings of large data sets. The flows around a NACA0012 profile at a Reynolds number of 10³ and an angle of incidence of 20° and around a NACA0015 profile at a Reynolds number of 10⁵ and an angle of incidence of 30° are first investigated using time-resolved particle image velocimetry measurements and sub-sampled according to different ratios of missing data. The EM algorithm is then applied to the POD ROMs constructed from the sub-sampled data sets. The results show that, depending on the sub-sampling used, the EM algorithm is robust with respect to the Reynolds number and can reproduce the velocity fields and the main structures of the missing flow fields for 50% and 75% of missing data.
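
    As a toy illustration of the E-step/M-step structure used above, the sketch below runs EM on data with missing entries (NaNs) under a plain multivariate Gaussian model; the paper instead applies EM to POD-Galerkin reduced-order models of the flow. All names are illustrative.

      import numpy as np

      def em_missing_gaussian(X, n_iter=50):
          # EM for a multivariate Gaussian when X contains missing entries (NaN)
          X = np.array(X, dtype=float)
          n, d = X.shape
          miss = np.isnan(X)
          X_filled = np.where(miss, np.nanmean(X, axis=0), X)
          mu = X_filled.mean(axis=0)
          cov = np.cov(X_filled, rowvar=False) + 1e-6 * np.eye(d)
          for _ in range(n_iter):
              S = np.zeros((d, d))
              for i in range(n):
                  m, o = miss[i], ~miss[i]
                  if not m.any():
                      continue
                  coo = cov[np.ix_(o, o)]
                  com = cov[np.ix_(m, o)]
                  # E-step: conditional mean/covariance of missing entries given observed ones
                  X_filled[i, m] = mu[m] + com @ np.linalg.solve(coo, X_filled[i, o] - mu[o])
                  S[np.ix_(m, m)] += cov[np.ix_(m, m)] - com @ np.linalg.solve(coo, com.T)
              # M-step: refit mean and covariance from the completed data
              mu = X_filled.mean(axis=0)
              diff = X_filled - mu
              cov = (diff.T @ diff + S) / n + 1e-6 * np.eye(d)
          return X_filled, mu, cov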

  20. Three-dimensional maximum-likelihood reconstruction for an electronically collimated single-photon-emission imaging system.

    PubMed

    Hebert, T; Leahy, R; Singh, M

    1990-07-01

    A three-dimensional maximum-likelihood reconstruction method is presented for a prototype electronically collimated single-photon-emission system. The electronically collimated system uses a gamma camera fronted by an array of germanium detectors to detect gamma-ray emissions from a distributed radioisotope source. In this paper we demonstrate that optimal iterative three-dimensional reconstruction approaches can be feasibly applied to emission imaging systems that have highly complex spatial sampling patterns and that generate extremely large numbers of data values. A probabilistic factorization of the system matrix that reduces the computation by several orders of magnitude is derived. We demonstrate a dramatic increase in the convergence speed of the expectation maximization algorithm by sequentially iterating over particular subsets of the data. This result is also applicable to other emission imaging systems. PMID:2370591

  21. ROC (Receiver Operating Characteristics) study of maximum likelihood estimator human brain image reconstructions in PET (Positron Emission Tomography) clinical practice

    SciTech Connect

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of ¹⁸F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab.

  22. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm

    NASA Astrophysics Data System (ADS)

    Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.

  23. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect. PMID:26758232

  24. The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction

    SciTech Connect

    Llacer, J.; Veklerov, E.

    1987-01-01

    Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge towards strongly deteriorated versions of the original source image. The image deterioration is caused by an excessive attempt by the algorithm to match the projection data with high counts. This effect can be modulated by adjusting the weight given to high-count projection data. We compared reconstructions of a source image by filtered backprojection and by the MLE algorithm to show that the MLE images can have noise similar to the filtered backprojection images in regions of high activity and very low noise, comparable to the source image, in regions of low activity, if the iterative procedure is stopped at an appropriate point.

  25. The maximum likelihood estimator method of image reconstruction: Its fundamental characteristics and their origin

    SciTech Connect

    Llacer, J.; Veklerov, E.

    1987-05-01

    We review our recent work characterizing the image reconstruction properties of the MLE algorithm. We studied its convergence properties and confirmed the onset of image deterioration, which is a function of the number of counts in the source. By modulating the weight given to projection tubes with high numbers of counts with respect to those with low numbers of counts in the reconstruction process, we have confirmed that image deterioration is due to an attempt by the algorithm to match high-count projection data tubes too closely to the projections of the iterated image. We developed a stopping rule for the algorithm that tests the hypothesis that a reconstructed image could have given the initial projection data in a manner consistent with the underlying assumption of Poisson distributed variables. The rule was applied with success to two mathematically generated phantoms and to a third phantom with exact (no statistical fluctuations) projection data. We conclude that the behavior of the target functions whose extrema are sought in iterative schemes is more important in the early stages of the reconstruction than in the later stages, when the extrema are being approached, given the Poisson nature of the measurements. 11 refs., 14 figs.
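
    A schematic goodness-of-fit check in the spirit of the stopping rule described above is sketched below: it asks whether the measured projection counts are plausible Poisson draws around the current iterate's forward projection, using a Pearson chi-square statistic. The authors' published rule uses its own test statistic; this is only an illustration of the idea, with illustrative names.

      import numpy as np
      from scipy.stats import chi2

      def poisson_consistency_check(y, yhat, alpha=0.05):
          # Pearson chi-square statistic of measured counts y against predicted means yhat
          yhat = np.maximum(np.asarray(yhat, dtype=float), 1e-12)
          stat = np.sum((np.asarray(y, dtype=float) - yhat) ** 2 / yhat)
          p_value = chi2.sf(stat, df=np.size(y))
          # True -> data still consistent with Poisson fluctuations around yhat
          return stat, p_value, p_value > alpha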

  26. Investigation of optimal parameters for penalized maximum-likelihood reconstruction applied to iodinated contrast-enhanced breast CT

    NASA Astrophysics Data System (ADS)

    Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.

    2016-03-01

    Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically-based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a penalized maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.

  27. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction. PMID:26208310
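
    The augmented Lagrangian strategy mentioned above can be sketched generically for min f(x) subject to Cx = d; in the method above, f would correspond to the transmission log-likelihood term and the equality constraints would encode the known implant shape and attenuation. Here f enters only through a user-supplied gradient and plain gradient steps stand in for the inner solver; all names are illustrative.

      import numpy as np

      def augmented_lagrangian(f_grad, C, d, x0, rho=1.0, n_outer=20, n_inner=50, step=1e-2):
          x, lam = x0.astype(float).copy(), np.zeros(C.shape[0])
          for _ in range(n_outer):
              for _ in range(n_inner):
                  # gradient of f(x) + lam^T (Cx - d) + (rho/2) ||Cx - d||^2
                  grad = f_grad(x) + C.T @ (lam + rho * (C @ x - d))
                  x = x - step * grad
              lam = lam + rho * (C @ x - d)    # multiplier (dual) update
          return x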

  28. MLE (Maximum Likelihood Estimator) reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    SciTech Connect

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.

  29. Maximum-likelihood joint image reconstruction and motion estimation with misaligned attenuation in TOF-PET/CT

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F.; Thielemans, Kris

    2016-02-01

    This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.

  30. Maximum Likelihood Estimation in Generalized Rasch Models.

    ERIC Educational Resources Information Center

    de Leeuw, Jan; Verhelst, Norman

    1986-01-01

    Maximum likelihood procedures are presented for a general model to unify the various models and techniques that have been proposed for item analysis. Unconditional maximum likelihood estimation, proposed by Wright and Haberman, and conditional maximum likelihood estimation, proposed by Rasch and Andersen, are shown as important special cases. (JAZ)

  31. Maximum-likelihood density modification

    PubMed Central

    Terwilliger, Thomas C.

    2000-01-01

    A likelihood-based approach to density modification is developed that can be applied to a wide variety of cases where some information about the electron density at various points in the unit cell is available. The key to the approach consists of developing likelihood functions that represent the probability that a particular value of electron density is consistent with prior expectations for the electron density at that point in the unit cell. These likelihood functions are then combined with likelihood functions based on experimental observations and with others containing any prior knowledge about structure factors to form a combined likelihood function for each structure factor. A simple and general approach to maximizing the combined likelihood function is developed. It is found that this likelihood-based approach yields greater phase improvement in model and real test cases than either conventional solvent flattening and histogram matching or a recent reciprocal-space solvent-flattening procedure [Terwilliger (1999), Acta Cryst. D55, 1863–1871]. PMID:10944333

  32. Maximum likelihood topographic map formation.

    PubMed

    Van Hulle, Marc M

    2005-03-01

    We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004

  33. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    NASA Astrophysics Data System (ADS)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-07-01

    The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships are used to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows the information of the whole soil moisture profile to be used for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand, but improves it significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of the soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  34. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  35. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  36. Model Fit after Pairwise Maximum Likelihood.

    PubMed

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
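
    The pairwise objective described above, the sum of bivariate log-likelihoods over all variable pairs, is sketched below for continuous Gaussian data; the paper works with discrete responses whose bivariate probabilities come from two-way contingency tables and underlying polychoric correlations. All names are illustrative.

      import numpy as np
      from itertools import combinations
      from scipy.stats import multivariate_normal

      def pairwise_loglik(X, mu, Sigma):
          # Sum of bivariate marginal log-likelihoods over all variable pairs (j, k)
          total = 0.0
          for j, k in combinations(range(X.shape[1]), 2):
              idx = [j, k]
              total += multivariate_normal.logpdf(
                  X[:, idx], mean=mu[idx], cov=Sigma[np.ix_(idx, idx)]
              ).sum()
          return total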

  37. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.

  38. Collaborative double robust targeted maximum likelihood estimation.

    PubMed

    van der Laan, Mark J; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q(0) in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for "collaborative double robustness," demonstrating that the collaborative targeted maximum

  19. Collaborative Double Robust Targeted Maximum Likelihood Estimation*

    PubMed Central

    van der Laan, Mark J.; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum

  20. Sensor registration using airlanes: maximum likelihood solution

    NASA Astrophysics Data System (ADS)

    Ong, Hwa-Tung

    2004-01-01

    In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.

  1. Sensor registration using airlanes: maximum likelihood solution

    NASA Astrophysics Data System (ADS)

    Ong, Hwa-Tung

    2003-12-01

    In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.

  2. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can also be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
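
    The idea of scoring whole sequences by their likelihood under a model trained on typical sequences can be illustrated with a much simpler stand-in than MALCOM: a first-order Markov chain over procedure codes. The sketch below (Python, standard library only; the procedure names are invented) flags sequences with unusually low log-likelihood as candidates for review; MALCOM's continuity maps are a richer model of the same basic idea.

        # Minimal sketch (not MALCOM itself): score categorical sequences by
        # log-likelihood under a first-order Markov model fit to training data,
        # then flag the lowest-scoring sequences as anomalous.
        from collections import defaultdict
        import math

        def fit_markov(train_seqs, alpha=1.0):
            counts = defaultdict(lambda: defaultdict(float))
            symbols = set()
            for seq in train_seqs:
                symbols.update(seq)
                for a, b in zip(seq, seq[1:]):
                    counts[a][b] += 1
            probs = {}
            for a in symbols:
                total = sum(counts[a].values()) + alpha * len(symbols)
                probs[a] = {b: (counts[a][b] + alpha) / total for b in symbols}
            return probs

        def log_likelihood(seq, probs, floor=1e-12):
            return sum(math.log(probs.get(a, {}).get(b, floor))
                       for a, b in zip(seq, seq[1:]))

        # Invented procedure codes, purely for illustration.
        train = [["office_visit", "blood_test", "xray"],
                 ["office_visit", "blood_test", "followup"]]
        model = fit_markov(train)
        print(log_likelihood(["office_visit", "blood_test", "xray"], model))
        print(log_likelihood(["xray", "xray", "xray"], model))  # lower score -> more anomalous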

  3. Maximum likelihood decoding of Reed Solomon Codes

    SciTech Connect

    Sudan, M.

    1996-12-31

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F x F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
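
    To make the decoding criterion concrete, the brute-force sketch below (Python, standard library only) enumerates every polynomial of degree at most d over a small prime field and keeps those agreeing with at least t of the given points. This is exponential in d and only illustrative; Sudan's algorithm reaches the same list in polynomial time via bivariate interpolation and factorization.

        # Toy illustration of the list-decoding criterion over a small prime field:
        # enumerate every polynomial of degree <= d over GF(p) and keep those that
        # agree with at least t of the given points.
        from itertools import product

        def list_decode_bruteforce(points, p, d, t):
            hits = []
            for coeffs in product(range(p), repeat=d + 1):  # c0 + c1*x + ... + cd*x^d
                agree = sum(1 for x, y in points
                            if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == y)
                if agree >= t:
                    hits.append(coeffs)
            return hits

        # Reed-Solomon-style example over GF(7): codeword of f(x) = 3 + 2x with two errors.
        pts = [(x, (3 + 2 * x) % 7) for x in range(7)]
        pts[1] = (1, 0)
        pts[4] = (4, 6)
        print(list_decode_bruteforce(pts, p=7, d=1, t=5))  # recovers the coefficients (3, 2)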

  4. CORA: Emission Line Fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
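
    A minimal Poisson maximum-likelihood line fit in the same spirit can be sketched as follows (Python, assuming numpy/scipy; the wavelength grid and line parameters are invented, and CORA's fixed-point flux update is replaced by a general-purpose minimizer): the expected counts per bin are a flat background plus a Gaussian line, and the Poisson negative log-likelihood is minimized directly rather than a least-squares statistic.

        # Sketch of Poisson maximum-likelihood line fitting for low-count spectra.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        rng = np.random.default_rng(0)
        wav = np.linspace(13.2, 13.8, 120)          # wavelength grid (angstrom), illustrative

        def model(params, wav):
            bkg, flux, center, sigma = params
            line = flux * np.exp(-0.5 * ((wav - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            return bkg + line * (wav[1] - wav[0])   # expected counts per bin

        def neg_loglike(params, wav, counts):
            mu = np.clip(model(params, wav), 1e-12, None)
            return np.sum(mu - counts * np.log(mu) + gammaln(counts + 1))

        truth = (0.5, 30.0, 13.55, 0.02)            # background, flux, center, width
        counts = rng.poisson(model(truth, wav))

        fit = minimize(neg_loglike, x0=(1.0, 10.0, 13.5, 0.03),
                       args=(wav, counts), method="Nelder-Mead")
        print(fit.x)                                # recovered parameters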

  5. CORA - emission line fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.

  6. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  7. A maximum likelihood framework for protein design

    PubMed Central

    Kleinman, Claudia L; Rodrigue, Nicolas; Bonnard, Cécile; Philippe, Hervé; Lartillot, Nicolas

    2006-01-01

    Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces shaping protein sequences, and

  8. Targeted maximum likelihood estimation in safety analysis

    PubMed Central

    Lendle, Samuel D.; Fireman, Bruce; van der Laan, Mark J.

    2013-01-01

    Objectives To compare the performance of a targeted maximum likelihood estimator (TMLE) and a collaborative TMLE (CTMLE) to other estimators in a drug safety analysis, including a regression-based estimator, propensity score (PS)–based estimators, and an alternate doubly robust (DR) estimator in a real example and simulations. Study Design and Setting The real data set is a subset of observational data from Kaiser Permanente Northern California formatted for use in active drug safety surveillance. Both the real and simulated data sets include potential confounders, a treatment variable indicating use of one of two antidiabetic treatments and an outcome variable indicating occurrence of an acute myocardial infarction (AMI). Results In the real data example, there is no difference in AMI rates between treatments. In simulations, the double robustness property is demonstrated: DR estimators are consistent if either the initial outcome regression or PS estimator is consistent, whereas other estimators are inconsistent if the initial estimator is not consistent. In simulations with near-positivity violations, CTMLE performs well relative to other estimators by adaptively estimating the PS. Conclusion Each of the DR estimators was consistent, and TMLE and CTMLE had the smallest mean squared error in simulations. PMID:23849159

  9. Multiscale likelihood analysis and image reconstruction

    NASA Astrophysics Data System (ADS)

    Willett, Rebecca M.; Nowak, Robert D.

    2003-11-01

    The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.

  10. A Maximum Likelihood Approach to Correlational Outlier Identification.

    ERIC Educational Resources Information Center

    Bacon, Donald R.

    1995-01-01

    A maximum likelihood approach to correlational outlier identification is introduced and compared to the Mahalanobis D squared and Comrey D statistics through Monte Carlo simulation. Identification performance depends on the nature of correlational outliers and the measure used, but the maximum likelihood approach is the most robust performance…

  11. A maximum likelihood approach to the inverse problem of scatterometry.

    PubMed

    Henn, Mark-Alexander; Gross, Hermann; Scholze, Frank; Wurm, Matthias; Elster, Clemens; Bär, Markus

    2012-06-01

    Scatterometry is frequently used as a non-imaging indirect optical method to reconstruct the critical dimensions (CD) of periodic nanostructures. A particularly promising direction is EUV scatterometry with wavelengths in the range of 13-14 nm. The conventional approach to determine CDs is the minimization of a least squares function (LSQ). In this paper, we introduce an alternative method based on the maximum likelihood estimation (MLE) that determines the statistical error model parameters directly from measurement data. By using simulation data, we show that the MLE method is able to correct the systematic errors present in LSQ results and improves the accuracy of scatterometry. In a second step, the MLE approach is applied to measurement data from both extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Using MLE removes the systematic disagreement of EUV with other methods such as scanning electron microscopy and gives consistent results for DUV. PMID:22714306

  12. The maximum likelihood dating of magnetostratigraphic sections

    NASA Astrophysics Data System (ADS)

    Man, Otakar

    2011-04-01

    In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.

  13. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.

  14. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.

  15. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
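
    A standard route to the maximum likelihood estimates of a two-component normal mixture is the EM algorithm. The sketch below (Python, assuming numpy; synthetic univariate data rather than the stock-price/rubber-price series used in the paper) alternates the computation of responsibilities (E-step) with weighted parameter updates (M-step).

        # Minimal EM sketch for maximum-likelihood fitting of a two-component
        # univariate normal mixture.
        import numpy as np

        def em_two_normals(x, n_iter=200):
            w, mu1, mu2 = 0.5, x.min(), x.max()
            s1 = s2 = x.std()
            for _ in range(n_iter):
                # E-step: responsibility of component 1 for each observation
                d1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
                d2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
                r = d1 / (d1 + d2)
                # M-step: update weight, means and standard deviations
                w = r.mean()
                mu1 = np.sum(r * x) / np.sum(r)
                mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
                s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
                s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
            return w, (mu1, s1), (mu2, s2)

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
        print(em_two_normals(x))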

  16. Relevance Data for Language Models Using Maximum Likelihood.

    ERIC Educational Resources Information Center

    Bodoff, David; Wu, Bin; Wong, K. Y. Michael

    2003-01-01

    Presents a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval parameters. Discusses similarities to language models; the unification of document-oriented and query-oriented views; tests on data sets; algorithms and scalability; and the effectiveness of maximum likelihood…

  17. Nonparametric identification and maximum likelihood estimation for hidden Markov models

    PubMed Central

    Alexandrovich, G.; Holzmann, H.; Leister, A.

    2016-01-01

    Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop a nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback–Leibler divergence of the hidden Markov model, also identifies the true parameter vector nonparametrically. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we establish the consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates and of nonparametric goodness of fit tests are investigated in a simulation study.
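
    For a finite-state hidden Markov model, the likelihood entering such an estimator is evaluated with the forward recursion. The sketch below (Python, assuming numpy) computes the log-likelihood of a discrete-emission HMM with per-step rescaling to avoid underflow; the paper's state-dependent distributions are nonparametric mixtures rather than the simple emission rows used here.

        # Scaled forward recursion for the log-likelihood of a discrete-emission HMM.
        import numpy as np

        def hmm_loglike(obs, pi, A, B):
            """obs: sequence of symbol indices; pi: initial state probs (K,);
            A: transition matrix (K, K); B: emission probabilities (K, n_symbols)."""
            alpha = pi * B[:, obs[0]]
            c = alpha.sum()
            loglike = np.log(c)
            alpha = alpha / c
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                c = alpha.sum()              # rescaling prevents numerical underflow
                loglike += np.log(c)
                alpha = alpha / c
            return loglike

        pi = np.array([0.6, 0.4])
        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.7, 0.3], [0.1, 0.9]])
        print(hmm_loglike([0, 0, 1, 1, 1, 0], pi, A, B))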

  18. Modified maximum likelihood registration based on information fusion

    NASA Astrophysics Data System (ADS)

    Qi, Yongqing; Jing, Zhongliang; Hu, Shiqiang

    2007-11-01

    The bias estimation of passive sensors is considered based on information fusion in multi-platform multi-sensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.

  19. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  20. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  1. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  2. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  3. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  4. Targeted maximum likelihood based causal inference: Part I.

    PubMed

    van der Laan, Mark J

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is
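
    The targeting step can be illustrated in the simplest single-time-point setting, the average treatment effect with a binary outcome. The sketch below (Python, assuming numpy/scipy; crude stand-ins replace the super-learner initial fits) forms the "clever covariate" from the propensity score, fits the fluctuation parameter epsilon by maximum likelihood with a logit offset, and plugs the updated outcome regressions into the target parameter. It is only a schematic of the one-step case, not the iterated multiple-time-point estimator developed in the article.

        # Single-time-point TMLE sketch for the average treatment effect (binary outcome).
        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.special import expit, logit

        def tmle_ate(Y, A, Q1, Q0, g):
            """Y: binary outcome; A: binary treatment; Q1, Q0: initial estimates of
            E[Y|A=1,W] and E[Y|A=0,W]; g: estimated propensity P(A=1|W)."""
            QA = np.where(A == 1, Q1, Q0)
            H = A / g - (1 - A) / (1 - g)            # clever covariate
            H1, H0 = 1.0 / g, -1.0 / (1 - g)

            def negloglik(eps):                      # fluctuation fitted by maximum likelihood
                p = expit(logit(np.clip(QA, 1e-6, 1 - 1e-6)) + eps * H)
                p = np.clip(p, 1e-12, 1 - 1e-12)
                return -np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

            eps = minimize_scalar(negloglik, bounds=(-10, 10), method="bounded").x
            Q1s = expit(logit(np.clip(Q1, 1e-6, 1 - 1e-6)) + eps * H1)
            Q0s = expit(logit(np.clip(Q0, 1e-6, 1 - 1e-6)) + eps * H0)
            return np.mean(Q1s - Q0s)                # targeted plug-in estimate of the ATE

        # Tiny synthetic illustration; the true regressions stand in for fitted ones.
        rng = np.random.default_rng(0)
        W = rng.normal(size=2000)
        g_true = expit(0.4 * W)
        A = rng.binomial(1, g_true)
        Y = rng.binomial(1, expit(-0.5 + A + 0.8 * W))
        Q1, Q0 = expit(0.5 + 0.8 * W), expit(-0.5 + 0.8 * W)
        print(tmle_ate(Y, A, Q1, Q0, np.clip(g_true, 0.05, 0.95)))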

  5. Targeted Maximum Likelihood Based Causal Inference: Part I

    PubMed Central

    van der Laan, Mark J.

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is

  6. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  7. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    PubMed

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture. PMID:22029275
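
    The two-reference conversion from peak volume to absolute concentration amounts to fitting a straight line through the DSS and MES calibration points and inverting it for each analyte. A minimal sketch (Python, assuming numpy; the volumes and concentrations below are invented, not values from the paper):

        # Two-point volume-to-concentration calibration sketch.
        import numpy as np

        ref_conc = np.array([0.5, 20.0])        # mM: DSS (low), MES (high) -- hypothetical values
        ref_volume = np.array([1.2e4, 4.9e5])   # integrated peak volumes -- hypothetical values

        slope, intercept = np.polyfit(ref_conc, ref_volume, deg=1)

        def to_concentration(volume):
            return (volume - intercept) / slope  # invert the calibration line

        analyte_volumes = np.array([8.3e4, 2.1e5])
        print(to_concentration(analyte_volumes))  # absolute concentrations in mM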

  8. Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution

    SciTech Connect

    Bowman, Kimiko O.

    2007-01-01

    The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^(-k), we study elements of the Hessian and in particular Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. Basic algebra is excessively complicated and a Maple code implementation is an important task in the solution process. Low order maximum likelihood moments are given and also Fisher's examples relating to data associated with ticks on sheep. Efficiency of moment estimators is mentioned, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.
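
    A direct numerical maximum-likelihood fit of this parameterization, in which the mean is k*p, can be sketched as follows (Python, assuming numpy/scipy; the series expansions for the variance of the estimate of k discussed above are not reproduced):

        # Numerical MLE for the negative binomial with pgf (p + 1 - p t)^(-k).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        def nb_negloglike(params, x):
            k, p = params
            return -np.sum(gammaln(k + x) - gammaln(k) - gammaln(x + 1)
                           + x * np.log(p / (1 + p)) - k * np.log(1 + p))

        rng = np.random.default_rng(2)
        # Simulate with k = 3, p = 2; numpy's success probability is 1/(1+p).
        x = rng.negative_binomial(n=3.0, p=1.0 / (1.0 + 2.0), size=500)

        fit = minimize(nb_negloglike, x0=(1.0, 1.0), args=(x,),
                       bounds=[(1e-6, None), (1e-6, None)], method="L-BFGS-B")
        print(fit.x)   # maximum likelihood estimates of (k, p)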

  9. Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.

    2014-08-01

    Straightforward methods for adapting the familiar chi-squared statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn Kα fluorescence spectrum, a poor choice of chi-squared statistic can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi-squared minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
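
    The contrast between a chi-squared fit and a Poisson maximum-likelihood fit of a histogram can be sketched as follows (Python, assuming numpy/scipy; a general-purpose simplex minimizer stands in for the modified Levenberg-Marquardt algorithm described in the paper, and the fluorescence spectrum is reduced to a single Gaussian peak):

        # Chi-squared vs Poisson maximum-likelihood fit of a histogram.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        rng = np.random.default_rng(3)
        edges = np.linspace(5.85, 5.95, 41)                 # keV
        centers = 0.5 * (edges[:-1] + edges[1:])
        counts, _ = np.histogram(rng.normal(5.899, 0.003, 2000), bins=edges)

        def mu(params):
            amp, center, sigma = params
            return amp * np.exp(-0.5 * ((centers - center) / sigma) ** 2)

        def chisq(params):                                  # biased at low counts
            m = mu(params)
            return np.sum((counts - m) ** 2 / np.maximum(counts, 1.0))

        def poisson_nll(params):                            # Poisson maximum likelihood
            m = np.clip(mu(params), 1e-12, None)
            return np.sum(m - counts * np.log(m) + gammaln(counts + 1))

        x0 = (counts.max(), 5.9, 0.004)
        print(minimize(chisq, x0, method="Nelder-Mead").x)
        print(minimize(poisson_nll, x0, method="Nelder-Mead").x)  # less biased width estimate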

  10. A Targeted Maximum Likelihood Estimator for Two-Stage Designs

    PubMed Central

    Rose, Sherri; van der Laan, Mark J.

    2011-01-01

    We consider two-stage sampling designs, including so-called nested case control studies, where one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample, collecting additional data on the subsample. This data structure can be viewed as a missing data structure on the full-data structure collected in the second-stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) in two-stage sampling designs and present simulation studies featuring this estimator. PMID:21556285

  11. Precision of maximum likelihood estimation in adaptive designs.

    PubMed

    Graf, Alexandra Christine; Gutjahr, Georg; Brannath, Werner

    2016-03-15

    There has been increasing interest in trials that allow for design adaptations like sample size reassessment or treatment selection at an interim analysis. Ignoring the adaptive and multiplicity issues in such designs leads to an inflation of the type 1 error rate, and treatment effect estimates based on the maximum likelihood principle become biased. Whereas the methodological issues concerning hypothesis testing are well understood, it is not clear how to deal with parameter estimation in designs where adaptation rules are not fixed in advance so that, in practice, the maximum likelihood estimate (MLE) is used. It is therefore important to understand the behavior of the MLE in such designs. The investigation of bias and mean squared error (MSE) is complicated by the fact that the adaptation rules need not be fully specified in advance and, hence, are usually unknown. To investigate bias and MSE under such circumstances, we search for the sample size reassessment and selection rules that lead to the maximum bias or maximum MSE. Generally, this leads to an overestimation of bias and MSE, which can be reduced by imposing realistic constraints on the rules, for example, a maximum sample size. We consider designs that start with k treatment groups and a common control and where selection of a single treatment and control is performed at the interim analysis with the possibility to reassess each of the sample sizes. We consider the case of unlimited sample size reassessments as well as several realistically restricted sample size reassessment rules. PMID:26459506

  12. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.

  13. Maximum-likelihood registration of range images with missing data.

    PubMed

    Sharp, Gregory C; Lee, Sang W; Wehe, David K

    2008-01-01

    Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329

  14. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    NASA Technical Reports Server (NTRS)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.

  15. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
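
    The compression itself reduces to a generalized symmetric eigenproblem: the signal-to-noise eigenvectors solve S v = lambda N v, and only modes above an eigenvalue cut are kept before the likelihood is evaluated in the reduced basis. A toy sketch with synthetic covariances (Python, assuming numpy/scipy; the actual analysis uses WMAP pixel-pixel signal and noise covariances):

        # Linear compression into the signal-to-noise eigenvector basis (toy covariances).
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(4)
        npix = 200
        L = rng.normal(size=(npix, npix))
        S = L @ L.T / npix                         # toy signal covariance
        N = np.diag(rng.uniform(0.5, 2.0, npix))   # toy (diagonal) noise covariance

        # Generalized eigenproblem S v = lambda N v, i.e. eigenmodes of N^(-1/2) S N^(-1/2).
        evals, evecs = eigh(S, N)
        keep = evals > 0.1                         # signal-to-noise cut (illustrative)
        P = evecs[:, keep].T                       # compression operator

        data = rng.multivariate_normal(np.zeros(npix), S + N)
        compressed_data = P @ data
        compressed_cov = P @ (S + N) @ P.T         # covariance in the reduced basis
        print(P.shape, compressed_data.shape)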

  16. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2

  17. Tradeoffs in regularized maximum-likelihood image restoration

    NASA Astrophysics Data System (ADS)

    Markham, Joanne; Conchello, Jose-Angel

    1997-04-01

    All algorithms for three-dimensional deconvolution of fluorescence microscopical images have as a common goal the estimation of a specimen function (SF) that is consistent with the recorded image and the process for image formation and recording. To check for consistency, the image of the estimated SF predicted by the imaging operator is compared to the recorded image, and the similarity between them is used as a figure of merit (FOM) in the algorithm to improve the specimen function estimate. Commonly used FOMs include squared differences, maximum entropy, and maximum likelihood (ML). The imaging operator is usually characterized by the point-spread function (PSF), the image of a point source of light, or its Fourier transform, the optical transfer function (OTF). Because the OTF is non-zero only over a small region of the spatial-frequency domain, the inversion of the image formation operator is non-unique and the estimated SF is potentially artifactual. Adding a term to the FOM that penalizes some unwanted behavior of the estimated SF effectively ameliorates potential artifacts, but at the same time biases the estimation process. For example, an intensity penalty avoids overly large pixel values but biases the SF to small pixel values. A roughness penalty avoids rapid pixel to pixel variations but biases the SF to be smooth. In this article we assess the effects of the roughness and intensity penalties on maximum likelihood image estimation.

  18. Maximum likelihood analysis of bubble incidence for mixed gas diving.

    PubMed

    Tikuisis, P; Gault, K; Carrod, G

    1990-03-01

    The method of maximum likelihood has been applied to predict the incidence of bubbling in divers for both air and helium diving. Data were obtained from 108 air man-dives and 622 helium man-dives conducted experimentally in a hyperbaric chamber. Divers were monitored for bubbles using Doppler ultrasonics during the period from surfacing until approximately 2 h after surfacing. Bubble grades were recorded according to the K-M code, and the maximum value in the precordial region for each diver was used in the likelihood analysis. Prediction models were based on monoexponential gas kinetics using one and two parallel-compartment configurations. The model parameters were of three types: gas kinetics, gas potency, and compartment gain. When the potency of the gases was not distinguished, the risk criterion used was inherently based on the gas supersaturation ratio, otherwise it was based on the potential bubble volume. The two-compartment model gave a significantly better prediction than the one-compartment model only if the kinetics of nitrogen and helium were distinguished. A further significant improvement with the two-compartment model was obtained when the potency of the two gases was distinguished, thereby making the potential bubble volume criterion a better choice than the gas pressure criterion. The results suggest that when the method of maximum likelihood is applied for the prediction of the incidence of bubbling, more than one compartment should be used and if more than one is used consideration should be given to distinguishing the potencies of the inert gases. PMID:2181767

  19. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.

  20. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems which are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  1. Targeted maximum likelihood based causal inference: Part II.

    PubMed

    van der Laan, Mark J

    2010-01-01

    In this article, we provide a template for the practical implementation of the targeted maximum likelihood estimator for analyzing causal effects of multiple time point interventions, for which the methodology was developed and presented in Part I. In addition, the application of this template is demonstrated in two important estimation problems: estimation of the effect of individualized treatment rules based on marginal structural models for treatment rules, and the effect of a baseline treatment on survival in a randomized clinical trial in which the time till event is subject to right censoring. PMID:21731531

  2. Targeted Maximum Likelihood Based Causal Inference: Part II

    PubMed Central

    van der Laan, Mark J.

    2010-01-01

    In this article, we provide a template for the practical implementation of the targeted maximum likelihood estimator for analyzing causal effects of multiple time point interventions, for which the methodology was developed and presented in Part I. In addition, the application of this template is demonstrated in two important estimation problems: estimation of the effect of individualized treatment rules based on marginal structural models for treatment rules, and the effect of a baseline treatment on survival in a randomized clinical trial in which the time till event is subject to right censoring. PMID:21731531

  3. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
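
    The naive alternative to such an estimator, taking the matrix logarithm of an empirical lag-tau transition matrix, can be sketched as follows (Python, assuming numpy/scipy; the trajectory and lag are invented). The logarithm need not be a valid generator and constraints such as detailed balance are awkward to impose on it, which is part of the motivation for the likelihood-based estimator described above.

        # Naive rate-matrix estimate: count a lag-tau transition matrix, take its matrix log.
        import numpy as np
        from scipy.linalg import logm

        def naive_rate_matrix(states, n_states, tau):
            counts = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                counts[a, b] += 1
            T = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
            return logm(T).real / tau                       # candidate generator (may need projection)

        # Toy trajectory on 3 states observed every tau = 0.1 time units.
        rng = np.random.default_rng(5)
        T_true = np.array([[0.90, 0.08, 0.02],
                           [0.05, 0.90, 0.05],
                           [0.02, 0.08, 0.90]])
        traj = [0]
        for _ in range(5000):
            traj.append(rng.choice(3, p=T_true[traj[-1]]))
        print(naive_rate_matrix(traj, n_states=3, tau=0.1))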

  4. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  5. The Relative Performance of Targeted Maximum Likelihood Estimators

    PubMed Central

    Porter, Kristin E.; Gruber, Susan; van der Laan, Mark J.; Sekhon, Jasjeet S.

    2011-01-01

    There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007) and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes, are especially suitable for dealing with positivity violations because in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges. PMID:21931570

  6. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    NASA Astrophysics Data System (ADS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-10-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
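
    To make the "likelihood-free" ABC idea above concrete, here is a generic rejection-ABC loop; the Poisson simulator, the prior range, and the summary statistics are placeholders, not the atom-maser model or the correlation statistics used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(theta, n=500):
        # placeholder simulator; in the paper this would be a quantum trajectory model
        return rng.poisson(theta, size=n)

    def summary(x):
        # placeholder summary statistics (e.g. counts and their dispersion)
        return np.array([x.mean(), x.var()])

    observed = simulate(3.0)
    s_obs = summary(observed)

    accepted = []
    for _ in range(20000):
        theta = rng.uniform(0.5, 6.0)                          # draw from a flat prior
        if np.linalg.norm(summary(simulate(theta)) - s_obs) < 0.3:
            accepted.append(theta)                             # keep draws whose statistics match the data

    print(f"approximate posterior mean: {np.mean(accepted):.2f}")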

  7. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
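
    A minimal single-qubit sketch of the underlying trick (positivity by construction), not the NMR protocol of the paper: the density matrix is parameterized as T†T normalized by its trace, and a simple Gaussian likelihood of assumed Pauli expectation values is maximized. All measurement numbers are made up.

    import numpy as np
    from scipy.optimize import minimize

    paulis = [np.array([[0, 1], [1, 0]], dtype=complex),       # X
              np.array([[0, -1j], [1j, 0]], dtype=complex),    # Y
              np.array([[1, 0], [0, -1]], dtype=complex)]      # Z
    measured = np.array([0.62, -0.05, 0.71])                   # assumed noisy <X>, <Y>, <Z>

    def rho_from(params):
        # four real parameters -> lower-triangular T -> rho = T^dag T / trace, positive by construction
        t0, t1, t2, t3 = params
        T = np.array([[t0, 0.0], [t2 + 1j * t3, t1]], dtype=complex)
        rho = T.conj().T @ T
        return rho / np.trace(rho).real

    def neg_log_lik(params):
        rho = rho_from(params)
        predicted = np.array([np.trace(rho @ P).real for P in paulis])
        return np.sum((predicted - measured) ** 2)             # Gaussian noise model, up to constants

    res = minimize(neg_log_lik, x0=[1.0, 1.0, 0.1, 0.1], method="Nelder-Mead")
    print(np.round(rho_from(res.x), 3))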

  8. Fermi-surface reconstruction from two-dimensional angular correlation of positron annihilation radiation (2D-ACAR) data using maximum-likelihood fitting of wavelet-like functions

    NASA Astrophysics Data System (ADS)

    Major, G. A.; Fretwell, H. M.; Dugdale, S. B.; Alam, M. A.

    1998-11-01

    A novel method for reconstructing the Fermi surface from experimental two-dimensional angular correlation of positron annihilation radiation (2D-ACAR) projections is proposed. In this algorithm, the 3D electron momentum-density distribution is expanded in terms of a basis of wavelet-like functions. The parameters of the model, the wavelet coefficients, are determined by maximizing the likelihood function corresponding to the experimental data and the projections calculated from the model. In contrast to other expansions, the wavelet expansion requires only a relatively small number of model parameters to represent the relevant parts of the 3D distribution, thus keeping computation times reasonably short. Unlike other reconstruction methods, this algorithm takes full account of the statistical information content of the data and therefore may help to reduce the amount of time needed for data acquisition. An additional advantage of wavelet expansion may be the possibility of retrieving the Fermi surface directly from the wavelet coefficients rather than indirectly using the reconstructed 3D distribution.

  9. Maximum-likelihood approach to strain imaging using ultrasound

    PubMed Central

    Insana, M. F.; Cook, L. T.; Bilgen, M.; Chaturvedi, P.; Zhu, Y.

    2009-01-01

    A maximum-likelihood (ML) strategy for strain estimation is presented as a framework for designing and evaluating bioelasticity imaging systems. Concepts from continuum mechanics, signal analysis, and acoustic scattering are combined to develop a mathematical model of the ultrasonic waveforms used to form strain images. The model includes three-dimensional (3-D) object motion described by affine transformations, Rayleigh scattering from random media, and 3-D system response functions. The likelihood function for these waveforms is derived to express the Fisher information matrix and variance bounds for displacement and strain estimation. The ML estimator is a generalized cross correlator for pre- and post-compression echo waveforms that is realized by waveform warping and filtering prior to cross correlation and peak detection. Experiments involving soft tissuelike media show the ML estimator approaches the Cramér–Rao error bound for small scaling deformations: at 5 MHz and 1.2% compression, the predicted lower bound for displacement errors is 4.4 µm and the measured standard deviation is 5.7 µm. PMID:10738797
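
    Only to make the cross-correlation core of such an estimator concrete, the toy sketch below recovers a displacement from the correlation peak of simulated pre- and post-compression segments; the warping and filtering steps of the full ML estimator, and any realistic RF signal model, are omitted.

    import numpy as np

    rng = np.random.default_rng(9)
    pre = rng.normal(size=512)                    # stand-in for a pre-compression RF segment
    true_shift = 7                                # displacement in samples
    post = np.roll(pre, true_shift) + 0.05 * rng.normal(size=512)

    corr = np.correlate(post, pre, mode="full")
    lag = int(np.argmax(corr)) - (len(pre) - 1)   # correlation peak location gives the displacement
    print("estimated displacement:", lag, "samples")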

  10. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
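
    As a hedged sketch of the hidden-variable idea, the code below assumes the common fitness-model form p_ij = x_i x_j / (1 + x_i x_j) and solves the ML conditions that match expected and observed degrees on a random toy graph; the World Trade Web analysis is not reproduced, and this particular functional form is an assumption of the example.

    import numpy as np
    from scipy.optimize import root

    rng = np.random.default_rng(8)
    n = 30
    A = (rng.random((n, n)) < 0.3).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                       # symmetric adjacency matrix, no self-loops
    k_obs = A.sum(axis=1)                             # observed degrees

    def degree_equations(log_x):
        x = np.exp(log_x)                             # keep the hidden variables positive
        P = np.outer(x, x) / (1.0 + np.outer(x, x))   # assumed connection probabilities
        np.fill_diagonal(P, 0.0)
        return P.sum(axis=1) - k_obs                  # ML condition: expected degree = observed degree

    sol = root(degree_equations, x0=np.zeros(n), method="hybr")
    x_hat = np.exp(sol.x)
    print("converged:", sol.success, "| first hidden variables:", np.round(x_hat[:5], 3))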

  11. A maximum likelihood approach to estimating correlation functions

    SciTech Connect

    Baxter, Eric Jones; Rozo, Eduardo

    2013-12-10

    We define a maximum likelihood (ML for short) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS for short) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.

  12. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    SciTech Connect

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete ''hidden'' space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set not used to train MALCOM nor the predictor. On average, this unsupervised model achieves 92% of performance obtained using the corresponding supervised method.

  13. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    SciTech Connect

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we also perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. Finally, we confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  14. Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes

    NASA Astrophysics Data System (ADS)

    Boyer, E.; Petitdidier, M.; Larzabal, P.

    2004-11-01

    This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a model of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data: this paper fills this gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.

  15. A calibration method of self-referencing interferometry based on maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei; Guo, Guangrao

    2015-05-01

    Self-referencing interferometry has been widely used in wavefront sensing. However, current wavefront measurements include two parts: the real phase information of the wavefront under test and the system error of the self-referencing interferometer. In this paper, a method based on maximum likelihood estimation is presented to calibrate the system error of a self-referencing interferometer. First, at least three phase-difference distributions are obtained from three position measurements of the tested component: one basic position, one rotation, and one lateral translation. Then, the three phase-difference data sets are combined into a maximum likelihood function, and the wavefront under test and the system errors are reconstructed by least-squares estimation with Zernike polynomials. Simulation results show that the proposed method can calibrate a self-referencing interferometer, reduce the effect of system errors on extracting and reconstructing the wavefront under test, and improve the measurement accuracy of the instrument.

  16. Maximum-likelihood estimation of recent shared ancestry (ERSA)

    PubMed Central

    Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W. Scott; Zhang, Yuhua; Tuohy, Therese M.; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L.; Woodward, Scott R.; Jorde, Lynn B.

    2011-01-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package. PMID:21324875

  17. A statistical technique for processing radio interferometer data. [using maximum likelihood algorithm

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. D.

    1975-01-01

    The output of a radio interferometer is the Fourier transform of the object under investigation. Due to the limited coverage of the Fourier plane, the reconstruction of the image of the source is blurred by the beam of the synthesized array. A maximum-likelihood processing technique is described which uses the statistical properties of the received noise-like signals. This technique has been used extensively in the processing of large-aperture seismic arrays. This inversion method results in a synthesized beam that is more uniform, has lower sidelobes, and higher resolution than the normal Fourier transform methods. The maximum-likelihood method algorithm was applied successfully to very long baseline and short baseline interferometric data.

  18. Wavelet domain watermarking using maximum-likelihood detection

    NASA Astrophysics Data System (ADS)

    Ng, Tek M.; Garg, Hari K.

    2004-06-01

    A digital watermark is an imperceptible mark placed on multimedia content for a variety of applications including copyright protection, fingerprinting, broadcast monitoring, etc. Traditionally, watermark detection algorithms are based on the correlation between the watermark and the media the watermark is embedded in. Although simple to use, correlation detection is only optimal when the watermark embedding process follows an additive rule and when the media is drawn from Gaussian distributions. More recent works on watermark detection are based on decision theory. In this paper, a maximum-likelihood (ML) detection scheme based on Bayes's decision theory is proposed for image watermarking in wavelet transform domain. The decision threshold is derived using the Neyman-Pearson criterion to minimize the missed detection probability subject to a given false alarm probability. The detection performance depends on choosing a probability distribution function (PDF) that can accurately model the distribution of the wavelet transform coefficients. The generalized Gaussian PDF is adopted here. Previously, the Gaussian PDF, which is a special case, has been considered for such detection scheme. Using extensive experimentation, the generalized Gaussian PDF is shown to be a better model.
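
    A hedged sketch of a generalized-Gaussian likelihood-ratio detector with a Monte-Carlo (Neyman-Pearson style) threshold, assuming a simple additive embedding y = x + γw; the paper's exact embedding rule, shape parameter, and threshold derivation are not reproduced, and all constants below are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, gamma, alpha, c = 4096, 0.5, 2.0, 1.5          # gamma: strength; alpha, c: GG scale and shape

    def log_lik_ratio(y, w):
        # log f(y - gamma*w) - log f(y) for f(x) proportional to exp(-|x/alpha|^c)
        return np.sum((np.abs(y) / alpha) ** c - (np.abs(y - gamma * w) / alpha) ** c)

    host = stats.gennorm.rvs(c, scale=alpha, size=n, random_state=rng)   # stand-in wavelet coefficients
    wmark = rng.choice([-1.0, 1.0], size=n)

    # Monte-Carlo threshold at roughly 1% false-alarm probability (Neyman-Pearson style)
    null_scores = [log_lik_ratio(host, rng.choice([-1.0, 1.0], size=n)) for _ in range(500)]
    threshold = np.quantile(null_scores, 0.99)

    marked = host + gamma * wmark
    print("watermark detected:", log_lik_ratio(marked, wmark) > threshold)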

  19. Maximum likelihood estimation of shear wave speed in transient elastography.

    PubMed

    Audière, Stéphane; Angelini, Elsa D; Sandrin, Laurent; Charbit, Maurice

    2014-06-01

    Ultrasonic transient elastography (TE) enables assessment, under active mechanical constraints, of the elasticity of the liver, which correlates with hepatic fibrosis stages. This technique is routinely used in clinical practice to assess liver stiffness noninvasively. The Fibroscan system used in this work generates a shear wave via an impulse stress applied on the surface of the skin and records a temporal series of radio-frequency (RF) lines using a single-element ultrasound probe. A shear wave propagation map (SWPM) is generated as a 2-D map of the displacements along depth and time, derived from the correlations of the sequential 1-D RF lines, assuming that the direction of propagation (DOP) of the shear wave coincides with the ultrasound beam axis (UBA). Under the assumption of purely elastic tissue, elasticity is proportional to the shear wave speed. This paper introduces a novel approach to the processing of the SWPM, deriving the maximum likelihood estimate of the shear wave speed by comparing the observed displacements with the estimates provided by the Green's functions. A simple parametric model is used to interface Green's theoretical values with the noisy measures provided by the SWPM, taking into account depth-varying attenuation and time delay. The proposed method was evaluated on numerical simulations using a finite element method simulator and on physical phantoms. Evaluation on this test database reported very high agreement of shear wave speed measures when DOP and UBA coincide. PMID:24835213

  20. Fluorescence resonance energy transfer imaging by maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yupeng; Yuan, Yumin; Holmes, Timothy J.

    2004-06-01

    Fluorescence resonance energy transfer (FRET) is a fluorescence microscope imaging process involving nonradiative energy transfer between two fluorophores (the donor and the acceptor). FRET is used to detect the chemical interactions and, in some cases, measure the distance between molecules. Existing approaches do not always well compensate for bleed-through in excitation, cross-talk in emission detection and electronic noise in image acquisition. We have developed a system to automatically search for maximum-likelihood estimates of the FRET image, donor concentration and acceptor concentration. It also produces other system parameters, such as excitation/emission filter efficiency and FRET conversion factor. The mathematical model is based upon a Poisson process since the CCD camera is a photon-counting device. The main advantage of the approach is that it automatically compensates for bleed-through and cross-talk degradations. Tests are presented with synthetic images and with real data referred to as positive and negative controls, where FRET is known to occur and to not occur, respectively. The test results verify the claimed advantages by showing consistent accuracy in detecting FRET and by showing improved accuracy in calculating FRET efficiency.

  1. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
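
    For the acute-exposure (linear-quadratic) special case only, a Poisson maximum likelihood fit of the yield curve can be written in a few lines; the dose points, cell counts, and aberration counts below are invented for illustration and are not from the paper.

    import numpy as np
    from scipy.optimize import minimize

    dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])           # Gy (invented dose points)
    cells = np.array([2000, 1500, 1000, 800, 500])        # cells scored at each dose
    dicentrics = np.array([15, 40, 120, 210, 290])        # observed aberration counts

    def neg_log_lik(params):
        alpha, beta = params
        mean = cells * (alpha * dose + beta * dose ** 2)  # expected counts under the linear-quadratic yield
        return np.sum(mean - dicentrics * np.log(mean))   # Poisson negative log-likelihood up to a constant

    res = minimize(neg_log_lik, x0=[0.02, 0.05], method="L-BFGS-B",
                   bounds=[(1e-6, None), (1e-6, None)])
    print("alpha, beta =", np.round(res.x, 4))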

  2. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    DOE PAGESBeta

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we also perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. Finally, we confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  3. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1986-03-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  4. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
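
    The following is not the convolution-based estimator of the paper; it is the standard geometric objective that circle maximum likelihood fitting reduces to under isotropic Gaussian noise, minimizing squared radial residuals with a nonlinear least-squares solver on simulated points.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    t = rng.uniform(0, 2 * np.pi, 60)
    x = 10 + 4 * np.cos(t) + rng.normal(0, 0.1, t.size)    # noisy points on a circle
    y = -3 + 4 * np.sin(t) + rng.normal(0, 0.1, t.size)

    def radial_residuals(p):
        a, b, r = p
        return np.hypot(x - a, y - b) - r                  # distance of each point from the fitted circle

    p0 = [x.mean(), y.mean(), np.hypot(x - x.mean(), y - y.mean()).mean()]
    fit = least_squares(radial_residuals, p0)
    print("center and radius:", np.round(fit.x, 3))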

  5. Correcting for Sequencing Error in Maximum Likelihood Phylogeny Inference

    PubMed Central

    Kuhner, Mary K.; McGill, James

    2014-01-01

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. PMID:25378476

  6. Parallel computation of a maximum-likelihood estimator of a physical map.

    PubMed Central

    Bhandarkar, S M; Machaka, S A; Shete, S S; Kota, R N

    2001-01-01

    Reconstructing a physical map of a chromosome from a genomic library presents a central computational problem in genetics. Physical map reconstruction in the presence of errors is a problem of high computational complexity that provides the motivation for parallel computing. Parallelization strategies for a maximum-likelihood estimation-based approach to physical map reconstruction are presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal probe ordering is determined using a stochastic optimization algorithm such as simulated annealing or microcanonical annealing. A two-level parallelization strategy is proposed wherein the gradient descent search is parallelized at the lower level and the stochastic optimization algorithm is simultaneously parallelized at the higher level. Implementation and experimental results on a distributed-memory multiprocessor cluster running the parallel virtual machine (PVM) environment are presented using simulated and real hybridization data. PMID:11238392

  7. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

    This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimator does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant to separation and the estimates always exist. The effect of high leverage points is then investigated on the performance of the maximum estimated likelihood estimator through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  8. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-01

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored. PMID:21797584

  9. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527

  10. The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    Necessary and sufficient conditions are given for a maximum likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations for numerically evaluating the maximum-likelihood estimate is discussed. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step-size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step-size which is bounded below by a number between one and two.
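
    A sketch of the step-size-one special case of such a successive-approximations scheme (the familiar EM-type update) when the component densities are known; the paper's treatment of subsets of proportions and general step sizes is not reproduced.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 700)])   # toy sample
    # known component densities evaluated at each observation
    dens = np.column_stack([stats.norm.pdf(x, 0, 1), stats.norm.pdf(x, 4, 1)])

    pi = np.array([0.5, 0.5])
    for _ in range(200):
        resp = dens * pi                          # unnormalized posterior class probabilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                    # update: average responsibility of each class

    print("estimated proportions:", np.round(pi, 3))   # roughly (0.3, 0.7)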

  11. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  12. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form LL(x) = ln[p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  13. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    SciTech Connect

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
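
    A toy illustration of the general task (not the MXLKID program): a single decay-rate parameter of a simple dynamic model is identified from noisy measurements by minimizing the Gaussian negative log-likelihood. The model, noise level, and data are invented.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(5)
    t = np.linspace(0, 5, 50)
    true_k, sigma = 0.8, 0.05
    y = np.exp(-true_k * t) + rng.normal(0, sigma, t.size)   # noisy measurements of x(t) = exp(-k t)

    def neg_log_lik(k):
        resid = y - np.exp(-k * t)
        return 0.5 * np.sum(resid ** 2) / sigma ** 2         # Gaussian negative log-likelihood up to a constant

    k_hat = minimize_scalar(neg_log_lik, bounds=(0.0, 5.0), method="bounded").x
    print(f"identified decay rate: {k_hat:.3f}")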

  14. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  15. Digital combining-weight estimation for broadband sources using maximum-likelihood estimates

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.; Vilnrotter, V. A.

    1994-01-01

    An algorithm described for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. This provides some improvement in performance, with an increase in computational complexity. However, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.

  16. Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.

    ERIC Educational Resources Information Center

    Harwell, Michael R.; And Others

    1988-01-01

    The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)

  17. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.

    PubMed

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-07-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at It is free and open to all users and there is no login requirement. PMID:27084950

  18. CodonPhyML: Fast Maximum Likelihood Phylogeny Estimation under Codon Substitution Models

    PubMed Central

    Gil, Manuel; Zoller, Stefan; Anisimova, Maria

    2013-01-01

    Markov models of codon substitution naturally incorporate the structure of the genetic code and the selection intensity at the protein level, providing a more realistic representation of protein-coding sequences compared with nucleotide or amino acid models. Thus, for protein-coding genes, phylogenetic inference is expected to be more accurate under codon models. So far, phylogeny reconstruction under codon models has been elusive due to the computational difficulties of dealing with high-dimension matrices. Here, we present a fast maximum likelihood (ML) package for phylogenetic inference, CodonPhyML, offering hundreds of different codon models, the largest variety to date, for phylogeny inference by ML. CodonPhyML is tested on simulated and real data and is shown to offer excellent speed and convergence properties. In addition, CodonPhyML includes the most recent fast methods for estimating phylogenetic branch supports and provides an integral framework for model selection, including amino acid and DNA models. PMID:23436912

  19. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    SciTech Connect

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-06-01

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions-of-interest (ROI's). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome.

  20. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

    SciTech Connect

    Bowman, Kimiko o; Shenton, LR

    2006-01-01

    The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. The question of the existence of solutions and Karl Pearson's study are mentioned, along with the problems of a valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.

  1. C-arm perfusion imaging with a fast penalized maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    Frysch, Robert; Pfeiffer, Tim; Bannasch, Sebastian; Serowy, Steffen; Gugel, Sebastian; Skalej, Martin; Rose, Georg

    2014-03-01

    Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for a successful therapy is to get the diagnosis as fast as possible. Therefore our approach aims at perfusion imaging (PI) with a cone beam C-arm system providing perfusion information directly in the interventional suite. For PI the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors as well as the sparse sampling of the slow rotating C-arm in combination with standard reconstruction methods results in limited soft tissue contrast. We choose a penalized maximum-likelihood reconstruction method to get suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered subset technique with transitioning to a small number of subsets, which adds sharpness to the image with less iterations while also suppressing the noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam geometry. In this preliminary study we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both images.

  2. Maximum likelihood positioning and energy correction for scintillation detectors.

    PubMed

    Lerche, Christoph W; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-21

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defect or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving highest energy resolution, count rate performance and spatial resolution at the same time. PMID:26836394
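
    A toy sketch of the crystal-identification step only (not the full positioning and energy-correction algorithm of the paper): given assumed mean light patterns per crystal, the crystal maximizing the Poisson log-likelihood of the observed photodetector counts is selected, with dead pixels simply excluded, which illustrates why an ML approach is robust to missing data.

    import numpy as np

    rng = np.random.default_rng(10)
    n_crystals, n_pixels = 16, 64
    # assumed mean light pattern of each crystal on the photodetector pixels
    templates = rng.gamma(2.0, 10.0, size=(n_crystals, n_pixels))

    true_crystal = 5
    observed = rng.poisson(templates[true_crystal]).astype(float)
    alive = np.ones(n_pixels, dtype=bool)
    alive[[3, 17]] = False                        # two dead pixels are simply excluded from the likelihood

    # Poisson log-likelihood per crystal, dropping terms that do not depend on the crystal index
    log_lik = (observed[alive] * np.log(templates[:, alive]) - templates[:, alive]).sum(axis=1)
    print("identified crystal:", int(np.argmax(log_lik)))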

  3. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defect or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to readout the scintillation light from the 30× 30 scintillator pixel array with an 8× 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner’s spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner’s overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving highest energy resolution, count rate performance and spatial resolution at the same time.

  4. Maximum-likelihood estimation of photon-number distribution from homodyne statistics

    NASA Astrophysics Data System (ADS)

    Banaszek, Konrad

    1998-06-01

    We present a method for reconstructing the photon-number distribution from the homodyne statistics based on maximization of the likelihood function derived from the exact statistical description of a homodyne experiment. This method incorporates in a natural way the physical constraints on the reconstructed quantities, and the compensation for the nonunit detection efficiency.

  5. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  6. Automated Maximum Likelihood Separation of Signal from Baseline in Noisy Quantal Data

    PubMed Central

    Bruno, William J.; Ullah, Ghanim; Daniel Mak, Don-On; Pearson, John E.

    2013-01-01

    Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system’s behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225

  7. Automated maximum likelihood separation of signal from baseline in noisy quantal data.

    PubMed

    Bruno, William J; Ullah, Ghanim; Mak, Don-On Daniel; Pearson, John E

    2013-07-01

    Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system's behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225

  8. Lesion quantification in oncological positron emission tomography: a maximum likelihood partial volume correction strategy.

    PubMed

    De Bernardi, Elisabetta; Faggiano, Elena; Zito, Felicia; Gerundini, Paolo; Baselli, Giuseppe

    2009-07-01

    A maximum likelihood (ML) partial volume effect correction (PVEC) strategy for the quantification of uptake and volume of oncological lesions in 18F-FDG positron emission tomography is proposed. The algorithm is based on the application of ML reconstruction on volumetric regional basis functions initially defined on a smooth standard clinical image and iteratively updated in terms of their activity and volume. The volume of interest (VOI) containing a previously detected region is segmented by a k-means algorithm in three regions: A central region surrounded by a partial volume region and a spill-out region. All volume outside the VOI (background with all other structures) is handled as a unique basis function and therefore "frozen" in the reconstruction process except for a gain coefficient. The coefficients of the regional basis functions are iteratively estimated with an attenuation-weighted ordered subset expectation maximization (AWOSEM) algorithm in which a 3D, anisotropic, space variant model of point spread function (PSF) is included for resolution recovery. The reconstruction-segmentation process is iterated until convergence; at each iteration, segmentation is performed on the reconstructed image blurred by the system PSF in order to update the partial volume and spill-out regions. The developed PVEC strategy was tested on sphere phantom studies with activity contrasts of 7.5 and 4 and compared to a conventional recovery coefficient method. Improved volume and activity estimates were obtained with low computational costs, thanks to blur recovery and to a better local approximation to ML convergence. PMID:19673203

  9. Maximum likelihood ratio tests for comparing the discriminatory ability of biomarkers subject to limit of detection.

    PubMed

    Vexler, Albert; Liu, Aiyi; Eliseeva, Ekaterina; Schisterman, Enrique F

    2008-09-01

    In this article, we consider comparing the areas under correlated receiver operating characteristic (ROC) curves of diagnostic biomarkers whose measurements are subject to a limit of detection (LOD), a source of measurement error from instruments' sensitivity in epidemiological studies. We propose and examine the likelihood ratio tests with operating characteristics that are easily obtained by classical maximum likelihood methodology. PMID:18047527

  10. Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography.

    PubMed

    Forthmann, P; Köhler, T; Begemann, P G C; Defrise, M

    2007-08-01

    Due to various system non-idealities, the raw data generated by a computed tomography (CT) machine are not readily usable for reconstruction. Although the deterministic nature of corruption effects such as crosstalk and afterglow permits correction by deconvolution, there is a drawback because deconvolution usually amplifies noise. Methods that perform raw data correction combined with noise suppression are commonly termed sinogram restoration methods. The need for sinogram restoration arises, for example, when photon counts are low and non-statistical reconstruction algorithms such as filtered backprojection are used. Many modern CT machines offer a dual focal spot (DFS) mode, which serves the goal of increased radial sampling by alternating the focal spot between two positions on the anode plate during the scan. Although the focal spot mode does not play a role with respect to how the data are affected by the above-mentioned corruption effects, it needs to be taken into account if regularized sinogram restoration is to be applied to the data. This work points out the subtle difference in processing that sinogram restoration for DFS requires, how it is correctly employed within the penalized maximum-likelihood sinogram restoration algorithm and what impact it has on image quality. PMID:17634647

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, indicating that maximum likelihood estimation is an unbiased estimator. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
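
    A minimal EM sketch for maximum likelihood fitting of a two-component Gaussian mixture to a univariate sample; the data are simulated here, not the rubber-price and exchange-rate series analysed in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(-1, 0.5, 400), rng.normal(2, 1.0, 600)])   # simulated series

    pi, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 3.0]), np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: responsibility of each component for each observation
        r = pi * np.column_stack([stats.norm.pdf(x, m, s) for m, s in zip(mu, sd)])
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))

    print("weights:", np.round(pi, 2), "means:", np.round(mu, 2), "std devs:", np.round(sd, 2))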

  12. FITTING STATISTICAL DISTRIBUTIONS TO AIR QUALITY DATA BY THE MAXIMUM LIKELIHOOD METHOD

    EPA Science Inventory

    A computer program has been developed for fitting statistical distributions to air pollution data using maximum likelihood estimation. Appropriate uses of this software are discussed and a grouped data example is presented. The program fits the following continuous distributions:...
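
    The abstract is truncated before the list of distributions, so the choices below are assumptions; the snippet simply illustrates maximum likelihood fitting of a few candidate distributions with SciPy, the kind of computation such a program performs.

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical hourly pollutant concentrations
    conc = np.random.lognormal(mean=1.0, sigma=0.5, size=500)

    fits = {
        "lognormal": stats.lognorm.fit(conc, floc=0),      # ML fit, location fixed at 0
        "gamma":     stats.gamma.fit(conc, floc=0),
        "weibull":   stats.weibull_min.fit(conc, floc=0),
    }
    for name, params in fits.items():
        print(name, params)
    ```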

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, approaches for locating a local maximum of the log-likelihood function, including Newton's method, the method of scoring, and modifications of these procedures, are discussed.

  14. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  15. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  16. An independent sequential maximum likelihood approach to simultaneous track-to-track association and bias removal

    NASA Astrophysics Data System (ADS)

    Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang

    2015-12-01

    In this paper we propose an independent sequential maximum likelihood approach to address joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association situations and estimate a bias for each association. We then calculate the likelihood of each association after bias compensation. Finally, we choose the association with the maximum likelihood over all association situations as the association result, and the corresponding bias estimate is the registration result. Considering the high false alarm rate and interference, we adopt independent sequential association to calculate the likelihood. Simulation results show that the proposed method produces the correct association results while simultaneously estimating the bias precisely for a small number of targets in a multi-sensor fusion system.

  17. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  18. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  19. Maximum-likelihood constrained regularized algorithms: an objective criterion for the determination of regularization parameters

    NASA Astrophysics Data System (ADS)

    Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude

    1999-12-01

    We propose regularized versions of Maximum Likelihood algorithms for Poisson processes with a non-negativity constraint. For such processes, the best-known (non-regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent amplification of the noise during the iterative reconstruction; this can be done either by limiting the iteration number or by introducing a penalty term. In this Communication, we focus our attention on explicit regularization using Tikhonov (Identity and Laplacian operators) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first-order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a `product form'. Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
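
    As a minimal sketch of one member of this family (not the authors' exact derivation), the following applies a Richardson-Lucy iteration with a Tikhonov identity penalty folded into the multiplicative update in a one-step-late fashion; setting lam = 0 recovers plain Richardson-Lucy. The 1-D setup and symmetric PSF are assumptions for brevity.

    ```python
    import numpy as np

    def rl_tikhonov(y, psf, n_iter=100, lam=0.01):
        """Richardson-Lucy with a one-step-late Tikhonov (identity) penalty.

        y   : blurred, photon-noise-limited 1-D signal
        psf : symmetric point-spread function (sums to 1), so H^T acts like H
        lam : regularization strength (lam = 0 gives plain Richardson-Lucy)
        """
        blur = lambda v: np.convolve(v, psf, mode="same")   # forward/backward blur
        x = np.full_like(y, y.mean(), dtype=float)          # flat non-negative start
        sens = blur(np.ones_like(y, dtype=float))
        for _ in range(n_iter):
            ratio = y / np.maximum(blur(x), 1e-12)
            # gradient of 0.5*||x||^2 is x, added one-step-late in the denominator
            x *= blur(ratio) / np.maximum(sens + lam * x, 1e-12)
        return x
    ```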

  20. A maximum-likelihood search for neutrino point sources with the AMANDA-II detector

    NASA Astrophysics Data System (ADS)

    Braun, James R.

    Neutrino astronomy offers a new window to study the high energy universe. The AMANDA-II detector records neutrino-induced muon events in the ice sheet beneath the geographic South Pole, and has accumulated 3.8 years of livetime from 2000 - 2006. After reconstructing muon tracks and applying selection criteria, we arrive at a sample of 6595 events originating from the Northern Sky, predominantly atmospheric neutrinos with primary energy 100 GeV to 8 TeV. We search these events for evidence of astrophysical neutrino point sources using a maximum-likelihood method. No excess above the atmospheric neutrino background is found, and we set upper limits on neutrino fluxes. Finally, a well-known potential dark matter signature is emission of high energy neutrinos from annihilation of WIMPs gravitationally bound to the Sun. We search for high energy neutrinos from the Sun and find no excess. Our limits on WIMP-nucleon cross section set new constraints on MSSM parameter space.
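
    A hedged sketch of the standard unbinned likelihood used in point-source searches of this kind: the number of signal events ns is fitted by maximizing a per-event mixture of signal and background densities. The densities here are placeholders, not the AMANDA event PDFs.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_n_signal(S, B):
        """Maximize prod_i [ (ns/N) S_i + (1 - ns/N) B_i ] over the signal count ns.

        S, B : per-event signal and background probability densities (length N)
        """
        S, B = np.asarray(S, float), np.asarray(B, float)
        N = len(S)

        def neg_log_like(ns):
            return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B + 1e-300))

        res = minimize_scalar(neg_log_like, bounds=(0.0, float(N)), method="bounded")
        return res.x, -res.fun   # best-fit ns and its log-likelihood
    ```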

  1. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  2. Estimation of bias errors in measured airplane responses using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be free of random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  3. Maximum likelihood estimation with poisson (counting) statistics for waste drum inspection

    SciTech Connect

    Goodman, D.

    1997-05-01

    This note provides a preliminary look at the issues involved in waste drum inspection when emission levels are so low that central limit theorem arguments do not apply and counting statistics, rather than the usual Gaussian assumption, must be considered. At very high count rates the assumption of Gaussian statistics is reasonable, and the maximum likelihood arguments that we discuss below for low count rates would lead to the usual approach of least squares fits. Least squares is not the best technique for low counts, and we develop the maximum likelihood estimators for the low count case.
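
    A minimal sketch of the low-count situation the note describes: fitting a constant emission rate by Poisson maximum likelihood rather than Gaussian least squares (which, with data-derived 1/count weights, cannot even handle zero-count bins). The counts below are made up; for this simple constant-rate model the ML estimate coincides with the sample mean.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.special import gammaln

    counts = np.array([0, 1, 0, 2, 1, 0, 0, 3, 1, 0])   # hypothetical low-count bins

    def neg_poisson_loglike(rate):
        # -log L for i.i.d. Poisson counts with common mean `rate`
        return np.sum(rate - counts * np.log(rate + 1e-300) + gammaln(counts + 1))

    ml_rate = minimize_scalar(neg_poisson_loglike, bounds=(1e-6, 20.0), method="bounded").x
    print("Poisson ML rate:", ml_rate)
    print("sample mean    :", counts.mean())
    ```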

  4. A maximum likelihood method for determining the distribution of galaxies in clusters

    NASA Astrophysics Data System (ADS)

    Sarazin, C. L.

    1980-02-01

    A maximum likelihood method is proposed for the analysis of the projected distribution of galaxies in clusters. It has many advantages compared to the standard method; principally, it does not require binning of the galaxy positions, applies to asymmetric clusters, and can simultaneously determine all cluster parameters. A rapid method of solving the maximum likelihood equations is given which also automatically gives error estimates for the parameters. Monte Carlo tests indicate this method applies even for rather sparse clusters. The Godwin-Peach data on the Coma cluster are analyzed; the core sizes derived agree reasonably with those of Bahcall. Some slight evidence of mass segregation is found.

  5. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  6. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
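
    As context for the metric being optimized (not the A* search itself), the sketch below performs brute-force maximum-likelihood soft-decision decoding of a small (7,4) Hamming code over a BPSK/AWGN channel by maximizing correlation with the received sequence; the A* algorithm reaches the same codeword without enumerating the whole codebook. The generator matrix is one standard systematic form, assumed for illustration.

    ```python
    import numpy as np
    from itertools import product

    # Generator matrix of a (7,4) Hamming code in systematic form (assumed for illustration)
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])
    bpsk = 1.0 - 2.0 * codebook          # map bit 0 -> +1, bit 1 -> -1

    def ml_decode(received):
        """Brute-force ML soft-decision decoding: pick the codeword whose BPSK image
        has the highest correlation with the received real-valued sequence."""
        scores = bpsk @ received
        return codebook[np.argmax(scores)]

    # usage: transmit the all-zero codeword through an AWGN channel
    rx = 1.0 - 2.0 * codebook[0] + 0.8 * np.random.randn(7)
    print(ml_decode(rx))
    ```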

  7. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  8. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  9. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  10. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  11. 12-mode OFDM transmission using reduced-complexity maximum likelihood detection.

    PubMed

    Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold

    2015-02-01

    We report 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM transmission over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss. PMID:25680039

  12. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  13. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  14. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning.

    PubMed

    Wang, Hui; Rose, Sherri; van der Laan, Mark J

    2011-07-01

    Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in listeria data. Results are compared to the parametric composite interval mapping approach. PMID:21572586

  15. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning

    PubMed Central

    Wang, Hui; Rose, Sherri; van der Laan, Mark J.

    2010-01-01

    Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in listeria data. Results are compared to the parametric composite interval mapping approach. PMID:21572586

  16. Estimation of Maximum Likelihood of the Unextendable Dead Time Period in a Flow of Physical Events

    NASA Astrophysics Data System (ADS)

    Gortsev, A. M.; Solov'ev, A. A.

    2016-03-01

    A flow of physical events (photons, electrons, etc.) is studied. One of the mathematical models of such flows is the MAP-flow of events. The flow operates under conditions of an unextendable dead time period whose duration is unknown. The dead time period is estimated by the method of maximum likelihood from observations of the arrival instants of events.

  17. THE MAXIMUM LIKELIHOOD APPROACH TO PROBABILISTIC MODELING OF AIR QUALITY DATA

    EPA Science Inventory

    Software using maximum likelihood estimation to fit six probabilistic models is discussed. The software is designed as a tool for the air pollution researcher to determine what assumptions are valid in the statistical analysis of air pollution data for the purpose of standard set...

  18. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  19. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  20. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  1. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  2. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  3. Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2014-11-01

    This study is part of our ongoing research on using ultra-wide band (UWB) technology for navigation at the Ohio State University. Our tests have indicated that UWB two-way time-of-flight ranges under indoor circumstances follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, to adjust the UWB network from the observed ranges, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowledge of the probability density functions. The 30 Hz sampling rate of the UWB sensors makes it possible to estimate these functions between each pair of nodes from samples acquired in static positioning mode. In order to test the MLE hypothesis, a UWB network was established in a multipath-dense environment for test data acquisition. The least squares and maximum likelihood coordinate solutions are determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
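
    A hedged 2-D sketch of the comparison described: least-squares trilateration versus a maximum likelihood fit in which each range error follows a two-component Gaussian mixture (a narrow direct-path component plus a wider, biased multipath component). The anchor layout, ranges and mixture parameters are invented for illustration and are not the calibrated UWB error model.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])  # known node positions
    ranges  = np.array([7.1, 7.3, 7.2, 4.4])                          # measured ranges (made up)

    # assumed mixture error model: mostly accurate, occasionally biased by multipath
    w, s1, bias, s2 = 0.8, 0.1, 0.5, 0.6

    def neg_log_like(p):
        r = np.linalg.norm(anchors - p, axis=1)
        e = ranges - r
        pdf = w * norm.pdf(e, 0.0, s1) + (1 - w) * norm.pdf(e, bias, s2)
        return -np.sum(np.log(pdf + 1e-300))

    def least_squares(p):
        r = np.linalg.norm(anchors - p, axis=1)
        return np.sum((ranges - r) ** 2)

    x0 = anchors.mean(axis=0)
    print("MLE :", minimize(neg_log_like, x0).x)
    print("LS  :", minimize(least_squares, x0).x)
    ```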

  4. A multinomial maximum likelihood program /MUNOML/. [in modeling sensory and decision phenomena

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    A multinomial maximum likelihood program (MUNOML) for signal detection and for behavior models is discussed. It is found to be useful in day to day operation since it provides maximum flexibility with minimum duplicated effort. It has excellent convergence qualities and rarely goes beyond 10 iterations. A library of subroutines is being collected for use with MUNOML, including subroutines for a successive categories model and for signal detectability models.

  5. Maximum likelihood analysis for heteroscedastic one-way random effects ANOVA in interlaboratory studies.

    PubMed

    Vangel, M G; Rukhin, A L

    1999-03-01

    This article presents results for the maximum likelihood analysis of several groups of measurements made on the same quantity. Following Cochran (1937, Journal of the Royal Statistical Society 4(Supple), 102-118; 1954, Biometrics 10, 101-129; 1980, in Proceedings of the 25th Conference on the Design of Experiments in Army Research, Development and Testing, 21-33) and others, this problem is formulated as a one-way unbalanced random-effects ANOVA with unequal within-group variances. A reparametrization of the likelihood leads to simplified computations, easier identification and interpretation of multimodality of the likelihood, and (through a non-informative-prior Bayesian approach) approximate confidence regions for the mean and between-group variance. PMID:11318146

  6. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
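
    The entry mentions computing the log-determinant of a positive definite matrix inside the likelihood. Below is a minimal sketch of the standard numerically stable approach, using a Cholesky factor rather than a direct determinant, shown for a multivariate normal log-likelihood; this illustrates the idea rather than the paper's LU-based variant.

    ```python
    import numpy as np

    def gaussian_loglike(x, mean, cov):
        """Log-likelihood of a multivariate normal, with the log-determinant computed
        from a Cholesky factor instead of np.linalg.det (which under/overflows)."""
        L = np.linalg.cholesky(cov)                    # cov = L L^T, L lower triangular
        logdet = 2.0 * np.sum(np.log(np.diag(L)))      # log|cov| from the factor's diagonal
        z = np.linalg.solve(L, x - mean)               # whitened residual
        n = len(x)
        return -0.5 * (n * np.log(2.0 * np.pi) + logdet + z @ z)
    ```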

  7. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.

  8. Maximum-likelihood methods in cryo-EM. Part II: application to experimental data

    PubMed Central

    Scheres, Sjors H.W.

    2010-01-01

    With the advent of computationally feasible approaches to maximum likelihood image processing for cryo-electron microscopy, these methods have proven particularly useful in the classification of structurally heterogeneous single-particle data. A growing number of experimental studies have applied these algorithms to study macromolecular complexes with a wide range of structural variability, including non-stoichiometric complex formation, large conformational changes and combinations of both. This chapter aims to share the practical experience that has been gained from the application of these novel approaches. Current insights on how to prepare the data and how to perform two- or three-dimensional classifications are discussed together with aspects related to high-performance computing. Thereby, this chapter will hopefully be of practical use for those microscopists wanting to apply maximum likelihood methods in their own investigations. PMID:20888966

  9. Introducing robustness to maximum-likelihood refinement of electron-microscopy data

    SciTech Connect

    Scheres, Sjors H. W. Carazo, José-María

    2009-07-01

    An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy data is presented that is based on fitting finite mixtures of multivariate t-distributions. Compared with the conventionally employed Gaussian mixture model, the t-distribution provides robustness against outliers in the data. The novel algorithm has intrinsic characteristics for providing robustness against atypical observations in the data, which is illustrated using an experimental test set with artificially generated outliers. Tests on experimental data revealed only minor differences in two-dimensional classifications, while three-dimensional classification with the new algorithm gave stronger elongation factor G density in the corresponding class of a structurally heterogeneous ribosome data set than the conventional algorithm for Gaussian mixtures.

  10. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  11. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks.

    PubMed

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  12. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  13. A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations

    PubMed Central

    Anderson, Amy D.; Weir, Bruce S.

    2007-01-01

    A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212

  14. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    PubMed Central

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  15. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  16. Determination of linear displacement by envelope detection with maximum likelihood estimation

    SciTech Connect

    Lang, Kuo-Chen; Teng, Hui-Kang

    2010-09-20

    We demonstrate in this report an envelope detection technique with maximum likelihood estimation in a least squares sense for determining displacement. This technique is achieved by sampling the amplitudes of quadrature signals resulting from a heterodyne interferometer, and a displacement measurement resolution of the order of λ/10⁴ is experimentally verified. A phase unwrapping procedure is also described and experimentally demonstrated, indicating that the unambiguous range of displacement measurement can extend beyond a single wavelength.
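
    A minimal sketch of the final step described, with the maximum likelihood amplitude/phase fit replaced by a plain arctangent for brevity: the phase of the quadrature pair is extracted and unwrapped so the displacement range is not limited to a single wavelength. A double-pass Michelson geometry (2π of phase per λ/2 of displacement) is assumed for the scale factor.

    ```python
    import numpy as np

    def displacement_from_quadrature(I, Q, wavelength):
        """Recover displacement from in-phase/quadrature samples of a heterodyne
        interferometer (double-pass geometry assumed: 2*pi of phase <-> lambda/2)."""
        phase = np.unwrap(np.arctan2(Q, I))     # unwrapping removes the +/- pi ambiguity
        return phase * wavelength / (4.0 * np.pi)
    ```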

  17. PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference

    PubMed Central

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-01-01

    PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at . PMID:15980534

  18. Maximum Likelihood-Based Iterated Divided Difference Filter for Nonlinear Systems from Discrete Noisy Measurements

    PubMed Central

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low state estimation accuracy of nonlinear state estimation due to large initial estimation errors and nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating the functional evaluations. The MLIDDF algorithm involves the use of the iteration measurement update and the current measurement, and the iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence estimate that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF) both using a traditional iteration strategy. Simulation results demonstrate that the accumulated mean-square root error for the MLIDDF algorithm in position is reduced by 63% compared to that of UKF and DDF algorithms, and by 7% compared to that of IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate. PMID:23012525

  19. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) method for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. The TMLE, on the other hand, reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation study shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  20. A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.

    PubMed

    Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming

    2008-01-01

    This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, so the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the interval between R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) technique to realize the correlation computation, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system. PMID:19162641
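
    A hedged sketch of just the FFT-based correlation step: the R-R interval is taken as the lag of the dominant autocorrelation peak within a plausible heart-rate range. The baseline-removal subspace filter, the nonlinear absolute operation and the maximum likelihood interval estimator of the paper are omitted; parameter names are illustrative.

    ```python
    import numpy as np

    def heart_rate_bpm(ecg, fs, bpm_range=(40, 200)):
        """Estimate heart rate from the dominant autocorrelation lag, computed via FFT."""
        x = np.asarray(ecg, float) - np.mean(ecg)
        n = len(x)
        spectrum = np.abs(np.fft.rfft(x, n=2 * n)) ** 2          # zero-pad to avoid wrap-around
        acf = np.fft.irfft(spectrum)[:n]                          # autocorrelation (Wiener-Khinchin)
        lo = int(fs * 60.0 / bpm_range[1])                        # shortest plausible R-R lag
        hi = int(fs * 60.0 / bpm_range[0])                        # longest plausible R-R lag
        lag = lo + np.argmax(acf[lo:hi])
        return 60.0 * fs / lag
    ```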

  1. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    For optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), the sensor's radiometric response in the Reflective Solar Bands (RSB) is assumed to be described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight has a contribution from the noise of the sensor's digital count (including an important contribution from digitization error), but it is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.

  2. Uniform Accuracy of the Maximum Likelihood Estimates for Probabilistic Models of Biological Sequences

    PubMed Central

    Ekisheva, Svetlana

    2010-01-01

    Probabilistic models for biological sequences (DNA and proteins) have many useful applications in bioinformatics. Normally, the values of parameters of these models have to be estimated from empirical data. However, even for the most common estimates, the maximum likelihood (ML) estimates, properties have not been completely explored. Here we assess the uniform accuracy of the ML estimates for models of several types: the independence model, the Markov chain and the hidden Markov model (HMM). Particularly, we derive rates of decay of the maximum estimation error by employing the measure concentration as well as the Gaussian approximation, and compare these rates. PMID:21318122

  3. Maximum-likelihood and other processors for incoherent and coherent matched-field localization.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2012-10-01

    This paper develops a series of maximum-likelihood processors for matched-field source localization given various states of information regarding the frequency and time variation of source amplitude and phase, and compares these with existing approaches to coherent processing with incomplete source knowledge. The comparison involves elucidating each processor's approach to source spectral information within a unifying formulation, which provides a conceptual framework for classifying and comparing processors and explaining their relative performance, as quantified in a numerical study. The maximum-likelihood processors represent optimal estimators given the assumption of Gaussian noise, and are based on analytically maximizing the corresponding likelihood function over explicit unknown source spectral parameters. Cases considered include knowledge of the relative variation in source amplitude over time and/or frequency (e.g., a flat spectrum), and tracking the relative phase variation over time, as well as incoherent and coherent processing. Other approaches considered include the conventional (Bartlett) processor, cross-frequency incoherent processor, pair-wise processor, and coherent normalized processor. Processor performance is quantified as the probability of correct localization from Monte Carlo appraisal over a large number of random realizations of noise, source location, and environmental parameters. Processors are compared as a function of signal-to-noise ratio, number of frequencies, and number of sensors. PMID:23039424

  4. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    SciTech Connect

    Gang, Grace J.; Stayman, J. Webster; Zbijewski, Wojciech; Siewerdsen, Jeffrey H.

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP

  5. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  6. An inconsistency in the standard maximum likelihood estimation of bulk flows

    SciTech Connect

    Nusser, Adi

    2014-11-01

    Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.

  7. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  8. Optimized sparse presentation-based classification method with weighted block and maximum likelihood model

    NASA Astrophysics Data System (ADS)

    He, Jun; Zuo, Tian; Sun, Bo; Wu, Xuewen; Chen, Chao

    2014-06-01

    This paper aims to apply sparse representation based classification (SRC) to face recognition with disguise or illumination variation. Having analyzed the characteristics of general object recognition and the principle of the SRC classifier, the authors focus on evaluating blocks of a probe sample and propose an optimized SRC method based on position-preserving weighted blocks and a maximum likelihood model. The principle and implementation of the proposed method are introduced in the article, along with experiments on the Yale and AR face databases. The experimental results show that the proposed optimized SRC method performs better than existing methods.

  9. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  10. Phantom study of tear film dynamics with optical coherence tomography and maximum-likelihood estimation

    PubMed Central

    Huang, Jinxin; Lee, Kye-sung; Clarkson, Eric; Kupinski, Matthew; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.

    2016-01-01

    In this Letter, we implement a maximum-likelihood estimator to interpret optical coherence tomography (OCT) data for the first time, based on Fourier-domain OCT and a two-interface tear film model. We use the root mean square error as a figure of merit to quantify the system performance of estimating the tear film thickness. With the methodology of task-based assessment, we study the trade-off between system imaging speed (temporal resolution of the dynamics) and the precision of the estimation. Finally, the estimator is validated with a digital tear-film dynamics phantom. PMID:23938923

  11. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  12. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    This report is a user's manual for the FORTRAN IV computer program MMLE3, a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  13. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  14. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  15. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented. PMID:27025665

  16. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  17. IQ-TREE: A Fast and Effective Stochastic Algorithm for Estimating Maximum-Likelihood Phylogenies

    PubMed Central

    Nguyen, Lam-Tung; Schmidt, Heiko A.; von Haeseler, Arndt; Minh, Bui Quang

    2015-01-01

    Large phylogenomics data sets require fast tree inference methods, especially for maximum-likelihood (ML) phylogenies. Fast programs exist, but due to inherent heuristics to find optimal trees, it is not clear whether the best tree is found. Thus, there is a need for additional approaches that employ different search strategies to find ML trees and that are at the same time as fast as currently available ML programs. We show that a combination of hill-climbing approaches and a stochastic perturbation method can be time-efficiently implemented. If we allow the same CPU time as RAxML and PhyML, then our software IQ-TREE found higher likelihoods between 62.2% and 87.1% of the studied alignments, thus efficiently exploring the tree-space. If we use the IQ-TREE stopping rule, RAxML and PhyML are faster in 75.7% and 47.1% of the DNA alignments and 42.2% and 100% of the protein alignments, respectively. However, the range of obtaining higher likelihoods with IQ-TREE improves to 73.3–97.1%. IQ-TREE is freely available at http://www.cibiv.at/software/iqtree. PMID:25371430

  18. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information. PMID:16350919

  19. Maximum likelihood estimation for model $M_{t,\alpha}$ for capture-recapture data with misidentification.

    PubMed

    Vale, R T R; Fewster, R M; Carroll, E L; Patenaude, N J

    2014-12-01

    We investigate model $M_{t,\alpha}$ for abundance estimation in closed-population capture-recapture studies, where animals are identified from natural marks such as DNA profiles or photographs of distinctive individual features. Model $M_{t,\alpha}$ extends the classical model $M_t$ to accommodate errors in identification, by specifying that each sample identification is correct with probability α and false with probability 1 − α. Information about misidentification is gained from a surplus of capture histories with only one entry, which arise from false identifications. We derive an exact closed-form expression for the likelihood for model $M_{t,\alpha}$ and show that it can be computed efficiently, in contrast to previous studies which have held the likelihood to be computationally intractable. Our fast computation enables us to conduct a thorough investigation of the statistical properties of the maximum likelihood estimates. We find that the indirect approach to error estimation places high demands on data richness, and good statistical properties in terms of precision and bias require high capture probabilities or many capture occasions. When these requirements are not met, abundance is estimated with very low precision and negative bias, and at the extreme better properties can be obtained by the naive approach of ignoring misidentification error. We recommend that model $M_{t,\alpha}$ be used with caution and other strategies for handling misidentification error be considered. We illustrate our study with genetic and photographic surveys of the New Zealand population of southern right whale (Eubalaena australis). PMID:24942186

  20. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  1. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson ["Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972)] and L. B. Lucy ["An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974)]. Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here. PMID:20555971
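
    A minimal NumPy sketch of the Richardson-Lucy iteration referenced in this record may help fix ideas; the 1-D setting, variable names, and test signal below are illustrative choices, not details from the paper.

      # Minimal Richardson-Lucy deconvolution sketch (1-D, NumPy only).
      # Illustrative only; psf, observed, and n_iter are assumed names, not from the record.
      import numpy as np

      def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
          """Iteratively restore an object estimate from a blurred, noisy signal."""
          estimate = np.full_like(observed, observed.mean(), dtype=float)
          psf_flipped = psf[::-1]                      # correlation = convolution with flipped PSF
          for _ in range(n_iter):
              blurred = np.convolve(estimate, psf, mode="same")
              ratio = observed / (blurred + eps)       # data divided by model prediction
              estimate *= np.convolve(ratio, psf_flipped, mode="same")
          return estimate

      # Example: blur a two-spike object with a Gaussian PSF and restore it.
      x = np.linspace(-3, 3, 61)
      psf = np.exp(-x**2 / 0.5); psf /= psf.sum()
      truth = np.zeros(200); truth[60] = 1.0; truth[90] = 0.7
      data = np.convolve(truth, psf, mode="same")
      restored = richardson_lucy(data, psf)

    The multiplicative update keeps the estimate non-negative, which is one reason the algorithm suits photon-limited fluorescence data.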

  2. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed. PMID:24182157
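
    The central idea of including non-arrival events can be illustrated with a toy calculation, assuming Poisson photon statistics per laser shot and a detector that reports only "photon" or "no photon" per shot; this is an illustration of the principle, not the estimator used in the cited work.

      # Toy sketch of ML photon-rate estimation that accounts for non-arrival events.
      import numpy as np

      def ml_photon_rate(n_shots, n_detections):
          """MLE of the mean photons per shot, lambda, from binary detect/no-detect data.

          P(detection) = 1 - exp(-lambda), so maximizing the binomial likelihood gives
          lambda_hat = -ln(1 - k/N).  The naive estimate k/N underestimates lambda
          because multi-photon arrivals are recorded as a single detection.
          """
          p_detect = n_detections / n_shots
          return -np.log(1.0 - p_detect)

      # Example: 10000 shots with a true rate of 0.8 photons per shot.
      rng = np.random.default_rng(0)
      counts = rng.poisson(0.8, size=10_000)
      k = np.count_nonzero(counts)                   # shots with at least one photon
      print(ml_photon_rate(10_000, k), k / 10_000)   # ~0.8 vs the biased ~0.55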

  3. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    NASA Astrophysics Data System (ADS)

    Washeleski, Robert L.; Meyer, Edmond J.; King, Lyon B.

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  4. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc. PMID:27176912

  5. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
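
    A minimal sketch of the REML-MVN resampling idea follows: draw half-vectorized G matrices from a multivariate normal centred on the REML estimate, with covariance taken from the inverse information matrix, and propagate each draw to a derived statistic. The G matrix and sampling covariance below are made-up placeholders, not values from the cited study.

      import numpy as np

      def vech(M):
          """Half-vectorisation (lower triangle) of a symmetric matrix."""
          return M[np.tril_indices_from(M)]

      def unvech(v, k):
          M = np.zeros((k, k))
          M[np.tril_indices(k)] = v
          return M + np.tril(M, -1).T

      k = 3
      G_hat = np.array([[1.0, 0.3, 0.1],
                        [0.3, 0.8, 0.2],
                        [0.1, 0.2, 0.5]])          # placeholder REML estimate of G
      V_hat = 0.01 * np.eye(k * (k + 1) // 2)      # placeholder sampling covariance of vech(G)

      rng = np.random.default_rng(1)
      draws = rng.multivariate_normal(vech(G_hat), V_hat, size=5000)
      evolvability = [np.mean(np.linalg.eigvalsh(unvech(d, k))) for d in draws]
      print(np.std(evolvability))                  # approximate sampling SD of the statistic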

  6. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    1991-01-01

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  7. A maximum likelihood approach to estimating articulator positions from speech acoustics

    SciTech Connect

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  8. New method to compute Rcomplete enables maximum likelihood refinement for small datasets

    PubMed Central

    Luebben, Jens; Gruene, Tim

    2015-01-01

    The crystallographic reliability index Rcomplete is based on a method proposed more than two decades ago. Because its calculation is computationally expensive, its use did not spread into the crystallographic community, which instead adopted the cross-validation method known as Rfree. The importance of Rfree has grown beyond that of a pure validation tool. However, its application requires a sufficiently large dataset. In this work we assess the reliability of Rcomplete and we compare it with k-fold cross-validation, bootstrapping, and jackknifing. As opposed to proper cross-validation as realized with Rfree, Rcomplete relies on a method of reducing bias from the structural model. We compare two different methods of reducing model bias and question the widely spread notion that random parameter shifts are required for this purpose. We show that Rcomplete has as little statistical bias as Rfree with the benefit of a much smaller variance. Because the calculation of Rcomplete is based on the entire dataset instead of a small subset, it allows the estimation of maximum likelihood parameters even for small datasets. Rcomplete enables maximum likelihood-based refinement to be extended to virtually all areas of crystallographic structure determination including high-pressure studies, neutron diffraction studies, and datasets from free electron lasers. PMID:26150515

  9. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.

  10. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  11. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    SciTech Connect

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  12. Inertial Sensor Arrays, Maximum Likelihood, and Cramér–Rao Bound

    NASA Astrophysics Data System (ADS)

    Skog, Isaac; Nilsson, John-Olof; Handel, Peter; Nehorai, Arye

    2016-08-01

    A maximum likelihood estimator for fusing the measurements in an inertial sensor array is presented. The maximum likelihood estimator is concentrated and an iterative solution method is presented for the resulting low-dimensional optimization problem. The Cramér-Rao bound for the corresponding measurement fusion problem is derived and used to assess the performance of the proposed method, as well as to analyze how the geometry of the array and sensor errors affect the accuracy of the measurement fusion. The angular velocity information gained from the accelerometers in the array is shown to be proportional to the square of the array dimension and to the square of the angular speed. In our simulations the proposed fusion method attains the Cramér-Rao bound and outperforms the current state-of-the-art method for measurement fusion in accelerometer arrays. Further, in contrast to the state-of-the-art method that requires a 3D array to work, the proposed method also works for 2D arrays. The theoretical findings are compared to results from real-world experiments with an in-house developed array that consists of 192 sensing elements.

  13. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per-point Gaussian maximum likelihood classifier, (2) a per-point sum of normal densities classifier, (3) a per-point linear classifier, (4) a per-point Gaussian maximum likelihood decision tree classifier, and (5) a texture-sensitive per-field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
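
    The per-point Gaussian maximum likelihood rule assigns each pixel to the class whose fitted multivariate normal gives it the highest log-likelihood. A small sketch follows; the training statistics and synthetic feature vectors are illustrative, not data from the cited study.

      import numpy as np

      def fit_class_stats(X, y):
          """Per-class mean vector and covariance matrix from labelled training pixels."""
          stats = {}
          for c in np.unique(y):
              Xc = X[y == c]
              stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
          return stats

      def gaussian_ml_classify(X, stats):
          classes = sorted(stats)
          scores = []
          for c in classes:
              mu, cov = stats[c]
              diff = X - mu
              inv = np.linalg.inv(cov)
              log_det = np.linalg.slogdet(cov)[1]
              # Gaussian log-likelihood up to a constant shared by all classes
              ll = -0.5 * (log_det + np.einsum("ij,jk,ik->i", diff, inv, diff))
              scores.append(ll)
          return np.array(classes)[np.argmax(scores, axis=0)]

      # Example with two synthetic "crops" in a 4-band feature space.
      rng = np.random.default_rng(2)
      X_train = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
      y_train = np.array([0] * 200 + [1] * 200)
      stats = fit_class_stats(X_train, y_train)
      labels = gaussian_ml_classify(rng.normal(2, 1, (10, 4)), stats)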

  14. A Targeted Maximum Likelihood Estimator of a Causal Effect on a Bounded Continuous Outcome

    PubMed Central

    Gruber, Susan; van der Laan, Mark J.

    2010-01-01

    Targeted maximum likelihood estimation of a parameter of a data generating distribution, known to be an element of a semi-parametric model, involves constructing a parametric model through an initial density estimator with parameter ɛ representing an amount of fluctuation of the initial density estimator, where the score of this fluctuation model at ɛ = 0 equals the efficient influence curve/canonical gradient. The latter constraint can be satisfied by many parametric fluctuation models since it represents only a local constraint of its behavior at zero fluctuation. However, it is very important that the fluctuations stay within the semi-parametric model for the observed data distribution, even if the parameter can be defined on fluctuations that fall outside the assumed observed data model. In particular, in the context of sparse data, by which we mean situations where the Fisher information is low, a violation of this property can heavily affect the performance of the estimator. This paper presents a fluctuation approach that guarantees the fluctuated density estimator remains inside the bounds of the data model. We demonstrate this in the context of estimation of a causal effect of a binary treatment on a continuous outcome that is bounded. It results in a targeted maximum likelihood estimator that inherently respects known bounds, and consequently is more robust in sparse data situations than the targeted MLE using a naive fluctuation model. When an estimation procedure incorporates weights, observations having large weights relative to the rest heavily influence the point estimate and inflate the variance. Truncating these weights is a common approach to reducing the variance, but it can also introduce bias into the estimate. We present an alternative targeted maximum likelihood estimation (TMLE) approach that dampens the effect of these heavily weighted observations. As a substitution estimator, TMLE respects the global constraints of the observed data model.
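
    A minimal sketch of the bounded-outcome fluctuation idea described above: rescale the outcome to [0, 1] and fluctuate the initial estimate on the logit scale so the update can never leave the bounds. The data-generating process and the crude initial regressions below are stand-in assumptions for illustration, not the estimator or data of the cited paper.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import expit, logit

      rng = np.random.default_rng(3)
      n = 2000
      W = rng.normal(size=n)
      A = rng.binomial(1, expit(0.4 * W))
      Y = np.clip(2 + A + W + rng.normal(scale=0.5, size=n), 0.0, 6.0)   # bounded outcome
      a, b = 0.0, 6.0
      Y_star = (Y - a) / (b - a)                       # outcome rescaled to [0, 1]

      # Crude initial estimates (stand-ins for real regressions).
      g = expit(0.4 * W)                               # propensity P(A=1|W)
      Q_init = np.clip((2 + A + W - a) / (b - a), 1e-3, 1 - 1e-3)
      H = A / g - (1 - A) / (1 - g)                    # "clever covariate"

      def neg_loglik(eps):
          p = expit(logit(Q_init) + eps * H)           # logistic fluctuation stays in (0, 1)
          return -np.sum(Y_star * np.log(p) + (1 - Y_star) * np.log(1 - p))

      eps_hat = minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded").x

      Q1 = expit(logit(np.clip((3 + W - a) / (b - a), 1e-3, 1 - 1e-3)) + eps_hat / g)
      Q0 = expit(logit(np.clip((2 + W - a) / (b - a), 1e-3, 1 - 1e-3)) - eps_hat / (1 - g))
      ate = (b - a) * np.mean(Q1 - Q0)                 # targeted estimate on the original scale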

  15. Image properties of list mode likelihood reconstruction for a rectangular positron emission mammography with DOI measurements

    SciTech Connect

    Qi, Jinyi; Klein, Gregory J.; Huesman, Ronald H.

    2000-10-01

    A positron emission mammography scanner is under development at our Laboratory. The tomograph has a rectangular geometry consisting of four banks of detector modules. For each detector, the system can measure the depth of interaction information inside the crystal. The rectangular geometry leads to irregular radial and angular sampling and spatially variant sensitivity that are different from conventional PET systems. Therefore, it is of importance to study the image properties of the reconstructions. We adapted the theoretical analysis that we had developed for conventional PET systems to the list mode likelihood reconstruction for this tomograph. The local impulse response and covariance of the reconstruction can be easily computed using FFT. These theoretical results are also used with computer observer models to compute the signal-to-noise ratio for lesion detection. The analysis reveals the spatially variant resolution and noise properties of the list mode likelihood reconstruction. The theoretical predictions are in good agreement with Monte Carlo results.

  16. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  17. Accuracy of Maximum Likelihood Parameter Estimators for Heston Stochastic Volatility SDE

    NASA Astrophysics Data System (ADS)

    Azencott, Robert; Gadhyan, Yutheeka

    2015-04-01

    We study approximate maximum likelihood estimators (MLEs) for the parameters of the widely used Heston stock price and volatility stochastic differential equations (SDEs). We compute explicit closed-form estimators maximizing the discretized log-likelihood of observations recorded at equally spaced times. We compute the asymptotic biases of these parameter estimators and the rate at which these biases vanish asymptotically. We determine asymptotically consistent explicit modifications of these MLEs. For the Heston volatility SDE, we identify a canonical form determined by two canonical parameters, which are explicit functions of the original SDE parameters. We analyze theoretically the asymptotic distribution of the MLEs and of their consistent modifications, and we outline their concrete speeds of convergence by numerical simulations. We clarify the precise dichotomy between asymptotic normality and attraction by stable-like distributions with heavy tails. We illustrate numerical model fitting for Heston SDEs by two concrete examples, one for daily data and one for intraday data, both with moderate parameter values.

  18. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions $p_1, p_2, \ldots, p_m$ in the mixture density $f(x) = \sum_{i=1}^{m} p_i f_i(x)$ is often encountered in agricultural remote sensing problems, in which case the $p_i$ usually represent crop proportions. In these remote sensing applications, the component densities $f_i(x)$ have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal based techniques provides satisfactory results.
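
    A short sketch of ML estimation of the mixing proportions by EM, under the simplifying assumption that the normal component densities are known; the component parameters and true proportions below are illustrative, not those of the cited study.

      import numpy as np
      from scipy.stats import norm

      def em_mixing_proportions(x, means, sds, n_iter=200):
          """Estimate p_1..p_m in f(x) = sum_i p_i * N(x; mu_i, sd_i) by EM."""
          m = len(means)
          p = np.full(m, 1.0 / m)
          dens = np.column_stack([norm.pdf(x, mu, sd) for mu, sd in zip(means, sds)])
          for _ in range(n_iter):
              resp = dens * p                        # E-step: unnormalised responsibilities
              resp /= resp.sum(axis=1, keepdims=True)
              p = resp.mean(axis=0)                  # M-step: proportions = mean responsibilities
          return p

      # Example: two "crop" classes with true proportions 0.7 / 0.3.
      rng = np.random.default_rng(4)
      x = np.concatenate([rng.normal(0, 1, 700), rng.normal(2.5, 1, 300)])
      print(em_mixing_proportions(x, means=[0.0, 2.5], sds=[1.0, 1.0]))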

  19. Fuzzy modeling, maximum likelihood estimation, and Kalman filtering for target tracking in NLOS scenarios

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Yu, Kegen; Wu, Lenan

    2014-12-01

    To mitigate the non-line-of-sight (NLOS) effect, a three-step positioning approach is proposed in this article for target tracking. The possibility of each distance measurement under line-of-sight condition is first obtained by applying the truncated triangular probability-possibility transformation associated with fuzzy modeling. Based on the calculated possibilities, the measurements are utilized to obtain intermediate position estimates using the maximum likelihood estimation (MLE), according to identified measurement condition. These intermediate position estimates are then filtered using a linear Kalman filter (KF) to produce the final target position estimates. The target motion information and statistical characteristics of the MLE results are employed in updating the KF parameters. The KF position prediction is exploited for MLE parameter initialization and distance measurement selection. Simulation results demonstrate that the proposed approach outperforms the existing algorithms in the presence of unknown NLOS propagation conditions and achieves a performance close to that when propagation conditions are perfectly known.

  20. On maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions.

    PubMed

    Hornik, Kurt; Grün, Bettina

    2014-01-01

    Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting the ratio [Formula: see text] of modified Bessel functions and computational methods are required to invert these functions using approximative or iterative algorithms. In this paper we use Amos-type bounds for [Formula: see text] to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use these to propose a new approximation for which the error tends to zero when the inverse of [Formula: see text] is evaluated at values tending to [Formula: see text] (from the left). We show that previously introduced rational bounds for [Formula: see text] which are invertible using quadratic equations cannot be used to improve these bounds. PMID:25309045
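
    The inversion problem discussed in this record is commonly handled by a closed-form approximation followed by a few Newton refinements; the sketch below uses the Banerjee et al. starting value and illustrates the inversion of the Bessel-function ratio, not the new bounds derived in the cited paper.

      import numpy as np
      from scipy.special import iv

      def vmf_kappa_mle(rbar, p, n_newton=5):
          """Approximate MLE of the vMF concentration parameter for p-dimensional data.

          Solves A_p(kappa) = I_{p/2}(kappa) / I_{p/2-1}(kappa) = rbar, where rbar is
          the mean resultant length of the sample.
          """
          kappa = rbar * (p - rbar**2) / (1.0 - rbar**2)         # Banerjee et al. start
          for _ in range(n_newton):
              A = iv(p / 2.0, kappa) / iv(p / 2.0 - 1.0, kappa)  # A_p(kappa)
              A_prime = 1.0 - A**2 - (p - 1.0) / kappa * A
              kappa -= (A - rbar) / A_prime                       # Newton update
          return kappa

      # Example: mean resultant length 0.8 for 3-dimensional directional data.
      print(vmf_kappa_mle(0.8, p=3))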

  1. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters.

    PubMed

    Li, Xinya; Deng, Z Daniel; Sun, Yannan; Martinez, Jayson J; Fu, Tao; McMichael, Geoffrey A; Carlson, Thomas J

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517
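
    Localization from time differences of arrival can be posed as a nonlinear least-squares problem, which is the ML solution under i.i.d. Gaussian TDOA noise; the sketch below illustrates that generic formulation. Hydrophone coordinates, sound speed, and the tag position are illustrative values, not those of the JSATS deployments or the cited solver.

      import numpy as np
      from scipy.optimize import least_squares

      C = 1480.0                                   # sound speed in water, m/s (assumed)
      hydrophones = np.array([[0.0, 0.0, 0.0],
                              [10.0, 0.0, 0.0],
                              [0.0, 10.0, 0.0],
                              [0.0, 0.0, 10.0],
                              [10.0, 10.0, 5.0]])

      def predicted_tdoa(pos):
          """Arrival-time differences relative to the first hydrophone."""
          ranges = np.linalg.norm(hydrophones - pos, axis=1)
          return (ranges[1:] - ranges[0]) / C

      true_pos = np.array([3.0, 4.0, 2.0])
      rng = np.random.default_rng(5)
      measured = predicted_tdoa(true_pos) + rng.normal(scale=1e-5, size=4)

      fit = least_squares(lambda p: predicted_tdoa(p) - measured, x0=np.array([5.0, 5.0, 5.0]))
      print(fit.x)                                 # approximate ML position estimate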

  2. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  3. Determination of instrumentation errors from measured data using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Klein, V.

    1980-01-01

    The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.

  4. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, T.A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
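
    The censored-likelihood building block behind such estimators can be sketched briefly: for a lognormal concentration model, detected values contribute the log-density and non-detects contribute the log of the CDF at the detection limit. This illustrates plain censored-data MLE (the Tobit-style estimator mentioned above), not the AMLE bias adjustment itself; all numbers are synthetic.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def censored_lognormal_mle(values, detection_limit):
          """MLE of (mu, sigma) of log-concentration with left-censoring at a limit."""
          detected = values[values >= detection_limit]
          n_censored = np.sum(values < detection_limit)

          def neg_loglik(theta):
              mu, log_sigma = theta
              sigma = np.exp(log_sigma)
              ll = norm.logpdf(np.log(detected), mu, sigma).sum()
              ll += n_censored * norm.logcdf(np.log(detection_limit), mu, sigma)
              return -ll

          res = minimize(neg_loglik, x0=np.array([0.0, 0.0]))
          return res.x[0], np.exp(res.x[1])

      # Example: simulated concentrations with roughly 30% of values below the limit.
      rng = np.random.default_rng(6)
      conc = np.exp(rng.normal(0.0, 1.0, size=500))
      print(censored_lognormal_mle(conc, detection_limit=np.exp(-0.52)))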

  5. An extended-source spatial acquisition process based on maximum likelihood criterion for planetary optical communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee

    1992-01-01

    This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.

  6. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  7. Maximum likelihood estimation and the multivariate Bernoulli distribution: An application to reliability

    SciTech Connect

    Kvam, P.H.

    1994-08-01

    We investigate systems designed using redundant component configurations. If external events exist in the working environment that cause two or more components in the system to fail within the same demand period, the designed redundancy in the system can be quickly nullified. In the engineering field, such events are called common cause failures (CCFs), and are primary factors in some risk assessments. If CCFs have positive probability, but are not addressed in the analysis, the assessment may contain a gross overestimation of the system reliability. We apply a discrete, multivariate shock model for a parallel system of two or more components, allowing for positive probability that such external events can occur. The methods derived are motivated by attribute data for emergency diesel generators from various US nuclear power plants. Closed form solutions for maximum likelihood estimators exist in many cases; statistical tests and confidence intervals are discussed for the different test environments considered.

  8. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  9. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve with different rates, different selective pressures, or may differ in codon bias. Instead of modeling these differences by more and more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229

  10. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    DOE PAGESBeta

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; et al

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  11. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    SciTech Connect

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  12. Parsimonious estimation of sex-specific map distances by stepwise maximum likelihood regression

    SciTech Connect

    Fann, C.S.J.; Ott, J.

    1995-10-10

    In human genetic maps, differences between female ($x_f$) and male ($x_m$) map distances may be characterized by the ratio $R = x_f/x_m$ or the relative difference $Q = (x_f - x_m)/(x_f + x_m) = (R - 1)/(R + 1)$. For a map of genetic markers spread along a chromosome, $Q(d)$ may be viewed as a graph of $Q$ versus the midpoints, $d$, of the map intervals. To estimate male and female map distances for each interval, a novel method is proposed to evaluate the most parsimonious trend of $Q(d)$ along the chromosome, where $Q(d)$ is expressed as a polynomial in $d$. Stepwise maximum likelihood polynomial regression of $Q$ is described. The procedure has been implemented in a FORTRAN program package, TREND, and is applied to data on chromosome 18. 11 refs., 2 figs., 3 tabs.

  13. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.

  14. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  15. Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach.

    PubMed

    Holmes, T J

    1992-07-01

    Previous research presented by the author and others into maximum-likelihood image restoration for incoherent imagery is extended to consider problems of blind deconvolution in which the impulse response of the system is assumed to be unknown. Potential applications that motivate this study are wide-field and confocal fluorescence microscopy, although applications in astronomy and infrared imaging are foreseen as well. The methodology incorporates the iterative expectation-maximization algorithm. Although the precise impulse response is assumed to be unknown, some prior knowledge about characteristics of the impulse response is used. In preliminary simulation studies that are presented, the circular symmetry and the band-limited nature of the impulse response are used as such. These simulations demonstrate the potential utility and present limitations of these methods. PMID:1634965

  16. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics

    PubMed Central

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-sung; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-Domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. Under the assumption that the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision. PMID:24156045

  17. A maximum likelihood analysis of the CoGeNT public dataset

    NASA Astrophysics Data System (ADS)

    Kelso, Chris

    2016-06-01

    The CoGeNT detector, located in the Soudan Underground Laboratory in Northern Minnesota, consists of a 475 gram (330 gram fiducial mass) p-type point-contact germanium target that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.
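
    A toy illustration of an unbinned maximum likelihood fit and a likelihood-ratio comparison against the no-signal hypothesis. It is not the CoGeNT analysis: the event sample, the exponential "signal" shape, the flat background and the energy window below are all invented for the sketch.

      import numpy as np
      from scipy.optimize import minimize

      E0, E1 = 0.5, 3.0
      rng = np.random.default_rng(2)
      bkg = rng.uniform(E0, E1, size=800)                  # flat "background" events
      sig = E0 + rng.exponential(scale=0.4, size=120)      # exponential "signal" events
      E = np.concatenate([bkg, sig[sig < E1]])

      def nll(params):
          f_sig, tau = params                              # signal fraction, exponential slope
          norm = tau * (np.exp(-E0 / tau) - np.exp(-E1 / tau))
          p_sig = np.exp(-E / tau) / norm                  # signal pdf, truncated to [E0, E1]
          p_bkg = 1.0 / (E1 - E0)                          # flat background pdf
          return -np.sum(np.log(f_sig * p_sig + (1 - f_sig) * p_bkg))

      fit = minimize(nll, x0=[0.1, 0.5], bounds=[(1e-4, 1.0), (0.05, 5.0)])
      # Likelihood-ratio comparison against the (approximately) no-signal hypothesis:
      lam = 2 * (nll([1e-6, 0.5]) - fit.fun)
      print("best fit (f_sig, tau):", np.round(fit.x, 3), "  2*Delta lnL:", round(lam, 1))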

  18. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    NASA Astrophysics Data System (ADS)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods degrades when the true DOAs do not lie exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the large-array condition, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.
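
    A single-source sketch of the deterministic maximum likelihood (DML) cost for a uniform linear array, illustrating the grid-then-local-refinement idea: evaluate the DML cost on a coarse grid, then minimize it continuously near the best grid point, which is where the convexity analysis matters. Array size, spacing, snapshot count and noise level are assumed values, and the multi-source case handled in the paper is not covered.

      import numpy as np
      from scipy.optimize import minimize_scalar

      M, N, d = 12, 200, 0.5                      # sensors, snapshots, spacing (wavelengths)
      true_doa = 17.3                             # degrees, deliberately off any grid point
      rng = np.random.default_rng(3)

      def steering(theta_deg):
          return np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

      s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
      X = np.outer(steering(true_doa), s) + 0.1 * (
          rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
      R = X @ X.conj().T / N                      # sample covariance

      def dml_cost(theta_deg):                    # tr[(I - P_A) R], minimized at the DOA
          a = steering(theta_deg)[:, None]
          P = a @ a.conj().T / M
          return np.real(np.trace((np.eye(M) - P) @ R))

      grid = np.arange(-60.0, 60.0, 1.0)          # 1-degree sampling grid
      coarse = grid[np.argmin([dml_cost(t) for t in grid])]
      fine = minimize_scalar(dml_cost, bounds=(coarse - 1.0, coarse + 1.0), method="bounded")
      print("grid estimate:", coarse, "  refined estimate:", round(fine.x, 3))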

  19. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    PubMed Central

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517
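
    A minimal sketch of the underlying estimation problem: with Gaussian timing errors, maximum likelihood localization from time-difference-of-arrival (TDOA) measurements reduces to nonlinear least squares on the TDOA residuals. The hydrophone layout, sound speed, tag position and noise level below are invented, and this is not the JSATS solver itself.

      import numpy as np
      from scipy.optimize import least_squares

      c = 1480.0                                           # sound speed in water, m/s
      hydro = np.array([[0, 0, 0], [80, 0, 2], [0, 80, 3], [80, 80, 1], [40, 40, 10]], float)
      src = np.array([25.0, 55.0, 4.0])                    # true tag position

      rng = np.random.default_rng(4)
      toa = np.linalg.norm(hydro - src, axis=1) / c + rng.normal(0, 1e-5, len(hydro))
      tdoa = toa[1:] - toa[0]                              # differences w.r.t. hydrophone 0

      def residuals(p):
          r = np.linalg.norm(hydro - p, axis=1) / c
          return (r[1:] - r[0]) - tdoa

      fit = least_squares(residuals, x0=np.array([40.0, 40.0, 5.0]))
      print("estimated position (m):", np.round(fit.x, 2))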

  20. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power law energy spectrum from simulated detector responses, and its statistical properties are investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits on the design envelope.
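
    A sketch of the core estimation step under simplifying assumptions: events drawn directly from a broken power law (no detector response folded in), with the three spectral parameters (two indices and the break energy) recovered by minimizing the negative log-likelihood. The indices, break energy, energy range and sample size are arbitrary.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      g1, g2, Eb, Emin, Emax, n = 2.0, 3.0, 100.0, 10.0, 1.0e4, 5000

      # Draw events from the continuous broken power law by composition sampling.
      I1 = (Eb**(1 - g1) - Emin**(1 - g1)) / (1 - g1)
      I2 = Eb**(g2 - g1) * (Emax**(1 - g2) - Eb**(1 - g2)) / (1 - g2)
      u, r = rng.uniform(size=n), rng.uniform(size=n)
      low = (Emin**(1 - g1) + r * (Eb**(1 - g1) - Emin**(1 - g1))) ** (1 / (1 - g1))
      high = (Eb**(1 - g2) + r * (Emax**(1 - g2) - Eb**(1 - g2))) ** (1 / (1 - g2))
      E = np.where(u < I1 / (I1 + I2), low, high)

      def nll(params):
          a1, a2, eb = params
          i1 = (eb**(1 - a1) - Emin**(1 - a1)) / (1 - a1)
          i2 = eb**(a2 - a1) * (Emax**(1 - a2) - eb**(1 - a2)) / (1 - a2)
          logf = np.where(E < eb, -a1 * np.log(E), (a2 - a1) * np.log(eb) - a2 * np.log(E))
          return -(np.sum(logf) - n * np.log(i1 + i2))

      fit = minimize(nll, x0=[1.5, 3.5, 50.0],
                     bounds=[(1.1, 4.0), (1.1, 4.0), (2 * Emin, Emax / 2)])
      print("estimated (gamma1, gamma2, E_break):", np.round(fit.x, 2))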

  1. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  2. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) has predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to indicate temporal change, even when there is none, in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
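
    For context, the sketch below computes the conventional windowed sample coherence magnitude that the abstract argues against; the CRCD statistic itself is derived in the paper and is not reproduced here. The image pair, window size and noise level are synthetic.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_mean(a, win):
          # uniform_filter handles real arrays; treat real and imaginary parts separately
          return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)

      def sample_coherence(f, g, win=5):
          num = local_mean(f * np.conj(g), win)
          den = np.sqrt(uniform_filter(np.abs(f) ** 2, win) * uniform_filter(np.abs(g) ** 2, win))
          return np.abs(num) / np.maximum(den, 1e-12)

      # Synthetic pair: correlated speckle everywhere except a central patch that
      # changed between the two passes.
      rng = np.random.default_rng(5)
      f = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
      g = f + 0.1 * (rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128)))
      g[48:80, 48:80] = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

      gamma = sample_coherence(f, g)
      print("median coherence, unchanged vs changed area:",
            round(float(np.median(gamma[:32, :32])), 2), "vs",
            round(float(np.median(gamma[48:80, 48:80])), 2))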

  3. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    SciTech Connect

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm, based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
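
    A minimal sketch of the map-making step MADmap solves: the maximum likelihood map satisfies (P^T N^-1 P) m = P^T N^-1 d, solved here with a Jacobi-preconditioned conjugate gradient. For brevity the noise is white (diagonal N), whereas MADmap applies FFT-based products with a correlated-noise covariance; the scan pattern and problem sizes below are arbitrary.

      import numpy as np
      from scipy.sparse import csr_matrix, diags
      from scipy.sparse.linalg import cg, LinearOperator

      n_pix, n_samp = 500, 20000
      rng = np.random.default_rng(6)
      pix = rng.integers(0, n_pix, n_samp)                     # pointing: time sample -> sky pixel
      P = csr_matrix((np.ones(n_samp), (np.arange(n_samp), pix)), shape=(n_samp, n_pix))

      true_map = rng.standard_normal(n_pix)
      sigma = 0.5
      d = P @ true_map + sigma * rng.standard_normal(n_samp)   # time-ordered data

      Ninv = diags(np.full(n_samp, 1.0 / sigma**2))            # white-noise inverse covariance
      A = (P.T @ Ninv @ P).tocsr()                             # normal-equations matrix
      b = P.T @ (Ninv @ d)

      diag = A.diagonal()                                      # Jacobi (hit-count) preconditioner
      M = LinearOperator((n_pix, n_pix), matvec=lambda x: x / diag)
      m_hat, info = cg(A, b, M=M)
      print("CG converged:", info == 0,
            "  map rms error:", round(float(np.std(m_hat - true_map)), 4))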

  4. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    SciTech Connect

    Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.

    2004-05-25

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.

  5. A maximum likelihood approach to jointly estimating seasonal and annual flood frequency distributions

    NASA Astrophysics Data System (ADS)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.

    2012-04-01

    Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at the annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area is in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure the compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of the seasonal and annual probability distributions of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, parameters of the intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to some case studies for which extended hydrological information is available at annual and seasonal scale.

  6. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations used to obtain the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  7. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
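
    A small illustration of why the choice of estimator matters, under the assumption that fracture lengths follow a continuous power law above a cutoff: the closed-form maximum likelihood estimate of the exponent (Clauset-style) versus a least-squares slope fitted to a log-log histogram, which is sensitive to binning. The exponent, cutoff and sample size are invented and this is not the authors' dataset or workflow.

      import numpy as np

      rng = np.random.default_rng(9)
      alpha_true, x_min, n = 2.5, 0.1, 400
      lengths = x_min * (1 - rng.uniform(size=n)) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling

      alpha_ml = 1.0 + n / np.sum(np.log(lengths / x_min))    # closed-form MLE of the exponent
      alpha_se = (alpha_ml - 1.0) / np.sqrt(n)                # asymptotic standard error

      # Common alternative: least-squares slope of a log-log histogram (binning-sensitive).
      edges = np.logspace(np.log10(x_min), np.log10(lengths.max()), 15)
      hist, _ = np.histogram(lengths, bins=edges)
      centers = np.sqrt(edges[:-1] * edges[1:])
      keep = hist > 0
      slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(hist[keep]), 1)
      print(f"ML: alpha = {alpha_ml:.2f} +/- {alpha_se:.2f};  log-log LS slope = {slope:.2f}")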

  8. Integrating functional genomics data using maximum likelihood based simultaneous component analysis

    PubMed Central

    van den Berg, Robert A; Van Mechelen, Iven; Wilderjans, Tom F; Van Deun, Katrijn; Kiers, Henk AL; Smilde, Age K

    2009-01-01

    Background: In contemporary biology, complex biological processes are increasingly studied by collecting and analyzing measurements of the same entities obtained with different analytical platforms. Such data comprise a number of data blocks that are coupled via a common mode. The goal of collecting this type of data is to discover biological mechanisms that underlie the behavior of the variables in the different data blocks. The simultaneous component analysis (SCA) family of data analysis methods is suited for this task. However, an SCA may be hampered by the data blocks being subjected to different amounts of measurement error, or noise. To unveil the true mechanisms underlying the data, it could be fruitful to take noise heterogeneity into consideration in the data analysis. Maximum likelihood based SCA (MxLSCA-P) was developed for this purpose. In a previous simulation study it outperformed normal SCA-P. That study, however, did not mimic typical functional genomics data sets in many respects, such as data blocks coupled via the experimental mode, more variables than experimental units, and medium to high correlations between variables. Here, we present a new simulation study in which the usefulness of MxLSCA-P compared to ordinary SCA-P is evaluated within a typical functional genomics setting. Subsequently, the performance of the two methods is evaluated by analysis of a real life Escherichia coli metabolomics data set. Results: In the simulation study, MxLSCA-P outperforms SCA-P in terms of recovery of the true underlying scores of the common mode and of the true values underlying the data entries. MxLSCA-P performed especially better when the simulated data blocks were subject to different noise levels. In the analysis of an E. coli metabolomics data set, MxLSCA-P provided a slightly better and more consistent interpretation. Conclusion: MxLSCA-P is a promising addition to the SCA family. The analysis of coupled functional genomics

  9. Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images.

    PubMed

    Aysal, Tuncer C; Barner, Kenneth E

    2007-05-01

    Speckle is a multiplicative noise that degrades ultrasound images. Recent advancements in ultrasound instrumentation and portable ultrasound devices heighten the need for more robust despeckling techniques, for both routine clinical practice and teleconsultation. Methods previously proposed for speckle reduction suffer from two major limitations: 1) noise attenuation is not sufficient, especially in the smooth and background areas; 2) existing methods do not sufficiently preserve or enhance edges; they only inhibit smoothing near edges. In this paper, we propose a novel technique that is capable of reducing the speckle more effectively than previous methods while jointly enhancing the edge information, rather than just inhibiting smoothing. The proposed method utilizes the Rayleigh distribution to model the speckle and adopts the robust maximum-likelihood estimation approach. The resulting estimator is statistically analyzed through first and second moment derivations. A tuning parameter that naturally evolves in the estimation equation is analyzed, and an adaptive method utilizing the instantaneous coefficient of variation is proposed to adjust this parameter. To further tailor performance, a weighted version of the proposed estimator is introduced to exploit the varying statistics of input samples. Finally, the proposed method is evaluated and compared to well-accepted methods through simulations utilizing synthetic and real ultrasound data. PMID:17518065
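
    A building-block sketch only, not the paper's robust weighted estimator: for Rayleigh-distributed samples the maximum likelihood scale estimate is sigma^2 = mean(x^2)/2, so a naive local-ML despeckling pass replaces each pixel with the Rayleigh mean sigma*sqrt(pi/2) estimated over a sliding window. The test image, multiplicative speckle model and window size are invented.

      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(8)
      clean = np.tile(np.linspace(1.0, 3.0, 128), (128, 1))         # smooth test "tissue"
      # Multiplicative Rayleigh speckle with unit mean (scale = sqrt(2/pi)).
      speckled = clean * rng.rayleigh(scale=np.sqrt(2 / np.pi), size=clean.shape)

      sigma2 = uniform_filter(speckled ** 2, size=7) / 2.0          # local ML Rayleigh scale
      despeckled = np.sqrt(sigma2 * np.pi / 2.0)                    # local Rayleigh mean

      print("noisy std / filtered std:",
            round(float(speckled.std()), 3), "/", round(float(despeckled.std()), 3))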

  10. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods.

    PubMed

    Benevides, Leandro de Jesus; Carvalho, Daniel Santana de; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-07-14

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study of apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results of the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27419397