Sample records for maximum likelihood estimation (MLE)

  1. F-8C adaptive flight control extensions [for maximum likelihood estimation]

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  2. The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm [for affine transformation of crop inventory data]

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  3. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  4. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE, whereas the present paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure for obtaining a linear pseudo-model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus, and the linear pseudo-model of Edmond Malinvaud [4] is explained in a very different way. David Pollard et al. used empirical process techniques in 2006 to study the asymptotics of the LSE (least-squares estimator) for the fitting of nonlinear regression functions. Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".

  5. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates $\hat{\theta}_s$ (obtained by maximum likelihood estimation [MLE]) as their true values $\theta_s$; thus, deviation of the estimated $\hat{\theta}_s$ from the true values may yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of $\hat{\theta}_s$, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  6. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to fail to converge, so they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method is higher than under Firth's approach for small sample sizes, while for larger sample sizes the probability decreased and was similar between the two methods. Meanwhile, Firth's estimators had smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes the RMSEs did not differ much. This means that Firth's estimators outperformed the MLE estimators.

  7. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the Nakagami parameter estimate used to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
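
    The estimators compared above are straightforward to prototype. The sketch below, under the standard observation that the squared Nakagami envelope is gamma distributed with shape m, implements the moment-based estimator and an exact ML solve by Newton iteration; the paper's first/second-order and Greenwood approximations would replace the Newton step, and the function names are ours.

      import numpy as np
      from scipy.special import digamma, polygamma

      def nakagami_mbe(r):
          # Moment-based (inverse normalized variance) estimate of the Nakagami m parameter.
          x = np.asarray(r, dtype=float) ** 2          # intensity samples
          return x.mean() ** 2 / x.var()

      def nakagami_mle(r, tol=1e-10, max_iter=50):
          # ML estimate of m: r^2 ~ Gamma(shape=m), so solve
          # log(m) - digamma(m) = log(mean(x)) - mean(log(x)) by Newton's method.
          x = np.asarray(r, dtype=float) ** 2
          s = np.log(x.mean()) - np.log(x).mean()      # > 0 by Jensen's inequality
          m = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)  # rough start
          for _ in range(max_iter):
              step = (np.log(m) - digamma(m) - s) / (1.0 / m - polygamma(1, m))
              m -= step
              if abs(step) < tol:
                  break
          return m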

  8. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE introduces exactly the right amount of randomness into the test statistic and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
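
    As a concrete illustration of the procedure, the sketch below applies the bootstrap-MLE modification to a normal working model: the MLE is refitted on a bootstrap resample, expected bin probabilities come from that refitted model, and the Pearson statistic is formed from the original counts. The normal model, the equal-count binning, and the K-1 degrees of freedom are demo assumptions on our part, not prescriptions from the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def bootstrap_pearson(data, edges, df=None):
          # Pearson X^2 with expected counts from the MLE of a bootstrap resample.
          n = len(data)
          boot = rng.choice(data, size=n, replace=True)
          mu, sigma = boot.mean(), boot.std()            # normal MLE on the bootstrap sample
          cdf = stats.norm.cdf(edges, mu, sigma)
          cdf[0], cdf[-1] = 0.0, 1.0                     # outer cells cover the tails
          p = np.diff(cdf)                               # expected bin probabilities
          obs = np.histogram(data, bins=edges)[0]        # observed counts, original data
          x2 = ((obs - n * p) ** 2 / (n * p)).sum()
          df = len(p) - 1 if df is None else df          # assumed df for this demo
          return x2, stats.chi2.sf(x2, df)

      data = rng.normal(2.0, 1.5, size=200)
      edges = np.quantile(data, np.linspace(0, 1, 9))    # 8 roughly equal-count bins
      edges[0] -= 1e-9; edges[-1] += 1e-9                # make the edges cover all data
      print(bootstrap_pearson(data, edges))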

  9. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  10. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  11. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation.
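
    For reference, the conventional estimator that MLE-BD is benchmarked against fits in a few lines: the mutant-count probabilities follow the Ma-Sandri-Sarkar recursion for the Luria-Delbruck distribution, and the expected number of mutations m is found by a bounded one-dimensional search. This is a sketch of that baseline, not of MLE-BD itself, and the example counts are hypothetical.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def ld_pmf(m, n_max):
          # Luria-Delbruck pmf p_0..p_{n_max} via the Ma-Sandri-Sarkar recursion:
          # p_0 = exp(-m);  p_n = (m/n) * sum_{i<n} p_i / (n - i + 1).
          p = np.zeros(n_max + 1)
          p[0] = np.exp(-m)
          for n in range(1, n_max + 1):
              i = np.arange(n)
              p[n] = (m / n) * np.sum(p[i] / (n - i + 1))
          return p

      def mle_mutations(counts):
          # Maximize the log-likelihood of the observed mutant counts over m.
          counts = np.asarray(counts)
          def nll(m):
              p = ld_pmf(m, counts.max())
              return -np.sum(np.log(p[counts] + 1e-300))
          return minimize_scalar(nll, bounds=(1e-6, 50.0), method="bounded").x

      print(mle_mutations([0, 1, 0, 3, 12, 2, 0, 1, 5, 0]))   # hypothetical counts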

  12. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
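
    The initialization sensitivity that DQAEM is designed to moderate is easy to reproduce with plain EM. The sketch below fits a two-component 1-D Gaussian mixture: a well-separated start recovers the components, while a poorly chosen start may settle into a local optimum. This is the conventional EM baseline only; the quantum-annealing extension is in the paper.

      import numpy as np

      def em_gmm_1d(x, k, mu0, n_iter=200):
          # Plain EM for a 1-D Gaussian mixture; the answer depends on mu0.
          n = len(x)
          mu = np.asarray(mu0, dtype=float)
          var = np.full(k, x.var())
          w = np.full(k, 1.0 / k)
          for _ in range(n_iter):
              # E-step: responsibilities under the current parameters
              dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              r = dens / dens.sum(axis=1, keepdims=True)
              # M-step: reweighted mixing proportions, means, and variances
              nk = r.sum(axis=0)
              w, mu = nk / n, (r * x[:, None]).sum(axis=0) / nk
              var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
          return w, mu, var

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
      print(em_gmm_1d(x, 2, mu0=[-1.0, 1.0])[1])   # near (-3, 3)
      print(em_gmm_1d(x, 2, mu0=[2.9, 3.1])[1])    # may stall near one mode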

  13. An 'unconditional-like' structure for the conditional estimator of odds ratio from 2 x 2 tables.

    PubMed

    Hanley, James A; Miettinen, Olli S

    2006-02-01

    In the estimation of the odds ratio (OR), the conditional maximum-likelihood estimate (cMLE) is preferred to the more readily computed unconditional one (uMLE). However, the exact cMLE does not have a closed form to help divine it from the uMLE or to understand in what circumstances the difference between the two is appreciable. Here, the cMLE is shown to have the same 'ratio of cross-products' structure as its unconditional counterpart, but with two of the cell frequencies augmented, so as to shrink the unconditional estimator towards unity. The augmentation involves a factor, similar to the finite population correction, derived from the minimum of the marginal totals.

  14. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
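
    The uncorrected censored-data MLE that the AMLE adjusts can be sketched as a Tobit-type fit: detected observations contribute the log-density, and below-detection-limit observations contribute the log-CDF at the limit, here under a log-normal model. The bias adjustment itself is in the paper; the data and variable names below are illustrative.

      import numpy as np
      from scipy import stats, optimize

      def censored_lognormal_mle(conc, dl, censored):
          # MLE for log-normal data with left-censoring at detection limit dl.
          y = np.log(np.where(censored, dl, conc))
          def nll(theta):
              mu, log_sigma = theta
              sigma = np.exp(log_sigma)                     # keep sigma positive
              ll = stats.norm.logpdf(y[~censored], mu, sigma).sum()
              ll += stats.norm.logcdf(y[censored], mu, sigma).sum()
              return -ll
          res = optimize.minimize(nll, x0=[y.mean(), np.log(y.std() + 1e-6)],
                                  method="Nelder-Mead")
          return res.x[0], np.exp(res.x[1])                 # mu-hat, sigma-hat

      conc = np.array([0.10, 0.25, 0.05, 0.40, 0.05, 0.80])   # mg/L, hypothetical
      censored = np.array([False, False, True, False, True, False])
      print(censored_lognormal_mle(conc, dl=0.05, censored=censored))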

  15. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed for different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force of mortality (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE), but they do not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation, though some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation; an extension to double decrement is also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.

  16. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
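
    The contrast is easy to reproduce numerically. The sketch below implements the lag-one Kasai autocorrelation estimator and, for the additive white Gaussian noise model used in the paper, the ML frequency estimate of a single tone, which reduces to the periodogram maximizer (approximated here by a zero-padded FFT). The sampling rate, tone frequency, and noise level are arbitrary demo values.

      import numpy as np

      def kasai_freq(z, fs):
          # Kasai estimator: angle of the lag-1 autocorrelation of the complex signal.
          ac = np.sum(z[1:] * np.conj(z[:-1]))
          return np.angle(ac) * fs / (2 * np.pi)

      def mle_freq(z, fs, n_grid=2 ** 16):
          # For one tone in white Gaussian noise, the ML estimate is the
          # periodogram peak; zero-padding the FFT refines the frequency grid.
          spec = np.abs(np.fft.fft(z, n=n_grid)) ** 2
          return np.fft.fftfreq(n_grid, d=1.0 / fs)[np.argmax(spec)]

      fs, n = 100e3, 64
      t = np.arange(n) / fs
      rng = np.random.default_rng(3)
      z = np.exp(2j * np.pi * 12.5e3 * t) \
          + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
      print(kasai_freq(z, fs), mle_freq(z, fs))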

  17. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of censored survival data from cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to be exponentially distributed. Taking a gamma distribution as the prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator $\hat{\lambda}_{BL}$ by using the Linex approximation. After obtaining $\hat{\lambda}_{BL}$, the estimators of the hazard function $\hat{h}_{BL}$ and survival function $\hat{S}_{BL}$ can be found. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation to find the better method for this observation by identifying the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than the MLE.
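
    Because the gamma posterior has a closed-form moment generating function, the Linex point estimate needs no numerical integration. The sketch below, assuming an exponential survival model with right censoring, a Gamma(alpha0, beta0) prior in the rate parameterization, and Linex parameter a, contrasts the MLE with the Linex-Bayes estimate $\hat{\lambda}_{BL} = -(1/a)\ln E[e^{-a\lambda}] = (\alpha/a)\ln(1 + a/\beta)$; the data are hypothetical.

      import numpy as np

      def lambda_mle(times, events):
          # Exponential-model MLE with right censoring: deaths / total exposure.
          return np.sum(events) / np.sum(times)

      def lambda_bayes_linex(times, events, alpha0, beta0, a=1.0):
          # Posterior is Gamma(alpha0 + d, beta0 + T) in the rate parameterization;
          # the Linex-Bayes estimate is (alpha/a) * log(1 + a/beta), in closed form.
          d, t = np.sum(events), np.sum(times)
          alpha, beta = alpha0 + d, beta0 + t
          return (alpha / a) * np.log1p(a / beta)

      times = np.array([5.0, 8.0, 12.0, 3.0, 9.0])   # follow-up times (hypothetical)
      events = np.array([1, 0, 1, 1, 0])             # 1 = event observed, 0 = censored
      print(lambda_mle(times, events), lambda_bayes_linex(times, events, 2.0, 10.0))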

  18. Improving z-tracking accuracy in the two-photon single-particle tracking microscope.

    PubMed

    Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  19. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  20. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  1. SLDAssay: A software package and web tool for analyzing limiting dilution assays.

    PubMed

    Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G

    2017-11-01

    Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness-of-fit p-values, and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate that the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that the exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed.
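
    The core computation in such packages is compact: under the single-hit Poisson model, a well seeded with u cells is positive with probability 1 - exp(-c*u), and c is found by maximizing the resulting binomial likelihood. The sketch below shows that basic MLE only; SLDAssay's bias-corrected estimate, exact intervals, and goodness-of-fit tests are not reproduced, and the assay numbers are hypothetical.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def sld_mle(cells, positive, total):
          # Single-hit Poisson MLE of target-cell concentration c (per input cell).
          cells, pos, tot = (np.asarray(a, dtype=float) for a in (cells, positive, total))
          def nll(log_c):
              p = 1.0 - np.exp(-np.exp(log_c) * cells)     # P(well positive)
              p = np.clip(p, 1e-12, 1.0 - 1e-12)
              return -np.sum(pos * np.log(p) + (tot - pos) * np.log1p(-p))
          res = minimize_scalar(nll, bounds=(-25.0, 5.0), method="bounded")
          return np.exp(res.x)

      # input cells per well and positive/total wells at three dilutions
      print(sld_mle(cells=[2.5e6, 5e5, 1e5], positive=[6, 3, 1], total=[6, 6, 6]))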

  2. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to nonlinear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting nonlinear models using least squares (simple searches obtain roughly 10,000 references; this does not include those who use it but do not know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence, while downward gradient methods have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive nonlinear least squares fits to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to nonlinear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
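
    The paper's idea can be condensed as follows: define per-bin residuals whose squares sum to the Poisson deviance, then hand them to any Levenberg-Marquardt least-squares engine, which thereby minimizes the Poisson MLE measure rather than the ordinary sum of squares. The sketch below does this with scipy's L-M driver for a single-exponential decay with background; the model and starting values are illustrative, not the paper's.

      import numpy as np
      from scipy.optimize import least_squares

      def poisson_dev_residuals(params, t, counts, model):
          # Signed square roots of the per-bin Poisson deviance: their sum of
          # squares is 2 * sum(m - d + d*log(d/m)), the Poisson MLE objective.
          m = np.maximum(model(params, t), 1e-12)
          safe = np.where(counts > 0, counts, 1.0)
          log_term = np.where(counts > 0, counts * np.log(safe / m), 0.0)
          dev = 2.0 * (m - counts + log_term)
          return np.sign(counts - m) * np.sqrt(np.maximum(dev, 0.0))

      def decay(params, t):
          amp, tau, bg = params                     # single-exponential + background
          return amp * np.exp(-t / tau) + bg

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 256)
      counts = rng.poisson(decay((200.0, 2.5, 3.0), t))
      fit = least_squares(poisson_dev_residuals, x0=[100.0, 1.0, 1.0],
                          args=(t, counts, decay), method="lm")
      print(fit.x)                                  # ML estimates of amp, tau, bg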

  3. GLOBAL RATES OF CONVERGENCE OF THE MLES OF LOG-CONCAVE AND s-CONCAVE DENSITIES

    PubMed Central

    Doss, Charles R.; Wellner, Jon A.

    2017-01-01

    We establish global rates of convergence for the Maximum Likelihood Estimators (MLEs) of log-concave and s-concave densities on ℝ. The main finding is that the rate of convergence of the MLE in the Hellinger metric is no worse than $n^{-2/5}$ when −1 < s < ∞, where s = 0 corresponds to the log-concave case. We also show that the MLE does not exist for the classes of s-concave densities with s < −1. PMID:28966409

  4. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life in financial, environmental, and security terms. Annual maximum river flow data for Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that estimates parameters through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, which yields better prediction of maximum river flow in Sabah.
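
    A minimal version of the sampler is sketched below: random-walk Metropolis-Hastings on (location, log-scale, shape) with flat priors, so the posterior is proportional to the GEV likelihood. Note that scipy's genextreme uses c = -xi relative to the climatological shape convention. The step size, chain length, and synthetic data are demo choices, not the paper's settings.

      import numpy as np
      from scipy.stats import genextreme

      def mh_gev(x, n_iter=20000, step=0.05, seed=0):
          rng = np.random.default_rng(seed)
          theta = np.array([x.mean(), np.log(x.std()), 0.1])    # mu, log sigma, xi
          def logpost(th):
              mu, log_sig, xi = th                              # flat priors
              return genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sig)).sum()
          lp = logpost(theta)
          chain = np.empty((n_iter, 3))
          for i in range(n_iter):
              prop = theta + step * rng.normal(size=3)          # random-walk proposal
              lp_prop = logpost(prop)
              if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject
                  theta, lp = prop, lp_prop
              chain[i] = theta
          return chain

      x = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=60, random_state=42)
      print(mh_gev(x)[5000:].mean(axis=0))   # posterior means after burn-in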

  5. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
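
    The fusion rule at the heart of MLE models of cue integration is compact enough to state directly: with Gaussian internal representations, the ML combination weights each cue by its inverse variance, and the fused variance is below either unimodal variance. The sketch below is this generic rule only, not the early-MLE model's categorization stage; the numbers are arbitrary.

      def mle_fuse(s_a, var_a, s_v, var_v):
          # Inverse-variance weighting of auditory and visual estimates.
          w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
          fused = w_a * s_a + (1.0 - w_a) * s_v
          fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)   # below min(var_a, var_v)
          return fused, fused_var

      print(mle_fuse(s_a=0.2, var_a=1.0, s_v=0.8, var_v=0.25))  # vision dominates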

  6. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criterion technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9, and that the wind direction errors are unacceptably large compared to those obtained for the SASS under similar assumptions.

  7. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  8. Markov Chain Monte Carlo: an introduction for epidemiologists

    PubMed Central

    Hamra, Ghassan; MacLehose, Richard; Richardson, David

    2013-01-01

    Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit. PMID:23569196

  9. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve with maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations.

  10. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  11. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus a more precise estimate, by using the related information from regional gage stations. The Bayesian MCMC method may therefore be more favorable for uncertainty analysis and risk management.

  12. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived from and operate on the sampled data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampled data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimum of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy, and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.

  13. Tools of Robustness for Item Response Theory.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    This paper briefly demonstrates a few of the possibilities of a systematic application of robustness theory, concentrating on the estimation of ability when the true item response model does and does not fit the data. The definition of the maximum likelihood estimator (MLE) of ability is briefly reviewed. After introducing the notion of…

  14. The Asymptotic Distribution of Ability Estimates: Beyond Dichotomous Items and Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2015-01-01

    The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…

  15. Analysis of an all-digital maximum likelihood carrier phase and clock timing synchronizer for eight phase-shift keying modulation

    NASA Astrophysics Data System (ADS)

    Degaudenzi, Riccardo; Vanghi, Vieri

    1994-02-01

    An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder, the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived, and the numerical results are checked by means of computer simulations. An approximate expression for the steady-state carrier phase and clock timing mean square error is derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramér-Rao bound is also discussed. The mean acquisition time for the digital synchronizer is also computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.

  16. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
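
    The estimator itself reduces to a weighted straight-line fit. The sketch below, assuming independent Gaussian noise of known variance on the raw signal, fits ln(S r^2) = b0 - 2*alpha*r with inverse-variance weights (the delta method gives var(ln S) roughly var(S)/S^2), then reads off the extinction coefficient and zero-range intercept; the scene parameters are synthetic.

      import numpy as np

      def slope_method(r, signal, noise_var):
          # Inverse-variance weighted LS fit of the range-corrected log signal.
          y = np.log(signal * r ** 2)
          w = signal ** 2 / noise_var                  # 1 / var(ln S), delta method
          A = np.column_stack([np.ones_like(r), r])
          coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
          return np.exp(coef[0]), -coef[1] / 2.0       # zero-range intercept, extinction

      rng = np.random.default_rng(2)
      r = np.linspace(200.0, 2000.0, 60)               # range gates [m]
      alpha, c0, sigma = 5e-4, 1e9, 2.0                # 1/m, arb. units, noise std
      s = c0 * np.exp(-2 * alpha * r) / r ** 2 + rng.normal(0.0, sigma, r.size)
      print(slope_method(r, s, sigma ** 2))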

  17. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  18. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data in the fields of medical and environmental sciences. This work presents a case study in which data on thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method for analyzing this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.

  19. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    PubMed Central

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916

  20. PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obreschkow, D.; Meyer, M.

    2013-11-10

    Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements, that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.

  1. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in automatic recognition systems from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides improved performance over MLE in this application.

  2. A Monte Carlo comparison of the recovery of winds near upwind and downwind from the SASS-1 model function by means of the sum of squares algorithm and a maximum likelihood estimator

    NASA Technical Reports Server (NTRS)

    Pierson, W. J., Jr.

    1984-01-01

    Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles, and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of $K_p$ greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.

  3. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to check for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that all maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. Return level estimates, the return amount expected to be exceeded on average once every T time periods, start to appear in the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
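
    A minimal sketch of the stationary part of this workflow, assuming scipy and synthetic block maxima in place of the Malaysian share returns; moment-based starting values stand in for the L-moments initialization:

        # Fit a stationary GEV to block maxima by MLE and compute return levels.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)
        block_maxima = rng.gumbel(loc=0.05, scale=0.02, size=120)  # e.g. monthly maxima

        # Moment-based starting values help the MLE optimizer converge,
        # playing the role of the L-moments initialization in the paper.
        scale0 = 0.78 * block_maxima.std()
        loc0 = block_maxima.mean() - 0.577 * scale0
        c, loc, scale = genextreme.fit(block_maxima, 0.0, loc=loc0, scale=scale0)

        # T-period return level = (1 - 1/T) quantile of the fitted GEV.
        for T in (10, 50, 100):
            print(T, genextreme.ppf(1 - 1 / T, c, loc, scale))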

  4. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started at several initial guesses of the parameter set.
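
    The uncensored core of such an EM fit can be sketched as follows (a hedged Python example on synthetic two-subpopulation data; the censored case in the paper adds survival-function terms to the same E- and M-steps):

        # EM for a two-subpopulation mixed-Weibull fit on complete data.
        import numpy as np
        from scipy.stats import weibull_min
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        t = np.concatenate([weibull_min.rvs(0.8, scale=50, size=300, random_state=rng),
                            weibull_min.rvs(3.0, scale=200, size=200, random_state=rng)])

        w = np.array([0.5, 0.5])                       # mixing proportions
        shape = np.array([1.0, 2.0])
        scale = np.array([np.median(t), 3 * np.median(t)])

        for _ in range(50):
            # E-step: responsibility of each subpopulation for each failure time.
            dens = np.stack([w[k] * weibull_min.pdf(t, shape[k], scale=scale[k])
                             for k in range(2)])
            r = dens / dens.sum(axis=0)
            # M-step: weighted Weibull MLE for each subpopulation (log-params
            # keep shape and scale positive during the search).
            w = r.mean(axis=1)
            for k in range(2):
                def nll(logp, rk=r[k]):
                    c, s = np.exp(logp)
                    return -(rk * weibull_min.logpdf(t, c, scale=s)).sum()
                res = minimize(nll, np.log([shape[k], scale[k]]), method="Nelder-Mead")
                shape[k], scale[k] = np.exp(res.x)

        print("weights:", w.round(3), "shapes:", shape.round(2), "scales:", scale.round(1))

    As the abstract warns, such mixture log-likelihoods can have multiple local maxima, so in practice the loop above would be restarted from several initial guesses and the best fit kept.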

  5. Resolution of Closely Spaced Optical Targets Using Maximum Likelihood Estimator and Maximum Entropy Method: A Comparison Study

    DTIC Science & Technology

    1981-03-03

    ...resolving closely spaced optical point targets are compared using Monte Carlo simulation results for three different examples. It is found that the MEM is...although no direct comparison was given. The objective of this report is to compare the capabilities of MLE and MEM in resolving two optical CSOs

  6. Enhancing resolution and contrast in second-harmonic generation microscopy using an advanced maximum likelihood estimation restoration method

    NASA Astrophysics Data System (ADS)

    Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.

    2017-02-01

    Second-harmonic generation (SHG) microscopy is a label-free imaging technique for studying collagenous materials in the extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Because SHG is a two-photon scattering process, it is challenging to define a point spread function (PSF) for this imaging modality. As a result, in comparison with other two-photon imaging systems such as two-photon fluorescence, it is difficult to apply PSF-engineering techniques to bring the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for samples with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact the diagnosis and treatment of human diseases.
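
    The AdvMLE algorithm itself is not spelled out in the abstract; the sketch below shows only the classical Richardson-Lucy iteration, i.e. the basic MLE deconvolution under Poisson noise on which PSF-adaptive schemes build, with an assumed Gaussian PSF and toy data:

        # Richardson-Lucy: iterative MLE deconvolution under Poisson noise.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30):
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean())
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.clip(blurred, 1e-12, None)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate

        # Toy example: blur two point-like objects, add Poisson noise, restore.
        rng = np.random.default_rng(3)
        yy, xx = np.mgrid[0:64, 0:64]
        psf = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2 * 3.0 ** 2))
        psf /= psf.sum()
        obj = np.zeros((64, 64)); obj[30, 30] = obj[30, 38] = 100.0
        blurred = np.clip(fftconvolve(obj, psf, mode="same"), 0, None)
        data = rng.poisson(blurred).astype(float)
        restored = richardson_lucy(data, psf)
        print("peak before/after restoration:", data.max(), round(restored.max(), 1))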

  7. Optimally resolving Lambertian surface orientation

    NASA Astrophysics Data System (ADS)

    Bertsatos, Ioannis; Makris, Nicholas C.

    2003-10-01

    Sonar images of remote surfaces are typically corrupted by signal-dependent noise known as speckle. Relative motion between source, surface, and receiver causes the received field to fluctuate over time with circular complex Gaussian random (CCGR) statistics. In many cases of practical importance, Lambert's law is appropriate to model radiant intensity from the surface. In a previous paper, maximum likelihood estimators (MLE) for Lambertian surface orientation have been derived based on CCGR measurements [N. C. Makris, SACLANT Conference Proceedings Series CP-45, 1997, pp. 339-346]. A Lambertian surface needs to be observed from more than one illumination direction for its orientation to be properly constrained. It is found, however, that MLE performance varies significantly with illumination direction due to the inherently nonlinear nature of this problem. It is shown that a large number of samples is often required to optimally resolve surface orientation using the optimality criteria of the MLE derived in Naftali and Makris [J. Acoust. Soc. Am. 110, 1917-1930 (2001)].

  8. F-8C adaptive flight control laws

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.

    1977-01-01

    Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft applications. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effect on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.

  9. Flood Frequency Analysis With Historical and Paleoflood Information

    NASA Astrophysics Data System (ADS)

    Stedinger, Jery R.; Cohn, Timothy A.

    1986-05-01

    An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
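
    A minimal sketch of the "binomial" historical-information likelihood for an assumed lognormal flood distribution (all numbers invented): exact systematic peaks contribute density terms, while an h-year historical period contributes only the count of threshold exceedances.

        # Lognormal flood-frequency MLE combining a systematic gage record
        # with binomial (threshold-exceedance) historical information.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        gage = rng.lognormal(mean=5.0, sigma=0.6, size=50)   # 50-yr systematic record
        h, k, x0 = 100, 3, np.exp(5.0 + 2.0 * 0.6)           # 3 exceedances of x0 in 100 yr

        def neg_loglik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            z = (np.log(x0) - mu) / sigma
            # Density terms for the gaged peaks (lognormal via log transform).
            ll = (norm.logpdf(np.log(gage), mu, sigma) - np.log(gage)).sum()
            # Binomial terms: k exceedances, h - k non-exceedances of x0.
            ll += k * norm.logsf(z) + (h - k) * norm.logcdf(z)
            return -ll

        mu, log_sigma = minimize(neg_loglik, [np.log(gage).mean(), 0.0]).x
        sigma = np.exp(log_sigma)
        print("100-year flood estimate:", np.exp(mu + sigma * norm.ppf(0.99)))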

  10. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.

  11. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below the detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistics (rROS), and gamma regression on order statistics (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low-skewed data, the performance of the different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable, particularly when the sample size is small or the censoring percentage is high. In such conditions, MLE under the gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Regarding model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of the data is misspecified. However, the methods of rROS, GROS, and MLE under the gamma distribution are generally robust to model misspecification regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on the gamma distribution, rROS, and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
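
    A minimal sketch of left-censored MLE for an assumed lognormal, where detects contribute density terms and nondetects contribute the CDF at their detection limits (synthetic data, single detection limit):

        # Lognormal MLE with nondetects treated as left-censored at the DL.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        true = rng.lognormal(mean=1.0, sigma=1.2, size=200)
        dl = 2.0                                   # common detection limit
        detected = true >= dl
        obs = true[detected]
        limits = np.full((~detected).sum(), dl)    # one DL per nondetect

        def neg_loglik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            ll = (norm.logpdf(np.log(obs), mu, sigma) - np.log(obs)).sum()
            ll += norm.logcdf((np.log(limits) - mu) / sigma).sum()
            return -ll

        mu, log_sigma = minimize(neg_loglik, [np.log(obs).mean(), 0.0]).x
        sigma = np.exp(log_sigma)
        print("estimated mean:", np.exp(mu + sigma**2 / 2))   # lognormal mean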

  12. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system rate of occurrence of failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
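
    For the time-truncated power-law (Crow-AMSAA style) form of this model, the MLEs have closed forms; a hedged sketch with invented failure times:

        # Closed-form MLEs for a power-law (Weibull-intensity) NHPP observed
        # over (0, T], plus the expected failures in a future usage interval.
        import numpy as np

        t = np.array([120., 410., 780., 1200., 1500., 1740., 1900.])  # failure times (h)
        T = 2000.0                                                    # end of observation

        beta = t.size / np.log(T / t).sum()   # shape; beta > 1 suggests deterioration
        lam = t.size / T**beta                # scale of the ROCOF lam*beta*t**(beta-1)

        def expected_failures(a, b):
            """Integral of the ROCOF over the usage interval [a, b]."""
            return lam * (b**beta - a**beta)

        print(f"beta = {beta:.2f}, expected failures in next 500 h:",
              round(expected_failures(T, T + 500), 2))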

  13. Fast estimation of diffusion tensors under Rician noise by the EM algorithm.

    PubMed

    Liu, Jia; Gasbarra, Dario; Railavo, Juha

    2016-01-15

    Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomic, structural, and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts, and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation-maximization (EM) algorithm. By using data augmentation, we are able to transform the non-linear regression problem into the generalized linear modeling framework, reducing the computational cost dramatically. The Fisher scoring method is used to achieve fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm(2). Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying the priors. We describe how numerically close the estimators of model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method on uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus a more precise estimation, by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable for uncertainty analysis and risk management.
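
    A minimal random-walk Metropolis-Hastings sketch (a one-parameter Gaussian toy posterior, not the paper's streamflow model) showing the accept/reject core and the credible interval it yields:

        # Random-walk Metropolis-Hastings for a one-parameter posterior.
        import numpy as np

        rng = np.random.default_rng(6)
        data = rng.normal(loc=3.0, scale=1.0, size=100)

        def log_post(mu):                      # flat prior, Gaussian likelihood
            return -0.5 * ((data - mu) ** 2).sum()

        chain, mu = [], 0.0
        lp = log_post(mu)
        for _ in range(5000):
            prop = mu + rng.normal(scale=0.3)  # symmetric proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance rule
                mu, lp = prop, lp_prop
            chain.append(mu)

        post = np.array(chain[1000:])          # discard burn-in
        print("posterior mean", post.mean().round(3),
              "95% credible interval", np.percentile(post, [2.5, 97.5]).round(3))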

  15. BAO from Angular Clustering: Optimization and Mitigation of Theoretical Systematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crocce, M.; et al.

    We study the theoretical systematics and optimize the methodology in Baryon Acoustic Oscillations (BAO) detections using the angular correlation function with tomographic bins. We calibrate and optimize the pipeline for the Dark Energy Survey Year 1 dataset using 1800 mocks. We compare the BAO fitting results obtained with three estimators: the Maximum Likelihood Estimator (MLE), Profile Likelihood, and Markov Chain Monte Carlo. The MLE method yields the least bias in the fit results (bias/spread ~ 0.02), and the error bar derived is the closest to the Gaussian results (1% from the 68% Gaussian expectation). When there is a mismatch between the template and the data, due either to incorrect fiducial cosmology or to photo-z error, the MLE again gives the least-biased results. The BAO angular shift estimated from the sound horizon and the angular diameter distance agrees with the numerical fit. Various analysis choices are further tested: the number of redshift bins, cross-correlations, and angular binning. We propose two methods to correct the mock covariance when the final sample properties are slightly different from those used to create the mock. We show that the sample changes can be accommodated with the help of the Gaussian covariance matrix or, more effectively, using the eigenmode expansion of the mock covariance. The eigenmode expansion is significantly less susceptible to statistical fluctuations than direct measurement of the covariance matrix, because the number of free parameters is substantially reduced (p parameters versus p(p+1)/2 from direct measurement).

  16. Zika and Chikungunya virus detection in naturally infected Aedes aegypti in Ecuador.

    PubMed

    Cevallos, Varsovia; Ponce, Patricio; Waggoner, Jesse J; Pinsky, Benjamin A; Coloma, Josefina; Quiroga, Cristina; Morales, Diego; Cárdenas, Maria José

    2018-01-01

    The wide and rapid spread of Chikungunya (CHIKV) and Zika (ZIKV) viruses represents a global public health problem, especially for tropical and subtropical environments. The early detection of CHIKV and ZIKV in mosquitoes may help in understanding the dynamics of the diseases in high-risk areas and in designing data-based epidemiological surveillance to activate the preparedness and response of the public health system and vector control programs. This study was done to detect ZIKV and CHIKV in naturally infected, fed female Aedes aegypti (L.) mosquitoes from active epidemic urban areas in Ecuador. Pools (n=193; 22 pools) and individuals (n=22) of field-collected Ae. aegypti mosquitoes from high-risk arbovirus infection sites in Ecuador were analyzed for the presence of CHIKV and ZIKV using RT-PCR. Phylogenetic analysis demonstrated that both the ZIKV and CHIKV viruses circulating in Ecuador correspond to the Asian lineages. The minimum infection rate (MIR) of CHIKV for Esmeraldas city was 2.3%, and the maximum likelihood estimate (MLE) was 3.3%. The minimum infection rate (MIR) of ZIKV was 5.3% for Portoviejo city and 2.1% for Manta city. The maximum likelihood estimate (MLE) was 6.9% for Portoviejo city and 2.6% for Manta city. Detection of arboviruses and infection rates in the arthropod vectors may help to predict an outbreak and serve as a warning tool in surveillance programs. Copyright © 2017 Elsevier B.V. All rights reserved.
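
    MLE infection rates from pooled testing are obtained by maximizing a likelihood in which a pool of size n_j tests positive with probability 1 - (1 - p)^(n_j), where p is the per-mosquito infection rate; a hedged sketch with invented pool sizes and results:

        # Pooled-sample MLE of a per-individual infection rate.
        import numpy as np
        from scipy.optimize import minimize_scalar

        sizes = np.array([10, 10, 8, 12, 9, 10, 7, 10])   # mosquitoes per pool
        positive = np.array([1, 0, 1, 0, 0, 1, 0, 0])     # RT-PCR result per pool

        def neg_loglik(p):
            p_pos = 1.0 - (1.0 - p) ** sizes              # P(pool positive)
            return -(positive * np.log(p_pos)
                     + (1 - positive) * np.log(1.0 - p_pos)).sum()

        res = minimize_scalar(neg_loglik, bounds=(1e-6, 0.5), method="bounded")
        print(f"MLE infection rate: {100 * res.x:.2f}%")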

  17. Technical guidance and analytic services in support of SEASAT-A. [radar altimeters for altimetry and ocean wave height

    NASA Technical Reports Server (NTRS)

    Brooks, W. L.; Dooley, R. P.

    1975-01-01

    The design of a high resolution radar for altimetry and ocean wave height estimation was studied. From basic principles, it is shown that a short-pulse, wide-beam radar is the most appropriate and recommended technique for measuring both altitude and ocean wave height. To achieve a topographic resolution of ±10 cm RMS at 5.0 m RMS wave heights, as required for SEASAT-A, it is recommended that the altimeter design include an onboard adaptive processor. The resulting design, which assumes a maximum likelihood estimation (MLE) processor, is shown to satisfy all performance requirements. A design summary is given for the recommended radar altimeter, which includes a full-deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns, as well as the assumed MLE processor. The feedback-loop implementation of the MLE on a digital computer was examined in detail, and computer size, estimation accuracies, and bias due to range sidelobes are given for the MLE with typical SEASAT-A parameters. The standard deviation of the altitude estimate was developed and evaluated for several adaptive and nonadaptive split-gate trackers. Split-gate tracker biases due to range sidelobes and transmitter noise are examined. An approximate closed-form solution for the altimeter power return is derived and evaluated. The feasibility of utilizing the basic radar altimeter design for the measurement of ocean wave spectra was examined.

  18. Some New Estimation Methods for Weighted Regression When There are Possible Outliers.

    DTIC Science & Technology

    1985-01-01

    ...about influential points, and to add to our understanding of the structure of the data. In Section 2 we show, by considering the influence function, why... the influence function (Hampel, 1968, 1974) for the maximum likelihood estimator is proportional to (ε² − 1)h(x), where ε = (y − x′β)exp[−h′(x)θ], and is thus...unbounded. Since the influence function for the MLE is quadratic in the residual ε, in theory a point with a sufficiently large residual can have an

  19. Fractal analysis of the short time series in a visibility graph method

    NASA Astrophysics Data System (ADS)

    Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2016-05-01

    The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show the power-law form. Maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the quality of the estimated power-law index, aiming to avoid the influence of the drooping head and heavy tail of the degree distribution. As a result, we find that the MLE gives an optimal estimate of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE, and KS statistics is proposed to estimate Hurst exponents from short time series. Lastly, this paper also offers an example to verify the effectiveness of the combined method. The corresponding results show that the VG can provide a reliable estimate of Hurst exponents.
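
    A Clauset-style sketch of the MLE-plus-KS step (using the continuous power-law approximation and synthetic Pareto data; the paper works with degree distributions and keys on the first local minimum of the KS statistic):

        # Continuous power-law MLE for the tail index plus a KS scan over
        # the cutoff xmin.
        import numpy as np

        rng = np.random.default_rng(7)
        x = rng.pareto(1.5, size=2000) + 1.0        # true index alpha = 2.5

        def fit_tail(x, xmin):
            tail = np.sort(x[x >= xmin])
            alpha = 1.0 + tail.size / np.log(tail / xmin).sum()   # MLE
            emp = np.arange(1, tail.size + 1) / tail.size         # empirical CDF
            model = 1.0 - (tail / xmin) ** (1.0 - alpha)          # fitted CDF
            return alpha, np.abs(emp - model).max()               # KS distance

        candidates = np.quantile(x, np.linspace(0.05, 0.9, 40))
        alpha, ks, xmin = min((fit_tail(x, xm) + (xm,) for xm in candidates),
                              key=lambda t: t[1])
        print(f"alpha = {alpha:.2f}, KS = {ks:.3f}, xmin = {xmin:.2f}")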

  20. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    NASA Astrophysics Data System (ADS)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models, respectively. The impact is then evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if certain special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present published data to illustrate the study in this paper.

  1. Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials

    NASA Technical Reports Server (NTRS)

    Kushary, Debashis; Kulkarni, Pandurang M.

    1995-01-01

    We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effects, repeated measurements, and the observation of many characteristics simultaneously. We develop the maximum likelihood estimator (MLE) and the uniform minimum variance unbiased estimator (UMVUE) of the reliability, which in clinical trial studies could be considered the chance of increased side effects due to one procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situation, we develop simple-to-use lower confidence bounds for the reliability. Further, we consider the case when both stress and strength constitute time-dependent processes. We define the future reliability and obtain methods for constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and to compare the MLE and the UMVUE.

  2. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.

  3. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Baier, W.G.

    1997-01-01

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  4. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
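
    The MLE benchmark used in such studies has a simple closed form: each cue is weighted by its reliability (inverse variance), and the fused variance is smaller than either unisensory variance. A hedged sketch with invented numbers:

        # MLE cue-combination prediction for two noisy spatial cues.
        import numpy as np

        sigma_a, sigma_v = 6.0, 2.0      # auditory/visual localization SDs (deg)
        x_a, x_v = 10.0, 4.0             # single-trial cue estimates (deg)

        w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # visual weight
        x_av = w_v * x_v + (1 - w_v) * x_a             # fused location estimate
        sigma_av = np.sqrt(sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2))
        print(f"fused estimate {x_av:.1f} deg, fused SD {sigma_av:.2f} deg "
              f"(< {min(sigma_a, sigma_v)} deg for either cue alone)")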

  5. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; Lane, W. L.; Baier, W. G.

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  6. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (quasi-elastic light scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses maximum likelihood estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

  7. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of the contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.

  8. Comparing methods of analysing datasets with small clusters: case studies using four paediatric datasets.

    PubMed

    Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil

    2009-07-01

    Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.

  9. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    PubMed

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing the cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges run from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.

  11. Optimal estimation of diffusion coefficients from single-particle trajectories

    NASA Astrophysics Data System (ADS)

    Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik

    2014-02-01

    How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycolase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
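
    The covariance-based estimator has a closed form: the naive displacement-variance estimate is corrected by the covariance of consecutive displacements, which absorbs the localization-noise bias. A hedged sketch for the blur-free 1-D case on a synthetic track (the published estimator also carries a motion-blur term, omitted here):

        # Covariance-based estimator (CVE) vs. naive MSD estimate of D.
        import numpy as np

        rng = np.random.default_rng(8)
        D_true, dt, sigma_loc, n = 0.5, 0.05, 0.08, 2000
        steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=n)
        x = np.cumsum(steps) + rng.normal(scale=sigma_loc, size=n)  # noisy track

        dx = np.diff(x)
        D_naive = np.mean(dx**2) / (2 * dt)        # biased upward by sigma_loc**2/dt
        D_cve = D_naive + np.mean(dx[:-1] * dx[1:]) / dt  # covariance term cancels bias
        print(f"true {D_true}, CVE {D_cve:.3f}, naive MSD {D_naive:.3f}")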

  12. Range estimation of passive infrared targets through the atmosphere

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Seo, Doochun; Choi, Seokweon

    2013-04-01

    In modern combat systems, target range estimation is traditionally based on radar and active sonar. However, jamming signals tremendously degrade the performance of such active sensor devices. We introduce a simple target range estimation method, and the fundamental limits of the proposed method, based on an atmospheric propagation model. Since passive infrared (IR) sensors measure IR signals radiating from objects at different wavelengths, the method is robust against electromagnetic jamming. The measured target radiance at each wavelength at the IR sensor depends on the emissive properties of the target material and various attenuation factors (i.e., the distance between sensor and target and atmospheric environment parameters). MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and atmospheric propagation-based modeling, the target range can be estimated. To analyze the proposed method's performance statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB and the variance of the MLE using Monte Carlo simulation.

  13. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.

  14. A Bayesian model for estimating multi-state disease progression.

    PubMed

    Shen, Shiwen; Han, Simon X; Petousis, Panayiotis; Weiss, Robert E; Meng, Frank; Bui, Alex A T; Hsu, William

    2017-02-01

    A growing number of individuals who are considered at high risk of cancer are now routinely undergoing population screening. However, noted harms such as radiation exposure, overdiagnosis, and overtreatment underscore the need for better temporal models that predict who should be screened and at what frequency. The mean sojourn time (MST), the average duration during which a tumor can be detected by imaging but shows no observable clinical symptoms, is a critical variable for formulating screening policy. Estimation of the MST has long been studied using a continuous Markov model (CMM) with maximum likelihood estimation (MLE). However, many traditional methods assume no observation error in the imaging data, which is unrealistic and can bias the estimation of the MST. In addition, the MLE may not be stably estimated when data are sparse. Addressing these shortcomings, we present a probabilistic modeling approach for periodic cancer screening data. We first model the cancer state transitions using a three-state CMM, while simultaneously considering observation error. We then jointly estimate the MST and the observation error within a Bayesian framework. We also consider the inclusion of covariates to estimate individualized rates of disease progression. Our approach is demonstrated on participants who underwent chest x-ray screening in the National Lung Screening Trial (NLST) and validated using posterior predictive p-values and Pearson's chi-square test. Our model demonstrates more accurate and sensible estimates of the MST in comparison to MLE. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.

  16. Experimental determination of particle range and dose distribution in thick targets through fragmentation reactions of stable heavy ions.

    PubMed

    Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro; Urakabe, Eriko; Sato, Shinji; Kanazawa, Mitsutaka; Kanai, Tatsuaki

    2006-09-07

    In radiation therapy with highly energetic heavy ions, conformal irradiation of a tumour can be achieved by exploiting their advantageous features, such as the good dose localization and the high relative biological effectiveness around their mean range. For effective utilization of these properties, it is necessary to evaluate the range of the incident ions and the deposited dose distribution in the patient's body. Several methods have been proposed to derive such physical quantities; one of them uses positron emitters generated through projectile fragmentation reactions of incident ions with target nuclei. We have previously proposed applying the maximum likelihood estimation (MLE) method to a detected annihilation gamma-ray distribution to determine the range of incident ions in a target, and we have demonstrated the effectiveness of the method with computer simulations. In this paper, a water, a polyethylene, and a polymethyl methacrylate target were each irradiated with stable (12)C, (14)N, (16)O and (20)Ne beams. Except for a few combinations of incident beams and targets, the MLE method could determine the range of incident ions R(MLE) with a difference between R(MLE) and the experimental range of less than 2.0 mm, provided the measurement of annihilation gamma rays started just after a 61.4 s irradiation and lasted for 500 s. In the process of evaluating the range of incident ions with the MLE method, many physical quantities, such as the fluence and the energy of both primary ions and fragments, must be calculated as a function of depth in the target; from these, the dose distribution can be obtained. Thus, when the mean range of incident ions is determined with the MLE method, the annihilation gamma-ray distribution and the deposited dose distribution can be derived simultaneously. The derived dose distributions in water for the mono-energetic heavy-ion beams of the four species were compared with those measured with an ionization chamber. The good agreement between the derived and the measured distributions implies that the deposited dose distribution in a target can be estimated from the detected annihilation gamma-ray distribution with a positron camera.

  17. Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.

    PubMed

    Zhao, Yuchao; Frey, H Christopher

    2004-11-01

    Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.

  18. Transmission potential of Zika virus infection in the South Pacific.

    PubMed

    Nishiura, Hiroshi; Kinoshita, Ryo; Mizumoto, Kenji; Yasuda, Yohei; Nah, Kyeongah

    2016-04-01

    Zika virus has spread internationally through countries in the South Pacific and the Americas. The present study aimed to estimate the basic reproduction number, R0, of Zika virus infection as a measure of its transmission potential, reanalyzing past epidemic data from the South Pacific. Incidence data from two epidemics, one on Yap Island, Federated States of Micronesia, in 2007 and the other in French Polynesia in 2013-2014, were reanalyzed. R0 of Zika virus infection was estimated from the early exponential growth rate of these two epidemics. The maximum likelihood estimate (MLE) of R0 for the Yap Island epidemic was on the order of 4.3-5.8, with broad uncertainty bounds due to the small sample size of confirmed and probable cases. The MLE of R0 for French Polynesia based on syndromic data ranged from 1.8 to 2.0, with narrow uncertainty bounds. The transmissibility of Zika virus infection appears to be comparable to those of dengue and chikungunya viruses. Considering that Aedes species are a shared vector, this finding indicates that Zika virus replication within the vector is perhaps comparable to dengue and chikungunya. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
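
    A hedged sketch of the growth-rate route to R0: a Poisson MLE of the early exponential growth rate, converted to R0 through the generation-interval moment generating function (gamma-distributed interval assumed; the counts and interval parameters are invented):

        # Poisson MLE of the early growth rate r, then R0 = 1 / M(-r), where
        # M is the moment generating function of the generation interval.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        weeks = np.arange(8)
        cases = np.array([3, 5, 9, 16, 26, 44, 70, 118])

        def neg_loglik(theta):
            log_c0, r = theta
            lam = np.exp(log_c0 + r * weeks)              # expected counts
            return -(cases * np.log(lam) - lam - gammaln(cases + 1)).sum()

        log_c0, r = minimize(neg_loglik, [1.0, 0.3]).x

        # Gamma generation interval with mean 2 weeks and shape k:
        k, mean_gi = 3.0, 2.0
        theta_gi = mean_gi / k
        R0 = (1 + r * theta_gi) ** k                      # 1 / M(-r) for the gamma
        print(f"r = {r:.3f} per week, R0 = {R0:.2f}")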

  19. A polychromatic adaption of the Beer-Lambert model for spectral decomposition

    NASA Astrophysics Data System (ADS)

    Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.

    2017-03-01

    We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis-material line integrals. The model relies on a minimal calibration effort, making the method applicable in routine clinical set-ups where periodic re-calibration is required. In this work we present an experimental verification of our proposed method. The method uses an adapted Beer-Lambert model that describes the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model accurately predicts the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward model. The experimental data also show that the model can handle possible spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model make it a viable forward model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
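
    The following toy sketch conveys the general shape of such an adapted Beer-Lambert forward model: expected counts in a detector bin written as a sum of a few exponential terms in the basis-material line integrals. The coefficients are hypothetical; in the paper they would come from the calibration fit, so this illustrates only the functional form.

    ```python
    # Minimal sketch of a sum-of-exponentials Beer-Lambert forward model.
    import numpy as np

    def forward_counts(A1, A2, c, m1, m2):
        """Expected counts in one bin for basis-material line integrals A1, A2."""
        return np.sum(c * np.exp(-m1 * A1 - m2 * A2))

    c  = np.array([5.0e4, 2.0e4, 8.0e3])   # effective spectral weights (hypothetical)
    m1 = np.array([0.15, 0.25, 0.40])      # effective attenuation, material 1
    m2 = np.array([0.10, 0.30, 0.55])      # effective attenuation, material 2

    print(forward_counts(A1=2.0, A2=1.0, c=c, m1=m1, m2=m2))
    ```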

  20. Transstadial Transmission of Hepatozoon canis by Rhipicephalus sanguineus (Acari: Ixodidae) in Field Conditions.

    PubMed

    Aktas, M; Özübek, S

    2017-07-01

    This study investigated possible transovarial and transstadial transmission of Hepatozoon canis by Rhipicephalus sanguineus (Latreille) ticks collected from naturally infected dogs in a municipal dog shelter and from the grounds of the shelter. Four hundred sixty-five engorged nymphs were collected from 16 stray dogs found to be infected with H. canis by blood smear and PCR analyses, and the nymphs were maintained in an incubator at 28 °C for moulting. Four hundred eighteen nymphs moulted to adults 14-16 d post collection. Unfed ticks from the shelter grounds comprised 1,500 larvae, 2,100 nymphs, and 85 adults; these were sorted according to origin, developmental stage, and sex into 117 pools and screened by 18S rRNA PCR for Hepatozoon infection. Of 60 adult tick pools examined, 51 were infected with H. canis. The overall maximum likelihood estimate (MLE) of the infection rate was calculated as 21.0% (CI 15.80-28.21). Hepatozoon canis was detected in 31 out of 33 female pools (MLE 26.96%, CI 17.64-44.33) and 20 out of 27 male pools (MLE 14.82%, CI 20.15-46.41). Among 42 unfed nymph pools collected from the shelter, 26 were infected with H. canis, and the MLE of the infection rate was calculated as 1.9% (CI 1.25-2.77). No H. canis DNA was detected in any of the gDNA pools consisting of larva specimens. Partial sequences of the 18S rRNA gene shared 99-100% similarity with the corresponding H. canis isolates. Our results revealed transstadial transmission of H. canis by R. sanguineus, both from larva to nymph and from nymph to adult, in field conditions. However, there was no evidence of transovarial transmission. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
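
    The MLE infection rates quoted above come from pooled testing. As a rough sketch of the underlying calculation (with invented pool sizes and results, not the study's data): each pool of size m is positive with probability 1 - (1 - p)^m, and p is chosen to maximize the resulting binomial likelihood.

    ```python
    # Minimal sketch: MLE of the per-tick infection rate from pooled PCR results.
    import numpy as np
    from scipy.optimize import minimize_scalar

    sizes = np.array([10, 10, 25, 25, 25, 50])   # ticks per pool (hypothetical)
    positive = np.array([1, 0, 1, 1, 0, 1])      # PCR result per pool

    def neg_log_lik(p):
        q = (1.0 - p) ** sizes                   # P(pool tests negative)
        return -np.sum(positive * np.log(1.0 - q) + (1 - positive) * np.log(q))

    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 0.5), method="bounded")
    print(f"MLE infection rate: {100 * res.x:.2f} per 100 ticks")
    ```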

  1. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a recurring problem in many applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.

  2. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a recurring problem in many applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  3. Statistical Considerations of Data Processing in Giovanni Online Tool

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, G.; Acker, J.; Berrick, S.

    2005-01-01

    The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua, TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI, and MOVAS for aerosols from MODIS (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products vs. averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as the number of pixels (NP) for each grid cell, the standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will give different results. In this paper, we address differences of various weighting algorithms for some datasets utilized in Giovanni. The second aspect concerns how different averaging methods affect data quality and interpretation for data with non-normal distributions. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in others. Further studies indicated that significant differences between the AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of a log-normal distribution.
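
    For readers unfamiliar with the three averaging methods, the sketch below contrasts them on simulated log-normally distributed values standing in for chlorophyll a; for a lognormal, the MLE of the mean is exp(mu + sigma^2/2) computed from the log-scale MLEs. The data are synthetic, not SeaWiFS values.

    ```python
    # Minimal sketch: AVG, GEO and MLE spatial averages for lognormal-like data.
    import numpy as np

    rng = np.random.default_rng(0)
    chl = rng.lognormal(mean=-1.0, sigma=1.2, size=500)   # synthetic grid values

    avg = chl.mean()                                      # arithmetic mean (AVG)
    geo = np.exp(np.log(chl).mean())                      # geometric mean (GEO)
    mu, s2 = np.log(chl).mean(), np.log(chl).var()        # log-scale MLEs
    mle = np.exp(mu + s2 / 2.0)                           # MLE of the lognormal mean

    print(avg, geo, mle)   # GEO is lowest; MLE and AVG agree for lognormal data
    ```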

  4. Minimax Estimation of Functionals of Discrete Distributions

    PubMed Central

    Jiao, Jiantao; Venkat, Kartik; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We propose a general methodology for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions, where the support size S is unknown and may be comparable with or even much larger than the number of observations n. We treat the respective regions where the functional is nonsmooth and smooth separately. In the nonsmooth regime, we apply an unbiased estimator for the best polynomial approximation of the functional whereas, in the smooth regime, we apply a bias-corrected version of the maximum likelihood estimator (MLE). We illustrate the merit of this approach by thoroughly analyzing the performance of the resulting schemes for estimating two important information measures: 1) the entropy H(P) = −∑_{i=1}^{S} p_i ln p_i and 2) F_α(P) = ∑_{i=1}^{S} p_i^α, α > 0. We obtain the minimax L2 rates for estimating these functionals. In particular, we demonstrate that our estimator achieves the optimal sample complexity n ≍ S/ln S for entropy estimation. We also demonstrate that the sample complexity for estimating F_α(P), 0 < α < 1, is n ≍ S^{1/α}/ln S, which can be achieved by our estimator but not the MLE. For 1 < α < 3/2, we show the minimax L2 rate for estimating F_α(P) is (n ln n)^{−2(α−1)} for infinite support size, while the maximum L2 rate for the MLE is n^{−2(α−1)}. For all the above cases, the behavior of the minimax rate-optimal estimators with n samples is essentially that of the MLE (plug-in rule) with n ln n samples, which we term “effective sample size enlargement.” We highlight the practical advantages of our schemes for the estimation of entropy and mutual information. We compare our performance with various existing approaches, and demonstrate that our approach reduces running time and boosts the accuracy. Moreover, we show that the minimax rate-optimal mutual information estimator yielded by our framework leads to significant performance boosts over the Chow–Liu algorithm in learning graphical models. The wide use of information measure estimation suggests that the insights and estimators obtained in this paper could be broadly applicable. PMID:29375152
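
    As a point of reference for the plug-in rule discussed above, here is a minimal sketch of the MLE (plug-in) entropy estimator on synthetic data: replace the unknown distribution with empirical frequencies and evaluate H. With n not much larger than S it is biased low, which is the regime the paper's corrected estimators address.

    ```python
    # Minimal sketch: plug-in (MLE) entropy estimate, in nats.
    import numpy as np

    rng = np.random.default_rng(1)
    S, n = 1000, 2000
    samples = rng.integers(0, S, size=n)        # synthetic draws, uniform on S symbols

    counts = np.bincount(samples, minlength=S)
    p_hat = counts[counts > 0] / n
    H_mle = -np.sum(p_hat * np.log(p_hat))      # plug-in entropy

    print(H_mle, np.log(S))                     # plug-in falls short of ln(S)
    ```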

  5. Isolation of a Novel Insect-Specific Flavivirus from Culiseta melanura in the Northeastern United States

    PubMed Central

    Misencik, Michael J.; Grubaugh, Nathan D.; Andreadis, Theodore G.; Ebel, Gregory D.

    2016-01-01

    The genus Flavivirus includes a number of newly recognized viruses that infect and replicate only within mosquitoes. To determine whether insect-specific flaviviruses (ISFs) may infect Culiseta (Cs.) melanura mosquitoes, we screened pools of field-collected mosquitoes for virus infection by RT-PCR targeting conserved regions of the NS5 gene. NS5 nucleotide sequences amplified from Cs. melanura pools were genetically similar to other ISFs and most closely matched Calbertado virus from Culex tarsalis, sharing 68.7% nucleotide and 76.1% amino acid sequence identity. The complete genome of one virus isolate was sequenced to reveal a primary open reading frame (ORF) encoding a viral polyprotein characteristic of the genus Flavivirus. Phylogenetic analysis showed that this virus represents a distinct evolutionary lineage that belongs to the classical ISF group. The virus was detected solely in Cs. melanura pools, occurred in sampled populations from Connecticut, New York, New Hampshire, and Maine, and infected both adult and larval stages of the mosquito. Maximum likelihood estimates of infection rates (MLE-IR) were relatively stable in overwintering Cs. melanura larvae collected monthly from November of 2012 through May of 2013 (MLE-IR = 0.7–2.1/100 mosquitoes) and in host-seeking females collected weekly from June through October of 2013 (MLE-IR = 3.8–11.5/100 mosquitoes). Phylogenetic analysis of viral sequences revealed limited genetic variation that lacked obvious geographic structure among strains in the northeastern United States. This new virus is provisionally named Culiseta flavivirus on the basis of its host association with Cs. melanura. PMID:26807512

  6. Long-range persistence in the global mean surface temperature and the global warming "time bomb"

    NASA Astrophysics Data System (ADS)

    Rypdal, M.; Rypdal, K.

    2012-04-01

    Detrended Fluctuation Analysis (DFA) and Maximum Likelihood Estimations (MLE) based on instrumental data over the last 160 years indicate that there is Long-Range Persistence (LRP) in Global Mean Surface Temperature (GMST) on time scales of months to decades. The persistence is much higher in sea surface temperature than in land temperatures. Power spectral analysis of multi-model, multi-ensemble runs of global climate models indicate further that this persistence may extend to centennial and maybe even millennial time-scales. We also support these conclusions by wavelet variogram analysis, DFA, and MLE of Northern hemisphere mean surface temperature reconstructions over the last two millennia. These analyses indicate that the GMST is a strongly persistent noise with Hurst exponent H>0.9 on time scales from decades up to at least 500 years. We show that such LRP can be very important for long-term climate prediction and for the establishment of a "time bomb" in the climate system due to a growing energy imbalance caused by the slow relaxation to radiative equilibrium under rising anthropogenic forcing. We do this by the construction of a multi-parameter dynamic-stochastic model for the GMST response to deterministic and stochastic forcing, where LRP is represented by a power-law response function. Reconstructed data for total forcing and GMST over the last millennium are used with this model to estimate trend coefficients and Hurst exponent for the GMST on multi-century time scale by means of MLE. Ensembles of solutions generated from the stochastic model also allow us to estimate confidence intervals for these estimates.

  7. Molecular identification of Theileria and Babesia in ticks collected from sheep and goats in the Black Sea region of Turkey.

    PubMed

    Aydin, Mehmet Fatih; Aktas, Munir; Dumanli, Nazir

    2015-01-01

    A molecular survey was undertaken in the Black Sea region of Turkey to determine the presence of Theileria and Babesia species of medical and veterinary importance. The ticks were removed from sheep and goats, pooled according to species and locations, and analyzed by PCR-based reverse line blot (RLB) and sequencing. A total of 2241 ixodid ticks belonging to 5 genera and 12 species were collected and divided into 310 pools. Infection rates were calculated as the maximum likelihood estimation (MLE) with 95% confidence intervals (CI). Of the 310 pools tested, 46 (14.83%) were found to be infected with Theileria or Babesia species, and the overall MLE of the infection rate was calculated as 2.27% (CI 1.67-2.99). The MLEs of the infection rates were calculated as 0.691% (CI 0.171-1.78) in Haemaphysalis parva, 1.47% (CI 0.081-6.37) in Rhipicephalus sanguineus, 1.84% (CI 0.101-7.87) in Ixodes ricinus, 2.86% (CI 1.68-4.48) in Rhipicephalus turanicus, 5.57% (CI 0.941-16.3) in Hyalomma marginatum, and 6.2% (CI 4.02-9.02) in Rhipicephalus bursa. Pathogens identified in the ticks included Theileria ovis, Babesia ovis, Babesia bigemina, and Babesia microti. Most tick pools were infected with a single pathogen; however, five pools displayed mixed infections with T. ovis and B. ovis. This study provides the first molecular evidence for the presence of B. microti in ticks in Turkey.

  8. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
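
    As a bare-bones illustration of the time-domain approach described above (not Langbein's algorithm itself), the sketch below builds the data covariance for a white plus random-walk (power-law index 2) noise model and maximizes the Gaussian likelihood over the two amplitudes; the residual series is simulated.

    ```python
    # Minimal sketch: time-domain MLE for white + random-walk noise amplitudes.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 200
    t = np.arange(n)
    r = rng.normal(0, 1.0, n) + np.cumsum(rng.normal(0, 0.1, n))  # simulated residuals

    def neg_log_lik(theta):
        log_sw, log_srw = theta
        C = np.exp(2 * log_sw) * np.eye(n) \
          + np.exp(2 * log_srw) * np.minimum.outer(t + 1, t + 1)  # random-walk covariance
        sign, logdet = np.linalg.slogdet(C)
        return 0.5 * (logdet + r @ np.linalg.solve(C, r))

    fit = minimize(neg_log_lik, x0=[0.0, -2.0], method="Nelder-Mead")
    print("white, random-walk amplitudes:", np.exp(fit.x))
    ```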

  9. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  10. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industries the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
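
    For orientation only, the sketch below shows the Pareto MLE in the simplest complete-sample case with known scale, where the shape estimate has a closed form; the joint type-II censored setting of the article generalizes this likelihood. Data are simulated.

    ```python
    # Minimal sketch: closed-form MLE of the Pareto shape parameter.
    import numpy as np

    rng = np.random.default_rng(3)
    x_m, alpha_true, n = 1.0, 2.5, 200
    x = x_m * (1.0 - rng.random(n)) ** (-1.0 / alpha_true)  # Pareto(x_m, alpha) draws

    alpha_hat = n / np.sum(np.log(x / x_m))                 # MLE of the shape
    print(alpha_hat)
    ```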

  11. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    PubMed

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate from this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and the audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues, and for three of the four the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
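
    The MLE cue-combination rule invoked here has a simple closed form: each cue is weighted by its inverse variance, which minimizes the variance of the combined estimate. A minimal sketch with hypothetical auditory and visual timing estimates:

    ```python
    # Minimal sketch: inverse-variance (MLE) cue combination.
    import numpy as np

    def mle_combine(estimates, variances):
        v = np.asarray(variances, dtype=float)
        w = (1.0 / v) / np.sum(1.0 / v)         # reliability-proportional weights
        combined = np.dot(w, estimates)
        combined_var = 1.0 / np.sum(1.0 / v)    # never larger than either cue's variance
        return combined, combined_var

    # hypothetical estimates of the partner's next step time (s): audio, visual
    print(mle_combine([0.48, 0.55], [0.01 ** 2, 0.03 ** 2]))
    ```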

  12. Inference of domain-disease associations from domain-protein, protein-disease and disease-disease relationships.

    PubMed

    Zhang, Wangshu; Coba, Marcelo P; Sun, Fengzhu

    2016-01-11

    Protein domains can be viewed as portable units of biological function that define the functional properties of proteins. Therefore, if a protein is associated with a disease, its domains might also be associated and define disease endophenotypes. However, knowledge about such domain-disease relationships is rarely available. Identification of domains associated with human diseases would therefore greatly improve our understanding of the mechanisms of human complex diseases and further improve their prevention, diagnosis and treatment. Based on phenotypic similarities among diseases, we first group diseases into overlapping modules. We then develop a framework to infer associations between domains and diseases through known relationships between diseases and modules, domains and proteins, as well as proteins and disease modules. Different methods, including Association, Maximum likelihood estimation (MLE), Domain-disease pair exclusion analysis (DPEA), Bayesian, and Parsimonious explanation (PE) approaches, are developed to predict domain-disease associations. We demonstrate the effectiveness of all five approaches via a series of validation experiments, and show the robustness of the MLE, Bayesian and PE approaches to the involved parameters. We also study the effects of disease modularization in inferring novel domain-disease associations. Through validation, the AUC (Area Under the operating characteristic Curve) scores for the Bayesian, MLE, DPEA, PE, and Association approaches are 0.86, 0.84, 0.83, 0.83 and 0.79, respectively, indicating the usefulness of these approaches for predicting domain-disease relationships. Finally, we choose the Bayesian approach to infer domains associated with two common diseases, Crohn's disease and type 2 diabetes. The Bayesian approach has the best performance for the inference of domain-disease relationships. The predicted landscape between domains and diseases provides a more detailed view of disease mechanisms.

  13. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach.

    PubMed

    Byrne, Patrick A; Crawford, J Douglas

    2010-06-01

    It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.

  14. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    NASA Astrophysics Data System (ADS)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using the results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service-induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size, as well as the 90/95 percent crack length, vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimation (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.

  15. Estimation of Rank Correlation for Clustered Data

    PubMed Central

    Rosner, Bernard; Glynn, Robert

    2017-01-01

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator (MLE) of the Pearson correlation (ρ_xy) for i.i.d. bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the MLE of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (a) converting ranks of both X and Y to the probit scale, (b) estimating the Pearson correlation between probit scores for X and Y, and (c) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. PMID:28399615
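
    A minimal sketch of steps (a)-(c) for ordinary (non-clustered) data may help; the clustered case in the paper replaces the Pearson step with the mixed-model MLE. The bivariate-normal relationship used in step (c) is rho_s = (6/pi) arcsin(rho/2), and the data here are simulated.

    ```python
    # Minimal sketch: rank correlation via probit-transformed ranks.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = rng.normal(size=100)
    y = 0.6 * x + 0.8 * rng.normal(size=100)

    def probit_scores(v):
        ranks = stats.rankdata(v)
        return stats.norm.ppf(ranks / (len(v) + 1))   # (a) ranks -> probit scale

    rho = np.corrcoef(probit_scores(x), probit_scores(y))[0, 1]  # (b) Pearson of probits
    rho_s = (6.0 / np.pi) * np.arcsin(rho / 2.0)                 # (c) back to rank scale
    print(rho_s, stats.spearmanr(x, y).correlation)              # close agreement
    ```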

  16. Correcting for bias in the selection and validation of informative diagnostic tests.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2015-04-15

    When developing a new diagnostic test for a disease, there are often multiple candidate classifiers to choose from, and it is unclear if any will offer an improvement in performance compared with current technology. A two-stage design can be used to select a promising classifier (if one exists) in stage one for definitive validation in stage two. However, estimating the true properties of the chosen classifier is complicated by the first stage selection rules. In particular, the usual maximum likelihood estimator (MLE) that combines data from both stages will be biased high. Consequently, confidence intervals and p-values flowing from the MLE will also be incorrect. Building on the results of Pepe et al. (SIM 28:762-779), we derive the most efficient conditionally unbiased estimator and exact confidence intervals for a classifier's sensitivity in a two-stage design with arbitrary selection rules; the condition being that the trial proceeds to the validation stage. We apply our estimation strategy to data from a recent family history screening tool validation study by Walter et al. (BJGP 63:393-400) and are able to identify and successfully adjust for bias in the tool's estimated sensitivity to detect those at higher risk of breast cancer. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  17. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.

  18. Radar cross section models for limited aspect angle windows

    NASA Astrophysics Data System (ADS)

    Robinson, Mark C.

    1992-12-01

    This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS, showing that a limited number of samples can be used to effectively characterize the static RCS of aircraft. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests if there is a difference in the pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimator.

  19. WLAN-Based Indoor Localization Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Saleem, Fasiha; Wyne, Shurjeel

    2016-07-01

    Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with a probabilistic fingerprinting method that is based on maximum likelihood estimation (MLE) of the user location. We incorporate the spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. The localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling the case of missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.

  20. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    NASA Astrophysics Data System (ADS)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  1. Changes in seasonal streamflow extremes experienced in rivers of Northwestern South America (Colombia)

    NASA Astrophysics Data System (ADS)

    Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.

    2017-04-01

    A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance and extreme monthly values. A 20-year moving window, shifted in successive 1-year steps, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonal windowed data were statistically fitted with the Gamma distribution function. Scale and shape parameters were computed using Maximum Likelihood Estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. This increase did not occur simultaneously throughout the region; some locations exhibited continuous increases only at minimum QT.
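
    As a concrete, simplified version of the fitting step (synthetic streamflow values, not the Colombian records), the sketch below fits a Gamma distribution by MLE to one 20-year seasonal window and bootstraps the shape parameter 1000 times:

    ```python
    # Minimal sketch: Gamma MLE for one seasonal window, with a bootstrap CI.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    flow = rng.gamma(shape=2.0, scale=150.0, size=20)    # 20 Januaries, say (synthetic)

    a_hat, _, scale_hat = stats.gamma.fit(flow, floc=0)  # MLE with location fixed at 0

    boot = np.array([stats.gamma.fit(rng.choice(flow, flow.size), floc=0)
                     for _ in range(1000)])              # resample with replacement
    shape_ci = np.percentile(boot[:, 0], [2.5, 97.5])    # bootstrap 95% CI for the shape
    print(a_hat, scale_hat, shape_ci)
    ```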

  2. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve performance on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  3. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target the uncertainty biases. These vehicles represent various missions and configurations, which are used as key inputs into a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.

  4. Prevention Effects and Possible Molecular Mechanism of Mulberry Leaf Extract and its Formulation on Rats with Insulin-Insensitivity.

    PubMed

    Liu, Yan; Li, Xuemei; Xie, Chen; Luo, Xiuzhen; Bao, Yonggang; Wu, Bin; Hu, Yuchi; Zhong, Zhong; Liu, Chang; Li, MinJie

    2016-01-01

    For centuries, mulberry leaf has been used in traditional Chinese medicine for the treatment of diabetes. This study aims to test the prevention effects of a proprietary mulberry leaf extract (MLE) and a formula consisting of MLE, fenugreek seed extract, and cinnamon cassia extract (MLEF) on insulin resistance development in animals. MLE was refined to contain 5% 1-deoxynojirimycin by weight. MLEF was formulated by mixing MLE with cinnamon cassia extract and fenugreek seed extract at a 6:5:3 ratio (by weight). First, the acute toxicity effects of MLE on ICR mice were examined at a 5 g/kg BW dose. Second, two groups of normal rats were administered water or 150 mg/kg BW MLE per day for 29 days to evaluate MLE's effect on normal animals. Third, to examine the effects of MLE and MLEF on model animals, sixty SD rats were divided into five groups, namely, (1) normal, (2) model, (3) high-dose MLE (75 mg/kg BW) treatment, (4) low-dose MLE (15 mg/kg BW) treatment, and (5) MLEF (35 mg/kg BW) treatment. In the second week, rats in groups (2)-(5) were switched to a high-energy diet for three weeks. Afterward, the rats were injected (ip) with a single dose of 105 mg/kg BW alloxan. After four more days, fasting blood glucose, post-prandial blood glucose, serum insulin, cholesterol, and triglyceride levels were measured. Last, liver lysates from the animals were screened with 650 antibodies for changes in the expression or phosphorylation levels of signaling proteins. The results were further validated by Western blot analysis. We found that the maximum tolerated dose of MLE was greater than 5 g/kg in mice. MLE at a 150 mg/kg BW dose showed no effect on fasting blood glucose levels in normal rats. MLE at a 75 mg/kg BW dose and MLEF at a 35 mg/kg BW dose significantly (p < 0.05) reduced fasting blood glucose levels in rats with impaired glucose and lipid metabolism. In total, 34 proteins with significant changes in expression and phosphorylation levels were identified. The changes of JNK, IRS1, and PDK1 were confirmed by Western blot analysis. In conclusion, this study demonstrated for the first time the potential protective effects of MLE and MLEF against hyperglycemia induced by a high-energy diet and toxic chemicals in rats. The most likely mechanism is the promotion of IRS1 phosphorylation, which leads to restoration of insulin sensitivity.

  5. A comparative simulation study of AR(1) estimators in short time series.

    PubMed

    Krone, Tanja; Albers, Casper J; Timmerman, Marieke E

    2017-01-01

    Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r_1 estimator, the C-statistic, the ordinary least squares estimator (OLS) and the maximum likelihood estimator (MLE), and a Bayesian method, considering flat (B_f) and symmetrized reference (B_sr) priors. In a completely crossed experimental design we vary the length of the time series (T = 10, 25, 40, 50 and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for B_sr and the lowest variability for r_1. The power in different conditions is highest for B_sr and OLS. For T = 10, the absolute performance of all measurements is poor, as expected. In Study 2, we study the robustness of the methods to misspecification by generating the data according to an ARMA(1,1) model but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., B_sr and MLE. The bias gets larger when the non-modelled moving average parameter becomes larger. Both the variability and the power depend on the non-modelled parameter. The differences between the two estimation methods are negligible for all measurements.
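
    To make two of the compared estimators concrete, the sketch below computes r_1 and the exact Gaussian MLE of the AR(1) coefficient for one short simulated series (T = 25); the Bayesian variants are omitted here.

    ```python
    # Minimal sketch: r_1 versus the exact MLE for an AR(1) coefficient.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(6)
    n, phi_true = 25, 0.5
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi_true * x[i - 1] + rng.normal()

    xc = x - x.mean()
    r1 = np.sum(xc[1:] * xc[:-1]) / np.sum(xc ** 2)      # lag-1 sample autocorrelation

    def neg_log_lik(phi):
        # exact AR(1) likelihood with the innovation variance profiled out
        resid = xc[1:] - phi * xc[:-1]
        s2 = (xc[0] ** 2 * (1 - phi ** 2) + np.sum(resid ** 2)) / n
        return 0.5 * (n * np.log(s2) - np.log(1 - phi ** 2))

    phi_mle = minimize_scalar(neg_log_lik, bounds=(-0.99, 0.99), method="bounded").x
    print(r1, phi_mle)
    ```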

  6. Estimating the probability of rare events: addressing zero failure data.

    PubMed

    Quigley, John; Revie, Matthew

    2011-07-01

    Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero-event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk-averse inferential procedure where no events are realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be approximated simply by 1/(2.5n), where n is the number of trials. © 2011 Society for Risk Analysis.
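
    A quick numerical reading of that closing approximation (illustration only):

    ```python
    # With zero events in n trials the MLE is 0; the minimax estimator is ~ 1/(2.5 n).
    for n in (10, 50, 100, 500):
        print(f"n={n:4d}  MLE=0.0  minimax~{1.0 / (2.5 * n):.5f}")
    ```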

  7. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.

  8. Prevention Effects and Possible Molecular Mechanism of Mulberry Leaf Extract and its Formulation on Rats with Insulin-Insensitivity

    PubMed Central

    Xie, Chen; Luo, Xiuzhen; Bao, Yonggang; Wu, Bin; Hu, Yuchi; Zhong, Zhong; Liu, Chang; Li, MinJie

    2016-01-01

    For centuries, mulberry leaf has been used in traditional Chinese medicine for the treatment of diabetes. This study aims to test the prevention effects of a proprietary mulberry leaf extract (MLE) and a formula consisting of MLE, fenugreek seed extract, and cinnamon cassia extract (MLEF) on insulin resistance development in animals. MLE was refined to contain 5% 1-deoxynojirimycin by weight. MLEF was formulated by mixing MLE with cinnamon cassia extract and fenugreek seed extract at a 6:5:3 ratio (by weight). First, the acute toxicity effects of MLE on ICR mice were examined at a 5 g/kg BW dose. Second, two groups of normal rats were administered water or 150 mg/kg BW MLE per day for 29 days to evaluate MLE's effect on normal animals. Third, to examine the effects of MLE and MLEF on model animals, sixty SD rats were divided into five groups, namely, (1) normal, (2) model, (3) high-dose MLE (75 mg/kg BW) treatment, (4) low-dose MLE (15 mg/kg BW) treatment, and (5) MLEF (35 mg/kg BW) treatment. In the second week, rats in groups (2)-(5) were switched to a high-energy diet for three weeks. Afterward, the rats were injected (ip) with a single dose of 105 mg/kg BW alloxan. After four more days, fasting blood glucose, post-prandial blood glucose, serum insulin, cholesterol, and triglyceride levels were measured. Last, liver lysates from the animals were screened with 650 antibodies for changes in the expression or phosphorylation levels of signaling proteins. The results were further validated by Western blot analysis. We found that the maximum tolerated dose of MLE was greater than 5 g/kg in mice. MLE at a 150 mg/kg BW dose showed no effect on fasting blood glucose levels in normal rats. MLE at a 75 mg/kg BW dose and MLEF at a 35 mg/kg BW dose significantly (p < 0.05) reduced fasting blood glucose levels in rats with impaired glucose and lipid metabolism. In total, 34 proteins with significant changes in expression and phosphorylation levels were identified. The changes of JNK, IRS1, and PDK1 were confirmed by Western blot analysis. In conclusion, this study demonstrated for the first time the potential protective effects of MLE and MLEF against hyperglycemia induced by a high-energy diet and toxic chemicals in rats. The most likely mechanism is the promotion of IRS1 phosphorylation, which leads to restoration of insulin sensitivity. PMID:27054886

  9. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE PAGES

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...

    2017-08-25

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  10. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment’s beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  11. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  12. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
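
    The core computation being accelerated here, the Gaussian log-likelihood of residuals under a white-plus-power-law covariance, can be sketched as follows. This is a minimal illustration using the Hosking fractional-difference construction of power-law covariance, not the est_noise code itself; the stand-in data and starting values are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve
    from scipy.optimize import minimize

    def powerlaw_cov(n_obs, index):
        """Covariance of unit-amplitude power-law (1/f^index) noise, K = T T',
        with T a lower-triangular Toeplitz matrix of fractional-difference
        coefficients (Hosking recursion, d = index / 2)."""
        d = index / 2.0
        psi = np.ones(n_obs)
        for k in range(1, n_obs):
            psi[k] = psi[k - 1] * (k - 1 + d) / k
        T = np.zeros((n_obs, n_obs))
        for i in range(n_obs):
            T[i, : i + 1] = psi[i::-1]
        return T @ T.T

    def neg_log_like(params, r):
        """Negative Gaussian log-likelihood (constant term dropped)."""
        sig_w, sig_pl, index = np.exp(params[0]), np.exp(params[1]), params[2]
        C = sig_w**2 * np.eye(len(r)) + sig_pl**2 * powerlaw_cov(len(r), index)
        cf = cho_factor(C, lower=True)
        logdet = 2.0 * np.log(np.diag(cf[0])).sum()
        return 0.5 * (logdet + r @ cho_solve(cf, r))

    # r: residuals after removing the time-dependent model (stand-in data here)
    r = np.random.default_rng(0).standard_normal(300)
    fit = minimize(neg_log_like, x0=[0.0, 0.0, 1.0], args=(r,), method="Nelder-Mead")
    print("white, power-law amplitudes:", np.exp(fit.x[:2]), "index:", fit.x[2])
    ```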

  13. A new approach for modeling patient overall radiosensitivity and predicting multiple toxicity endpoints for breast cancer patients.

    PubMed

    Mbah, Chamberlain; De Ruyck, Kim; De Schrijver, Silke; De Sutter, Charlotte; Schiettecatte, Kimberly; Monten, Chris; Paelinck, Leen; De Neve, Wilfried; Thierens, Hubert; West, Catharine; Amorim, Gustavo; Thas, Olivier; Veldeman, Liv

    2018-05-01

    Evaluation of patient characteristics inducing toxicity in breast radiotherapy, using simultaneous modeling of multiple endpoints. In 269 early-stage breast cancer patients treated with whole-breast irradiation (WBI) after breast-conserving surgery, toxicity was scored based on five dichotomized endpoints. Five logistic regression models were fitted, one for each endpoint, and the effect sizes of all variables were estimated using maximum likelihood (MLE). The MLEs were then improved with James-Stein estimates (JSEs), which combine the MLEs obtained for the same variable from the different endpoints. Misclassification errors were computed using MLE- and JSE-based prediction models. For associations, p-values from the sum of squares of MLEs were compared with p-values from the Standardized Total Average Toxicity (STAT) score. With JSEs, the 19 highest-ranked variables were predictive of the five different endpoints. Important variables increasing radiation-induced toxicity were chemotherapy, age, the SATB2 rs2881208 SNP, and nodal irradiation. Treatment position (prone position) was most protective and ranked eighth. Overall, the misclassification errors were 45% and 34% for the MLE- and JSE-based models, respectively. p-Values from the sum of squares of MLEs and p-values from the STAT score led to very similar conclusions, except for the variables nodal irradiation and treatment position, for which STAT p-values suggested an association with radiosensitivity, whereas p-values from the sum of squares indicated no association. Breast volume was ranked as the most significant variable in both strategies. The James-Stein estimator was used for selecting variables that are predictive for multiple toxicity endpoints. With this estimator, 19 variables were predictive for all toxicities, of which four were significantly associated with overall radiosensitivity. JSEs led to an almost 25% reduction in the misclassification error rate compared to conventional MLEs. Finally, patient characteristics that are associated with radiosensitivity were identified without explicitly quantifying radiosensitivity.
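
    A minimal sketch of the positive-part James-Stein shrinkage used in this style of analysis to combine per-endpoint MLEs of one covariate. The effect values and common standard error below are hypothetical, and the paper's actual estimator may differ in detail.

    ```python
    import numpy as np

    def james_stein(mle, se):
        """Positive-part James-Stein: shrink k >= 3 effect estimates toward zero.

        mle -- array of k per-endpoint MLEs for the same covariate
        se  -- common standard error of those estimates (assumed known)
        """
        mle = np.asarray(mle, dtype=float)
        k = len(mle)
        shrink = 1.0 - (k - 2) * se**2 / np.sum(mle**2)
        return max(shrink, 0.0) * mle

    beta_mle = np.array([0.8, 0.5, 0.9, 0.4, 0.7])  # hypothetical endpoint effects
    print(james_stein(beta_mle, se=0.3))            # all five pulled toward zero
    ```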

  14. WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, H; Yeung, I; Milosevic, M

    2016-06-15

    Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of the tumor hypoxic fraction from PET images. Methods: A simple model of hypoxia imaging in the tumor was developed. The distribution of the tracer activity was described as the sum of two probability distributions, one for the normoxic (and necrotic) voxels, the other for the hypoxic voxels. The widths of the distributions arise from variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty in the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize the weights of both distributions; however, it may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
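
    For illustration, a mixture-based estimate of the hypoxic fraction can be sketched with an EM-fitted two-component Gaussian mixture, taking the weight of the high-uptake component as HF. The synthetic uptake values and Gaussian components are assumptions, not the study's model.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # synthetic voxel uptake: 80% normoxic around 1.0, 20% hypoxic around 1.8
    uptake = np.concatenate([rng.normal(1.0, 0.15, 8000),
                             rng.normal(1.8, 0.25, 2000)])

    # maximum likelihood fit (via EM) of a two-component mixture
    gm = GaussianMixture(n_components=2, n_init=5).fit(uptake.reshape(-1, 1))
    hypoxic = int(np.argmax(gm.means_.ravel()))   # high-uptake component
    print("estimated hypoxic fraction:", round(gm.weights_[hypoxic], 3))  # ~0.2
    ```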

  15. Study on constant-step stress accelerated life tests in white organic light-emitting diodes.

    PubMed

    Zhang, J P; Liu, C; Chen, X; Cheng, G L; Zhou, A X

    2014-11-01

    In order to obtain reliability information for white organic light-emitting diodes (OLEDs), two constant-stress tests and one step-stress test were conducted at increased working current. The Weibull function was applied to describe the OLED life distribution, and maximum likelihood estimation (MLE) with an iterative flow chart was used to calculate the shape and scale parameters. Furthermore, the accelerated life equation was determined using the least squares method, a Kolmogorov-Smirnov test was performed to assess whether the white OLED life follows a Weibull distribution, and self-developed software was used to predict the average and median lifetimes of the OLED. The numerical results indicate that white OLED life conforms to a Weibull distribution and that the accelerated life equation satisfies the inverse power law. The estimated life of a white OLED may provide significant guidance for its manufacturers and customers. Copyright © 2014 John Wiley & Sons, Ltd.
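
    The Weibull shape/scale MLE and the Kolmogorov-Smirnov check described above can be reproduced in a few lines; this is a sketch with simulated lifetimes, not the authors' self-developed software, and the parameter values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    # simulated stand-in for accelerated-test failure times (hours)
    lifetimes = stats.weibull_min.rvs(c=2.3, scale=1500.0, size=40, random_state=2)

    # MLE of the Weibull shape and scale, with the location fixed at zero
    shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)

    # Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution
    ks = stats.kstest(lifetimes, "weibull_min", args=(shape, loc, scale))
    print(f"shape={shape:.2f} scale={scale:.0f} KS p-value={ks.pvalue:.2f}")
    print("median life estimate:", scale * np.log(2) ** (1 / shape))
    ```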

  16. Spatiotemporal Co-occurrence of Flanders and West Nile Viruses Within Culex Populations in Shelby County, Tennessee.

    PubMed

    Lucero, D E; Carlson, T C; Delisle, J; Poindexter, S; Jones, T F; Moncayo, A C

    2016-05-01

    West Nile virus (WNV) and Flanders virus (FLAV) can cocirculate in Culex mosquitoes in parts of North America. A large dataset of mosquito pools tested for WNV and FLAV was queried to understand the spatiotemporal relationship between these two viruses in Shelby County, TN. We found strong evidence of global clustering (i.e., spatial autocorrelation) and overlap of local clustering (i.e., hot spots based on Getis-Ord Gi*) of maximum likelihood estimates (MLE) of infection rates (IR) during 2008-2013. Temporally, FLAV emerges and peaks on average 10.2 weeks prior to WNV based on IR. Higher levels of WNV IR were detected within 3,000 m of FLAV-positive pool buffers than outside these buffers. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
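
    The pooled-sample MLE of an infection rate behind estimates like these solves a simple binomial likelihood in which a pool of n insects tests positive with probability 1 - (1 - p)^n. A sketch follows; the pool sizes and counts are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def mle_infection_rate(pool_sizes, positive):
        n = np.asarray(pool_sizes, dtype=float)
        y = np.asarray(positive, dtype=bool)
        def nll(p):  # negative log-likelihood over all pools
            q = (1.0 - p) ** n                  # P(pool tests negative)
            return -(np.log1p(-q[y]).sum() + np.log(q[~y]).sum())
        return minimize_scalar(nll, bounds=(1e-9, 0.5), method="bounded").x

    # hypothetical data: 256 pools of 50 mosquitoes, 4 pools positive
    sizes = [50] * 256
    result = [True] * 4 + [False] * 252
    print("MLE infection rate per mosquito:", mle_infection_rate(sizes, result))
    ```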

  17. The predictive performance of a path-dependent exotic-option credit risk model in the emerging market

    NASA Astrophysics Data System (ADS)

    Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim

    2011-06-01

    Most empirical research of the path-dependent, exotic-option credit risk model focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.

  18. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data are acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of our proposed method. Compared with the existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.

  19. Subpixel based defocused points removal in photon-limited volumetric dataset

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.

    2017-03-01

    The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. First, multiple 2D intensity images, known as elemental images (EIs), are captured. Then the geometric ray-tracing method is employed to reconstruct the 3D sectional images at various depth cues. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis, e.g., 3D object tracking, recognition, classification, and navigation. In this paper, we present a subpixel-level, three-step technique (involving adaptive thresholding, boundary detection, and entropy-based segmentation) to discard the defocused sparse samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.

  20. Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.

    PubMed

    Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús

    2008-10-01

    Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM), the maximum specificity set cover (MSSC) approach, and the sum-product algorithm (SPA). After accepting an input set of proteins with UniProt IDs/accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and be simple to use. An example of inference in a signaling network of the myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website, http://cytoprophet.cse.nd.edu.

  1. Weak and Dynamic GNSS Signal Tracking Strategies for Flight Missions in the Space Service Volume

    PubMed Central

    Jing, Shuai; Zhan, Xingqun; Liu, Baoyu; Chen, Maolin

    2016-01-01

    Weak signals and high dynamics are the two primary concerns of space navigation using GNSS (Global Navigation Satellite System) in the space service volume (SSV). The paper first defines a reference third-order phase-locked loop (PLL) as the baseline of an onboard GNSS receiver and demonstrates the inadequacy of this conventional architecture. An adaptive four-state Kalman filter (KF)-based algorithm is then introduced to optimize the loop noise bandwidth, adaptively regulating the filter gain according to the received signal power and line-of-sight (LOS) dynamics. To overcome loss of lock in weak-signal and high-dynamic environments, an open-loop tracking strategy aided by an inertial navigation system (INS) is recommended, and the traditional maximum likelihood estimation (MLE) method is modified in a non-coherent way by reconstructing the likelihood cost function. Furthermore, a typical mission with combined orbital maneuvering and non-maneuvering arcs is taken as a test object for the two proposed strategies. Finally, computer simulation experiments confirm the effectiveness of the adaptive four-state KF-based strategy under non-maneuvering conditions and the virtue of INS-assisted methods under maneuvering conditions. PMID:27598164

  2. Weak and Dynamic GNSS Signal Tracking Strategies for Flight Missions in the Space Service Volume.

    PubMed

    Jing, Shuai; Zhan, Xingqun; Liu, Baoyu; Chen, Maolin

    2016-09-02

    Weak signals and high dynamics are the two primary concerns of space navigation using GNSS (Global Navigation Satellite System) in the space service volume (SSV). The paper first defines a reference third-order phase-locked loop (PLL) as the baseline of an onboard GNSS receiver and demonstrates the inadequacy of this conventional architecture. An adaptive four-state Kalman filter (KF)-based algorithm is then introduced to optimize the loop noise bandwidth, adaptively regulating the filter gain according to the received signal power and line-of-sight (LOS) dynamics. To overcome loss of lock in weak-signal and high-dynamic environments, an open-loop tracking strategy aided by an inertial navigation system (INS) is recommended, and the traditional maximum likelihood estimation (MLE) method is modified in a non-coherent way by reconstructing the likelihood cost function. Furthermore, a typical mission with combined orbital maneuvering and non-maneuvering arcs is taken as a test object for the two proposed strategies. Finally, computer simulation experiments confirm the effectiveness of the adaptive four-state KF-based strategy under non-maneuvering conditions and the virtue of INS-assisted methods under maneuvering conditions.

  3. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low-light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high-quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is 1, the negative log-likelihood function is convex. Therefore, the optimal solution can be found using convex optimization. Based on filter-bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its Hessian-vector products. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
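
    A worked instance of the T = 1 case: if a pixel's light intensity λ is shared by K binary subpixels, each receives Poisson(λ/K) photons and stays 0 only when it detects none, so counting the m zero subpixels gives the closed-form MLE λ̂ = -K ln(m/K). This is a sketch under that simplified model; the values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    K, lam_true = 4096, 900.0            # binary subpixels, true intensity

    photons = rng.poisson(lam_true / K, size=K)   # photons per subpixel
    m = np.count_nonzero(photons == 0)            # subpixels that stay "0"

    lam_hat = -K * np.log(m / K)                  # closed-form MLE for T = 1
    print(f"true intensity {lam_true}, MLE {lam_hat:.1f}")
    ```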

  4. Distribution of ixodid ticks on dogs in Nuevo León, Mexico, and their association with Borrelia burgdorferi sensu lato.

    PubMed

    Galaviz-Silva, Lucio; Pérez-Treviño, Karla Carmelita; Molina-Garza, Zinnia J

    2013-12-01

    This study aimed to document the geographic distribution of ixodid tick species on dogs and the prevalence of Borrelia burgdorferi s.l. in adult ticks and blood samples by amplification of the ospA region of the B. burgdorferi genome. The study area included nine localities in Nuevo León state. DNA amplification was performed on pools of ticks to calculate the maximum likelihood estimate (MLE) of infection rate, and the community composition (prevalence, abundance, and intensity of infestation) was recorded. A total of 2,543 adult ticks, representing four species, Rhipicephalus sanguineus, Dermacentor variabilis, Rhipicephalus (Boophilus) annulatus, and Amblyomma cajennense, were recorded from 338 infested dogs. Statistically significant correlations were observed between female dogs and infestation (P = 0.0003) and between R. sanguineus and locality (P = 0.0001). Dogs sampled in Guadalupe and Estanzuela were positive by PCR (0.9%) for B. burgdorferi. Rhipicephalus sanguineus had the highest abundance, intensity, and prevalence (10.57, 7.12, and 94.6, respectively). PCR results from 256 pools showed that four pools were positive for D. variabilis (1.6%), with an MLE of 9.2%. Nevertheless, other reservoir hosts for D. variabilis and B. burgdorferi are probably present in the study area and very likely play a much more important role in the ecology of Lyme borreliosis than dogs, which should be considered in future studies.

  5. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    Maximum likelihood estimation (MLE) for the Gaussian graphical model, also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, differently from existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that it can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands to millions of variables show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
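
    The ℓ1-regularized baseline that COP is positioned against can be run with scikit-learn's GraphicalLasso. Below is a small sketch on synthetic data; the COP algorithm itself is not reproduced here, and the dimensions and penalty are arbitrary choices.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso
    from sklearn.datasets import make_sparse_spd_matrix

    # synthetic ground truth: a sparse 30 x 30 precision (inverse covariance) matrix
    precision = make_sparse_spd_matrix(30, alpha=0.95, random_state=4)
    rng = np.random.default_rng(4)
    X = rng.multivariate_normal(np.zeros(30), np.linalg.inv(precision), size=500)

    # penalized-MLE estimate of the precision matrix
    gl = GraphicalLasso(alpha=0.05).fit(X)
    print("nonzero entries in estimated precision:", np.count_nonzero(gl.precision_))
    ```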

  6. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method, and it further enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional, physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions, and a priori parameter distributions within one platform-independent package.

  7. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  8. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  9. Generalized weighted likelihood density estimators with application to finite mixture of exponential family distributions

    PubMed Central

    Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris

    2010-01-01

    The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375

  10. Experimental Validation of Pulse Phase Tracking for X-Ray Pulsar Based Navigation

    NASA Technical Reports Server (NTRS)

    Anderson, Kevin

    2012-01-01

    Pulsars are a form of variable celestial source that have been shown to be usable as aids for autonomous, deep space navigation. Sources emitting in the X-ray band are particularly well suited to navigation because they allow smaller detector sizes. In this paper, X-ray photons arriving from a pulsar are modeled as a non-homogeneous Poisson process. The method of pulse phase tracking is then investigated as a technique to measure the radial distance traveled by a spacecraft over an observation interval. A maximum-likelihood phase estimator (MLE) is used for the case where the observed signal frequency is constant. For the varying signal frequency case, an algorithm is used in which the observation window is broken up into smaller blocks over which an MLE is applied. The outputs of this phase estimation process were then passed through a digital phase-locked loop (DPLL) in order to reduce the errors and produce estimates of the Doppler frequency. These phase tracking algorithms were tested both in a computer simulation environment and using the NASA Goddard Space Flight Center X-ray Navigation Laboratory Testbed (GXLT). This provided an experimental validation with photons being emitted by a modulated X-ray source and detected by a silicon-drift detector. Models of the Crab pulsar and the pulsar B1821-24 were used to generate test scenarios. Three different simulated detector trajectories were tracked by the phase tracking algorithm: a stationary case, one with constant velocity, and one with constant acceleration. All three were performed in one dimension along the line of sight to the pulsar. The first two had a constant signal frequency and the third had a time-varying frequency. All of the constant frequency cases were processed using the MLE, and it was shown that they tracked the initial phase within 0.15% in the simulations and 2.5% in the experiments, based on an average of ten runs. The MLE-DPLL cascade version of the phase tracking algorithm was used in the varying frequency case. This resulted in tracking of the phase and frequency by the DPLL outputs in both the simulation and experimental environments. The Crab pulsar was experimentally tested with a trajectory with higher acceleration. In this case the phase error tended toward zero as the observation extended to 250 seconds, and the Doppler frequency error tended to zero in under 100 seconds.
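
    A toy version of the constant-frequency phase MLE: photon arrivals follow a non-homogeneous Poisson process, and over an integer number of cycles the integral term of the log-likelihood is phase-independent, so the MLE maximizes the summed log rate at the arrival times. The pulse profile, rates, and frequency below are illustrative assumptions, not the paper's models.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    f = 29.6                                 # assumed Crab-like spin frequency, Hz
    alpha, beta, kappa = 50.0, 200.0, 20.0   # background rate, pulsed rate, peakiness

    def template(u):
        # von Mises-like pulse profile, peak value 1 at phase u = 0
        return np.exp(kappa * (np.cos(2.0 * np.pi * u) - 1.0))

    def rate(t, phi):
        return alpha + beta * template((f * t + phi) % 1.0)

    # simulate photon arrivals over an integer number of cycles by thinning
    true_phi, T, lam_max = 0.3, 20.0, alpha + beta
    cand = rng.uniform(0.0, T, rng.poisson(lam_max * T))
    toa = cand[rng.uniform(0.0, lam_max, cand.size) < rate(cand, true_phi)]

    # grid-search MLE of the pulse phase
    grid = np.linspace(0.0, 1.0, 1000, endpoint=False)
    ll = [np.log(rate(toa, phi)).sum() for phi in grid]
    print(f"true phase {true_phi}, MLE {grid[np.argmax(ll)]:.3f}")
    ```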

  11. Real-Time Airborne Gamma-Ray Background Estimation Using NASVD with MLE and Radiation Transport for Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.

    2015-06-01

    Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments, for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations, along with basis functions, which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background frequently changes during the course of the flight.

  12. Multi- and monofractal indices of short-term heart rate variability.

    PubMed

    Fischer, R; Akay, M; Castiglioni, P; Di Rienzo, M

    2003-09-01

    Indices of heart rate variability (HRV) based on fractal signal models have recently been shown to possess value as predictors of mortality in specific patient populations. To develop more powerful clinical indices of HRV based on a fractal signal model, the study investigated two HRV indices based on a monofractal signal model called fractional Brownian motion and an index based on a multifractal signal model called multifractional Brownian motion. The performance of the indices was compared with an HRV index in common clinical use. To compare the indices, 18 normal subjects were subjected to postural changes, and the indices were compared on their ability to respond to the resulting autonomic events in HRV recordings. The magnitude of the response to postural change (normalised by the measurement variability) was assessed by analysis of variance and multiple comparison testing. Four HRV indices were investigated for this study: the standard deviation of all normal R-R intervals, an HRV index commonly used in the clinic; detrended fluctuation analysis, an HRV index found to be the most powerful predictor of mortality in a study of patients with depressed left ventricular function; an HRV index developed using the maximum likelihood estimation (MLE) technique for a monofractal signal model; and an HRV index developed for the analysis of multifractional Brownian motion signals. The HRV index based on the MLE technique was found to respond most strongly to the induced postural changes (95% CI). The magnitude of its response (normalised by the measurement variability) was at least 25% greater than that of any of the other indices tested.

  13. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.

  14. Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers

    PubMed Central

    Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.

    2012-01-01

    Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority over other existing models in the presence of high capture heterogeneity with small sample sizes, and comparable estimation in the presence of low capture heterogeneity with a large sample size. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1,367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559

  15. Calibration of a stochastic health evolution model using NHIS data

    NASA Astrophysics Data System (ADS)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  16. A Tool for Determining the Number of Contributors: Interpreting Complex, Compromised Low-Template Dna Samples

    DTIC Science & Technology

    2017-09-28

    In forensic DNA analysis, the interpretation of a sample acquired from the environment may be dependent upon the assumption on the number of individuals from which the evidence arose. Degraded and... NOCIt results to those obtained when allele counting or maximum likelihood estimator (MLE) methods are employed. NOCIt does not depend upon an AT and

  17. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  18. On the existence, uniqueness, and asymptotic normality of a consistent solution of the likelihood equations for nonidentically distributed observations: Applications to missing data problems

    NASA Technical Reports Server (NTRS)

    Peters, C. (Principal Investigator)

    1980-01-01

    A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.

  19. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

    Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1,..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with variance E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling component var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates S̃i as improved (in mean square error) estimators of Si compared to the MLE Ŝi from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of the estimator of σ², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1,..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLE for the Si.
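
    A minimal sketch of this kind of shrinkage, in a basic method-of-moments empirical Bayes form rather than program MARK's implementation; the survival estimates and standard errors below are hypothetical.

    ```python
    import numpy as np

    def shrink(S_hat, se):
        """Pull yearly survival MLEs toward their mean by sigma^2/(sigma^2+se^2)."""
        S_hat, se = np.asarray(S_hat), np.asarray(se)
        E_S = np.average(S_hat, weights=1.0 / se**2)               # estimate of E(S)
        sigma2 = max(np.var(S_hat, ddof=1) - np.mean(se**2), 0.0)  # process variance
        w = sigma2 / (sigma2 + se**2)
        return E_S + w * (S_hat - E_S)

    S_hat = [0.62, 0.71, 0.55, 0.68, 0.60]   # hypothetical annual survival MLEs
    se = [0.06, 0.05, 0.07, 0.06, 0.05]
    print(shrink(S_hat, se))                 # estimates pulled toward ~0.63
    ```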

  20. Integrating non-colocated well and geophysical data to capture subsurface heterogeneity at an aquifer recharge and recovery site

    NASA Astrophysics Data System (ADS)

    Gottschalk, Ian P.; Hermans, Thomas; Knight, Rosemary; Caers, Jef; Cameron, David A.; Regnery, Julia; McCray, John E.

    2017-12-01

    Geophysical data have proven to be very useful for lithological characterization. However, quantitatively integrating the information gained from acquiring geophysical data generally requires colocated lithological and geophysical data for constructing a rock-physics relationship. In this contribution, the issue of integrating non-colocated geophysical and lithological data is addressed, and the results are applied to simulate groundwater flow in a heterogeneous aquifer at the Prairie Waters Project North Campus aquifer recharge site, Colorado. Two methods of constructing a rock-physics transform between electrical resistivity tomography (ERT) data and lithology measurements are assessed. In the first approach, maximum likelihood estimation (MLE) is used to fit a bimodal lognormal distribution to horizontal cross-sections of the ERT resistivity histogram. In the second approach, a spatial bootstrap is applied to approximate the rock-physics relationship. The rock-physics transforms provide soft data for multiple-point statistics (MPS) simulations. The subsurface models are used to run groundwater flow and tracer test simulations. Each model's uncalibrated, predicted breakthrough time is evaluated based on its agreement with measured subsurface travel time values from infiltration basins to selected groundwater recovery wells. We find that incorporating geophysical information into uncalibrated flow models reduces the difference from observed values, as compared to flow models without geophysical information incorporated. The integration of geophysical data also narrows the variance of predicted tracer breakthrough times substantially. Accuracy is highest and variance is lowest in breakthrough predictions generated by the MLE-based rock-physics transform. Calibrating the ensemble of geophysically constrained models would help produce a suite of realistic flow models for predictive purposes at the site. We find that the success of breakthrough predictions is highly sensitive to the definition of the rock-physics transform; it is therefore important to model this transfer function accurately.

  1. Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups.

    PubMed

    Allison, Annabel; Edwards, Tansy; Omollo, Raymond; Alves, Fabiana; Magirr, Dominic; E Alexander, Neal D

    2015-11-16

    Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points: initial cure, when treatment is completed, and definitive cure, commonly 6 months post end of treatment, to allow for slow response to treatment and detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, and methods to estimate efficacy at extended follow-up accounting for the sequential design and for changes in cure status during extended follow-up. We provide R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE), and the maximum likelihood estimator (MLE) was assessed by simulation. The SHE and PTE are viable approaches for estimating efficacy at an extended follow-up, although the SHE performed better than the PTE: the bias and root mean square error were lower and coverage probabilities higher. Generalization of the triangular design is simple to implement for adaptations to meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. NCT01067443, February 2010.

  2. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
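
    The simulation finding above has a textbook analogue that can be checked in a few lines: for a binomial detection parameter with a uniform prior, the posterior mean (x + 1)/(n + 2) has lower mean squared error than the MLE x/n at small n. This is a sketch of the general point, not the 6P or PD model itself; the sample size is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, reps = 20, 100_000
    theta = rng.uniform(0.0, 1.0, reps)      # true detection parameters
    x = rng.binomial(n, theta)               # observed successes

    mse_mle = np.mean((x / n - theta) ** 2)
    mse_post = np.mean(((x + 1) / (n + 2) - theta) ** 2)
    print(f"MSE of MLE: {mse_mle:.5f}   MSE of posterior mean: {mse_post:.5f}")
    ```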

  3. Performance-informed EEG analysis reveals mixed evidence for EEG signatures unique to the processing of time.

    PubMed

    Schlichting, Nadine; de Jong, Ritske; van Rijn, Hedderik

    2018-06-20

    Certain EEG components (e.g., the contingent negative variation, CNV, or beta oscillations) have been linked specifically to the perception of temporal magnitudes. However, it is as yet unclear whether these EEG components are really unique to time perception or reflect the perception of magnitudes in general. In the current study we recorded EEG while participants had to make judgments about duration (time condition) or numerosity (number condition) in a comparison task. This design allowed us to directly compare EEG signals between the processing of time and number. Stimuli consisted of a series of blue dots appearing and disappearing dynamically on a black screen. Each stimulus was characterized by its duration and the total number of dots that it consisted of. Because tasks like these are known to elicit perceptual interference effects, we used a maximum-likelihood estimation (MLE) procedure in an extensive post hoc analysis to determine, for each participant and dimension separately, to what extent time and numerosity information were taken into account when making a judgment. This approach enabled us to capture individual differences in behavioral performance and, based on the MLE estimates, to select a subset of participants who suppressed task-irrelevant information. Even for this subset of participants, who showed no or only small interference effects and thus were thought to truly process temporal information in the time condition and numerosity information in the number condition, we found a CNV pattern in the time-domain EEG signals in both tasks that was more pronounced in the time condition. We found no substantial evidence for differences between the processing of temporal and numerical information in the time-frequency domain.

  4. Association between -174G/C and -572G/C interleukin 6 gene polymorphisms and severe radiographic damage to the hands of Mexican patients with rheumatoid arthritis: a preliminary report.

    PubMed

    Zavaleta-Muñiz, S A; Gonzalez-Lopez, L; Murillo-Vazquez, J D; Saldaña-Cruz, A M; Vazquez-Villegas, M L; Martín-Márquez, B T; Vasquez-Jimenez, J C; Sandoval-Garcia, F; Ruiz-Padilla, A J; Fajardo-Robledo, N S; Ponce-Guarneros, J M; Rocha-Muñoz, A D; Alcaraz-Lopez, M F; Cardona-Müller, D; Totsuka-Sutto, S E; Rubio-Arellano, E D; Gamez-Nava, J I

    2016-12-19

    Several interleukin 6 gene (IL6) polymorphisms are implicated in susceptibility to rheumatoid arthritis (RA). It has not yet been established with certainty if these polymorphisms are associated with the severe radiographic damage observed in some RA patients, particularly those with the development of joint bone ankylosis (JBA). The objective of the present study was to evaluate the association between severe radiographic damage in hands and the -174G/C and -572G/C IL6 polymorphisms in Mexican Mestizo people with RA. Mestizo adults with RA and long disease duration (>5 years) were classified into two groups according to the radiographic damage in their hands: a) severe radiographic damage (JBA and/or joint bone subluxations) and b) mild or moderate radiographic damage. We compared the differences in genotype and allele frequencies of -174G/C and -572G/C IL6 polymorphisms (genotyped using polymerase chain reaction-restriction fragment length polymorphism) between these two groups. Our findings indicated that the -174G/C polymorphism of IL6 is associated with severe joint radiographic damage [maximum likelihood odds ratios (MLE_OR): 8.03; 95%CI 1.22-187.06; P = 0.03], whereas the -572G/C polymorphism of IL6 exhibited no such association (MLE_OR: 1.5; 95%CI 0.52-4.5; P = 0.44). Higher anti-cyclic citrullinated peptide antibody levels were associated with more severe joint radiographic damage (P = 0.04). We conclude that there is a relevant association between the -174G/C IL6 polymorphism and severe radiographic damage. Future studies in other populations are required to confirm our findings.

  5. Treatment effect heterogeneity for univariate subgroups in clinical trials: Shrinkage, standardization, or else

    PubMed Central

    Varadhan, Ravi; Wang, Sue-Jane

    2016-01-01

    Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the “truth” since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naive approach. Due to its simplicity, the naive approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
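
    The shrinkage idea can be illustrated compactly. Below is a minimal sketch, not the paper's Bayes-DS or Emp-Bayes code, of a method-of-moments empirical Bayes shrinkage: subgroup-specific treatment effect estimates are pulled toward the overall mean in proportion to their sampling variance. The numbers are invented.

        # Empirical-Bayes shrinkage of subgroup treatment effects (sketch;
        # illustrative numbers, not the paper's method or data).
        import numpy as np

        est = np.array([0.10, 0.35, -0.05, 0.22])   # subgroup effect estimates
        se  = np.array([0.08, 0.12,  0.10, 0.09])   # their standard errors

        grand = np.average(est, weights=1 / se**2)  # precision-weighted mean
        # Method-of-moments estimate of between-subgroup variance tau^2:
        tau2 = max(0.0, np.var(est, ddof=1) - np.mean(se**2))
        shrink = se**2 / (se**2 + tau2)             # 1 => full shrinkage
        post = shrink * grand + (1 - shrink) * est
        print(np.round(post, 3))                    # tempered subgroup effects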

  6. Mitigating Multipath Bias Using a Dual-Polarization Antenna: Theoretical Performance, Algorithm Design, and Simulation

    PubMed Central

    Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan

    2017-01-01

    It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom for distinguishing the line-of-sight (LOS) signal from the LOS and multipath composite signal without greatly increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, because multipath is unpredictable, dual-polarization techniques are not always effective, and few studies discuss the conditions under which multipath mitigation with a dual-polarization antenna can outperform that with a single-polarization antenna. This is a fundamental question for dual-polarization multipath mitigation (DPMM) and for the design of mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received-signal cases. Based on this assessment we answer the fundamental question above and identify the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type of multipath for the majority of mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show the superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP antenna. PMID:28208832

  7. Ability evaluation by binary tests: Problems, challenges & recent advances

    NASA Astrophysics Data System (ADS)

    Bashkansky, E.; Turetsky, V.

    2016-11-01

    Binary tests designed to measure abilities of objects under test (OUTs) are widely used in different fields of measurement theory and practice. The number of test items in such tests is usually very limited. The response to each test item provides only one bit of information per OUT. The problem of correct ability assessment is even more complicated when the levels of difficulty of the test items are unknown beforehand. This fact makes the search for effective ways of planning and processing the results of such tests highly relevant. In recent years, there has been some progress in this direction, generated by both the development of computational tools and the emergence of new ideas. The latter are associated with the use of so-called “scale invariant item response models”. Together with the maximum likelihood estimation (MLE) approach, they helped to solve some problems of engineering and proficiency testing. However, several issues related to the assessment of uncertainties, replication scheduling, the use of placebo, and the evaluation of multidimensional abilities still present a challenge for researchers. The authors attempt to outline the ways to solve the above problems.
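
    As a concrete example of MLE in this setting, the sketch below estimates ability under the one-parameter logistic (Rasch) model by Newton-Raphson on the log-likelihood score, assuming the item difficulties are known (in practice they too must be estimated, which is part of the difficulty the abstract alludes to). The difficulties and responses are hypothetical.

        # MLE of ability under a Rasch model with known item difficulties.
        import numpy as np

        b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # item difficulties (assumed)
        x = np.array([1, 1, 1, 0, 0])              # binary responses of one OUT
        # (a finite MLE requires a mixed response pattern, not all 0s or 1s)

        theta = 0.0
        for _ in range(25):                        # Newton-Raphson iterations
            p = 1 / (1 + np.exp(-(theta - b)))     # P(correct) per item
            score = np.sum(x - p)                  # d logL / d theta
            info = np.sum(p * (1 - p))             # Fisher information
            step = score / info
            theta += step
            if abs(step) < 1e-8:
                break
        print(f"ability MLE: {theta:.3f}, SE ~ {1 / np.sqrt(info):.3f}")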

  8. Multiple Two-Way Time Message Exchange (TTME) Time Synchronization for Bridge Monitoring Wireless Sensor Networks

    PubMed Central

    Shi, Fanrong; Tuo, Xianguo; Yang, Simon X.; Li, Huailiang; Shi, Rui

    2017-01-01

    Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors such as temperature, vibration and strain sensors. Since multiple sensors are distributed over the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed in this paper to build linear-topology WSNs and achieve time synchronization. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. A timeout restriction and retry mechanism are employed to avoid the estimation errors caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm avoids the estimation errors caused by clock drift and minimizes the estimation error due to large random delay jitter. The proposed algorithm is an accurate, low-complexity time synchronization algorithm for bridge health monitoring. PMID:28471418

  9. Multiple Two-Way Time Message Exchange (TTME) Time Synchronization for Bridge Monitoring Wireless Sensor Networks.

    PubMed

    Shi, Fanrong; Tuo, Xianguo; Yang, Simon X; Li, Huailiang; Shi, Rui

    2017-05-04

    Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors such as temperature, vibration and strain sensors. Since multiple sensors are distributed over the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed in this paper to build linear-topology WSNs and achieve time synchronization. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. A timeout restriction and retry mechanism are employed to avoid the estimation errors caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm avoids the estimation errors caused by clock drift and minimizes the estimation error due to large random delay jitter. The proposed algorithm is an accurate, low-complexity time synchronization algorithm for bridge health monitoring.
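
    The clock-offset estimate behind a TTME can be sketched briefly. Under the usual symmetric-delay model with timestamps T1 (request sent), T2 (request received), T3 (reply sent) and T4 (reply received), each exchange gives offset = ((T2-T1)-(T4-T3))/2, and with i.i.d. Gaussian delay jitter the MLE over multiple exchanges is their sample mean. The sketch below uses made-up timestamps and also drops exchanges whose round-trip delay exceeds a threshold, loosely mirroring the paper's timeout-and-retry idea.

        # Clock offset from multiple two-way time message exchanges (sketch).
        import numpy as np

        # Each row: (T1, T2, T3, T4) in ms -- hypothetical observations.
        t = np.array([[ 0.0,  5.2,  5.4, 10.1],
                      [20.0, 25.1, 25.3, 30.4],
                      [40.0, 57.9, 58.1, 76.2],   # delayed exchange (outlier)
                      [60.0, 65.0, 65.2, 70.3]])

        rtt = (t[:, 3] - t[:, 0]) - (t[:, 2] - t[:, 1])   # round-trip delay
        ok = rtt < 12.0                                    # timeout restriction
        offsets = ((t[:, 1] - t[:, 0]) - (t[:, 3] - t[:, 2])) / 2
        # With symmetric Gaussian delay jitter, the MLE is the sample mean
        # over the retained exchanges:
        print(f"clock offset MLE: {offsets[ok].mean():.3f} ms")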

  10. Radiance and atmosphere propagation-based method for the target range estimation

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan

    2012-06-01

    Target range estimation traditionally relies on radar and active sonar in modern combat systems. However, the performance of such active sensor devices is degraded tremendously by enemy jamming signals. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure the infrared (IR) radiance radiating from objects at different wavelengths, and this method is robust against electromagnetic jamming. The target radiance measured at each wavelength by the IR sensor depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao Lower Bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
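
    To make the MLE/CRLB comparison concrete, here is a toy sketch, not the paper's MODTRAN-based model: radiance decays as L(r) = L0*exp(-alpha*r) with additive Gaussian sensor noise, so the single-measurement MLE of range inverts the attenuation law, and its Monte Carlo variance can be checked against the CRLB sigma^2 / L'(r)^2. All constants are invented.

        # Toy range estimation from attenuated radiance: MLE vs. CRLB.
        import numpy as np

        L0, alpha, sigma = 10.0, 0.05, 0.02   # assumed source/atmosphere/noise
        r_true = 30.0                          # true range (km)
        rng = np.random.default_rng(1)

        L_clean = L0 * np.exp(-alpha * r_true)
        meas = L_clean + sigma * rng.normal(size=100_000)
        # For Gaussian noise the MLE inverts the deterministic model:
        r_hat = -np.log(np.clip(meas, 1e-12, None) / L0) / alpha

        dL_dr = -alpha * L_clean               # model sensitivity at r_true
        crlb = sigma**2 / dL_dr**2             # Cramer-Rao lower bound
        print(f"MC variance of MLE: {r_hat.var():.4f}, CRLB: {crlb:.4f}")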

  11. A comparison of PCA/ICA for data preprocessing in remote sensing imagery classification

    NASA Astrophysics Data System (ADS)

    He, Hui; Yu, Xianchuan

    2005-10-01

    In this paper a performance comparison of a variety of data preprocessing algorithms in remote sensing image classification is presented. The selected algorithms are principal component analysis (PCA) and three different independent component analyses: Fast-ICA (Aapo Hyvarinen, 1999), Kernel-ICA (KCCA and KGV; Bach & Jordan, 2002), and EFFICA (Aiyou Chen & Peter Bickel, 2003). These algorithms were applied to remote sensing imagery (1600×1197) obtained from Shunyi, Beijing. For classification, a maximum likelihood classification (MLC) method was used on the raw and preprocessed data. The results show that classification with the preprocessed data gives more reliable results than classification with the raw data; among the preprocessing algorithms, the ICA algorithms improve on PCA, and EFFICA performs better than the others. The convergence of these ICA algorithms (for more than a million data points) is also studied; the results show that EFFICA converges much faster than the others. Furthermore, because EFFICA is a one-step maximum likelihood estimate (MLE) that reaches asymptotic Fisher efficiency, its computational cost is small and its memory demand is greatly reduced, which resolves the "out of memory" problem that occurred with the other algorithms.
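
    As a rough illustration of the pipeline (PCA or ICA preprocessing followed by Gaussian maximum likelihood classification), here is a sketch with scikit-learn on synthetic pixel data; QuadraticDiscriminantAnalysis implements the class-conditional Gaussian ML classifier usually called MLC in remote sensing. The data, band count and component count are invented, and this is not the paper's EFFICA implementation.

        # PCA/ICA preprocessing + Gaussian maximum likelihood classification.
        import numpy as np
        from sklearn.decomposition import PCA, FastICA
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Synthetic "pixels": 6 spectral bands, 3 land-cover classes.
        X = np.vstack([rng.normal(loc=m, scale=1.0, size=(500, 6))
                       for m in (0.0, 1.5, 3.0)])
        y = np.repeat([0, 1, 2], 500)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        for name, prep in [("raw", None), ("PCA", PCA(n_components=4)),
                           ("FastICA", FastICA(n_components=4, random_state=0))]:
            A = prep.fit_transform(Xtr) if prep else Xtr
            B = prep.transform(Xte) if prep else Xte
            mlc = QuadraticDiscriminantAnalysis().fit(A, ytr)  # Gaussian MLC
            print(name, f"accuracy: {mlc.score(B, yte):.3f}")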

  12. Skew-t fits to mortality data--can a Gaussian-related distribution replace the Gompertz-Makeham as the basis for mortality studies?

    PubMed

    Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej

    2013-08-01

    Gompertz-related distributions have dominated mortality studies for 187 years. However, unrelated distributions also fit well to mortality data. These compete with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. The study aims were therefore to compare skew-t fits to Human Mortality Database data with those of Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed that skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including the 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application in error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
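
    The paper's fitting code is in R (bbmle::mle2). As a language-neutral sketch of the same workflow, the block below fits a Gompertz model to ages at death by direct minimization of the negative log-likelihood and scores it with BIC; a competing distribution such as a skew-t would be fit the same way and the lower BIC preferred. The density used is f(x) = a*exp(b*x)*exp(-(a/b)*(exp(b*x)-1)); the data are simulated, not Human Mortality Database values.

        # MLE fit of a Gompertz model to ages at death, scored by BIC (sketch).
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        # Simulate Gompertz ages via the inverse CDF of S(x) = exp(-(a/b)(e^{bx}-1)).
        a_true, b_true = 1e-4, 0.1
        u = rng.random(5000)
        ages = np.log(1 - np.log(u) * b_true / a_true) / b_true

        def nll(theta):
            log_a, log_b = theta                 # log-scale keeps a, b positive
            a, b = np.exp(log_a), np.exp(log_b)
            loglik = np.log(a) + b * ages - (a / b) * (np.exp(b * ages) - 1)
            return -loglik.sum()

        fit = minimize(nll, x0=[np.log(1e-3), np.log(0.05)], method='Nelder-Mead')
        k, n = 2, len(ages)
        bic = k * np.log(n) + 2 * fit.fun        # BIC = k ln n - 2 lnL
        print("a, b =", np.exp(fit.x), " BIC =", round(bic, 1))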

  13. Estimating relative risks for common outcome using PROC NLP.

    PubMed

    Yu, Binbing; Wang, Zhuoqiao

    2008-05-01

    In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
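
    The SAS macro is in the paper's appendix; as a rough cross-language sketch of the same constrained-MLE idea, the log-binomial likelihood can be maximized directly subject to the constraint that every fitted linear predictor stays non-positive, so fitted probabilities stay at or below 1. The data and covariates here are synthetic.

        # Log-binomial MLE via constrained optimization (sketch of the idea
        # behind the PROC NLP approach; synthetic data).
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n = 2000
        X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept, exposure
        beta_true = np.array([np.log(0.2), np.log(1.8)])          # true RR = 1.8
        y = rng.random(n) < np.exp(X @ beta_true)                 # binary outcomes

        def nll(beta):
            p = np.clip(np.exp(X @ beta), 1e-12, 1 - 1e-12)       # log link
            return -(np.log(p[y]).sum() + np.log(1 - p[~y]).sum())

        # Constraint: exp(X beta) <= 1 for every observation, i.e. X beta <= 0.
        cons = {'type': 'ineq', 'fun': lambda b: -(X @ b)}
        fit = minimize(nll, x0=[np.log(0.1), 0.0], method='SLSQP',
                       constraints=[cons])
        print(f"estimated relative risk: {np.exp(fit.x[1]):.2f}")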

  14. Single Tracking Location Acoustic Radiation Force Impulse Viscoelasticity Estimation (STL-VE): A Method for Measuring Tissue Viscoelastic Parameters

    PubMed Central

    Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A

    2015-01-01

    Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170

  15. Parent-child mediated learning interactions as determinants of cognitive modifiability: recent research and future directions.

    PubMed

    Tzuriel, D

    1999-05-01

    The main objectives of this article are to describe the effects of mediated learning experience (MLE) strategies in mother-child interactions on the child's cognitive modifiability, the effects of distal factors (e.g., socioeconomic status, mother's intelligence, child's personality) on MLE interactions, and the effects of situational variables on MLE processes. Methodological aspects of measurement of MLE interactions and of cognitive modifiability, using a dynamic assessment approach, are discussed. Studies with infants showed that the quality of mother-infant MLE interactions predicts later cognitive functioning and that MLE patterns and children's cognitive performance change as a result of intervention programs. Studies with preschool and school-aged children showed that MLE interactions predict cognitive modifiability and that distal factors predict MLE interactions but not the child's cognitive modifiability. The child's cognitive modifiability was predicted by MLE interactions in a structured but not in a free-play situation. Mediation for transcendence (e.g., teaching rules and generalizations) appeared to be the strongest predictor of children's cognitive modifiability. Discussion of future research includes the consideration of a holistic transactional approach, which refers to MLE processes, personality, and motivational-affective factors, the cultural context of mediation, perception of the whole family as a mediational unit, and the "mediational normative scripts."

  16. Test of the Hill Stability Criterion against Chaos Indicators

    NASA Astrophysics Data System (ADS)

    Satyal, Suman; Quarles, Billy; Hinse, Tobias

    2012-10-01

    The efficacy of the Hill Stability (HS) criterion is tested against other known chaos indicators, such as the Maximum Lyapunov Exponent (MLE) and Mean Exponential Growth of Nearby Orbits (MEGNO) maps. First, the orbits of four observationally verified binary star systems, γ Cephei, Gliese-86, HD41004, and HD196885, are integrated using standard integration packages (MERCURY, SWIFTER, NBI, C/C++). The HS, which measures the orbital perturbation of a planet around the primary star due to the secondary star, is calculated for each system. The LE spectra are generated to measure the divergence/convergence rate of stable manifolds, and the MEGNO maps are generated from the variational equations of the system during the integration process. These maps make it possible to accurately differentiate between stable and unstable dynamical systems. The results obtained from the analysis of the HS, MLE, and MEGNO maps are then checked for their dynamical variations and resemblance. The HS of most of the planets indicates stable, quasi-periodic orbits for at least ten million years. The MLE and MEGNO maps also indicate local quasi-periodicity and global stability over a relatively short integration period. The HS criterion is found to be a comparably efficient tool for measuring the stability of planetary orbits.
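
    Here MLE denotes the maximum Lyapunov exponent, not maximum likelihood. As a self-contained illustration of the indicator, the sketch below computes the MLE of the logistic map from its derivative: a positive value flags chaos, a negative value regular motion. The logistic map is only a stand-in; the paper's N-body integrations are far beyond a snippet.

        # Maximum Lyapunov exponent of the logistic map x -> r x (1 - x):
        # lambda = lim (1/n) sum ln |f'(x_i)|, with f'(x) = r (1 - 2x).
        import numpy as np

        def lyapunov(r, n_iter=100_000, x0=0.4):
            x, total = x0, 0.0
            for _ in range(n_iter):
                total += np.log(abs(r * (1 - 2 * x)))
                x = r * x * (1 - x)
            return total / n_iter

        for r in (3.2, 3.5, 4.0):
            lam = lyapunov(r)
            print(f"r = {r}: MLE = {lam:+.3f} "
                  f"({'chaotic' if lam > 0 else 'regular'})")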

  17. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimal prior information, since there is no need to know the time-varying RF signal amplitudes or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  18. Growth promoting potential of fresh and stored Moringa oleifera leaf extracts in improving seedling vigor, growth and productivity of wheat crop.

    PubMed

    Khan, Shahbaz; Basra, Shahzad Maqsood Ahmed; Afzal, Irfan; Nawaz, Muhammad; Rehman, Hafeez Ur

    2017-12-01

    Wheat is the staple food of the region, contributing 60% of daily caloric intake, but delayed sowing reduces its yield due to a shortened life span. Moringa leaf extract (MLE) is considered to improve the growth and development of field crops. The study comprised two experiments. In the first experiment, freshly extracted MLE, alone and in combination with growth-promoting substances, was stored at two temperature regimes. Chemical analysis after 1, 2, and 3 months of storage showed that phenolic and ascorbic acid concentrations decreased with increasing storage period. Fresh extracts improved the speed and spread of emergence and seedling vigor. The effectiveness of MLE in terms of phenolic and ascorbate concentrations was highest up to 1 month and decreased with prolonged storage. The growth-enhancing potential of MLE also declined with increasing storage duration. Under field conditions, the bio-efficacy of these fresh and stored MLE preparations was compared when applied as a foliar spray at the tillering and booting stages of wheat. Foliar-applied fresh MLE was the most effective in improving growth parameters. Fresh MLE enhanced biochemical and yield attributes in late-sown wheat. This growth-promoting potential of MLE decreased with storage time. Application of fresh MLE helped to achieve higher economic yield.

  19. The ASEAN economic community and medical qualification

    PubMed Central

    Kittrakulrat, Jathurong; Jongjatuporn, Witthawin; Jurjai, Ravipol; Jarupanich, Nicha; Pongpirul, Krit

    2014-01-01

    Background In the regional movement toward the ASEAN Economic Community (AEC), medical professionals including physicians can be qualified to practice medicine in another country. Ensuring comparable, excellent medical qualification systems is crucial, but the availability and analysis of relevant information have been lacking. Objective This study had the following aims: 1) to comparatively analyze information on Medical Licensing Examinations (MLE) across ASEAN countries and 2) to assess stakeholders’ views on potential consequences of the AEC for the medical profession from a Thai perspective. Design To search for relevant information on MLE, we started with each country's national body as the primary data source. In the absence of such data, secondary data sources including official websites of medical universities, colleagues in international and national medical student organizations, and other appropriate Internet sources were used. Feasibility and concerns about validity and reliability of these sources were discussed among investigators. Experts in the region invited through HealthSpace.Asia conducted the final data validation. For the second objective, in-depth interviews were conducted with 13 Thai stakeholders, purposely selected based on a maximum variation sampling technique to represent the points of view of the medical licensing authority, the medical profession, ethicists and economists. Results MLE systems exist in all ASEAN countries except Brunei, but vary greatly. Although the majority has a national MLE system, Singapore, Indonesia, and Vietnam accept results of MLE conducted at universities. Thailand adopted the USA's 3-step approach that aims to check pre-clinical knowledge, clinical knowledge, and clinical skills. Most countries, however, require only one step. A multiple choice question (MCQ) is the most commonly used method of assessment; a modified essay question (MEQ) is the next most common. Although both tests assess candidate's knowledge, the Objective Structured Clinical Examination (OSCE) is used to verify clinical skills of the examinee. The validity of the medical license and that it reflects a consistent and high standard of medical knowledge is a sensitive issue because of potentially unfair movement of physicians and an embedded sense of domination, at least from a Thai perspective. Conclusions MLE systems differ across ASEAN countries in some important aspects that might be of concern from a fairness viewpoint and therefore should be addressed in the movement toward the AEC. PMID:25215908

  20. A health risk benchmark for the neurologic effects of styrene: comparison with NOAEL/LOAEL approach.

    PubMed

    Rabovsky, J; Fowles, J; Hill, M D; Lewis, D C

    2001-02-01

    Benchmark dose (BMD) analysis was used to estimate an inhalation benchmark concentration for styrene neurotoxicity. Quantal data on neuropsychologic test results from styrene-exposed workers [Mutti et al. (1984). American Journal of Industrial Medicine, 5, 275-286] were used to quantify neurotoxicity, defined as the percent of tested workers who responded abnormally to ≥1, ≥2, or ≥3 out of a battery of eight tests. Exposure was based on previously published results on mean urinary mandelic- and phenylglyoxylic acid levels in the workers, converted to air styrene levels (15, 44, 74, or 115 ppm). Nonstyrene-exposed workers from the same region served as a control group. Maximum-likelihood estimates (MLEs) and BMDs at 5 and 10% response levels of the exposed population were obtained from log-normal analysis of the quantal data. The highest MLE was 9 ppm (BMD = 4 ppm) styrene and represents abnormal responses to ≥3 tests by 10% of the exposed population. The most health-protective MLE was 2 ppm styrene (BMD = 0.3 ppm) and represents abnormal responses to ≥1 test by 5% of the exposed population. A no observed adverse effect level/lowest observed adverse effect level (NOAEL/LOAEL) analysis of the same quantal data showed workers in all styrene exposure groups responded abnormally to ≥1, ≥2, or ≥3 tests, compared to controls, and the LOAEL was 15 ppm. A comparison of the BMD and NOAEL/LOAEL analyses suggests that at air styrene levels below the LOAEL, a segment of the worker population may be adversely affected. The benchmark approach will be useful for styrene noncancer risk assessment purposes by providing a more accurate estimate of potential risk that should, in turn, help to reduce the uncertainty that is a common problem in setting exposure levels.
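
    To make the BMD machinery concrete, a minimal sketch: fit a log-normal (probit-in-log-dose) model to quantal response counts by MLE, then solve for the dose giving a 10% extra risk over an assumed background. The counts and the background rate below are invented, not the Mutti et al. data.

        # Benchmark dose from quantal data via a log-normal dose-response
        # model (sketch; invented counts).
        import numpy as np
        from scipy.optimize import brentq, minimize
        from scipy.stats import norm

        dose   = np.array([15.0, 44.0, 74.0, 115.0])  # air styrene, ppm
        n_sub  = np.array([12, 12, 12, 12])            # workers per group
        n_resp = np.array([2, 4, 7, 10])               # abnormal responders

        def p(d, a, b):                   # response probability at dose d
            return norm.cdf(a + b * np.log(d))

        def nll(theta):
            a, b = theta
            pr = np.clip(p(dose, a, b), 1e-9, 1 - 1e-9)
            return -np.sum(n_resp * np.log(pr)
                           + (n_sub - n_resp) * np.log(1 - pr))

        a_hat, b_hat = minimize(nll, x0=[-3.0, 0.8], method='Nelder-Mead').x

        p0 = 0.05                         # assumed background response rate
        extra = lambda d: (p(d, a_hat, b_hat) - p0) / (1 - p0) - 0.10
        bmd10 = brentq(extra, 1e-3, 200.0)  # dose with 10% extra risk
        print(f"MLE of BMD10: {bmd10:.1f} ppm")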

  1. The transmission process: A combinatorial stochastic process for the evolution of transmission trees over networks.

    PubMed

    Sainudiin, Raazesh; Welch, David

    2016-12-07

    We derive a combinatorial stochastic process for the evolution of the transmission tree over the infected vertices of a host contact network in a susceptible-infected (SI) model of an epidemic. Models of transmission trees are crucial to understanding the evolution of pathogen populations. We provide an explicit description of the transmission process on the product state space of (rooted planar ranked labelled) binary transmission trees and labelled host contact networks with SI-tags as a discrete-state continuous-time Markov chain. We give the exact probability of any transmission tree when the host contact network is a complete, star or path network - three illustrative examples. We then develop a biparametric Beta-splitting model that directly generates transmission trees with exact probabilities as a function of the model parameters, but without explicitly modelling the underlying contact network, and show that for specific values of the parameters we can recover the exact probabilities for our three example networks through the Markov chain construction that explicitly models the underlying contact network. We use the maximum likelihood estimator (MLE) to consistently infer the two parameters driving the transmission process based on observations of the transmission trees and use the exact MLE to characterize equivalence classes over the space of contact networks with a single initial infection. An exploratory simulation study of the MLEs from transmission trees sampled from three other deterministic and four random families of classical contact networks is conducted to shed light on the relation between the MLEs of these families with some implications for statistical inference along with pointers to further extensions of our models. The insights developed here are also applicable to the simplest models of "meme" evolution in online social media networks through transmission events that can be distilled from observable actions such as "likes", "mentions", "retweets" and "+1s" along with any concomitant comments. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Isolation of deer tick virus (Powassan virus, lineage II) from Ixodes scapularis and detection of antibody in vertebrate hosts sampled in the Hudson Valley, New York State

    PubMed Central

    2013-01-01

    Background Deer tick virus, DTV, is a genetically and ecologically distinct lineage of Powassan virus (POWV) also known as lineage II POWV. Human incidence of POW encephalitis has increased in the last 15 years, potentially due to the emergence of DTV, particularly in the Hudson Valley of New York State. We initiated an extensive sampling campaign to determine whether POWV was extant throughout the Hudson Valley in tick vectors and/or vertebrate hosts. Methods More than 13,000 ticks were collected from hosts or vegetation and tested for the presence of DTV using molecular and virus isolation techniques. Vertebrate hosts of Ixodes scapularis (black-legged tick) were trapped (mammals) or netted (birds) and blood samples analyzed for the presence of neutralizing antibodies to POWV. Maximum likelihood estimates (MLE) were calculated to determine infection rates in ticks at each study site. Results Evidence of DTV was identified each year from 2007 to 2012 in nymphal and adult I. scapularis collected from the Hudson Valley. Fifty-eight tick pools were positive for virus and/or RNA. Infection rates were higher in adult ticks collected from areas east of the Hudson River. MLE limits ranged from 0.2 to 6.0 infected adults per 100 at sites where DTV was detected. Virginia opossums, striped skunks and raccoons were the source of infected nymphal ticks collected as replete larvae. Serologic evidence of POWV infection was detected in woodchucks (4/6), an opossum (1/6), and birds (4/727). Lineage I, prototype POWV, was not detected. Conclusions These data demonstrate widespread enzootic transmission of DTV throughout the Hudson Valley, in particular areas east of the river. High infection rates were detected in counties where recent POW encephalitis cases have been identified, supporting the hypothesis that lineage II POWV, DTV, is responsible for these human infections. PMID:24016533

  3. Isolation of deer tick virus (Powassan virus, lineage II) from Ixodes scapularis and detection of antibody in vertebrate hosts sampled in the Hudson Valley, New York State.

    PubMed

    Dupuis, Alan P; Peters, Ryan J; Prusinski, Melissa A; Falco, Richard C; Ostfeld, Richard S; Kramer, Laura D

    2013-07-15

    Deer tick virus, DTV, is a genetically and ecologically distinct lineage of Powassan virus (POWV) also known as lineage II POWV. Human incidence of POW encephalitis has increased in the last 15 years, potentially due to the emergence of DTV, particularly in the Hudson Valley of New York State. We initiated an extensive sampling campaign to determine whether POWV was extant throughout the Hudson Valley in tick vectors and/or vertebrate hosts. More than 13,000 ticks were collected from hosts or vegetation and tested for the presence of DTV using molecular and virus isolation techniques. Vertebrate hosts of Ixodes scapularis (black-legged tick) were trapped (mammals) or netted (birds) and blood samples analyzed for the presence of neutralizing antibodies to POWV. Maximum likelihood estimates (MLE) were calculated to determine infection rates in ticks at each study site. Evidence of DTV was identified each year from 2007 to 2012 in nymphal and adult I. scapularis collected from the Hudson Valley. Fifty-eight tick pools were positive for virus and/or RNA. Infection rates were higher in adult ticks collected from areas east of the Hudson River. MLE limits ranged from 0.2 to 6.0 infected adults per 100 at sites where DTV was detected. Virginia opossums, striped skunks and raccoons were the source of infected nymphal ticks collected as replete larvae. Serologic evidence of POWV infection was detected in woodchucks (4/6), an opossum (1/6), and birds (4/727). Lineage I, prototype POWV, was not detected. These data demonstrate widespread enzootic transmission of DTV throughout the Hudson Valley, in particular areas east of the river. High infection rates were detected in counties where recent POW encephalitis cases have been identified, supporting the hypothesis that lineage II POWV, DTV, is responsible for these human infections.
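
    The pooled-testing MLE used for such tick infection rates has a closed form when all pools share one size, p = 1 - (1 - k/n)^(1/s), and a one-line numerical solution otherwise. A sketch with invented pool counts (not the study's data), solving the score equation for mixed pool sizes:

        # MLE of per-tick infection rate from pooled testing (sketch;
        # invented pool data). P(pool positive) = 1 - (1 - p)^size.
        import numpy as np
        from scipy.optimize import brentq

        sizes    = np.array([50, 50, 50, 25, 25, 10, 10, 10])  # ticks per pool
        positive = np.array([ 1,  1,  0,  1,  0,  0,  0,  1]) == 1

        def score(p):                      # derivative of the log-likelihood
            q = (1 - p) ** sizes
            return np.sum(np.where(positive, sizes * q / (1 - q),
                                   -sizes)) / (1 - p)

        p_hat = brentq(score, 1e-8, 0.5)   # root of the score function
        print(f"MLE infection rate: {100 * p_hat:.2f} per 100 ticks")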

  4. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) yielded very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
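
    As an illustration of the MLE option for the probability density of landslide area, scipy can fit an inverse-gamma model directly. This is a sketch on synthetic areas, not the tool itself (which is an R-based OGC WPS); the parameter values are invented.

        # MLE fit of an inverse-gamma model to landslide areas (sketch;
        # synthetic data standing in for a landslide inventory).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=500,
                                   random_state=rng)   # "landslide areas", m^2

        # Fix the location at 0 so only shape and scale are estimated:
        shape, loc, scale = stats.invgamma.fit(areas, floc=0)
        print(f"shape = {shape:.2f}, scale = {scale:.0f}")
        # With small samples (< ~150 in the abstract's experience) the
        # numerical MLE may fail to converge; HDE/KDE still apply then.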

  5. Experimental and computational correlation of fracture parameters KIc, JIc, and GIc for unimodular and bimodular graphite components

    NASA Astrophysics Data System (ADS)

    Bhushan, Awani; Panda, S. K.

    2018-05-01

    The influence of bimodularity (different stress-strain behaviour in tension and compression) on the fracture behaviour of graphite specimens has been studied with fracture toughness (KIc), critical J-integral (JIc) and critical strain energy release rate (GIc) as the characterizing parameters. The bimodularity index (ratio of tensile Young's modulus to compressive Young's modulus) of graphite specimens has been obtained from the normalized test data of tensile and compression experimentation. Single edge notch bend (SENB) testing of pre-cracked specimens from the same lot has been carried out as per ASTM standard D7779-11 to determine the peak load and the critical fracture parameters KIc, GIc and JIc using digital image correlation of crack opening displacements. Weibull weakest-link theory has been used to evaluate the mean peak load, Weibull modulus and goodness of fit, employing the two-parameter least squares method (LIN2) and biased (MLE2-B) and unbiased (MLE2-U) maximum likelihood estimators. The stress-dependent elasticity problem of three-dimensional crack progression behaviour for the bimodular graphite components has been solved using an iterative finite element procedure. The crack characterizing parameters, critical stress intensity factor and critical strain energy release rate, have been estimated with the help of a Weibull distribution plot of peak load versus cumulative probability of failure. Experimental and computational fracture parameters have been compared qualitatively to describe the significance of bimodularity. The bimodular influence on the fracture behaviour of SENB graphite is reflected in the experimental evaluation of the GIc values only, which have been found to differ from the calculated JIc values. The numerically evaluated bimodular 3D J-integral value is found to be close to the GIc value, whereas the unimodular 3D J-value is nearer to the JIc value. The significant difference between the unimodular JIc and bimodular GIc indicates that GIc should be considered as the standard fracture parameter for bimodular brittle specimens.
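
    For the Weibull part of the analysis, a minimal sketch: scipy's weibull_min.fit with a fixed zero location returns the plain MLE of the Weibull modulus (shape) and characteristic load (scale) from peak-load data. The loads are invented, and the paper's biased/unbiased MLE2 corrections are not reproduced here.

        # Plain two-parameter Weibull MLE on invented peak loads (N).
        import numpy as np
        from scipy import stats

        peak_loads = np.array([412., 388., 455., 430., 401., 476., 398.,
                               443., 419., 465.])
        m, loc, p0 = stats.weibull_min.fit(peak_loads, floc=0)
        print(f"Weibull modulus m = {m:.1f}, characteristic load = {p0:.0f} N")
        # Cumulative failure probability at load P: F(P) = 1 - exp(-(P/p0)**m)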

  6. Bulk flow in the combined 2MTF and 6dFGSv surveys

    NASA Astrophysics Data System (ADS)

    Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao

    2018-07-01

    We create a combined sample of 10 904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a maximum likelihood estimator (ηMLE) for bulk flow measurements that allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the Λ cold dark matter expectation for similar size scales.

  7. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
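
    For the basic Wiener degradation model X(t) = mu*t + sigma*B(t), without the paper's transformed time scales, random effects or measurement error, the MLE of drift and diffusion from sampled increments has a closed form. A sketch with simulated paths:

        # Closed-form MLE for a basic Wiener degradation process (sketch).
        import numpy as np

        rng = np.random.default_rng(5)
        mu_true, sigma_true = 0.5, 0.2
        t = np.linspace(0, 10, 51)                 # inspection times
        dt = np.diff(t)
        # Simulate 20 units' degradation increments:
        dX = mu_true * dt + sigma_true * np.sqrt(dt) * rng.normal(size=(20, 50))

        mu_hat = dX.sum() / (20 * dt.sum())                  # MLE of drift
        sigma2_hat = np.mean((dX - mu_hat * dt) ** 2 / dt)   # MLE of diffusion
        print(f"mu = {mu_hat:.3f}, sigma = {np.sqrt(sigma2_hat):.3f}")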

  8. Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.

    PubMed

    Hack, C Eric

    2006-04-17

    Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
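
    For readers new to the approach, the mechanics fit in a few lines: a random-walk Metropolis sampler combines a prior with the data likelihood and yields draws from the posterior of a model parameter. The toy model below, one parameter with a Gaussian likelihood, is illustrative only and nothing like a full PBTK model.

        # Minimal random-walk Metropolis sampler for one model parameter
        # (toy stand-in for PBTK calibration).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        data = rng.normal(2.0, 0.5, size=30)   # "measured internal doses"

        def log_post(theta):
            log_prior = norm.logpdf(theta, loc=1.0, scale=2.0)  # prior knowledge
            log_lik = norm.logpdf(data, loc=theta, scale=0.5).sum()
            return log_prior + log_lik

        theta, chain = 1.0, []
        for _ in range(20_000):
            prop = theta + 0.2 * rng.normal()           # random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop                             # accept
            chain.append(theta)
        post = np.array(chain[5000:])                    # drop burn-in
        print(f"posterior mean {post.mean():.2f}, 95% credible interval "
              f"({np.quantile(post, .025):.2f}, {np.quantile(post, .975):.2f})")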

  9. Bulk flow in the combined 2MTF and 6dFGSv surveys

    NASA Astrophysics Data System (ADS)

    Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao

    2018-04-01

    We create a combined sample of 10,904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a new maximum likelihood estimator (ηMLE) for bulk flow measurements which allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the ΛCDM expectation for similar size scales.

  10. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL algorithm, the ordered subsets separable paraboloidal surrogate (OSSPS) algorithm, the median root prior (MRP), and OSMAPOSL with a quadratic prior. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improved with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
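
    The core of this MLE family of reconstruction algorithms is the MLEM multiplicative update, which STIR's OSMAPOSL generalizes with ordered subsets and priors. A minimal sketch on a tiny invented system matrix:

        # One-dimensional MLEM reconstruction sketch: iterate
        # lambda <- lambda / (A^T 1) * A^T (y / (A lambda)).
        import numpy as np

        rng = np.random.default_rng(7)
        A = rng.random((40, 10))            # toy system (projection) matrix
        x_true = rng.random(10) * 5         # true activity
        y = rng.poisson(A @ x_true)         # noisy projection data

        x = np.ones(10)                     # uniform initial estimate
        sens = A.T @ np.ones(len(y))        # sensitivity image A^T 1
        for _ in range(50):                 # more iterations -> sharper image
            x *= (A.T @ (y / (A @ x))) / sens   # MLEM multiplicative update
        print(np.round(x, 2))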

  11. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero hurdle model, models whether the dependent variable is zero; the second part, a truncated negative binomial model, models the nonzero (positive integer) values. The discrete dependent variable in such cases is censored for some values. The type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Parameters are estimated by maximum likelihood estimation (MLE). Hurdle negative binomial regression for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are counts that contain zero values in some observations and varying positive values in others. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and the number of neonatal visits.
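
    A sketch of the plain (uncensored) hurdle negative binomial log-likelihood may help fix ideas; the paper's contribution, the extension to right-censored counts, is not reproduced here. The parametrization and data are illustrative only.

        # Negative log-likelihood of a plain hurdle negative binomial model
        # with constant parameters (sketch; no censoring, no covariates).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import nbinom

        y = np.array([0, 0, 0, 1, 0, 2, 0, 5, 1, 0, 3, 0, 0, 7, 2])  # counts

        def nll(theta):
            logit_pi, log_n, logit_p = theta
            pi0 = 1 / (1 + np.exp(-logit_pi))    # P(y = 0), hurdle part
            n_, p_ = np.exp(log_n), 1 / (1 + np.exp(-logit_p))
            zero = y == 0
            # Truncated NB for the positives: pmf renormalized over y > 0.
            ll_pos = (np.log1p(-pi0) + nbinom.logpmf(y[~zero], n_, p_)
                      - np.log1p(-nbinom.pmf(0, n_, p_)))
            return -(np.sum(zero) * np.log(pi0) + ll_pos.sum())

        fit = minimize(nll, x0=[0.0, 0.0, 0.0], method='Nelder-Mead')
        print("converged:", fit.success, " NLL:", round(fit.fun, 2))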

  12. Mangifera indica L. Leaf Extract in Combination With Luteolin or Quercetin Enhances VO2peak and Peak Power Output, and Preserves Skeletal Muscle Function During Ischemia-Reperfusion in Humans.

    PubMed

    Gelabert-Rebato, Miriam; Wiebe, Julia C; Martin-Rincon, Marcos; Gericke, Nigel; Perez-Valera, Mario; Curtelin, David; Galvan-Alvarez, Victor; Lopez-Rios, Laura; Morales-Alamo, David; Calbet, Jose A L

    2018-01-01

    It remains unknown whether polyphenols such as luteolin (Lut), mangiferin and quercetin (Q) have ergogenic effects during repeated all-out prolonged sprints. Here we tested the effect of Mangifera indica L. leaf extract (MLE) rich in mangiferin (Zynamite®) administered with either quercetin (Q) and tiger nut extract (TNE), or with luteolin (Lut), on sprint performance and recovery from ischemia-reperfusion. Thirty young volunteers were randomly assigned to three treatments 48 h before exercise. Treatment A: placebo (500 mg of maltodextrin/day); B: 140 mg of MLE (60% mangiferin) and 50 mg of Lut/day; and C: 140 mg of MLE, 600 mg of Q and 350 mg of TNE/day. After warm-up, subjects performed two 30 s Wingate tests and a 60 s all-out sprint interspaced by 4 min recovery periods. At the end of the 60 s sprint the circulation of both legs was instantaneously occluded for 20 s. Then the circulation was re-opened and a 15 s sprint performed, followed by 10 s recovery with open circulation and another 15 s final sprint. MLE supplements enhanced peak (Wpeak) and mean (Wmean) power output by 5.0-7.0% (P < 0.01). After ischemia, MLE+Q+TNE increased Wpeak by 19.4 and 10.2% compared with the placebo (P < 0.001) and MLE+Lut (P < 0.05), respectively. MLE+Q+TNE increased Wmean post-ischemia by 11.2 and 6.7% compared with the placebo (P < 0.001) and MLE+Lut (P = 0.012). Mean VO2 during the sprints was unchanged, suggesting increased efficiency or recruitment of the anaerobic capacity after MLE ingestion. In women, peak VO2 during the repeated sprints was 5.8% greater after the administration of MLE, coinciding with better brain oxygenation. MLE attenuated the metaboreflex hyperpneic response post-ischemia, may have improved O2 extraction by the Vastus Lateralis (MLE+Q+TNE vs. placebo, P = 0.056), and reduced pain during ischemia (P = 0.068). Blood lactate, acid-base balance, and plasma electrolyte responses were not altered by the supplements. In conclusion, a MLE extract rich in mangiferin combined with either quercetin and tiger nut extract or luteolin exerts a remarkable ergogenic effect, increasing muscle power in fatigued subjects and enhancing peak VO2 and brain oxygenation in women during prolonged sprinting. Importantly, the combination of MLE+Q+TNE improves skeletal muscle contractile function during ischemia/reperfusion.

  13. The MLE Teacher: An Agent of Change or a Cog in the Wheel?

    ERIC Educational Resources Information Center

    Bedamatta, Urmishree

    2014-01-01

    This article examines the role of the multilingual education (MLE) teacher in the mother tongue-based MLE program for the Juangas, a tribe in Odisha, an eastern state of India, and is part of a broader study of the MLE program in the state. For the specific purpose of this article, I have adopted Welmond's (2002) three-step process: identifying…

  14. Propagating Water Quality Analysis Uncertainty Into Resource Management Decisions Through Probabilistic Modeling

    NASA Astrophysics Data System (ADS)

    Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.

    2007-12-01

    Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different management decisions. Our research results indicate that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our research also indicates that the probability of violating current water quality guidelines at specified true fecal coliform concentrations depends on the laboratory procedure used. As a result, quality-based management decisions, such as opening or closing a shellfishing area, may also depend on the laboratory procedure used.
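
    The MPN computation itself is a one-parameter MLE: assuming perfect mixing, a tube inoculated with volume v from water of concentration c is non-sterile with probability 1 - exp(-c v), and the MPN maximizes the product of these Bernoulli terms across the dilution series. A sketch with an invented 3-dilution, 5-tube design:

        # MPN as the MLE of concentration c from a serial-dilution tube assay.
        import numpy as np
        from scipy.optimize import minimize_scalar

        vol = np.array([10.0, 1.0, 0.1])   # mL of sample per tube, by dilution
        n_tubes = np.array([5, 5, 5])
        n_pos = np.array([5, 3, 1])        # non-sterile (positive) tubes

        def nll(log_c):                    # optimize on log scale for stability
            p = 1 - np.exp(-np.exp(log_c) * vol)   # P(tube positive)
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -np.sum(n_pos * np.log(p)
                           + (n_tubes - n_pos) * np.log(1 - p))

        res = minimize_scalar(nll, bounds=(-10, 5), method='bounded')
        print(f"MPN (MLE): {np.exp(res.x):.2f} organisms per mL")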

  15. Multilayered epithelium in a rat model and human Barrett's esophagus: Similar expression patterns of transcription factors and differentiation markers

    PubMed Central

    Chen, Xiaoxin; Qin, Rong; Liu, Ba; Ma, Yan; Su, Yinghao; Yang, Chung S; Glickman, Jonathan N; Odze, Robert D; Shaheen, Nicholas J

    2008-01-01

    Background In rats, esophagogastroduodenal anastomosis (EGDA) without concomitant chemical carcinogen treatment leads to gastroesophageal reflux disease, multilayered epithelium (MLE, a presumed precursor in intestinal metaplasia), columnar-lined esophagus, dysplasia, and esophageal adenocarcinoma. Previously we have shown that columnar-lined esophagus in EGDA rats resembled human Barrett's esophagus (BE) in its morphology, mucin features and expression of differentiation markers (Lab. Invest. 2004;84:753–765). The purpose of this study was to compare the phenotype of rat MLE with human MLE, in order to gain insight into the nature of MLE and its potential role in the development of BE. Methods Serial sectioning was performed on tissue samples from 32 EGDA rats and 13 patients with established BE. Tissue sections were immunohistochemically stained for a variety of transcription factors and differentiation markers of esophageal squamous epithelium and intestinal columnar epithelium. Results We detected MLE in 56.3% (18/32) of EGDA rats, and in all human samples. As expected, both rat and human squamous epithelium, but not intestinal metaplasia, expressed squamous transcription factors and differentiation markers (p63, Sox2, CK14 and CK4) in all cases. Both rat and human intestinal metaplasia, but not squamous epithelium, expressed intestinal transcription factors and differentiation markers (Cdx2, GATA4, HNF1α, villin and Muc2) in all cases. Rat MLE shared expression patterns of Sox2, CK4, Cdx2, GATA4, villin and Muc2 with human MLE. However, p63 and CK14 were expressed in a higher proportion of rat MLE compared to humans. Conclusion These data indicate that rat MLE shares similar properties to human MLE in its expression pattern of these markers, notwithstanding small differences, and support the concept that MLE may be a transitional stage in the metaplastic conversion of squamous to columnar epithelium in BE. PMID:18190713

  16. Multilayered epithelium in a rat model and human Barrett's esophagus: similar expression patterns of transcription factors and differentiation markers.

    PubMed

    Chen, Xiaoxin; Qin, Rong; Liu, Ba; Ma, Yan; Su, Yinghao; Yang, Chung S; Glickman, Jonathan N; Odze, Robert D; Shaheen, Nicholas J

    2008-01-11

    In rats, esophagogastroduodenal anastomosis (EGDA) without concomitant chemical carcinogen treatment leads to gastroesophageal reflux disease, multilayered epithelium (MLE, a presumed precursor in intestinal metaplasia), columnar-lined esophagus, dysplasia, and esophageal adenocarcinoma. Previously we have shown that columnar-lined esophagus in EGDA rats resembled human Barrett's esophagus (BE) in its morphology, mucin features and expression of differentiation markers (Lab. Invest. 2004;84:753-765). The purpose of this study was to compare the phenotype of rat MLE with human MLE, in order to gain insight into the nature of MLE and its potential role in the development of BE. Serial sectioning was performed on tissue samples from 32 EGDA rats and 13 patients with established BE. Tissue sections were immunohistochemically stained for a variety of transcription factors and differentiation markers of esophageal squamous epithelium and intestinal columnar epithelium. We detected MLE in 56.3% (18/32) of EGDA rats, and in all human samples. As expected, both rat and human squamous epithelium, but not intestinal metaplasia, expressed squamous transcription factors and differentiation markers (p63, Sox2, CK14 and CK4) in all cases. Both rat and human intestinal metaplasia, but not squamous epithelium, expressed intestinal transcription factors and differentiation markers (Cdx2, GATA4, HNF1alpha, villin and Muc2) in all cases. Rat MLE shared expression patterns of Sox2, CK4, Cdx2, GATA4, villin and Muc2 with human MLE. However, p63 and CK14 were expressed in a higher proportion of rat MLE compared to humans. These data indicate that rat MLE shares similar properties to human MLE in its expression pattern of these markers, notwithstanding small differences, and support the concept that MLE may be a transitional stage in the metaplastic conversion of squamous to columnar epithelium in BE.

  17. Enrichment of common carp (Cyprinus carpio) diet with medlar (Mespilus germanica) leaf extract: Effects on skin mucosal immunity and growth performance.

    PubMed

    Hoseinifar, Seyed Hossein; Khodadadian Zou, Hassan; Kolangi Miandare, Hamed; Van Doan, Hien; Romano, Nicholas; Dadar, Maryam

    2017-08-01

    A feeding trial was performed to assess the effects of dietary medlar (Mespilus germanica) leaf extract (MLE) on the growth performance, skin mucus non-specific immune parameters, and mRNA levels of immune- and antioxidant-related genes in the skin of common carp (Cyprinus carpio) fingerlings. Fish were fed diets supplemented with graded levels (0, 0.25, 0.50, and 1.00%) of MLE for 49 days. The results revealed an improvement in growth performance and feed conversion ratio in MLE-fed carp (P < 0.05), regardless of the inclusion level. Immunoglobulin and interleukin 8 levels in the skin mucus and skin, respectively, showed a significant increase in fish fed 1% MLE (P < 0.05) in comparison with the other MLE treatments and the control group. Also, feeding on 0.25% and 0.50% MLE remarkably increased skin mucus lysozyme activity (P < 0.05). However, there was no significant difference between the MLE-treated groups and the control (P > 0.05) in the case of protease activity in the skin mucus or tumor necrosis factor alpha and interleukin 1 beta gene expression in the skin. The expression of genes encoding glutathione reductase and glutathione S-transferase alpha was remarkably increased in MLE-fed carp compared to the control group (P < 0.05), while carp fed 0.50% or 1.00% MLE had significantly increased glutathione peroxidase expression in their skin (P < 0.05). The present results reveal the potentially beneficial effects of MLE on the mucosal immune system and growth performance in common carp fingerlings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  19. Accumulation of Major Life Events in Childhood and Adult Life and Risk of Type 2 Diabetes Mellitus

    PubMed Central

    Masters Pedersen, Jolene; Hulvej Rod, Naja; Andersen, Ingelise; Lange, Theis; Poulsen, Gry; Prescott, Eva; Lund, Rikke

    2015-01-01

    Background The aim of the study was to estimate the effect of the accumulation of major life events (MLE) in childhood and adulthood, in both the private and working domains, on risk of type 2 diabetes mellitus (T2DM). Furthermore, we aimed to test the possible interaction between childhood and adult MLE and to investigate modification of these associations by educational attainment. Methods The study was based on 4,761 participants from the Copenhagen City Heart Study free of diabetes at baseline and followed for 10 years. MLE were categorized as 0, 1, 2, 3 or more events. Multivariate logistic regression models adjusted for age, sex, education and family history of diabetes were used to estimate the association between MLE and T2DM. Results In childhood, experiencing 3 or more MLE was associated with a 69% higher risk of developing T2DM (Odds Ratio (OR) 1.69; 95% Confidence Interval (CI) 1.60, 3.27). The accumulation of MLE in adult private (p-trend = 0.016) and work life (p-trend = 0.049) was associated with risk of T2DM in a dose response manner. There was no evidence that experiencing MLE in both childhood and adult life was more strongly associated with T2DM than experiencing events at only one time point. There was some evidence that being simultaneously exposed to childhood MLE and short education (OR 2.28; 95% C.I. 1.45, 3.59) and work MLE and short education (OR 2.86; 95% C.I. 1.62, 5.03) was associated with higher risk of T2DM, as the joint effects were greater than the sum of their individual effects. Conclusions Findings from this study suggest that the accumulation of MLE in childhood, private adult life and work life, respectively, are risk factors for developing T2DM. PMID:26394040

  20. Medical Literature Evaluation Education at US Schools of Pharmacy

    PubMed Central

    Phillips, Jennifer; Demaris, Kendra

    2016-01-01

    Objective. To determine how medical literature evaluation (MLE) is being taught across the United States and to summarize methods for teaching and assessing MLE. Methods. An 18-question survey was administered to faculty members whose primary responsibility was teaching MLE at schools and colleges of pharmacy. Results. Responses were received from 90 (71%) US schools of pharmacy. The most common method of integrating MLE into the curriculum was as a stand-alone course (49%). The most common placement was during the second professional year (43%) or integrated throughout the curriculum (25%). The majority (77%) of schools used a team-based approach. The use of active-learning strategies was common as was the use of multiple methods of evaluation. Responses varied regarding what role the course director played in incorporating MLE into advanced pharmacy practice experiences (APPEs). Conclusion. There is a trend toward incorporating MLE education components throughout the pre-APPE curriculum and placement of literature review/evaluation exercises into therapeutics practice skills laboratories to help students see how this skill integrates into other patient care skills. Several pre-APPE educational standards for MLE education exist, including journal club activities, a team-based approach to teaching and evaluation, and use of active-learning techniques. PMID:26941431

  1. Improved ultrasound transducer positioning by fetal heart location estimation during Doppler based heart rate measurements.

    PubMed

    Hamelmann, Paul; Vullings, Rik; Schmitt, Lars; Kolen, Alexander F; Mischi, Massimo; van Laar, Judith O E H; Bergmans, Jan W M

    2017-09-21

    Doppler ultrasound (US) is the most commonly applied method to measure the fetal heart rate (fHR). When the fetal heart is not properly located within the ultrasonic beam, fHR measurements often fail. As a consequence, clinical staff need to reposition the US transducer on the maternal abdomen, which can be a time consuming and tedious task. In this article, a method is presented to aid clinicians with the positioning of the US transducer to produce robust fHR measurements. A maximum likelihood estimation (MLE) algorithm is developed, which provides information on fetal heart location using the power of the Doppler signals received in the individual elements of a standard US transducer for fHR recordings. The performance of the algorithm is evaluated with simulations and in vitro experiments performed on a beating-heart setup. Both the experiments and the simulations show that the heart location can be accurately determined with an error of less than 7 mm within the measurement volume of the employed US transducer. The results show that the developed algorithm can be used to provide accurate feedback on fetal heart location for improved positioning of the US transducer, which may lead to improved measurements of the fHR.
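
    As a rough illustration of the estimation principle described above, the sketch below fits a heart location to per-element Doppler power by maximum likelihood on a grid. The ring geometry, the Gaussian element-sensitivity model and the log-normal power fluctuations are illustrative assumptions, not details taken from the paper.

      import numpy as np

      # Hypothetical geometry: 7 transducer elements on a 35 mm ring.
      angles = np.linspace(0, 2 * np.pi, 7, endpoint=False)
      elems = np.c_[35 * np.cos(angles), 35 * np.sin(angles)]

      def expected_power(loc, sigma=25.0):
          # Assumed model: mean Doppler power decays with element-to-heart distance.
          d = np.linalg.norm(elems - loc, axis=1)
          return np.exp(-d**2 / (2 * sigma**2))

      rng = np.random.default_rng(0)
      true_loc = np.array([12.0, -8.0])
      powers = expected_power(true_loc) * rng.lognormal(0.0, 0.1, size=7)

      # Grid-search MLE: log-normal fluctuations make this least squares in log power.
      grid = np.linspace(-30, 30, 121)
      best, best_ll = None, -np.inf
      for x in grid:
          for y in grid:
              mu = np.log(expected_power(np.array([x, y])))
              ll = -np.sum((np.log(powers) - mu) ** 2)  # log-likelihood up to constants
              if ll > best_ll:
                  best_ll, best = ll, (x, y)
      print("estimated heart location (mm):", best)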

  2. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  3. Time-Varying Delay Estimation Applied to the Surface Electromyography Signals Using the Parametric Approach

    NASA Astrophysics Data System (ADS)

    Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier

    Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to account for the non-stationarity of the data during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVDs), and that the data are non-stationary (changing Power Spectral Density (PSD)). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao Lower Bounds (CRLBs), one for the estimated TVD model parameters and one for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
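
    A minimal sketch of the parametric idea: under Gaussian noise, the MLE of the polynomial delay coefficients reduces to nonlinear least squares. The surrogate signal, the first-order delay polynomial, and the Nelder-Mead optimizer (standing in for the Newton and simulated-annealing solvers of the paper) are all assumptions for illustration.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      fs, n = 1024.0, 1024
      t = np.arange(n) / fs
      s = np.cumsum(rng.standard_normal(n))        # surrogate sEMG-like waveform
      true_c = np.array([4e-3, 2e-3])              # d(t) = c0 + c1*t, in seconds

      def delayed(c):
          # s(t - d(t)) evaluated by linear interpolation.
          d = c[0] + c[1] * t
          return np.interp(t - d, t, s)

      x2 = delayed(true_c) + 0.05 * rng.standard_normal(n)

      def nll(c):
          # With Gaussian noise, maximizing the likelihood of the delay
          # parameters is equivalent to nonlinear least squares.
          r = x2 - delayed(c)
          return 0.5 * np.sum(r * r)

      fit = minimize(nll, x0=[1e-3, 0.0], method="Nelder-Mead")
      print("estimated delay polynomial coefficients:", fit.x)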

  4. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  5. Morinda citrifolia L. leaf extract prevent weight gain in Sprague-Dawley rats fed a high fat diet.

    PubMed

    Jambocus, Najla Gooda Sahib; Ismail, Amin; Khatib, Alfi; Mahomoodally, Fawzi; Saari, Nazamid; Mumtaz, Muhammad Waseem; Hamid, Azizah Abdul

    2017-01-01

    Background: Morinda citrifolia L. is widely used as a folk medicinal food plant to manage a panoply of diseases, though there are no concrete reports on its potential anti-obesity activity. This study aimed to evaluate the potential of M. citrifolia leaf extract (MLE60) in the prevention of weight gain in vivo and to establish its phytochemical profile. Design: Male Sprague-Dawley rats were divided into groups based on a normal diet (ND) or high fat diet (HFD), with or without MLE60 supplementation (150 and 350 mg/kg body weight), and assessed for any reduction in weight gain. Plasma leptin, insulin, adiponectin, and ghrelin of all groups were determined. 1H NMR and LCMS methods were employed for phytochemical profiling of MLE60. Results: The supplementation of MLE60 did not affect food intake, indicating that appetite suppression might not be the main anti-obesity mechanism involved. In the treated groups, MLE60 prevented weight gain, most likely through an inhibition of pancreatic and lipoprotein activity, with a positive influence on the lipid profiles and a reduction in LDL levels. MLE60 also attenuated visceral fat deposition in treated subjects, with improvement in the plasma levels of obesity-linked factors. Spectral analysis showed the presence of several bioactive compounds, with rutin being the most predominant. Conclusion: MLE60 shows promise as an anti-obesity agent and warrants further research.

  6. The Effects of Parent-Child Mediated Learning Experience (MLE) Interaction on Young Children's Cognitive Development

    ERIC Educational Resources Information Center

    Russell, Christina; Amod, Zaytoon; Rosenthal, Lesley

    2008-01-01

    This study addressed the effect of parent-child Mediated Learning Experience (MLE) interaction on cognitive development in early childhood. It measured the MLE interactions of 14 parents with their preschool children in the contexts of free-play and structured tasks. The children were assessed for their manifest cognitive performance and learning…

  7. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using the computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
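
    The test described above can be sketched as a likelihood-ratio (G) goodness-of-fit test of observed single-cell counts against a Poisson distribution whose parameter is the MLE (the sample mean). The count data below are invented for illustration.

      import numpy as np
      from scipy import stats

      # Invented per-replicate single-cell counts (e.g., cells per well).
      counts = np.array([0] * 12 + [1] * 28 + [2] * 31 + [3] * 19 + [4] * 7 + [5] * 3)
      lam = counts.mean()                         # Poisson MLE = sample mean

      # Likelihood-ratio (G) goodness-of-fit test against Poisson(lam),
      # folding the upper tail into the last category.
      k = np.arange(counts.max() + 1)
      obs = np.bincount(counts, minlength=k.size)
      exp = stats.poisson.pmf(k, lam) * counts.size
      exp[-1] += stats.poisson.sf(k[-1], lam) * counts.size

      mask = obs > 0
      G = 2 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))
      dof = k.size - 1 - 1                        # categories - 1 - fitted params
      p = stats.chi2.sf(G, dof)
      print(f"lambda-hat = {lam:.2f}, G = {G:.2f}, p = {p:.3f}")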

  8. The structure of Pseudomonas P51 Cl-muconate lactonizing enzyme: Co-evolution of structure and dynamics with the dehalogenation function

    PubMed Central

    Kajander, Tommi; Lehtiö, Lari; Schlömann, Michael; Goldman, Adrian

    2003-01-01

    Bacterial muconate lactonizing enzymes (MLEs) catalyze the conversion of cis,cis-muconate as a part of the β-ketoadipate pathway, and some MLEs are also able to dehalogenate chlorinated muconates (Cl-MLEs). The basis for the Cl-MLEs dehalogenating activity is still unclear. To further elucidate the differences between MLEs and Cl-MLEs, we have solved the structure of Pseudomonas P51 Cl-MLE at 1.95 Å resolution. Comparison of Pseudomonas MLE and Cl-MLE structures reveals the presence of a large cavity in the Cl-MLEs. The cavity may be related to conformational changes on substrate binding in Cl-MLEs, at Gly52. Site-directed mutagenesis on Pseudomonas MLE core positions to the equivalent Cl-MLE residues showed that the variant Thr52Gly was rather inactive, whereas the Thr52Gly-Phe103Ser variant had regained part of the activity. These residues form a hydrogen bond in the Cl-MLEs. The Cl-MLE structure, as a result of the Thr-to-Gly change, is more flexible than MLE: As a mobile loop closes over the active site, a conformational change at Gly52 is observed in Cl-MLEs. The loose packing and structural motions in Cl-MLE may be required for the rotation of the lactone ring in the active site necessary for the dehalogenating activity of Cl-MLEs. Furthermore, we also suggest that differences in the active site mobile loop sequence between MLEs and Cl-MLEs result in lower active site polarity in Cl-MLEs, possibly affecting catalysis. These changes could result in slower product release from Cl-MLEs and make it a better enzyme for dehalogenation of substrate. PMID:12930985

  9. Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia

    NASA Astrophysics Data System (ADS)

    Hamdan, Nurhidayah; Daud, Noorizam

    2014-07-01

    Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accidents killed 6,917 people and injured or disabled 17,522 people in 2012, and the government spent about RM9.3 billion in 2009, a cost to the nation of approximately 1 to 2 percent of gross domestic product (GDP) reported annually. The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on the arbitrary value of 6:4 (equivalently 1.5:1), which rests on the fact that six factors are involved in the calculation of accident cost for fatal accidents while four factors are used for non-fatal accidents. This simple indication used by the authority to calculate the cost ratio is doubtful, since there is a lack of mathematical and conceptual evidence to explain how the ratio is determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Due to the unavailability of official accident cost data, insurance claim data for both fatal and non-fatal accidents were used as proxy information for the actual accident cost. Two types of parameter estimates are used in this study: maximum likelihood (MLE) and robust estimation. The findings reveal that the accident cost ratio for fatal and non-fatal claims is 1.33 when using MLE, while for robust estimates the cost ratio is slightly higher at 1.51. This study will help the authority to determine a more accurate cost ratio between fatal and non-fatal accidents than the official ratio set by the government, since the cost ratio is an important element used as a weightage in modeling road accident related data. The study therefore provides guidance for revising the insurance claim framework set by the Malaysian road authority, so that the method most suitable for implementation in Malaysia can be analyzed.
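
    As a sketch of the distribution-fitting step, the snippet below fits Weibull distributions by MLE to two hypothetical claim samples and forms the cost ratio as the ratio of the fitted Weibull means. The simulated amounts are stand-ins for the real insurance claim records.

      import math
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Hypothetical claim amounts (RM) standing in for the insurance data.
      fatal = stats.weibull_min.rvs(1.4, scale=90_000, size=400, random_state=rng)
      nonfatal = stats.weibull_min.rvs(1.2, scale=60_000, size=400, random_state=rng)

      # Weibull MLE for each group (location fixed at zero for cost data).
      c_f, _, s_f = stats.weibull_min.fit(fatal, floc=0)
      c_n, _, s_n = stats.weibull_min.fit(nonfatal, floc=0)

      # Weibull mean = scale * Gamma(1 + 1/shape); ratio of means = cost ratio.
      mean_f = s_f * math.gamma(1 + 1 / c_f)
      mean_n = s_n * math.gamma(1 + 1 / c_n)
      print("fatal/non-fatal cost ratio:", mean_f / mean_n)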

  10. The effects of mother-child mediated learning strategies on psychological resilience and cognitive modifiability of boys with learning disability.

    PubMed

    Tzuriel, David; Shomron, Vered

    2018-06-01

    The theoretical framework of the current study is based on mediated learning experience (MLE) theory, which is similar to the scaffolding concept. The main question of the current study was to what extent mother-child MLE strategies affect psychological resilience and cognitive modifiability of boys with learning disability (LD). Secondary questions were to what extent the home environment, severity of boy's LD, and mother's attitude towards her child's LD affect her MLE strategies and consequently the child's psychological resilience and cognitive modifiability. The main objectives of this study were the following: (a) to investigate the effects of mother-child MLE strategies on psychological resilience and cognitive modifiability among 7- to 10-year-old boys with LD, (b) to study the causal effects of distal factors (i.e., socio-economic status [SES], home environment, severity of child's LD, mother's attitude towards LD) and proximal factors (i.e., MLE strategies) on psychological resilience and cognitive modifiability. A sample of mother-child dyads (n = 100) were videotaped during a short teaching interaction. All children were boys diagnosed as children with LD. The interaction was analysed for MLE strategies by the Observation of Mediation Interaction scale. Children were administered psychological resilience tests and their cognitive modifiability was measured by dynamic assessment using the Analogies subtest from the Cognitive Modifiability Battery. Home environment was rated by the Home Observation for Measurement of the Environment (HOME), and mothers answered a questionnaire of attitudes towards child's LD. The findings showed that mother-child MLE strategies, HOME, and socio-economic level contributed significantly to prediction of psychological resilience (78%) and cognitive modifiability (51%). Psychological resilience was positively correlated with cognitive modifiability (Rc = 0.67). Structural equation modelling analysis supported, in general, the hypotheses about the causal effects of distal and proximal factors of psychological resilience and cognitive modifiability. The findings validate and extend the MLE theory by showing that mother-child MLE strategies significantly predict psychological resilience and cognitive modifiability among boys with LD. Significant correlation between psychological resilience and cognitive modifiability calls for further research exploring the role of MLE strategies in development of both. © 2018 The British Psychological Society.

  11. Morinda citrifolia L. leaf extract prevent weight gain in Sprague-Dawley rats fed a high fat diet

    PubMed Central

    Jambocus, Najla Gooda Sahib; Ismail, Amin; Khatib, Alfi; Mahomoodally, Fawzi; Saari, Nazamid; Mumtaz, Muhammad Waseem; Hamid, Azizah Abdul

    2017-01-01

    Background: Morinda citrifolia L. is widely used as a folk medicinal food plant to manage a panoply of diseases, though there are no concrete reports on its potential anti-obesity activity. This study aimed to evaluate the potential of M. citrifolia leaf extract (MLE60) in the prevention of weight gain in vivo and to establish its phytochemical profile. Design: Male Sprague-Dawley rats were divided into groups based on a normal diet (ND) or high fat diet (HFD), with or without MLE60 supplementation (150 and 350 mg/kg body weight), and assessed for any reduction in weight gain. Plasma leptin, insulin, adiponectin, and ghrelin of all groups were determined. 1H NMR and LCMS methods were employed for phytochemical profiling of MLE60. Results: The supplementation of MLE60 did not affect food intake, indicating that appetite suppression might not be the main anti-obesity mechanism involved. In the treated groups, MLE60 prevented weight gain, most likely through an inhibition of pancreatic and lipoprotein activity, with a positive influence on the lipid profiles and a reduction in LDL levels. MLE60 also attenuated visceral fat deposition in treated subjects, with improvement in the plasma levels of obesity-linked factors. Spectral analysis showed the presence of several bioactive compounds, with rutin being the most predominant. Conclusion: MLE60 shows promise as an anti-obesity agent and warrants further research. PMID:28814950

  12. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 01: On the use of proton radiography to reduce beam range uncertainties and improve patient positioning accuracy in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins-Fekete, Charles-Antoine; Beaulieu, Luc; Se

    2016-08-15

    To present two related developments of proton radiography (pRad) to minimize range uncertainty in proton therapy. The first combines a pRad with an X-ray CT to produce a patient-specific relative stopping power (RSP) map. The second aims to improve the pRad spatial resolution for accurate registration prior to the first. The enhanced pRad can also be used in a novel proton-CT reconstruction algorithm. Monte Carlo pRad were computed from three phantoms: the Gammex, the Catphan and an anthropomorphic head. An optimized cubic-spline estimator derives the most likely path. The length crossed by the protons voxel-by-voxel was calculated by combining their estimated paths with the CT. The difference between the theoretical (length × RSP) and measured energy loss was minimized through a least squares optimization (LSO) algorithm yielding the RSP map. To increase pRad spatial resolution for registration with the CT, the phantom was discretized into voxel columns. The average column RSP was optimized to maximize the proton energy loss likelihood (MLE). Simulations showed precise RSP (<0.75%) for Gammex materials except low-density lung (<1.2%). For the head, accurate RSP were obtained (µ = −0.10%, 1.5σ = 1.12%) and the range precision was improved (ΔR80 of −0.20 ± 0.35%). Spatial resolution was increased in pRad (2.75 to 6.71 lp/cm) and in pCT from MLE-enhanced pRad (2.83 to 5.86 lp/cm). The LSO decreases the range uncertainty (R80 σ < 1.0%), while the MLE enhances the pRad spatial resolution (+244%) and is a strong candidate for pCT reconstruction.
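
    The least squares optimization step lends itself to a compact sketch: given per-proton voxel intersection lengths and measured energy loss, the RSP map solves a linear least squares problem. The toy phantom and the random-walk straight-line paths below are stand-ins for the cubic-spline most-likely paths of the paper.

      import numpy as np

      rng = np.random.default_rng(10)
      # Toy 20x20 phantom of relative stopping powers (1 mm voxels).
      rsp_true = np.ones((20, 20))
      rsp_true[5:15, 5:15] = 1.04      # soft-tissue insert
      rsp_true[8:12, 8:12] = 1.60      # bone-like insert

      # Hypothetical proton paths: a gentle random walk in row index across the
      # columns (the paper instead uses an optimized cubic-spline likely path).
      n_protons = 2000
      L = np.zeros((n_protons, rsp_true.size))
      for p in range(n_protons):
          r = rng.integers(0, 20)
          for c in range(20):
              r = int(np.clip(r + rng.integers(-1, 2), 0, 19))
              L[p, r * 20 + c] = 1.0   # 1 mm crossed in this voxel

      # Measured water-equivalent energy loss = path length x RSP, plus noise.
      wepl = L @ rsp_true.ravel() + 0.3 * rng.standard_normal(n_protons)

      # Least squares optimization (LSO) recovers the voxel RSP map.
      rsp_hat, *_ = np.linalg.lstsq(L, wepl, rcond=None)
      print("mean abs RSP error:", np.abs(rsp_hat - rsp_true.ravel()).mean())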

  13. Teachers' Stages of Concern for Media Literacy Education and the Integration of MLE in Chinese Primary Schools

    ERIC Educational Resources Information Center

    Zhang, Hui; Zhu, Chang; Sang, Guoyuan

    2014-01-01

    Media literacy is an essential skill for living in the twenty-first century. School-based instruction is a critical part of media literacy education (MLE), while research on teachers' concerns and integration of MLE is not sufficient. The objective of this study is to investigate teachers' stages of concern (SoC), perceived need, school context,…

  14. Analysis of Primary School Curriculum of Turkey, Finland, and Ireland in Terms of Media Literacy Education

    ERIC Educational Resources Information Center

    Tanriverdi, Belgin; Apak, Ozlem

    2010-01-01

    The purpose of this study is to evaluate the implications of Media Literacy Education (MLE) in Turkey by analyzing the Primary School Curricula in terms of MLE comparatively in Turkey, Ireland and Finland. In this study, the selection of Finland and Ireland curricula is related with those countries' being the pioneering countries in MLE and the…

  15. A mutually exclusive stem–loop arrangement in roX2 RNA is essential for X-chromosome regulation in Drosophila

    PubMed Central

    Ilik, Ibrahim Avsar; Maticzka, Daniel; Georgiev, Plamen; Gutierrez, Noel Marie; Backofen, Rolf; Akhtar, Asifa

    2017-01-01

    The X chromosome provides an ideal model system to study the contribution of RNA–protein interactions in epigenetic regulation. In male flies, roX long noncoding RNAs (lncRNAs) harbor several redundant domains to interact with the ubiquitin ligase male-specific lethal 2 (MSL2) and the RNA helicase Maleless (MLE) for X-chromosomal regulation. However, how these interactions provide the mechanics of spreading remains unknown. By using the uvCLAP (UV cross-linking and affinity purification) methodology, which provides unprecedented information about RNA secondary structures in vivo, we identified the minimal functional unit of roX2 RNA. By using wild-type and various MLE mutant derivatives, including a catalytically inactive MLE derivative, MLEGET, we show that the minimal roX RNA contains two mutually exclusive stem–loops that exist in a peculiar structural arrangement: When one stem–loop is unwound by MLE, an alternate structure can form, likely trapping MLE in this perpetually structured region. We show that this functional unit is necessary for dosage compensation, as mutations that disrupt this formation lead to male lethality. Thus, we propose that roX2 lncRNA contains an MLE-dependent affinity switch to enable reversible interactions of the MSL complex to allow dosage compensation of the X chromosome. PMID:29066499

  16. Influence of cigarette filter ventilation on smokers' mouth level exposure to tar and nicotine.

    PubMed

    Caraway, John W; Ashley, Madeleine; Bowman, Sheri A; Chen, Peter; Errington, Graham; Prasad, Krishna; Nelson, Paul R; Shepperd, Christopher J; Fearon, Ian M

    2017-12-01

    Cigarette filter ventilation allows air to be drawn into the filter, diluting the cigarette smoke. Although machine smoking reveals that toxicant yields are reduced, it does not predict human yields. The objective of this study was to investigate the relationship between cigarette filter ventilation and mouth level exposure (MLE) to tar and nicotine in cigarette smokers. We collated and reviewed data from 11 studies across 9 countries, performed between 2005 and 2013, which contained data on MLE from 156 products with filter ventilation between 0% and 87%. MLE to tar and nicotine among 7534 participants was estimated using the part-filter analysis method from spent filter tips. For each of the countries, MLE to tar and nicotine tended to decrease as filter ventilation increased. Across countries, per-cigarette MLE to tar and nicotine decreased as filter ventilation increased from 0% to 87%. Daily MLE to tar and nicotine also decreased across the range of increasing filter ventilation. These data suggest that, on average, smokers of highly ventilated cigarettes are exposed to lower amounts of nicotine and tar per cigarette and per day than smokers of cigarettes with lower levels of ventilation. Copyright © 2017 British American Tobacco. Published by Elsevier Inc. All rights reserved.

  17. Exogenous application of plant growth regulators (PGRs) induces chilling tolerance in short-duration hybrid maize.

    PubMed

    Waqas, Muhammad Ahmed; Khan, Imran; Akhter, Muhammad Javaid; Noor, Mehmood Ali; Ashraf, Umair

    2017-04-01

    Chilling stress hampers the optimal performance of maize under field conditions by inducing oxidative stress. To counter the damaging effects of chilling stress, the present study investigated the effects of some natural and synthetic plant growth regulators, i.e., salicylic acid (SA), thiourea (TU), sorghum water extract (SWE), and moringa leaf extract (MLE), on chilling stress tolerance in an autumn maize hybrid. Foliar application of growth regulators at low concentrations was carried out at the six-leaf (V6) and tasseling stages. An increase in crop growth rate (CGR), leaf area index (LAI), leaf area duration (LAD), plant height (PH), grain yield (GY), and total dry matter accumulation (TDM) was observed in treated plants as compared to the control. In addition, improved physio-biochemical, phenological, and grain nutritional quality attributes were noticed in foliar-treated maize plots as compared to non-treated ones. SA-treated plants showed 20% lower electrolyte leakage from the cell membrane than the control. MLE and SA proved best in improving total phenolic, relative water (19-23%), and chlorophyll contents among the applications. A similar trend was found for photosynthetic and transpiration rates, whereas MLE and SWE were better at improving CGR, LAI, LAD, TDM, PH, GY, grains per cob, 1000-grain weight, and biological yield among all treatments including the control. TU and MLE significantly shortened the duration of the crop's phenological events at the reproductive stage. MLE, TU, and SA also improved the grain protein, oil, and starch contents as compared to the control. Enhanced crop water productivity was also observed in MLE-treated plants. Economic analysis suggested that MLE and SA applications were the most economical for inducing chilling stress tolerance under field conditions. Although the eliciting behavior of all growth regulators improved morpho-physiological attributes under suboptimal temperature stress, MLE and SA were the leading agents, proving to be better stress alleviators by improving plant physio-biochemical attributes and maize growth.

  18. Evaluation of Coastal Sea Level from Jason-2 Altimetry Offshore Hong Kong

    NASA Astrophysics Data System (ADS)

    Birol, F.; Xu, X. Y., , Dr; Cazenave, A. A.

    2017-12-01

    In recent years, several coastal altimetry products of the Jason-2 mission have been distributed by different agencies, the most advanced of which are XTRACK, PISTACH and ALES. Each product represents extraordinary endeavors on some aspects of retracking or advanced geophysical corrections, and each has its advantages. The motivation of this presentation is to evaluate these products in order to refine sea level measurements at the coast. We focus on three retrackers: MLE4, MLE3 and ALES. Within 20 km of the coast, neither GDR nor ALES readily provides sea level anomaly (SLA) measurements, so we recomputed the 20 Hz GDR and ALES SLA from the raw data, adopting auxiliary information (such as waveform classification and wet tropospheric delay) from PISTACH. The region of interest is track #153 of the Jason-2 satellite (offshore Hong Kong, China), and the altimetry products are processed over seven years (2008-2015, cycles 1-252). The coastline offshore Hong Kong is rather complicated, and we feel that it can be a good indicator of the performance of coastal altimetry under undesirable coastal conditions. We computed the bias and noise level of ALES, MLE3 and MLE4 SLA over the open ocean and in the coastal zone (within 10 km or 5 km of the coast). The results showed that, after outlier editing, ALES performs better than MLE4 and MLE3 both in terms of noise level and uncertainty in sea level trend estimation. We validated the coastal altimetry-based SLA by comparing it with data from the Hong Kong tide gauge (located 10 km across-track). An interesting, but still preliminary, result is that the computed sea level trend within 5 km of the coast is significantly larger than the trend estimated at larger distances from the coast. Keywords: Jason-2, Hong Kong coast, ALES, MLE3, MLE4

  19. Hyperspherical von Mises-Fisher mixture (HvMF) modelling of high angular resolution diffusion MRI.

    PubMed

    Bhalerao, Abhir; Westin, Carl-Fredrik

    2007-01-01

    A mapping of unit vectors onto a 5D hypersphere is used to model and partition ODFs from HARDI data. This mapping has a number of useful and interesting properties, and we link it to the interpretation of second order spherical harmonic decompositions of HARDI data. The paper presents the working theory and experiments on using a von Mises-Fisher mixture model for directional samples. The MLE of the second moment of the HvMF pdf can also be related to fractional anisotropy. We perform error analysis of the estimation scheme in single- and multi-fibre regions and then show how a penalised-likelihood model selection method can be employed to differentiate single and multiple fibre regions.
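
    As a sketch of the estimation step: the mean direction of a single von Mises-Fisher component has a closed-form MLE, and the concentration can be approximated as in Banerjee et al. (2005). The 5D toy data below are illustrative, not HARDI measurements.

      import numpy as np

      def vmf_mle(X):
          # MLE of one von Mises-Fisher component on the unit sphere in R^p:
          # closed-form mean direction, Banerjee et al. (2005) kappa approximation.
          n, p = X.shape
          s = X.sum(axis=0)
          rbar = np.linalg.norm(s) / n
          mu = s / np.linalg.norm(s)
          kappa = rbar * (p - rbar**2) / (1 - rbar**2)
          return mu, kappa

      # Demo on samples scattered around one direction in 5D (as in the HvMF mapping).
      rng = np.random.default_rng(3)
      mu0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
      X = mu0 + 0.2 * rng.standard_normal((500, 5))
      X /= np.linalg.norm(X, axis=1, keepdims=True)   # project onto the sphere
      print(vmf_mle(X))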

  20. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  1. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results, modelling the dependence structure with a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations. NPI uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling dependence of random variables: a joint distribution function whose marginals are all uniformly distributed, which can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
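
    To make the copula step concrete, here is a sketch of fitting a Gaussian copula by MLE to two hypothetical test results: the marginals are handled nonparametrically through ranks (pseudo-observations), and the single dependence parameter is found by maximizing the copula log-likelihood. The data and the choice of the Gaussian family are illustrative assumptions.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(4)
      # Hypothetical paired results of two diagnostic tests on the same subjects.
      z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=300)
      test1, test2 = np.exp(z[:, 0]), z[:, 1] + 0.5 * z[:, 0] ** 2

      def to_normal_scores(x):
          # Pseudo-observations: ranks -> uniforms -> standard-normal scores.
          u = stats.rankdata(x) / (len(x) + 1)
          return stats.norm.ppf(u)

      z1, z2 = to_normal_scores(test1), to_normal_scores(test2)

      def neg_loglik(rho):
          # Negative log-likelihood of the Gaussian copula density.
          q = (rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2) / (2 * (1 - rho**2))
          return np.sum(0.5 * np.log(1 - rho**2) + q)

      fit = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
      print("MLE of the Gaussian-copula dependence parameter:", fit.x)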

  2. Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam

    NASA Astrophysics Data System (ADS)

    N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.

    In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e., Shah Alam. Hourly average data over the one-year periods 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The distribution that best fits the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance concentrations were calculated, and the return period for the coming year was predicted from the cumulative density function (cdf) of the best-fit distribution. Based on the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area was not predicted to exceed the Malaysian Ambient Air Quality Guideline (MAAQG) of 150 μg/m3.
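
    A small sketch of the exceedance computation, assuming a hypothetical hourly PM10 series in place of the Shah Alam record: fit a log-normal by MLE, then read the exceedance probability and return period for the 150 μg/m3 guideline off the fitted distribution.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      # Hypothetical hourly PM10 record (ug/m3), a stand-in for the real data.
      pm10 = rng.lognormal(mean=3.8, sigma=0.5, size=8760)

      # Log-normal MLE with the location fixed at zero.
      shape, _, scale = stats.lognorm.fit(pm10, floc=0)

      # Exceedance probability of the 150 ug/m3 guideline and its return period.
      p_exc = stats.lognorm.sf(150, shape, loc=0, scale=scale)
      print(f"P(hourly PM10 > 150) = {p_exc:.4f}")
      print(f"expected exceedance hours per year = {p_exc * 8760:.1f}")
      print(f"return period: one exceedance per {1 / (24 * p_exc):.0f} days")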

  3. Stochastic generation of hourly rainstorm events in Johor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli

    2015-02-03

    Engineers and researchers in water-related studies are often faced with the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte-Carlo-based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal distribution appeared to fit the rainfall best, so the Monte Carlo simulation was based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events over 10 years and simulated rainstorm events over 30 years of rainfall records with those over the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
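
    A compact sketch of the selection-then-simulation procedure, using invented storm depths: each candidate distribution is fitted by MLE, ranked by an Anderson-Darling statistic against its fitted CDF, and the best fit drives the Monte Carlo generation.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      # Hypothetical storm depths (mm), a stand-in for the station J1 record.
      depth = rng.lognormal(mean=2.0, sigma=0.8, size=200)

      x = np.sort(depth)
      n = len(x)
      i = np.arange(1, n + 1)
      candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma,
                    "weibull": stats.weibull_min}

      # MLE fit for each candidate, ranked by the Anderson-Darling statistic.
      best_name, best_ad, best_dist = None, np.inf, None
      for name, dist in candidates.items():
          frozen = dist(*dist.fit(depth, floc=0))
          F = np.clip(frozen.cdf(x), 1e-12, 1 - 1e-12)
          ad = -n - np.sum((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1]))) / n
          if ad < best_ad:
              best_name, best_ad, best_dist = name, ad, frozen

      # Monte Carlo generation of synthetic storm depths from the best fit.
      synthetic = best_dist.rvs(size=1000, random_state=rng)
      print(best_name, round(depth.mean(), 2), round(synthetic.mean(), 2))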

  4. Capillary array scanner for time-resolved detection and identification of fluorescently labelled DNA fragments.

    PubMed

    Neumann, M; Herten, D P; Dietrich, A; Wolfrum, J; Sauer, M

    2000-02-25

    The first capillary array scanner for time-resolved fluorescence detection in parallel capillary electrophoresis based on semiconductor technology is described. The system consists essentially of a confocal fluorescence microscope and an x,y-microscope scanning stage. Fluorescence of the labelled probe molecules was excited using a short-pulse diode laser emitting at 640 nm with a repetition rate of 50 MHz. Using a single filter system, the fluorescence decays of different labels were detected by an avalanche photodiode in combination with a PC plug-in card for time-correlated single-photon counting (TCSPC). The time-resolved fluorescence signals were analyzed and identified by a maximum likelihood estimator (MLE). The x,y-microscope scanning stage allows for discontinuous, bidirectional scanning of up to 16 capillaries in an array, resulting in longer fluorescence collection times per capillary compared to scanners working in a continuous mode. Synchronization of the alignment and measurement processes was developed to allow for data acquisition without overhead. Detection limits in the subzeptomol range for different dye molecules separated in parallel capillaries have been achieved. In addition, we report on parallel time-resolved detection and separation of more than 400 bases of single base extension DNA fragments in capillary array electrophoresis. Using only semiconductor technology, the presented technique represents a low-cost alternative for high throughput DNA sequencing in parallel capillaries.
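
    A toy sketch of MLE-based label identification from TCSPC arrival times: photon delay times are modeled as a window-truncated exponential decay, and the label whose candidate lifetime maximizes the log-likelihood is chosen. The lifetimes, the window, and the neglect of the instrument response function are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(7)
      T = 20.0            # ns, repetition window of a 50 MHz pulsed diode laser
      tau_true = 2.8      # ns, lifetime of the dye that actually emitted

      # Photon arrival times from a window-truncated exponential decay.
      t = rng.exponential(tau_true, size=5000)
      t = t[t < T][:2000]

      def loglik(tau):
          # Truncated-exponential model; IRF and background are ignored here.
          return np.sum(-t / tau - np.log(tau) - np.log1p(-np.exp(-T / tau)))

      # MLE-based identification between two candidate labels.
      candidates = {"dye A (tau = 1.5 ns)": 1.5, "dye B (tau = 2.8 ns)": 2.8}
      label = max(candidates, key=lambda k: loglik(candidates[k]))
      print("identified:", label)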

  5. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness problem, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty within individual parameterization methods (the within-parameterization variance) and the uncertainty from using different parameterization methods (the between-parameterization variance). Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method; the adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
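
    The variance decomposition described above can be sketched in a few lines: hypothetical posterior means and variances from three GP parameterizations are combined with misfit-based weights (a stand-in for the paper's NLSE posterior probabilities), and the BMA variance splits into within- and between-parameterization parts. All numbers and the weighting rule are invented for illustration.

      import numpy as np

      # Hypothetical outputs of three parameterizations at one location:
      # posterior mean and variance of log-conductivity, plus each method's
      # head-residual sum of squares used to weight the models.
      means = np.array([-4.1, -3.8, -4.4])
      variances = np.array([0.30, 0.25, 0.40])
      sse = np.array([12.3, 10.8, 15.2])

      # Assumed weighting: smaller misfit -> larger posterior model probability.
      w = np.exp(-0.5 * (sse - sse.min()))
      w /= w.sum()

      # BMA mean, plus within- and between-parameterization variances.
      bma_mean = np.sum(w * means)
      within = np.sum(w * variances)
      between = np.sum(w * (means - bma_mean) ** 2)
      print("BMA mean:", bma_mean, "total variance:", within + between)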

  6. Combatting nonlinear phase noise in coherent optical systems with an optimized decision processor based on machine learning

    NASA Astrophysics Data System (ADS)

    Wang, Danshi; Zhang, Min; Cai, Zhongle; Cui, Yue; Li, Ze; Han, Huanhuan; Fu, Meixia; Luo, Bin

    2016-06-01

    An effective machine learning algorithm, the support vector machine (SVM), is presented in the context of a coherent optical transmission system. As a classifier, the SVM can create nonlinear decision boundaries to mitigate the distortions caused by nonlinear phase noise (NLPN). Without any prior information or heuristic assumptions, the SVM can learn and capture the link properties from only a few training data. Compared with the maximum likelihood estimation (MLE) algorithm, a lower bit-error rate (BER) is achieved by the SVM for a given launch power; moreover, the launch power dynamic range (LPDR) is increased by 3.3 dBm for 8-phase-shift keying (8 PSK), 1.2 dBm for QPSK, and 0.3 dBm for BPSK. The maximum transmission distance corresponding to a BER of 1×10-3 is increased by 480 km for the case of 8 PSK. The larger launch power range and longer transmission distance improve the tolerance to amplitude and phase noise, which demonstrates the feasibility of the SVM in digital signal processing for M-PSK formats. Meanwhile, in order to apply the SVM method to 16-quadrature amplitude modulation (16 QAM) detection, we propose a parameter optimization scheme. By utilizing cross-validation and grid-search techniques, the optimal parameters of the SVM can be selected, improving the LPDR by 2.8 dBm. Additionally, we demonstrate that the SVM is also effective in combating laser phase noise combined with in-phase and quadrature (I/Q) modulator imperfections, but the improvement is insignificant for linear noise and separate I/Q imbalance. The computational complexity of the SVM is also discussed. The relatively low complexity makes it possible for the SVM to implement real-time processing.
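
    The cross-validation and grid-search step can be sketched with scikit-learn on a surrogate constellation: received symbols are distorted by a power-dependent rotation loosely imitating NLPN, and a grid search over C and gamma selects the RBF-SVM decision boundaries. The distortion model and parameter grids are illustrative assumptions, not the paper's simulation setup.

      import numpy as np
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      rng = np.random.default_rng(8)
      # Surrogate 8-PSK constellation with an NLPN-like rotation that grows
      # with instantaneous power, plus additive Gaussian noise.
      bits = rng.integers(0, 8, size=4000)
      sym = np.exp(1j * 2 * np.pi * bits / 8)
      noisy = sym + 0.15 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))
      r = noisy * np.exp(1j * 0.4 * np.abs(noisy) ** 2)
      X = np.c_[r.real, r.imag]

      # Cross-validated grid search selects the SVM hyperparameters.
      grid = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [1, 10, 100], "gamma": [0.1, 1, 10]}, cv=5)
      grid.fit(X[:3000], bits[:3000])
      print("best params:", grid.best_params_)
      print("held-out symbol accuracy:", grid.score(X[3000:], bits[3000:]))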

  7. Biomass characteristics of two types of submerged membrane bioreactors for nitrogen removal from wastewater.

    PubMed

    Liang, Zhihua; Das, Atreyee; Beerman, Daniel; Hu, Zhiqiang

    2010-06-01

    Biomass characteristics and microbial community diversity were compared between a submerged membrane bioreactor with mixed liquor recirculation (MLE/MBR) and a membrane bioreactor with added integrated fixed biofilm medium (IFMBR) for organic carbon and nitrogen removal from wastewater. The two bench-scale MBRs were continuously operated in parallel at a hydraulic retention time (HRT) of 24 h and a solids retention time (SRT) of 20 d. Both MBRs demonstrated good COD removal efficiencies (>97.7%) at incremental inflow organic loading rates. The total nitrogen removal efficiencies were 67% for the MLE/MBR and 41% for the IFMBR. The recirculation of mixed liquor from the aerobic zone to the anoxic zone in the MLE/MBR resulted in higher microbial activities of heterotrophic (46.96 mg O2/g VSS h) and autotrophic bacteria (30.37 mg O2/g VSS h) compared to the IFMBR. Terminal Restriction Fragment Length Polymorphism analysis indicated that the higher nitrifying activities were correlated with greater diversity of nitrifying bacterial populations in the MLE/MBR. Membrane fouling due to bacterial growth was evident in both reactors. Even though the trans-membrane pressure and flux profiles of the MLE/MBR and IFMBR were different, the patterns of total membrane resistance changes showed no considerable difference under the same operating conditions. The results suggest that metabolic selection via alternating anoxic/aerobic processes has the potential for higher bacterial activities and improved nutrient removal in MBR systems. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Age of the Mono Lake excursion and associated tephra

    USGS Publications Warehouse

    Benson, L.; Liddicoat, J.; Smoot, J.; Sarna-Wojcicki, A.; Negrini, R.; Lund, S.

    2003-01-01

    The Mono Lake excursion (MLE) is an important time marker that has been found in lake and marine sediments across much of the Northern Hemisphere. Dating of this event at its type locality, the Mono Basin of California, has yielded controversial results, with the most recent effort concluding that the MLE may actually be the Laschamp excursion (Earth Planet. Sci. Lett. 197 (2002) 151). We show that a volcanic tephra (Ash #15) that occurs near the midpoint of the MLE has a date (not corrected for reservoir effect) of 28,620 ± 300 14C yr BP (≈32,400 GISP2 yr BP) in the Pyramid Lake Basin of Nevada. Given the location of Ash #15 and the duration of the MLE in the Mono Basin, the event occurred between 31,500 and 33,300 GISP2 yr BP, an age range consistent with the position and age of the uppermost of two paleointensity minima in the NAPIS-75 stack that has been associated with the MLE (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009). The lower paleointensity minimum in the NAPIS-75 stack is considered to be the Laschamp excursion (Philos. Trans. R. Soc. London Ser. A 358 (2000) 1009).

  9. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
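
    A minimal numeric sketch of the random-effects likelihood discussed above, for the special case the paper also treats, namely known within-method variances: each method reports one summary value x_i with known variance s_i^2, and the ML estimates of the common mean and the between-method variance maximize the likelihood of x_i ~ N(mu, s_i^2 + tau^2). The data values are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical multiple-method study: per-method means and known
      # within-method variances; tau2 is the between-method variance.
      x = np.array([10.2, 9.6, 10.9, 10.4])
      s2 = np.array([0.10, 0.20, 0.15, 0.05])

      def nll(theta):
          # Negative log-likelihood of x_i ~ N(mu, s2_i + tau2), up to constants.
          mu, log_tau2 = theta
          v = s2 + np.exp(log_tau2)    # log parameterization keeps tau2 positive
          return 0.5 * np.sum(np.log(v) + (x - mu) ** 2 / v)

      fit = minimize(nll, x0=[x.mean(), np.log(0.1)], method="Nelder-Mead")
      mu_hat, tau2_hat = fit.x[0], np.exp(fit.x[1])
      print("ML common mean:", mu_hat, "between-method variance:", tau2_hat)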

  10. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  11. Inter-laboratory calibration of natural gas round robins for δ2H and δ13C using off-line and on-line techniques

    USGS Publications Warehouse

    Dai, Jinxing; Xia, Xinyu; Li, Zhisheng; Coleman, Dennis D.; Dias, Robert F.; Gao, Ling; Li, Jian; Deev, Andrei; Li, Jin; Dessort, Daniel; Duclerc, Dominique; Li, Liwu; Liu, Jinzhong; Schloemer, Stefan; Zhang, Wenlong; Ni, Yunyan; Hu, Guoyi; Wang, Xiaobo; Tang, Yongchun

    2012-01-01

    Compound-specific carbon and hydrogen isotopic compositions of three natural gas round robins were calibrated by ten laboratories carrying out more than 800 measurements including both on-line and off-line methods. Two-point calibrations were performed with international measurement standards for hydrogen isotope ratios (VSMOW and SLAP) and carbon isotope ratios (NBS 19 and L-SVEC CO2). The consensus δ13C values and uncertainties were derived from Maximum Likelihood Estimation (MLE) based on off-line measurements; the consensus δ2H values and uncertainties were derived from MLE of both off-line and on-line measurements, taking the bias of on-line measurements into account. The calibrated consensus values, in ‰ relative to VSMOW and VPDB, are:

    NG1 (coal-related gas):
      Methane: δ2H(VSMOW) = −185.1‰ ± 1.2‰; δ13C(VPDB) = −34.18‰ ± 0.10‰
      Ethane: δ2H(VSMOW) = −156.3‰ ± 1.8‰; δ13C(VPDB) = −24.66‰ ± 0.11‰
      Propane: δ2H(VSMOW) = −143.6‰ ± 3.3‰; δ13C(VPDB) = −22.21‰ ± 0.11‰
      i-Butane: δ13C(VPDB) = −21.62‰ ± 0.12‰
      n-Butane: δ13C(VPDB) = −21.74‰ ± 0.13‰
      CO2: δ13C(VPDB) = −5.00‰ ± 0.12‰

    NG2 (biogas):
      Methane: δ2H(VSMOW) = −237.0‰ ± 1.2‰; δ13C(VPDB) = −68.89‰ ± 0.12‰

    NG3 (oil-related gas):
      Methane: δ2H(VSMOW) = −167.6‰ ± 1.0‰; δ13C(VPDB) = −43.61‰ ± 0.09‰
      Ethane: δ2H(VSMOW) = −164.1‰ ± 2.4‰; δ13C(VPDB) = −40.24‰ ± 0.10‰
      Propane: δ2H(VSMOW) = −138.4‰ ± 3.0‰; δ13C(VPDB) = −33.79‰ ± 0.09‰

    All of the assigned values are traceable to the international carbon isotope standard VPDB and hydrogen isotope standard VSMOW.

  12. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) ... accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a ... to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-

  13. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
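
    The general recipe described above (simulate the system, form a likelihood from the measurement residuals, maximize over the parameters) can be sketched in a few lines. The discrete-time system, noise model and optimizer below are illustrative assumptions, not the original LRLTRAN implementation:

        import numpy as np
        from scipy.optimize import minimize

        dt, N = 0.05, 200
        u = np.sin(0.3 * np.arange(N))                # known input sequence
        rng = np.random.default_rng(1)

        def simulate(a, b):
            x = np.zeros(N)
            for k in range(N - 1):                    # forward-Euler dynamics
                x[k + 1] = x[k] + dt * (-a * x[k] + b * u[k])
            return x

        y = simulate(2.0, 1.5) + rng.normal(0, 0.02, N)   # noisy measurements

        def nll(theta):
            a, b = theta
            r = y - simulate(a, b)                    # measurement residuals
            # Gaussian log-likelihood with the noise variance profiled out
            return 0.5 * N * np.log(np.mean(r ** 2))

        fit = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
        print(fit.x)   # should approach the true (a, b) = (2.0, 1.5)

    Profiling out the noise variance reduces the search to the dynamic parameters only.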

  14. Anti-inflammatory and antiobesity effects of mulberry leaf and fruit extract on high fat diet-induced obesity.

    PubMed

    Lim, Hyun Hwa; Lee, Sung Ok; Kim, Sun Yeou; Yang, Soo Jin; Lim, Yunsook

    2013-10-01

    The purpose of this study was to investigate the anti-inflammatory and antiobesity effects of combinational mulberry leaf extract (MLE) and mulberry fruit extract (MFE) in high-fat (HF) diet-induced obese mice. Mice were fed a control diet or an HF diet for nine weeks. After obesity was induced, the mice were administered single MLE at a low dose (133 mg/kg/day, LMLE) or a high dose (333 mg/kg/day, HMLE), or combinational MLE and MFE (MLFE) at a low dose (133 mg MLE and 67 mg MFE/kg/day, LMLFE) or a high dose (333 mg MLE and 167 mg MFE/kg/day, HMLFE), by stomach gavage for 12 weeks. The mulberry leaf and fruit extract treatment for 12 weeks did not show liver toxicity. The single MLE and combinational MLFE treatments significantly decreased plasma triglyceride, liver lipid peroxidation levels and adipocyte size and improved hepatic steatosis as compared with the HF group. The combinational MLFE treatment significantly decreased body weight gain, fasting plasma glucose and insulin, and homeostasis model assessment of insulin resistance. HMLFE treatment significantly improved glucose control during an intraperitoneal glucose tolerance test compared with the HF group. Moreover, HMLFE treatment reduced protein levels of oxidative stress markers (manganese superoxide dismutase) and inflammatory markers (monocyte chemoattractant protein-1, inducible nitric oxide synthase, C-reactive protein, tumour necrosis factor-α and interleukin-1) in liver and adipose tissue. Taken together, combinational MLFE treatment has potential antiobesity and antidiabetic effects through modulation of obesity-induced inflammation and oxidative stress in HF diet-induced obesity.

  15. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
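
    These successive-approximation procedures are close relatives of what is now called the EM algorithm. A minimal sketch for a two-component univariate normal mixture on assumed synthetic data follows (generic EM, not Hosmer's or the authors' exact iteration):

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

        # initial guesses for the weights, means and variances
        w = np.array([0.5, 0.5])
        mu = np.array([-1.0, 1.0])
        var = np.array([1.0, 1.0])

        def normal_pdf(x, m, v):
            return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

        for _ in range(200):
            # E-step: posterior probability that each point came from each component
            dens = np.stack([w[j] * normal_pdf(x, mu[j], var[j]) for j in range(2)])
            resp = dens / dens.sum(axis=0)
            # M-step: responsibility-weighted maximum-likelihood updates
            n_j = resp.sum(axis=1)
            w = n_j / len(x)
            mu = (resp * x).sum(axis=1) / n_j
            var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_j

        print(w, mu, var)   # should approach the true mixture parameters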

  16. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity, and hence asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood estimation have the smallest variance among comparable statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. Results show that there is a negative effect between rubber price and exchange rate for all selected countries.

  17. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  18. Novel optical-based methods and analyses for elucidating cellular mechanics and dynamics

    NASA Astrophysics Data System (ADS)

    Koo, Peter K.

    Resolving distinct biochemical interaction states by analyzing the diffusive behaviors of individual protein trajectories is challenging due to the limited statistics provided by short trajectories and experimental noise sources, which are intimately coupled into each protein's localization. In the first part of this thesis, we introduce a novel machine-learning-based classification methodology, called perturbation expectation-maximization (pEM), which simultaneously analyzes a population of protein trajectories to uncover the system of short-time diffusive behaviors which collectively result from distinct biochemical interactions. We then discuss an experimental application of pEM to Rho GTPase, an integral regulator of cytoskeletal dynamics and cellular homeostasis, inside live cells. We also derive the maximum likelihood estimator (MLE) for driven diffusion, confined diffusion, and fractional Brownian motion. We demonstrate that the MLE yields improved estimates in comparison with traditional diffusion analysis, namely mean squared displacement analysis. In addition, we introduce mleBayes, an empirical Bayesian model selection scheme for classifying an individual protein trajectory into a given diffusion mode. By employing mleBayes on simulated data, we demonstrate that accurate determination of the underlying diffusive properties, beyond normal diffusion, remains challenging when analyzing particle trajectories on an individual basis. To improve upon the statistical limitations of classification on an individual basis, we extend pEM with a new version (pEMv2) that simultaneously analyzes a collection of particle trajectories to uncover the system of interactions which give rise to unique normal or non-normal diffusive states. We test the performance of pEMv2 on various sets of simulated particle trajectories which transition between various modes of normal and non-normal diffusive states, to highlight considerations when employing pEMv2 analysis. We envision that the presented methodologies will be applicable to a wide range of single-protein tracking data where different interactions result in distinct diffusive behaviors. More generally, this study brings us an important step closer to the possibility of monitoring the endogenous biochemistry of diffusing proteins within live cells with single-molecule resolution. In the second part of this thesis, the role of chromatin association with the nuclear envelope in nuclear mechanics is explored. Changes in the mechanical properties of the nucleus are increasingly found to be critical for development and disease. However, relatively little is known about the variables that cells modulate to define nuclear mechanics. The best-understood player is lamin A, a protein linked to a diverse set of genetic diseases termed laminopathies. The properties of lamin A that are compromised in these diseases (and therefore underlie their pathology) remain poorly understood. One model focuses on a mechanical role for a polymeric network of lamins associated with the nuclear envelope (NE), which supports nuclear integrity. However, because heterochromatin is strongly associated with the lamina, it remains unclear whether it is the lamin polymer, the associated chromatin, or both that allow the lamina to mechanically stabilize nuclei. Decoupling the impact of the lamin polymer itself from that of the associated chromatin has proven very challenging.
Here, we take advantage of the model organism S. pombe, which does not express lamins, as an experimental framework in which to address the impact of chromatin and its association with the nuclear periphery on nuclear mechanics. Using a combination of new image analysis tools for in vivo imaging of nuclear dynamics and a novel optical tweezers assay capable of directly probing nuclear mechanics, we find that the association of chromatin with the NE through integral membrane proteins plays a critical role in supporting nuclear integrity. When chromatin is decoupled from the NE, nuclei are softer, undergo much larger nuclear fluctuations in vivo in response to microtubule forces, and are defective at resolving nuclear deformations. Our data further suggest that association of chromatin with the NE attenuates the flow of chromatin into nuclear fluctuations, thereby preventing permanent changes in nuclear shape.
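
    For the simplest of the estimators mentioned above, the MLE for driven (drift) diffusion with Gaussian increments, closed-form estimates exist. The sketch below is an assumed idealization that ignores localization noise and motion blur, which a real single-particle analysis must model:

        import numpy as np

        rng = np.random.default_rng(3)
        dt, n, v_true, D_true = 0.01, 5000, 1.0, 0.5
        steps = rng.normal(v_true * dt, np.sqrt(2 * D_true * dt), n)
        x = np.concatenate([[0.0], np.cumsum(steps)])        # simulated 1D trajectory

        dx = np.diff(x)
        v_hat = dx.mean() / dt                               # MLE of the drift velocity
        D_hat = ((dx - v_hat * dt) ** 2).mean() / (2 * dt)   # MLE of the diffusion coefficient
        print(v_hat, D_hat)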

  19. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.

  20. Comparison of discriminant analysis methods: Application to occupational exposure to particulate matter

    NASA Astrophysics Data System (ADS)

    Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra

    2016-06-01

    Health effects associated with occupational exposure to particulate matter have been studied by several authors. In this study, six industries from five different areas were selected: Cork company 1, Cork company 2, poultry, slaughterhouse for cattle, riding arena and production of animal feed. The measurement tool was a portable direct-reading device. This tool provides information on the particle number concentration for six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. These features were chosen because they might be more closely related to adverse health effects. The aim is to identify the particles that best discriminate the industries, with the ultimate goal of classifying industries regarding potential negative effects on workers' health. Several methods of discriminant analysis were applied to data on occupational exposure to particulate matter and compared with respect to classification accuracy. The selected methods were linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); robust linear discriminant analysis with selected estimators (MLE (Maximum Likelihood Estimators), MVE (Minimum Volume Ellipsoid), "t", MCD (Minimum Covariance Determinant), MCD-A, MCD-B); multinomial logistic regression; and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest rate of classification accuracy in the data set under study. Results indicate that the particle number concentration of diameter size 0.5 µm is the parameter that best discriminates the industries.
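
    A hedged sketch of this kind of classifier comparison, using scikit-learn on synthetic data in place of the particle-count measurements (the robust-estimator variants of LDA are omitted; all data and settings are illustrative):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                   QuadraticDiscriminantAnalysis)
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        # six "industries" described by particle counts in six size channels
        X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                                   n_classes=6, n_clusters_per_class=1, random_state=0)

        models = {
            "LDA": LinearDiscriminantAnalysis(),
            "QDA": QuadraticDiscriminantAnalysis(),
            "multinomial logit": LogisticRegression(max_iter=2000),
            "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        }
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5).mean()   # cross-validated accuracy
            print(f"{name}: {acc:.3f}")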

  1. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
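
    The core of COM-Poisson maximum likelihood is handling the infinite normalizing constant Z(lambda, nu). A minimal sketch for a single sample without covariates, truncating Z at an assumed upper limit (illustrative, not the authors' GLM implementation):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        y = np.array([0, 1, 1, 2, 3, 2, 1, 0, 2, 4, 1, 2])   # toy count data

        def nll(params, jmax=200):
            log_lam, log_nu = params
            lam, nu = np.exp(log_lam), np.exp(log_nu)
            j = np.arange(jmax + 1)
            # log of the truncated normalizing constant Z(lam, nu)
            log_terms = j * np.log(lam) - nu * gammaln(j + 1)
            log_z = np.logaddexp.reduce(log_terms)
            ll = (y * np.log(lam) - nu * gammaln(y + 1) - log_z).sum()
            return -ll

        fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
        lam_hat, nu_hat = np.exp(fit.x)
        print(lam_hat, nu_hat)   # nu < 1 suggests overdispersion, nu > 1 underdispersion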

  2. Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; King, Matt; Dai, Wujiao

    2018-05-01

    Spatially-correlated common mode error always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error (CME). An average reduction of ˜40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ˜55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ˜0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
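
    The PCA variant of this spatiotemporal filtering reduces to an SVD of the stacked residual series: the leading principal component and its spatial response approximate the CME, which is then subtracted. A minimal sketch on synthetic data (station count, noise levels and names are assumptions):

        import numpy as np

        rng = np.random.default_rng(4)
        n_epochs, n_stations = 500, 20
        cme = np.cumsum(rng.normal(0, 0.2, n_epochs))        # shared (common-mode) signal
        resid = cme[:, None] + rng.normal(0, 1.0, (n_epochs, n_stations))

        # center each station's residual series, then take the SVD
        X = resid - resid.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)

        # first temporal component times its spatial response = estimated CME
        cme_recon = s[0] * np.outer(U[:, 0], Vt[0])
        filtered = X - cme_recon

        print("RMS before:", X.std(), "after:", filtered.std())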

  3. Distributed Practicum Supervision in a Managed Learning Environment (MLE)

    ERIC Educational Resources Information Center

    Carter, David

    2005-01-01

    This evaluation-research feasibility study piloted the creation of a technology-mediated managed learning environment (MLE) involving the implementation of one of a new generation of instructionally driven management information systems (IMISs). The system, and supporting information and communications technology (ICT) was employed to support…

  4. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  5. Relationship between cigarette format and mouth-level exposure to tar and nicotine in smokers of Russian king-size cigarettes.

    PubMed

    Ashley, Madeleine; Dixon, Mike; Prasad, Krishna

    2014-10-01

    Differences in the length and circumference of cigarettes may influence smoker behaviour and exposure to smoke constituents. Superslim king-size (KSSS) cigarettes (17 mm circumference versus the 25 mm circumference of conventional king-size [KS] cigarettes) have gained popularity in several countries, including Russia. Some smoke constituents are lower in machine-smoked KSSS versus KS cigarettes, but few data exist on actual exposure in smokers. We investigated mouth-level exposure (MLE) to tar and nicotine in Russian smokers of KSSS versus KS cigarettes and measured smoke constituents under machine-smoking conditions. MLE to tar was similar for smokers of 1 mg ISO tar yield products, but lower for smokers of 4 mg and 7 mg KSSS versus KS cigarettes. MLE to nicotine was lower in smokers of 4 mg KSSS versus KS cigarettes, but not for other tar bands. No gender differences were observed for nicotine or tar MLE. Under the International Organization for Standardization, Health Canada Intense and Massachusetts regimes, KSSS cigarettes tended to yield less carbon monoxide, acetaldehyde, nitric oxide, acrylonitrile, benzene, 1,3-butadiene and tobacco-specific nitrosamines, but more formaldehyde, than KS cigarettes. In summary, differences in MLE were observed between cigarette formats, but not systematically across pack tar bands. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Estimation of mouth level exposure to smoke constituents of cigarettes with different tar levels using filter analysis.

    PubMed

    Hyodo, T; Minagawa, K; Inoue, T; Fujimoto, J; Minami, N; Bito, R; Mikita, A

    2013-12-01

    A nicotine part-filter method can be applied to estimate smokers' mouth level exposure (MLE) to smoke constituents. The objectives of this study were (1) to generate calibration curves for 47 smoke constituents, (2) to estimate MLE to selected smoke constituents using Japanese smokers of commercially available cigarettes covering a wide range of International Organization for Standardization tar yields (1-21mg/cigarette), and (3) to investigate relationships between MLE estimates and various machine-smoking yields. Five cigarette brands were machine-smoked under 7 different smoking regimes and smoke constituents and nicotine content in part-filters were measured. Calibration curves were then generated. Spent cigarette filters were collected from a target of 50 smokers for each of the 15 brands and a total of 780 filters were obtained. Nicotine content in part-filters was then measured and MLE to each smoke constituent was estimated. Strong correlations were identified between nicotine content in part-filters and 41 out of the 47 smoke constituent yields. Estimates of MLE to acetaldehyde, acrolein, 1,3-butadiene, benzene, benzo[a]pyrene, carbon monoxide, and tar showed significant negative correlations with corresponding constituent yields per mg nicotine under the Health Canada Intense smoking regime, whereas significant positive correlations were observed for N-nitrosonornicotine and (4-methylnitrosoamino)-1-(3-pyridyl)-1-butanone. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
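
    The overflow problem described here, namely likelihood terms that grow exponentially with the precision parameter, is commonly removed by cancelling the exponential factor analytically, for example via exponentially scaled special functions. A generic illustration with the modified Bessel function I0, which arises in Fisher-statistics likelihoods (this shows the stability trick only, not the Arason-Levi likelihood itself):

        import numpy as np
        from scipy.special import i0, i0e

        kappa = 800.0
        # naive evaluation overflows: i0(kappa) returns inf for large kappa
        print(np.log(i0(kappa)))             # inf, useless

        # cancel the exponential factor analytically:
        # i0e(x) = exp(-x) * I0(x), so log I0(x) = x + log(i0e(x))
        log_I0 = kappa + np.log(i0e(kappa))
        print(log_I0)                        # finite and accurate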

  8. Using Mediated Learning Experiences To Enhance Children's Thinking.

    ERIC Educational Resources Information Center

    Seng, SeokHoon

    This paper focuses on the relationship between adult-child interactions and the developing cognitive competence of young children as rated by the Mediated Learning Experience (MLE) Scale. The scale was devised to reflect 10 criteria of adult-child interaction hypothesized to comprise an MLE and therefore to enhance children's cognitive…

  9. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  10. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  11. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  12. The Coming of Age of Media Literacy

    ERIC Educational Resources Information Center

    Domine, Vanessa

    2011-01-01

    A decade into a new millennium marks a coming of age for media literacy education (MLE). Born from teaching the critical analysis of media texts, MLE has evolved into helping individuals of all ages "develop the habits of inquiry and skills of expression that they need to be critical thinkers, effective communicators and active citizens in…

  13. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  14. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.

  15. Intrinsic Lens Forming Potential of Mouse Lens Epithelial versus Newt Iris Pigment Epithelial Cells in Three-Dimensional Culture

    PubMed Central

    Nakamura, Kenta; Tsonis, Panagiotis A.

    2014-01-01

    Adult newts (Notophthalmus viridescens) are capable of complete lens regeneration that is mediated through transdifferentiation of dorsal iris pigment epithelial (IPE) cells. In contrast, higher vertebrates such as mice demonstrate only limited lens regeneration in the presence of an intact lens capsule with remaining lens epithelial cells. To compare the intrinsic lens regeneration potential of newt IPE versus mouse lens epithelial (MLE) cells, we have established a novel culture method that uses cell aggregation before culture in growth factor-reduced Matrigel™. Dorsal newt IPE aggregates demonstrated complete lens formation within 1 to 2 weeks of Matrigel culture without basic fibroblast growth factor (bFGF) supplementation, including the establishment of a peripheral cuboidal epithelial cell layer and the appearance of central lens fibers that were positive for αA-crystallin. In contrast, the lens-forming potential of MLE cell aggregates cultured in Matrigel was incomplete and resulted in the formation of defined-size lentoids with partial optical transparency. While the peripheral cell layers of MLE aggregates were nucleated, cells in the center of aggregates demonstrated a nonapoptotic nuclear loss over a time period of 3 weeks that was representative of lens fiber formation. Matrigel culture supplementation with bFGF resulted in larger, more transparent MLE aggregates that demonstrated increased βB1-crystallin expression. Our study demonstrates that bFGF is not required for induction of newt IPE aggregate-dependent lens formation in Matrigel, while the addition of bFGF seems to be beneficial for the formation of MLE aggregate-derived lens-like structures. In conclusion, the three-dimensional aggregate culture of IPE and MLE in Matrigel allows, to a greater extent than older models, in-depth study of the intrinsic lens-forming potential and the corresponding identification of lentogenic factors. PMID:23672748

  16. A maximum likelihood map of chromosome 1.

    PubMed Central

    Rao, D C; Keats, B J; Lalouel, J M; Morton, N E; Yee, S

    1979-01-01

    Thirteen loci are mapped on chromosome 1 from genetic evidence. The maximum likelihood map presented permits confirmation that Scianna (SC) and a fourteenth locus, phenylketonuria (PKU), are on chromosome 1, although the location of the latter on the PGM1-AMY segment is uncertain. Eight other controversial genetic assignments are rejected, providing a practical demonstration of the resolution which maximum likelihood theory brings to mapping. PMID:293128

  17. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance presents the accuracy generated by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  18. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
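
    A simplified, data-aided special case conveys the idea: with known BPSK symbols d_k and matched-filter outputs y_k = A d_k + n_k, the joint ML estimates of amplitude and noise power are sample averages, and their ratio estimates the SNR. The following sketch is an assumed illustration, not the Kalson-Dolinar algorithm:

        import numpy as np

        rng = np.random.default_rng(5)
        n, A, sigma = 4000, 1.0, 0.7
        d = rng.choice([-1.0, 1.0], n)               # known BPSK symbols
        y = A * d + rng.normal(0, sigma, n)          # matched-filter outputs

        A_hat = np.mean(y * d)                       # ML amplitude estimate
        var_hat = np.mean((y - A_hat * d) ** 2)      # ML noise power estimate
        snr_hat = A_hat ** 2 / var_hat
        print(A_hat, var_hat, snr_hat)               # true SNR = A^2 / sigma^2 ~ 2.04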

  19. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Lco Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  20. Undesirable Features of the Medical Learning Environment: A Narrative Review of the Literature

    ERIC Educational Resources Information Center

    Benbassat, Jochanan

    2013-01-01

    The objective of this narrative review of the literature is to draw attention to four undesirable features of the medical learning environment (MLE). First, students' fears of personal inadequacy and making errors are enhanced rather than alleviated by the hidden curriculum of the clinical teaching setting; second, the MLE projects a denial…

  1. Games and Machine Learning: A Powerful Combination in an Artificial Intelligence Course

    ERIC Educational Resources Information Center

    Wallace, Scott A.; McCartney, Robert; Russell, Ingrid

    2010-01-01

    Project MLeXAI [Machine Learning eXperiences in Artificial Intelligence (AI)] seeks to build a set of reusable course curriculum and hands on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense--a simple real-time strategy game…

  2. Games and machine learning: a powerful combination in an artificial intelligence course

    NASA Astrophysics Data System (ADS)

    Wallace, Scott A.; McCartney, Robert; Russell, Ingrid

    2010-03-01

    Project MLeXAI (Machine Learning eXperiences in Artificial Intelligence (AI)) seeks to build a set of reusable course curriculum and hands on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense - a simple real-time strategy game and Checkers - a classic turn-based board game. From the instructors' perspective, we examine aspects of design and implementation as well as the challenges and rewards of using the curricula. We explore students' responses to the projects via the results of a common survey. Finally, we compare the student perceptions from the game-based projects to non-game based projects from the first phase of Project MLeXAI.

  3. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. Results show a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
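
    In practice a two-component normal mixture of this kind can be fitted by maximum likelihood with an off-the-shelf EM implementation; the bivariate data below are synthetic placeholders for the price series:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(6)
        # synthetic stand-ins for paired (rubber price, stock price) observations
        data = np.vstack([
            rng.multivariate_normal([0, 0], [[1, -0.6], [-0.6, 1]], 400),
            rng.multivariate_normal([3, -2], [[1, -0.3], [-0.3, 1]], 200),
        ])

        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        gmm.fit(data)                 # EM maximization of the mixture likelihood
        print(gmm.weights_)
        print(gmm.means_)
        print(gmm.covariances_)       # off-diagonal terms capture the within-class relationship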

  4. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  5. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
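
    The affine-invariance claim is easy to check numerically: transforming the data by any non-singular affine map x -> Ax + b leaves Gaussian maximum likelihood class assignments unchanged. A small sketch with synthetic two-class data standing in for the image spectra:

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1.5, (100, 5))])
        y = np.repeat([0, 1], 100)

        # a random non-singular affine transformation: x -> Ax + b
        A = rng.normal(size=(5, 5)) + 5 * np.eye(5)
        b = rng.normal(size=5)
        X_affine = X @ A.T + b

        # QDA implements per-class Gaussian maximum likelihood classification
        labels_raw = QuadraticDiscriminantAnalysis().fit(X, y).predict(X)
        labels_aff = QuadraticDiscriminantAnalysis().fit(X_affine, y).predict(X_affine)
        print(np.array_equal(labels_raw, labels_aff))   # True, up to floating-point ties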

  6. The mariner transposons belonging to the irritans subfamily were maintained in chordate genomes by vertical transmission.

    PubMed

    Sinzelle, Ludivine; Chesneau, Albert; Bigot, Yves; Mazabraud, André; Pollet, Nicolas

    2006-01-01

    Mariner-like elements (MLEs) belong to the Tc1-mariner superfamily of DNA transposons, which is very widespread in animal genomes. We report here the first complete description of an MLE, Xtmar1, within the genome of a poikilotherm vertebrate, the amphibian Xenopus tropicalis. A close relative, XlMLE, is also characterized within the genome of a sibling species, Xenopus laevis. The phylogenetic analysis of the relationships between MLE transposases reveals that Xtmar1 is closely related to Hsmar2 and Bytmar1 and that together they form a second distinct lineage of the irritans subfamily. All members of this lineage are also characterized by the 36- to 43-bp size of their imperfectly conserved inverted terminal repeats and by the 8-bp motif located at their outer extremity. Since XlMLE, Xlmar1, and Hsmar2 are present in species located at both extremities of the vertebrate evolutionary tree, we looked for MLE relatives belonging to the same subfamily in the available sequencing projects using the amino acid consensus sequence of the Hsmar2 transposase as an in silico probe. We found that irritans MLEs are present in chordate genomes including most craniates. This therefore suggests that these elements have been present within chordate genomes for 750 Myr and that the main way they have been maintained in these species has been via vertical transmission. The very small number of stochastic losses observed in the data available suggests that their inactivation during evolution has been very slow.

  7. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  8. The Effects of Mother-Child Mediated Learning Strategies on Psychological Resilience and Cognitive Modifiability of Boys with Learning Disability

    ERIC Educational Resources Information Center

    Tzuriel, David; Shomron, Vered

    2018-01-01

    Background: The theoretical framework of the current study is based on mediated learning experience (MLE) theory, which is similar to the scaffolding concept. The main question of the current study was to what extent mother-child MLE strategies affect psychological resilience and cognitive modifiability of boys with learning disability (LD).…

  9. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.

  10. Stochastic Models in the DORIS Position Time Series: Estimates from the IDS Contribution to the ITRF2014

    NASA Astrophysics Data System (ADS)

    Klos, A.; Bogusz, J.; Moreaux, G.

    2017-12-01

    This research focuses on the investigation of the deterministic and stochastic parts of the DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) weekly coordinate time series from the IDS contribution to the ITRF2014. A set of 90 stations was divided into three groups depending on when the data were collected at an individual station. To reliably describe the DORIS time series, we employed a mathematical model that included the long-term nonlinear signal, linear trend, seasonal oscillations (these three sum up to produce the Polynomial Trend Model) and a stochastic part, all being resolved with Maximum Likelihood Estimation (MLE). We proved that the values of the parameters delivered for DORIS data are strictly correlated with the time span of the observations, meaning that the most recent data are the most reliable ones. Not only did the seasonal amplitudes decrease over the years, but also, and most importantly, the noise level and its type changed significantly. We examined five different noise models to be applied to the stochastic part of the DORIS time series: a pure white noise (WN), a pure power-law noise (PL), a combination of white and power-law noise (WNPL), an autoregressive process of first order (AR(1)) and a Generalized Gauss Markov model (GGM). From our study it emerges that the PL process may be chosen as the preferred one for most of the DORIS data. Moreover, the preferred noise model has changed through the years from AR(1) to pure PL, with a few stations characterized by a positive spectral index.

  11. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task participants had to evaluate, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in duration of the second auditory stimulus. In the non-attentional condition participants had only to perform the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
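
    The MLE integration model referred to here has a compact closed form: each cue is weighted in inverse proportion to its variance, and the fused estimate has lower variance than either cue alone. A minimal sketch with assumed localization variances:

        # single-cue position estimates (deg) and their measured variances
        x_audio, var_audio = 10.0, 4.0
        x_tactile, var_tactile = 12.0, 1.0

        # MLE weights are inversely proportional to cue variance
        w_audio = (1 / var_audio) / (1 / var_audio + 1 / var_tactile)
        w_tactile = 1 - w_audio

        x_fused = w_audio * x_audio + w_tactile * x_tactile
        var_fused = 1 / (1 / var_audio + 1 / var_tactile)
        print(x_fused, var_fused)   # 11.6, 0.8: below the best single-cue variance

    Under this model, attention that improves auditory reliability (smaller var_audio) directly increases the auditory weight, which is the effect the study reports.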

  12. Probabilistic modelling of drought events in China via 2-dimensional joint copula

    NASA Astrophysics Data System (ADS)

    Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning

    2018-04-01

    Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, several popularly applied and relatively new bivariate Archimedean copulas were employed to derive regional and spatially based copula models to appraise drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to run theory and fitted with suitable marginal distributions. The maximum likelihood estimation (MLE) and curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the appropriate bivariate copula in sub-regions I-VII and the entire mainland China. The goodness-of-fit tests as indicated by the CFM showed that copula NN19 in sub-regions III, IV, V and VI and mainland China, NN20 in sub-region I, and NN13 in sub-region VII are the best for modelling drought variables. Bivariate drought probability across mainland China is relatively high, and the highest drought probabilities are found mainly in Northwestern and Southwestern China. Besides, the results also showed that different sub-regions may suffer varying drought risks. The drought risks observed in sub-regions III, VI and VII are significantly greater than in other sub-regions. A higher probability of droughts of longer durations in these sub-regions also corresponds to shorter return periods with greater drought severity. These results may imply tremendous challenges for water resources management in different sub-regions, particularly Northwestern and Southwestern China.
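
    As a concrete instance of the MLE step for an Archimedean copula, the sketch below fits a Clayton copula (chosen for its simple density, not one of the paper's NN models) to pseudo-observations by maximizing the log-density; the synthetic data stand in for drought duration-severity pairs:

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import rankdata

        rng = np.random.default_rng(8)
        # synthetic dependent pairs (e.g., drought duration vs. severity)
        z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 1000)
        # pseudo-observations: ranks rescaled to the open unit interval
        u = rankdata(z[:, 0]) / (len(z) + 1)
        v = rankdata(z[:, 1]) / (len(z) + 1)

        def neg_loglik(theta):
            # Clayton density: c(u,v) = (1+t)(uv)^(-t-1) (u^-t + v^-t - 1)^(-2-1/t)
            if theta <= 0:
                return np.inf
            s = u ** (-theta) + v ** (-theta) - 1
            logc = (np.log1p(theta) - (theta + 1) * (np.log(u) + np.log(v))
                    - (2 + 1 / theta) * np.log(s))
            return -logc.sum()

        fit = minimize_scalar(neg_loglik, bounds=(1e-6, 20), method="bounded")
        print(fit.x)   # MLE of the Clayton dependence parameter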

  13. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structure of which depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  14. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990. Cramer-Rao Bound, MUSIC, and Maximum Likelihood: Effects of Temporal Phase Difference. C. V. Tran. ... MUSIC, and Maximum Likelihood (ML) asymptotic variances corresponding to the two-source direction-of-arrival estimation where sources were modeled as... [figure-list residue; recoverable captions: (1) |ρ| = 1.00, SNR = 20 dB; (2) MUSIC for two equipowered signals impinging on a 5-element ULA, (a) |ρ| = 0.50, SNR]

  15. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  16. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  17. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  19. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
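
    The multimodality is easy to reproduce with a three-parameter-logistic (3PL) likelihood: with nonzero guessing parameters and a mixed response pattern, the likelihood in the ability theta can have several local maxima. A hedged sketch that scans the log-likelihood on a grid (item parameters are invented for illustration):

        import numpy as np

        # invented 3PL item parameters: discrimination a, difficulty b, guessing c
        a = np.array([2.0, 2.0, 2.0, 2.0])
        b = np.array([-1.5, -0.5, 0.5, 1.5])
        c = np.array([0.25, 0.25, 0.25, 0.25])
        resp = np.array([0, 1, 0, 1])          # mixed right/wrong response pattern

        def loglik(theta):
            p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
            return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

        grid = np.linspace(-4, 4, 801)
        ll = np.array([loglik(t) for t in grid])
        # report local maxima of the log-likelihood over the grid
        is_peak = (ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])
        print(grid[1:-1][is_peak])             # more than one mode is possible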

  20. Acute Dermal Toxicity of Diethyleneglycol Dinitrate in Rabbits

    DTIC Science & Technology

    1988-09-01

    ACC#   Animal ID   Sex     Diagnosis
    38260  85F157      Male    Not remarkable (NR)
    38261  85F158      Male    Purulent otitis media, bilateral
    38262  85F159      Male    NR
    38263  85F160      Male    NR
    38264  85F161      Male    Purulent otitis media, left ear
    38263  85F164      Female  NR
    38266  85F166      Female  NR
    38267  85F167      Female  NR
    38268  85F168      Female

  1. The associations between perceived distributive, procedural, and interactional organizational justice, self-rated health and burnout.

    PubMed

    Liljegren, Mats; Ekberg, Kerstin

    2009-01-01

    The aim of the present study was to examine the cross-sectional and 2-year longitudinal associations between perceived organizational justice, self-rated health and burnout. The study used questionnaire data from 428 Swedish employment officers and the data was analyzed with Structural Equation Modeling (SEM). Two different models were tested: a global organizational justice model (with and without correlated measurement errors) and a differentiated (distributive, procedural and interactional organizational justice) justice model (with and without correlated measurement errors). The global justice model with autocorrelations had the most satisfactory goodness-of-fit indices. Global justice showed statistically significant (p < 0.01) cross-sectional (0.80 to 0.84) and longitudinal (0.76 to 0.82) positive associations between organizational justice and self-rated health, and significant (p < 0.01) negative associations between organizational justice and burnout (cross-sectional: -0.85; longitudinal: -0.83 to -0.84). The global justice construct showed better goodness-of-fit indices than the threefold justice construct, but a differentiated organizational justice concept could give valuable information about health-related risk factors: whether they are structural (distributive justice), procedural (procedural justice) or inter-personal (interactional justice). The two approaches to studying organizational justice should therefore be regarded as complementary rather than exclusive.

  2. Genetic Manipulation of Lactococcus lactis by Using Targeted Group II Introns: Generation of Stable Insertions without Selection

    PubMed Central

    Frazier, Courtney L.; San Filippo, Joseph; Lambowitz, Alan M.; Mills, David A.

    2003-01-01

    Despite their commercial importance, there are relatively few facile methods for genomic manipulation of the lactic acid bacteria. Here, the lactococcal group II intron, Ll.ltrB, was targeted to insert efficiently into genes encoding malate decarboxylase (mleS) and tetracycline resistance (tetM) within the Lactococcus lactis genome. Integrants were readily identified and maintained in the absence of a selectable marker. Since splicing of the Ll.ltrB intron depends on the intron-encoded protein, targeted invasion with an intron lacking the intron open reading frame disrupted TetM and MleS function, and MleS activity could be partially restored by expressing the intron-encoded protein in trans. Restoration of splicing from intron variants lacking the intron-encoded protein illustrates how targeted group II introns could be used for conditional expression of any gene. Furthermore, the modified Ll.ltrB intron was used to separately deliver a phage resistance gene (abiD) and a tetracycline resistance marker (tetM) into mleS, without the need for selection to drive the integration or to maintain the integrant. Our findings demonstrate the utility of targeted group II introns as a potential food-grade mechanism for delivery of industrially important traits into the genomes of lactococci. PMID:12571038

  3. Treatment of oil sands process-affected water (OSPW) using a membrane bioreactor with a submerged flat-sheet ceramic microfiltration membrane.

    PubMed

    Xue, Jinkai; Zhang, Yanyan; Liu, Yang; Gamal El-Din, Mohamed

    2016-01-01

    The release of oil sands process-affected water (OSPW) into the environment is a concern because it contains persistent organic pollutants that are toxic to aquatic life. A modified Ludzack-Ettinger membrane bioreactor (MLE-MBR) with a submerged ceramic membrane was operated continuously for 425 days to evaluate its feasibility for OSPW treatment. A stabilized biomass concentration of 3730 mg mixed liquor volatile suspended solids per litre and a naphthenic acid (NA) removal of 24.7% were observed in the reactor after 361 days of operation. Ultra Performance Liquid Chromatography/High Resolution Mass Spectrometry analysis revealed that the removal of individual NA species declined with increasing ring number. Pyrosequencing analysis revealed that Betaproteobacteria were dominant in sludge samples from the MLE-MBR, including microorganisms such as Rhodocyclales and Sphingobacteriales that are capable of degrading hydrocarbons and aromatic compounds. During the 425 days of continuous operation, no severe membrane fouling was observed, as the transmembrane pressure (TMP) of the MLE-MBR never exceeded -20 kPa, whereas the manufacturer's suggested critical TMP for chemical cleaning is -35 kPa. Our results indicate that the proposed MLE-MBR has good potential for removing recalcitrant organics in OSPW. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  5. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  6. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared by bias and MSE, computed using an R program. The results are then displayed in tables to facilitate comparison.
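
    For reference, the Rayleigh MLE has the closed form sigma_hat = sqrt(sum(x_i^2) / (2n)). The sketch below (in Python; the paper's computations were done in R) simulates the bias and MSE of this estimator in the way the comparison above is described.

      # Closed-form Rayleigh MLE and a small Monte Carlo check of bias and MSE.
      import numpy as np

      def rayleigh_mle(x):
          # Solving d/dsigma log L = 0 gives sigma_hat = sqrt(sum(x^2) / (2n))
          x = np.asarray(x)
          return np.sqrt(np.sum(x**2) / (2 * len(x)))

      rng = np.random.default_rng(1)
      sigma_true, n, reps = 2.0, 30, 5000
      est = np.array([rayleigh_mle(rng.rayleigh(sigma_true, n)) for _ in range(reps)])
      print("bias:", est.mean() - sigma_true, "MSE:", np.mean((est - sigma_true) ** 2))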

  7. Are the fluctuations in dynamic anterior surface aberrations of the human eye chaotic?

    PubMed

    Jayakumar, Varadharajan; Thapa, Damber; Hutchings, Natalie; Lakshminarayanan, Vasudevan

    2013-12-15

    The purpose of the study is to measure chaos in dynamic anterior surface aberrations and examine how it varies between the eyes of an individual. Noninvasive tear breakup time and dynamic corneal surface aberrations were measured for two open-eye intervals of 15 s. The maximal Lyapunov exponent (MLE) was calculated to test the nature of the fluctuations of the dynamic anterior surface aberrations. The average MLE for total higher-order aberration (HOA) was found to be small (+0.0102±0.0072) μm/s. No significant difference in MLE was found between the eyes for HOA (t-test; p=0.131). Data analysis was carried out for individual Zernike coefficients, including vertical prism as it gives a direct measure of the thickness of the tear film over time. The results show that the amount of chaos was small for each Zernike coefficient and not significantly correlated between the eyes.

  8. Removing the Threat of Diclofenac to Critically Endangered Asian Vultures

    PubMed Central

    Swan, Gerry; Naidoo, Vinasan; Cuthbert, Richard; Pain, Deborah J; Swarup, Devendra; Prakash, Vibhu; Taggart, Mark; Bekker, Lizette; Das, Devojit; Diekmann, Jörg; Diekmann, Maria; Killian, Elmarié; Meharg, Andy; Patra, Ramesh Chandra; Saini, Mohini; Wolter, Kerri

    2006-01-01

    Veterinary use of the nonsteroidal anti-inflammatory (NSAID) drug diclofenac in South Asia has resulted in the collapse of populations of three vulture species of the genus Gyps to the most severe category of global extinction risk. Vultures are exposed to diclofenac when scavenging on livestock treated with the drug shortly before death. Diclofenac causes kidney damage, increased serum uric acid concentrations, visceral gout, and death. Concern about this issue led the Indian Government to announce its intention to ban the veterinary use of diclofenac by September 2005. Implementation of a ban is still in progress late in 2005, and to facilitate this we sought potential alternative NSAIDs by obtaining information from captive bird collections worldwide. We found that the NSAID meloxicam had been administered to 35 captive Gyps vultures with no apparent ill effects. We then undertook a phased programme of safety testing of meloxicam on the African white-backed vulture Gyps africanus, which we had previously established to be as susceptible to diclofenac poisoning as the endangered Asian Gyps vultures. We estimated the likely maximum level of exposure (MLE) of wild vultures and dosed birds by gavage (oral administration) with increasing quantities of the drug until the likely MLE was exceeded in a sample of 40 G. africanus. Subsequently, six G. africanus were fed tissues from cattle which had been treated with a higher than standard veterinary course of meloxicam prior to death. In the final phase, ten Asian vultures of two of the endangered species (Gyps bengalensis, Gyps indicus) were dosed with meloxicam by gavage; five of them at more than the likely MLE dosage. All meloxicam-treated birds survived all treatments, and none suffered any obvious clinical effects. Serum uric acid concentrations remained within the normal limits throughout, and were significantly lower than those from birds treated with diclofenac in other studies. We conclude that meloxicam is of low toxicity to Gyps vultures and that its use in place of diclofenac would reduce vulture mortality substantially in the Indian subcontinent. Meloxicam is already available for veterinary use in India. PMID:16435886

  9. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  10. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.

  11. Validation of Sea levels from coastal altimetry waveform retracking expert system: a case study around the Prince William Sound in Alaska

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Deng, X.; Idris, N. H.

    2017-05-01

    This paper presents the validation of the Coastal Altimetry Waveform Retracking Expert System (CAWRES), a novel method to optimize Jason satellite altimetric sea levels from multiple retracking solutions. The validation is conducted over the region of Prince William Sound in Alaska, USA, where altimetric waveforms are perturbed by emerged land and sea states. The validation is twofold: first, comparison with existing retrackers (i.e. MLE4 and Ice) from the Sensor Geophysical Data Records (SGDR), and second, comparison with in-situ tide gauge data. In the first assessment, CAWRES generally outperforms the MLE4 and Ice retrackers: in 4 out of 6 cases, its improvement percentage (standard deviation of differences) is higher (lower) than those of the SGDR retrackers. CAWRES also presents the best performance in producing valid observations and has the lowest noise when compared to the SGDR retrackers. In the second assessment, CAWRES retracked sea level anomalies (SLAs) are consistent with those of the tide gauge. The accuracy of CAWRES retracked SLAs is slightly better than that of the MLE4; however, the Ice retracker performs better than both CAWRES and MLE4, suggesting that the empirically based retracker is more effective. The results demonstrate that CAWRES has the potential to be applied to coastal regions elsewhere.

  12. Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper, the performance of repeat-accumulate codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes using very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum-likelihood decoding.

  13. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full-response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit-error-probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift keying (MSK), corresponding to h = 0.5 and a constant-amplitude frequency pulse, is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones used in the past, both from the standpoint of simplicity of implementation and optimality of performance.

  14. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  15. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using mutual information and distance-measure criteria. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed-point iterations. The field structure of the data is also taken into account in developing the maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.

  16. Superresolution microscope image reconstruction by spatiotemporal object decomposition and association: application in resolving t-tubule structure in skeletal muscle

    PubMed Central

    Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie

    2014-01-01

    One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually deteriorated by two factors. One originates from the background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; the other is the low photon count of emitters in a single frame. With a fast acquisition rate, the activated emitters can last multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimation (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle. PMID:24921337

  17. Molecular detection of severe fever with thrombocytopenia syndrome and tick-borne encephalitis viruses in ixodid ticks collected from vegetation, Republic of Korea, 2014.

    PubMed

    Yun, Seok-Min; Lee, Ye-Ji; Choi, WooYoung; Kim, Heung-Chul; Chong, Sung-Tae; Chang, Kyu-Sik; Coburn, Jordan M; Klein, Terry A; Lee, Won-Ja

    2016-07-01

    Ticks play an important role in the transmission of arboviruses responsible for emerging infectious diseases and have a significant impact on human, veterinary, and wildlife health. In the Republic of Korea (ROK), little is known about the presence of tick-borne viruses and their vectors. A total of 21,158 ticks belonging to 3 genera and 6 species, collected at 6 provinces and 4 metropolitan areas in the ROK from March to October 2014, were assayed for selected tick-borne pathogens. Haemaphysalis longicornis (n=17,570) was the most numerous, followed by Haemaphysalis flava (n=3317), Ixodes nipponensis (n=249), Amblyomma testudinarium (n=11), Haemaphysalis phasiana (n=8), and Ixodes turdus (n=3). Ticks were pooled (adults 1-5, nymphs 1-30, and larvae 1-50) and tested by one-step reverse transcription polymerase chain reaction (RT-PCR) or nested RT-PCR for the detection of severe fever with thrombocytopenia syndrome virus (SFTSV), tick-borne encephalitis virus (TBEV), Powassan virus (POWV), Omsk hemorrhagic fever virus (OHFV), and Langat virus (LGTV). The overall maximum likelihood estimates (MLE) [estimated numbers of viral RNA positive ticks/1000 ticks] for SFTSV and TBEV were 0.95 and 0.43, respectively, while all pools were negative for POWV, OHFV, and LGTV. The purpose of this study was to determine the prevalence of SFTSV, TBEV, POWV, OHFV, and LGTV in ixodid ticks collected from vegetation in the ROK, to aid our understanding of the epidemiology of tick-borne viral diseases. Results from this study emphasize the need for continuous tick-based arbovirus surveillance to monitor the emergence of tick-borne diseases in the ROK. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.
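
    A pooled MLE of this kind can be computed by maximizing the binomial likelihood of the pool results over the per-tick infection probability p, since a pool of size s tests negative with probability (1-p)^s. The sketch below illustrates the idea with invented pool sizes and outcomes, not the study's data.

      # Pooled-testing MLE of per-tick infection probability via 1-D optimization.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def pooled_mle(sizes, positive):
          sizes, positive = np.asarray(sizes), np.asarray(positive, bool)
          def nll(p):
              log_neg = sizes * np.log1p(-p)          # log P(pool negative) = s*log(1-p)
              log_pos = np.log1p(-np.exp(log_neg))    # log P(pool positive)
              return -(log_pos[positive].sum() + log_neg[~positive].sum())
          return minimize_scalar(nll, bounds=(1e-8, 0.5), method="bounded").x

      sizes = [30] * 50 + [5] * 20            # hypothetical nymph and adult pools
      positive = [True] * 2 + [False] * 68    # 2 positive pools out of 70
      print("MLE per 1000 ticks:", 1000 * pooled_mle(sizes, positive))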

  18. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
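
    A minimal sketch of the MLE step for the two-parameter generalized (exponentiated) exponential density f(x) = a*l*exp(-l*x)*(1 - exp(-l*x))**(a-1); the paper's three-parameter version adds a location shift, omitted here for brevity. The data below are synthetic, not the catalogue above.

      # MLE for the generalized exponential distribution by direct minimization
      # of the negative log-likelihood (parameters kept positive via log scale).
      import numpy as np
      from scipy.optimize import minimize

      def nll(theta, x):
          a, l = np.exp(theta)                # shape alpha, rate lambda
          u = -l * x
          return -np.sum(np.log(a) + np.log(l) + u + (a - 1) * np.log1p(-np.exp(u)))

      rng = np.random.default_rng(2)
      # Inverse-CDF sampling: F(x) = (1 - exp(-l*x))**a with a=1.8, l=0.05
      u = rng.uniform(size=200)
      x = -np.log1p(-u ** (1 / 1.8)) / 0.05

      res = minimize(nll, x0=np.log([1.0, 1.0 / x.mean()]), args=(x,), method="Nelder-Mead")
      print("alpha, lambda MLE:", np.exp(res.x))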

  19. Kinetics of Huperzine A Dissociation from Acetylcholinesterase via Multiple Unbinding Pathways.

    PubMed

    Rydzewski, J; Jakubowski, R; Nowak, W; Grubmüller, H

    2018-06-12

    The dissociation of huperzine A (hupA) from Torpedo californica acetylcholinesterase (TcAChE) was investigated by 4 μs unbiased and biased all-atom molecular dynamics (MD) simulations in explicit solvent. We performed our study using memetic sampling (MS) for the determination of reaction pathways (RPs), metadynamics to calculate free energy, and maximum-likelihood estimation (MLE) to recover kinetic rates from unbiased MD simulations. Our simulations suggest that the dissociation of hupA occurs mainly via two RPs: a front door along the axis of the active-site gorge (pwf) and through a new transient side door (pws), i.e., formed by the Ω-loop (residues 67-94 of TcAChE). An analysis of the inhibitor unbinding along the RPs suggests that pws is opened transiently after hupA and the Ω-loop reach a low free-energy transition state characterized by the orientation of the pyridone group of the inhibitor directed toward the Ω-loop plane. Unlike pws, pwf does not require large structural changes in TcAChE to be accessible. The estimated free energies and rates agree well with available experimental data. The dissociation rates along the unbinding pathways are similar, suggesting that the dissociation of hupA along pws is likely to be relevant. This indicates that perturbations to hupA-TcAChE interactions could potentially induce pathway hopping. In summary, our results characterize the slow-onset inhibition of TcAChE by hupA, which may provide the structural and energetic bases for the rational design of the next-generation slow-onset inhibitors with optimized pharmacokinetic properties for the treatment of Alzheimer's disease.
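
    For intuition only: if unbinding is treated as a first-order process, the MLE of the dissociation rate from a set of unbiased trajectories has a simple closed form. The sketch below is a deliberate simplification of the paper's estimator, with invented times; runs that end with the ligand still bound enter as right-censored observations.

      # Censored-exponential MLE of a dissociation rate:
      # k_hat = (number of observed events) / (total simulated time).
      import numpy as np

      t_event = np.array([0.42, 1.8, 0.95, 2.6])   # microseconds to unbinding (hypothetical)
      t_censored = np.array([4.0, 4.0])            # runs ending still bound (hypothetical)

      k_hat = len(t_event) / (t_event.sum() + t_censored.sum())
      print("dissociation rate MLE (1/us):", k_hat)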

  20. Titan dune heights retrieval by using Cassini Radar Altimeter

    NASA Astrophysics Data System (ADS)

    Mastrogiuseppe, M.; Poggiali, V.; Seu, R.; Martufi, R.; Notarnicola, C.

    2014-02-01

    The Cassini Radar is a Ku-band multimode instrument capable of providing topographic and mapping information. During several of the 93 Titan flybys performed by Cassini, the radar collected a large amount of data, observing many dune fields in multiple modes such as SAR, altimeter, scatterometer and radiometer. Understanding dune characteristics, such as shape and height, will reveal important clues about Titan's climatic and geological history and provide a better understanding of aeolian processes on Earth. Dunes are believed to be sculpted by the action of the wind, weak at the surface but still able to activate the process of sand-sized particle transport. This work aims to estimate dune heights by modeling the shape of the real Cassini Radar Altimeter echoes. Joint processing of SAR/altimeter data has been adopted to localize the altimeter footprints overlapping dune fields, excluding non-dune features. The height of the dunes was estimated by applying maximum likelihood estimation along with a non-coherent electromagnetic (EM) echo model, thus comparing the real averaged waveform with the theoretical curves. Such analysis has been performed over the Fensal dune field observed during the T30 flyby (May 2007). As a result, we found that the estimated peak-to-trough dune heights were on the order of 60-120 m. The estimation accuracy and robustness of the MLE for different complex scenarios were assessed via radar simulations and a Monte Carlo approach. We simulated different dune and interdune compositions and roughnesses over a large set of values, verifying that, in the range of possible Titan environmental conditions, these two surface parameters have weak effects on the standard deviation of our dune height estimates. Results presented here are the first part of a study that will cover all of Titan's sand seas.

  1. rFRET: A comprehensive, Matlab-based program for analyzing intensity-based ratiometric microscopic FRET experiments.

    PubMed

    Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János

    2016-04-01

    Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.

  2. Superresolution microscope image reconstruction by spatiotemporal object decomposition and association: application in resolving t-tubule structure in skeletal muscle.

    PubMed

    Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie

    2014-05-19

    One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually deteriorated by two factors. One originates from the background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; the other is the low photon count of emitters in a single frame. With a fast acquisition rate, the activated emitters can last multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimation (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle.

  3. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  4. Neuroprotective efficiency of Mangifera indica leaves extract on cadmium-induced cortical damage in rats.

    PubMed

    Al Omairi, Naif E; Radwan, Omyma K; Alzahrani, Yahea A; Kassab, Rami B

    2018-03-20

    Due to its high ability to cross the blood-brain barrier, cadmium (Cd) causes severe neurological damage. Hence, the purpose of this study was to investigate the possible protective effect of Mangifera indica leaf extract (MLE) against Cd-induced neurotoxicity. Rats were divided into eight groups. Group 1 served as the vehicle control group; groups 2, 3 and 4 received MLE (100, 200 and 300 mg/kg b.wt, respectively). Group 5 was treated with CdCl2 (5 mg/kg b.wt). Groups 6, 7 and 8 were co-treated with MLE and CdCl2 at the same doses. All treatments were orally administered for 28 days. Cortical oxidative stress biomarkers [malondialdehyde (MDA), nitric oxide (NO), glutathione content (GSH), oxidized glutathione (GSSG), 8-hydroxy-2-deoxyguanosine (8-OHdG), superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx)], inflammatory cytokines [tumor necrosis factor (TNF-α) and interleukin-1β (IL-1β)], biogenic amines [norepinephrine (NE), dopamine (DA) and serotonin (5-HT)], some biogenic metabolites [3,4-dihydroxyphenylacetic acid (DOPAC), homovanillic acid (HVA) and 5-hydroxyindoleacetic acid (5-HIAA)], acetylcholinesterase (AChE) activity and the purinergic compound adenosine triphosphate (ATP) were determined in the frontal cortex of rats. Results indicated that Cd increased levels of the oxidative biomarkers (MDA, NO, GSSG and 8-OHdG) and the inflammatory mediators (TNF-α and IL-1β), while it lowered GSH, SOD, CAT, GPx and ATP levels. Cd also significantly decreased AChE activity and the tested biogenic amines, while it elevated the tested metabolites in the frontal cortex. Levels of all disrupted cortical parameters were alleviated by MLE co-administration. MLE had an apparent protective effect against Cd-induced neurotoxicity, particularly at the medium and higher doses, which may be due to its antioxidant and anti-inflammatory activities.

  5. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  6. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  7. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  8. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
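
    A modern equivalent of the program's two analyses can be sketched with statsmodels rather than the original Pascal code (the use of statsmodels and the simulated data are this sketch's assumptions): Logit for the unconditional fit and ConditionalLogit for the conditional fit on matched sets.

      # Unconditional vs. conditional ML logistic regression on 1:1 matched data.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.conditional_models import ConditionalLogit

      rng = np.random.default_rng(3)
      n_sets = 100
      groups = np.repeat(np.arange(n_sets), 2)     # matched case-control set ids
      y = np.tile([1, 0], n_sets)                  # one case, one control per set
      x = rng.normal(size=2 * n_sets) + 0.8 * y    # exposure, higher among cases

      uncond = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
      cond = ConditionalLogit(y, x.reshape(-1, 1), groups=groups).fit()
      print("unconditional OR:", np.exp(uncond.params[1]))
      print("conditional OR:  ", np.exp(cond.params[0]))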

  9. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest: some elements of the log-likelihood function increase exponentially as the precision parameter increases, leading to numerical instabilities. In this study we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information needed to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and that relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and whose mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
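
    The numerical trick described above can be illustrated as follows. Note that the marginal-density form used here, a Briden-and-Ward-type expression f(I) proportional to (k/(2 sinh k)) cos(I) I0(k cos I cos I0) exp(k sin I sin I0), is this sketch's assumption, not a quotation of the authors' exact likelihood; the point is only how the large exponentials cancel in log space.

      # Stable marginal-Fisher log-likelihood for inclination-only data:
      # log(2 sinh k) = k + log(1 - exp(-2k)) and log I0(a) = log i0e(a) + |a|,
      # so all exponentially large terms are handled analytically.
      import numpy as np
      from scipy.special import i0e   # i0e(a) = exp(-|a|) * I0(a)

      def loglik(inc_deg, inc0_deg, kappa):
          I, I0_ = np.radians(inc_deg), np.radians(inc0_deg)
          a = kappa * np.cos(I) * np.cos(I0_)
          return np.sum(np.log(kappa) - (kappa + np.log1p(-np.exp(-2 * kappa)))
                        + np.log(np.cos(I)) + np.log(i0e(a)) + np.abs(a)
                        + kappa * np.sin(I) * np.sin(I0_))

      incs = np.array([62.0, 71.5, 68.2, 75.0, 64.8])   # example inclinations (deg)
      print(loglik(incs, inc0_deg=69.0, kappa=30.0))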

  10. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, used as optimization criteria, should be able to locate a similar and unknown optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients; the log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
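
    The comparison can be reproduced in miniature for a distribution with a closed-form CRPS: for a Gaussian, CRPS(y; mu, sigma) = sigma * [z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)] with z = (y - mu)/sigma. The sketch below fits mu and sigma by both criteria on synthetic data (our illustration, not the study's code).

      # Gaussian fit by maximum likelihood vs. minimum CRPS.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def nll(theta, y):
          mu, sigma = theta[0], np.exp(theta[1])
          return -np.sum(norm.logpdf(y, mu, sigma))

      def mean_crps(theta, y):
          mu, sigma = theta[0], np.exp(theta[1])
          z = (y - mu) / sigma
          return np.mean(sigma * (z * (2 * norm.cdf(z) - 1)
                                  + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

      y = np.random.default_rng(4).normal(2.0, 1.5, size=500)
      for score in (nll, mean_crps):
          res = minimize(score, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
          print(score.__name__, "-> mu, sigma:", res.x[0], np.exp(res.x[1]))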

  11. Correlation between the Hurst exponent and the maximal Lyapunov exponent: Examining some low-dimensional conservative maps

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2018-01-01

    The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with Spearman ranks ρ = 0.83 and ρ = 0.75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution from the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and then, using 2-2.4 × 10^5 mLEs, the HEs were retrieved. The ML procedure made it possible to reproduce the structure of the mixed spaces in great detail.
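
    A minimal stand-in for the ML step (with synthetic data; the paper trains on exponents computed over the mixed parameter-initial-condition space): learn the HE-vs-mLE relation with a nearest-neighbour regressor and use it to predict HEs from mLEs alone.

      # Nearest-neighbour regression of Hurst exponents from Lyapunov exponents.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(5)
      mle_train = rng.uniform(0, 0.5, 4000)                          # training mLEs
      he_train = 0.5 - 0.6 * mle_train + rng.normal(0, 0.03, 4000)   # correlated HEs

      model = KNeighborsRegressor(n_neighbors=1).fit(mle_train.reshape(-1, 1), he_train)
      he_pred = model.predict(rng.uniform(0, 0.5, 10).reshape(-1, 1))
      print(he_pred)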

  12. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter-free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.
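
    A hedged sketch of the cluster log-likelihood as we recall it from Giada and Marsili's paper (treat the exact expression as an assumption to verify against the original): for each cluster s of size n_s > 1 with internal correlation c_s = sum over i,j in s of C_ij, the contribution is 0.5 * [log(n_s/c_s) + (n_s - 1) * log((n_s^2 - n_s)/(n_s^2 - c_s))], which is zero for uncorrelated members and grows with internal correlation.

      # Likelihood of a candidate cluster structure from the correlation matrix C.
      import numpy as np

      def gm_likelihood(C, labels):
          L = 0.0
          for s in np.unique(labels):
              idx = np.where(labels == s)[0]
              n = len(idx)
              if n < 2:
                  continue                      # singletons contribute zero
              c = C[np.ix_(idx, idx)].sum()     # internal correlation of cluster s
              L += 0.5 * (np.log(n / c) + (n - 1) * np.log((n * n - n) / (n * n - c)))
          return L

      rng = np.random.default_rng(6)
      x = rng.normal(size=(4, 500))
      x[1] += 0.8 * x[0]                        # series 0-1 correlated
      x[3] += 0.8 * x[2]                        # series 2-3 correlated
      print(gm_likelihood(np.corrcoef(x), labels=np.array([0, 0, 1, 1])))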

  13. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 GBPS. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing the maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  14. PAMLX: a graphical user interface for PAML.

    PubMed

    Xu, Bo; Yang, Ziheng

    2013-12-01

    This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.

  15. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  16. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  17. Maximum likelihood phase-retrieval algorithm: applications.

    PubMed

    Nahrstedt, D A; Southwell, W H

    1984-12-01

    The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.

  18. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
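
    The Poisson comparison step amounts to evaluating log L = sum_i (n_i ln λ_i - λ_i - ln n_i!) for observed bin counts n_i and model-predicted means λ_i, which remains valid when counts are small. A minimal sketch with invented counts:

      # Poisson log-likelihood of observed binned counts given model predictions.
      import numpy as np
      from scipy.special import gammaln

      def poisson_loglik(n, lam):
          n, lam = np.asarray(n, float), np.asarray(lam, float)
          return np.sum(n * np.log(lam) - lam - gammaln(n + 1))

      observed = [0, 3, 7, 12, 5, 1]              # e.g. detected pulsars per bin
      predicted = [0.4, 2.8, 8.1, 10.5, 6.0, 1.3] # model-predicted means per bin
      print(poisson_loglik(observed, predicted))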

  19. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.

  20. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  1. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  2. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  3. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  4. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  5. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  6. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  7. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  8. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  9. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  10. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  11. MSFC shuttle lightning research

    NASA Technical Reports Server (NTRS)

    Vaughan, Otha H., Jr.

    1993-01-01

    The shuttle mesoscale lightning experiment (MLE), flown on earlier shuttle flights and most recently on STS-31, -32, -35, -37, -38, -40, -41, and -48, has continued to focus on obtaining additional quantitative measurements of lightning characteristics and on creating a database for use in demonstrating observation simulations for future spaceborne lightning mapping systems. These flights are also providing design criteria data for a proposed shuttle MLE-type lightning research instrument called the mesoscale lightning observational sensors (MELOS), currently under development at MSFC.

  12. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
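
The exhaustive-search baseline that this article compares against is easy to state: under BPSK signaling on an AWGN channel, the maximum-likelihood codeword is the one whose symbol sequence correlates most strongly with the received soft values. A minimal sketch for the (7,4) Hamming code; the code choice and channel parameters are illustrative assumptions, not taken from the article:

```python
import numpy as np
from itertools import product

# Generator matrix of the (7,4) Hamming code (systematic form).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All 16 codewords, mapped to BPSK symbols (bit 0 -> +1, bit 1 -> -1).
codewords = np.array([np.dot(m, G) % 2 for m in product([0, 1], repeat=4)])
symbols = 1 - 2 * codewords

def ml_decode(received):
    """Exhaustive ML soft-decision decoding: maximize correlation,
    which is equivalent to minimizing Euclidean distance under AWGN."""
    return codewords[np.argmax(symbols @ received)]

# Transmit the all-zero codeword through an AWGN channel and decode.
rng = np.random.default_rng(42)
received = symbols[0] + rng.normal(0, 0.8, size=7)
print(ml_decode(received))
```

The A* search of the article reaches the same argmax while expanding only a fraction of the 2^k candidates, which is what makes codes of length 64 feasible.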

  13. An evaluation of percentile and maximum likelihood estimators of Weibull parameters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...

  14. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  15. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  16. Unclassified Publications of Lincoln Laboratory, 1 January - 31 December 1990. Volume 16

    DTIC Science & Technology

    1990-12-31

    [Index fragment from the report's publication listing; legible entries include "Maximum Likelihood Algorithm" (JA-6241) and "Maximum Likelihood Estimator" (JA-6476).]

  17. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  18. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  19. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  20. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a posteriori (EAP), modal a posteriori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
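
Of the estimators listed, the ML estimator is the most direct to sketch: the ability estimate is the θ that maximizes the likelihood of the observed response pattern under the IRT model. A minimal sketch for the two-parameter logistic model with hypothetical item parameters, using a grid search in place of a Newton iteration:

```python
import numpy as np

# Hypothetical 2PL item parameters: discriminations a and difficulties b.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])          # one examinee's 0/1 answers

def log_likelihood(theta):
    # Correct-response probability for each item under the 2PL model.
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

grid = np.linspace(-4, 4, 801)
theta_ml = grid[np.argmax([log_likelihood(t) for t in grid])]
print(theta_ml)
```

The EAP, MAP, and WML estimators differ only in what is maximized or averaged: a posterior instead of the likelihood, or a weighted likelihood that removes first-order bias.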

  1. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  2. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Treesearch

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...

  3. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
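
The report's quasilinearization and covariance bookkeeping are elaborate, but the core output-error idea is compact: choose the parameter values whose simulated response makes the measured time history most likely. A minimal sketch under simplifying assumptions (a hypothetical first-order model x' = a·x + b·u, known Gaussian measurement noise, and scipy's general-purpose simplex minimizer standing in for the report's iterative estimator):

```python
import numpy as np
from scipy.optimize import minimize

dt, n = 0.05, 200
u = np.ones(n)                       # step control input
rng = np.random.default_rng(0)

def simulate(a, b):
    """Euler integration of x' = a*x + b*u from x(0) = 0."""
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x

# Synthetic "flight data": true parameters a = -1.5, b = 2.0 plus noise.
y = simulate(-1.5, 2.0) + rng.normal(0, 0.05, n)

def neg_log_likelihood(theta):
    a, b = theta
    r = y - simulate(a, b)
    return 0.5 * np.sum(r ** 2)      # Gaussian NLL, up to constants

fit = minimize(neg_log_likelihood, x0=[-1.0, 1.0], method="Nelder-Mead")
print(fit.x)                         # should recover roughly (-1.5, 2.0)
```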

  4. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  5. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  6. Measurement of In Vitro Single Cell Temperature by Novel Thermocouple Nanoprobe in Acute Lung Injury Models.

    PubMed

    Wang, Xing; Chen, Qiuhua; Tian, Wenjuan; Wang, Jianqing; Cheng, Lu; Lu, Jun; Chen, Mingqi; Pei, Yinhao; Li, Can; Chen, Gong; Gu, Ning

    2017-01-01

    Altered energy metabolism may be one cause of acute lung injury (ALI), but its detailed features at the single-cell level remain unclear. Changes in intracellular temperature and adenosine triphosphate (ATP) concentration within the single cell may help to understand the role of energy metabolism in causing ALI. ALI in vitro models were established by treating mouse lung epithelial (MLE-12) cells with lipopolysaccharide (LPS), hydrogen peroxide (H2O2), hydrochloric acid (HCl) and cobalt chloride (CoCl2), respectively. A 100 nm micro thermocouple probe (TMP) was inserted into the cytosol by a micromanipulation system, and thermoelectric readings were recorded to calculate the intracellular temperature based on a standard curve. The total ATP contents of the MLE-12 cells were evaluated at different time intervals after the treatments. A significant increase in intracellular temperature was observed after 10 or 20 μg/L LPS and HCl treatments. HCl increased the temperature in a dose-dependent manner. On the contrary, H2O2 induced a significant decline in intracellular temperature after treatment. No significant difference in intracellular temperature was observed after CoCl2 exposure. The intracellular ATP levels decreased in a time-dependent manner after treatment with H2O2 and HCl, while LPS and CoCl2 had no significant effect on ATP levels. The intracellular temperature responses varied across the different ALI models. The ATP concentration in the MLE-12 cells played a part in the intracellular temperature changes, although no direct correlation was observed between the intracellular temperature and the ATP concentration in the MLE-12 cells.

  7. A Comparative Study on Phytochemical Profiles and Biological Activities of Sclerocarya birrea (A.Rich.) Hochst Leaf and Bark Extracts

    PubMed Central

    Russo, Daniela; Miglionico, Rocchina; Carmosino, Monica; Bisaccia, Faustino; Armentano, Maria Francesca

    2018-01-01

    Sclerocarya birrea (A.Rich.) Hochst (Anacardiaceae) is a savannah tree that has long been used in sub-Saharan Africa as a medicinal remedy for numerous ailments. The purpose of this study was to increase the scientific knowledge about this plant by evaluating the total content of polyphenols, flavonoids, and tannins in the methanol extracts of the leaves and bark (MLE and MBE, respectively), as well as the in vitro antioxidant activity and biological activities of these extracts. Reported results show that MLE is rich in flavonoids (132.7 ± 10.4 mg of quercetin equivalents/g), whereas MBE has the highest content of tannins (949.5 ± 29.7 mg of tannic acid equivalents/g). The antioxidant activity was measured using four different in vitro tests: β-carotene bleaching (BCB), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), O2−•, and nitric oxide (NO•) assays. In all cases, MBE was the most active compared to MLE and the standards used (Trolox and ascorbic acid). Furthermore, MBE and MLE were tested to evaluate their activity in HepG2 and fibroblast cell lines. A higher cytotoxic activity of MBE was evidenced and confirmed by more pronounced alterations in cell morphology. MBE induced cell death, triggering the intrinsic apoptotic pathway by reactive oxygen species (ROS) generation, which led to a loss of mitochondrial membrane potential with subsequent cytochrome c release from the mitochondria into the cytosol. Moreover, MBE showed lower cytotoxicity in normal human dermal fibroblasts, suggesting its potential as a selective anticancer agent. PMID:29316691

  8. A Comparative Study on Phytochemical Profiles and Biological Activities of Sclerocarya birrea (A.Rich.) Hochst Leaf and Bark Extracts.

    PubMed

    Russo, Daniela; Miglionico, Rocchina; Carmosino, Monica; Bisaccia, Faustino; Andrade, Paula B; Valentão, Patrícia; Milella, Luigi; Armentano, Maria Francesca

    2018-01-08

    Sclerocarya birrea (A.Rich.) Hochst (Anacardiaceae) is a savannah tree that has long been used in sub-Saharan Africa as a medicinal remedy for numerous ailments. The purpose of this study was to increase the scientific knowledge about this plant by evaluating the total content of polyphenols, flavonoids, and tannins in the methanol extracts of the leaves and bark (MLE and MBE, respectively), as well as the in vitro antioxidant activity and biological activities of these extracts. Reported results show that MLE is rich in flavonoids (132.7 ± 10.4 mg of quercetin equivalents/g), whereas MBE has the highest content of tannins (949.5 ± 29.7 mg of tannic acid equivalents/g). The antioxidant activity was measured using four different in vitro tests: β-carotene bleaching (BCB), 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), O2−•, and nitric oxide (NO•) assays. In all cases, MBE was the most active compared to MLE and the standards used (Trolox and ascorbic acid). Furthermore, MBE and MLE were tested to evaluate their activity in HepG2 and fibroblast cell lines. A higher cytotoxic activity of MBE was evidenced and confirmed by more pronounced alterations in cell morphology. MBE induced cell death, triggering the intrinsic apoptotic pathway by reactive oxygen species (ROS) generation, which led to a loss of mitochondrial membrane potential with subsequent cytochrome c release from the mitochondria into the cytosol. Moreover, MBE showed lower cytotoxicity in normal human dermal fibroblasts, suggesting its potential as a selective anticancer agent.

  9. Hyperthermia promotes and prevents respiratory epithelial apoptosis through distinct mechanisms.

    PubMed

    Nagarsekar, Ashish; Tulapurkar, Mohan E; Singh, Ishwar S; Atamas, Sergei P; Shah, Nirav G; Hasday, Jeffrey D

    2012-12-01

    Hyperthermia has been shown to confer cytoprotection and to augment apoptosis in different experimental models. We analyzed the mechanisms of both effects in the same mouse lung epithelial (MLE) cell line (MLE15). Exposing MLE15 cells to heat shock (HS; 42°C, 2 h) or febrile-range hyperthermia (39.5°C) concurrent with activation of the death receptors, TNF receptor 1 or Fas, greatly accelerated apoptosis, which was detectable within 30 minutes and was associated with accelerated activation of caspase-2, -8, and -10, and the proapoptotic protein, Bcl2-interacting domain (Bid). Caspase-3 activation and cell death were partially blocked by inhibitors targeting all three initiator caspases. Cells expressing the IκB superrepressor were more susceptible than wild-type cells to TNF-α-induced apoptosis at 37°C, but HS and febrile-range hyperthermia still increased apoptosis in these cells. Delaying HS for 3 hours after TNF-α treatment abrogated its proapoptotic effect in wild-type cells, but not in IκB superrepressor-expressing cells, suggesting that TNF-α stimulates delayed resistance to the proapoptotic effects of HS through an NF-κB-dependent mechanism. Pre-exposure to 2-hour HS beginning 6 to 16 hours before TNF-α treatment or Fas activation reduced apoptosis in MLE15 cells. The antiapoptotic effects of HS pretreatment were reduced in TNF-α-treated embryonic fibroblasts from heat shock factor-1 (HSF1)-deficient mice, but the proapoptotic effects of concurrent HS were preserved. Thus, depending on the temperature and timing relative to death receptor activation, hyperthermia can exert pro- and antiapoptotic effects through distinct mechanisms.

  10. Effects of aeration and internal recycle flow on nitrous oxide emissions from a modified Ludzak-Ettinger process fed with glycerol.

    PubMed

    Song, Kang; Suenaga, Toshikazu; Harper, Willie F; Hori, Tomoyuki; Riya, Shohei; Hosomi, Masaaki; Terada, Akihiko

    2015-12-01

    Nitrous oxide (N2O) is emitted from the modified Ludzack-Ettinger (MLE) process, a common activated sludge system, and this emission requires mitigation. The effects of aeration rates and internal recycle flow (IRF) ratios on N2O emission were investigated in an MLE process fed with glycerol. Reducing the aeration rate from 1.5 to 0.5 L/min increased the gaseous N2O concentration from the aerobic tank and the dissolved N2O concentration in the anoxic tank by 54.4 and 53.4 %, respectively. During the period of higher aeration, the N2O-N conversion ratio was 0.9 % and the potential N2O reducers were predominantly Rhodobacter, which accounted for 21.8 % of the total population. Increasing the IRF ratio from 3.6 to 7.2 decreased the N2O emission rate from the aerobic tank and the dissolved N2O concentration in the anoxic tank by 56 and 48 %, respectively. This study suggests effective N2O mitigation strategies for MLE systems.

  11. Using embryology screencasts: a useful addition to the student learning experience?

    PubMed

    Evans, Darrell J R

    2011-01-01

    Although podcasting has been a well used resource format in the last few years as a way of improving the student learning experience, the inclusion of enhanced audiovisual formats such as screencasts has been less used, despite the advantage that they work well for both visual and auditory learners. This study examines the use of and student reaction to a set of screencasts introduced to accompany embryology lectures within a second year module at Brighton and Sussex Medical School. Five mini-lecture screencasts and one review quiz screencast were produced as digital recordings of computer screen output with audio narration and released to students via the managed learning environment (MLE). Analysis of server log information from the MLE showed that the screencasts were accessed by many of the students in the cohort, although the exact numbers were variable depending on the screencast. Students accessed screencasts at different times of the day and over the whole of the access period, although maximum downloads were predictably recorded leading up to the written examination. Quantitative and qualitative feedback demonstrated that most students viewed the screencasts favorably in terms of usefulness to their learning, and end-of-module written examination scores suggest that the screencasts may have had a positive effect on student outcome when compared with previous student attainment. Overall, the development of a series of embryology screencasts to accompany embryology lecture sessions appears to be a useful addition to learning for most students and not simply an innovation that checks the box of "technology engagement." Copyright © 2011 American Association of Anatomists.

  12. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  13. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  14. 12-mode OFDM transmission using reduced-complexity maximum likelihood detection.

    PubMed

    Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold

    2015-02-01

    We report 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM transmission over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss.
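
For a single received sample, maximum likelihood detection under additive Gaussian noise reduces to picking the nearest candidate point, and detector complexity scales with the size of the candidate set, which is why mode-multiplexed systems motivate reduced-complexity variants. A minimal single-sample QPSK sketch (illustrative, far simpler than the joint 12-mode detection of the paper):

```python
import numpy as np

# Unit-energy QPSK constellation.
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def ml_detect(r):
    """ML detection under AWGN: choose the nearest constellation point."""
    return qpsk[np.argmin(np.abs(qpsk - r) ** 2)]

rng = np.random.default_rng(0)
sent = qpsk[2]
received = sent + (rng.normal(0, 0.3) + 1j * rng.normal(0, 0.3))
print(ml_detect(received), sent)
```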

  15. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  16. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  17. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  18. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  19. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  20. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  1. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  2. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
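
The interplay described here, a Kalman filter producing state estimates while a likelihood computation identifies model parameters from the same data stream, can be shown in one dimension. A minimal sketch assuming a hypothetical scalar random-walk model rather than attitude dynamics: each candidate process-noise level is scored by the Gaussian log-likelihood of the filter's innovation sequence, and the maximizer is selected.

```python
import numpy as np

def innovations_log_likelihood(z, q, r=0.5 ** 2):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k;
    returns the Gaussian log-likelihood of the innovation sequence."""
    x, p, ll = 0.0, 1.0, 0.0
    for zk in z:
        p = p + q                     # time update of state covariance
        s = p + r                     # innovation variance
        nu = zk - x                   # innovation (one-step prediction error)
        ll += -0.5 * (np.log(2 * np.pi * s) + nu ** 2 / s)
        k = p / s                     # Kalman gain
        x = x + k * nu                # measurement update
        p = (1 - k) * p
    return ll

rng = np.random.default_rng(3)
truth = np.cumsum(rng.normal(0, 0.2, 500))   # true process noise q = 0.04
z = truth + rng.normal(0, 0.5, 500)
candidates = [0.001, 0.01, 0.04, 0.1, 0.5]
print(max(candidates, key=lambda q: innovations_log_likelihood(z, q)))
```

A recursive identifier like the one in the paper refines the parameter continuously instead of scoring a fixed candidate grid, but the innovation likelihood being maximized is the same quantity.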

  3. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.

    2017-10-01

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ˜21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
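
The unbinned method's gain comes from treating each photon's azimuthal scattering angle individually instead of histogramming. A minimal sketch of an unbinned ML fit of the standard modulation curve p(φ) = (1 + a·cos 2(φ − φ0)) / 2π, using simulated angles rather than COSI data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
# Simulate azimuthal scattering angles with modulation a = 0.3, phi0 = 0.5
# by rejection sampling from p(phi) = (1 + a*cos(2*(phi - phi0))) / (2*pi).
a_true, phi0_true = 0.3, 0.5
phi = []
while len(phi) < 5000:
    x = rng.uniform(0, 2 * np.pi)
    if rng.uniform(0, 1 + a_true) < 1 + a_true * np.cos(2 * (x - phi0_true)):
        phi.append(x)
phi = np.array(phi)

def nll(theta):
    a, phi0 = theta
    if not -1 < a < 1:
        return np.inf                 # keep the density nonnegative
    p = (1 + a * np.cos(2 * (phi - phi0))) / (2 * np.pi)
    return -np.sum(np.log(p))

fit = minimize(nll, x0=[0.1, 0.0], method="Nelder-Mead")
print(fit.x)   # near (0.3, 0.5), up to the pi-periodicity in phi0
```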

  4. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  5. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.

  6. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  7. The Construct Validity of Higher Order Structure-of-Intellect Abilities in a Battery of Tests Emphasizing the Product of Transformations: A Confirmatory Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; And Others

    1982-01-01

    A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)

  8. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  9. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  10. Comparison of the Efficacy of Atopalm(®) Multi-Lamellar Emulsion Cream and Physiogel(®) Intensive Cream in Improving Epidermal Permeability Barrier in Sensitive Skin.

    PubMed

    Jeong, Sekyoo; Lee, Sin Hee; Park, Byeong Deog; Wu, Yan; Man, George; Man, Mao-Qiang

    2016-03-01

    The management of sensitive skin, which affects over 60% of the general population, has been a long-standing challenge for both patients and clinicians. Because a defective epidermal permeability barrier is one of the clinical features of sensitive skin, barrier-enhancing products could be an optimal regimen for sensitive skin. In the present study, we evaluated the efficacy and safety of two barrier-enhancing products, Atopalm® Multi-Lamellar Emulsion (MLE) Cream and Physiogel® Intensive Cream, for sensitive skin. Sixty patients with sensitive skin, aged 22-40 years, were randomly assigned to one group treated with Atopalm MLE Cream or another group treated with Physiogel Intensive Cream twice daily for 4 weeks. Lactic acid stinging test scores (LASTS), stratum corneum (SC) hydration and transepidermal water loss (TEWL) were assessed before and 2 and 4 weeks after the treatment. Atopalm MLE Cream significantly lowered TEWL after 2 and 4 weeks of treatment (p < 0.01). In contrast, Physiogel Intensive Cream significantly increased TEWL after 2 weeks of treatment (p < 0.05), while TEWL significantly decreased after 4 weeks of treatment. Moreover, both Atopalm MLE Cream and Physiogel Intensive Cream significantly increased SC hydration and improved LASTS after 4 weeks of treatment. Both barrier-enhancing products are effective and safe for improving epidermal functions, including permeability barrier, SC hydration and LASTS, in sensitive skin, and could be a valuable alternative for the management of sensitive skin.

  11. Combined Treatment of Mulberry Leaf and Fruit Extract Ameliorates Obesity-Related Inflammation and Oxidative Stress in High Fat Diet-Induced Obese Mice

    PubMed Central

    Lim, Hyun Hwa; Yang, Soo Jin; Kim, Yuri; Lee, Myoungsook

    2013-01-01

    The aim of this study was to investigate whether a combined treatment of mulberry leaf extract (MLE) and mulberry fruit extract (MFE) was effective for improving obesity and obesity-related inflammation and oxidative stress in high fat (HF) diet-induced obese mice. After obesity was induced by HF diet for 9 weeks, the mice were divided into eight groups: (1) lean control, (2) HF diet-induced obese control, (3) 1:1 ratio of MLE and MFE at doses of 200 (L1:1), (4) 500 (M1:1), and (5) 1000 (H1:1) mg/kg per day, and (6) 2:1 ratio of MLE and MFE at doses of 200 (L2:1), (7) 500 (M2:1), and (8) 1000 (H2:1) mg/kg per day. All six combined treatments significantly lowered body weight gain, plasma triglycerides, and lipid peroxidation levels after the 12-week treatment period. Additionally, all combined treatments suppressed hepatic fat accumulation and reduced epididymal adipocyte size. These improvements were accompanied by decreases in protein levels of proinflammatory markers (tumor necrosis factor-alpha, C-reactive protein, interleukin-1, inducible nitric oxide synthase, and phospho-nuclear factor-kappa B inhibitor alpha) and oxidative stress markers (heme oxygenase-1 and manganese superoxide dismutase). M2:1 was the most effective ratio and dose for the improvements in obesity, inflammation, and oxidative stress. These results demonstrate that a combined MLE and MFE treatment ameliorated obesity and obesity-related metabolic stressors and suggest that it can be used as a means to prevent and/or treat obesity. PMID:23957352

  12. Resting-state functional magnetic resonance imaging of the subthalamic microlesion and stimulation effects in Parkinson's disease: Indications of a principal role of the brainstem

    PubMed Central

    Holiga, Štefan; Mueller, Karsten; Möller, Harald E.; Urgošík, Dušan; Růžička, Evžen; Schroeter, Matthias L.; Jech, Robert

    2015-01-01

    During implantation of deep-brain stimulation (DBS) electrodes in the target structure, neurosurgeons and neurologists commonly observe a “microlesion effect” (MLE), which occurs well before initiating subthalamic DBS. This phenomenon typically leads to a transitory improvement of motor symptoms of patients suffering from Parkinson's disease (PD). Mechanisms behind MLE remain poorly understood. In this work, we exploited the notion of ranking to assess spontaneous brain activity in PD patients examined by resting-state functional magnetic resonance imaging in response to penetration of DBS electrodes in the subthalamic nucleus. In particular, we employed a hypothesis-free method, eigenvector centrality (EC), to reveal motor-communication-hubs of the highest rank and their reorganization following the surgery; providing a unique opportunity to evaluate the direct impact of disrupting the PD motor circuitry in vivo without prior assumptions. Penetration of electrodes was associated with increased EC of functional connectivity in the brainstem. Changes in connectivity were quantitatively related to motor improvement, which further emphasizes the clinical importance of the functional integrity of the brainstem. Surprisingly, MLE and DBS were associated with anatomically different EC maps despite their similar clinical benefit on motor functions. The DBS solely caused an increase in connectivity of the left premotor region suggesting separate pathophysiological mechanisms of both interventions. While the DBS acts at the cortical level suggesting compensatory activation of less affected motor regions, the MLE affects more fundamental circuitry as the dysfunctional brainstem predominates in the beginning of PD. These findings invigorate the overlooked brainstem perspective in the understanding of PD and support the current trend towards its early diagnosis. PMID:26509113

  13. Resting-state functional magnetic resonance imaging of the subthalamic microlesion and stimulation effects in Parkinson's disease: Indications of a principal role of the brainstem.

    PubMed

    Holiga, Štefan; Mueller, Karsten; Möller, Harald E; Urgošík, Dušan; Růžička, Evžen; Schroeter, Matthias L; Jech, Robert

    2015-01-01

    During implantation of deep-brain stimulation (DBS) electrodes in the target structure, neurosurgeons and neurologists commonly observe a "microlesion effect" (MLE), which occurs well before initiating subthalamic DBS. This phenomenon typically leads to a transitory improvement of motor symptoms of patients suffering from Parkinson's disease (PD). Mechanisms behind MLE remain poorly understood. In this work, we exploited the notion of ranking to assess spontaneous brain activity in PD patients examined by resting-state functional magnetic resonance imaging in response to penetration of DBS electrodes in the subthalamic nucleus. In particular, we employed a hypothesis-free method, eigenvector centrality (EC), to reveal motor-communication-hubs of the highest rank and their reorganization following the surgery; providing a unique opportunity to evaluate the direct impact of disrupting the PD motor circuitry in vivo without prior assumptions. Penetration of electrodes was associated with increased EC of functional connectivity in the brainstem. Changes in connectivity were quantitatively related to motor improvement, which further emphasizes the clinical importance of the functional integrity of the brainstem. Surprisingly, MLE and DBS were associated with anatomically different EC maps despite their similar clinical benefit on motor functions. The DBS solely caused an increase in connectivity of the left premotor region suggesting separate pathophysiological mechanisms of both interventions. While the DBS acts at the cortical level suggesting compensatory activation of less affected motor regions, the MLE affects more fundamental circuitry as the dysfunctional brainstem predominates in the beginning of PD. These findings invigorate the overlooked brainstem perspective in the understanding of PD and support the current trend towards its early diagnosis.

  14. Combined treatment of mulberry leaf and fruit extract ameliorates obesity-related inflammation and oxidative stress in high fat diet-induced obese mice.

    PubMed

    Lim, Hyun Hwa; Yang, Soo Jin; Kim, Yuri; Lee, Myoungsook; Lim, Yunsook

    2013-08-01

    The aim of this study was to investigate whether a combined treatment of mulberry leaf extract (MLE) and mulberry fruit extract (MFE) was effective for improving obesity and obesity-related inflammation and oxidative stress in high fat (HF) diet-induced obese mice. After obesity was induced by HF diet for 9 weeks, the mice were divided into eight groups: (1) lean control, (2) HF diet-induced obese control, (3) 1:1 ratio of MLE and MFE at doses of 200 (L1:1), (4) 500 (M1:1), and (5) 1000 (H1:1) mg/kg per day, and (6) 2:1 ratio of MLE and MFE at doses of 200 (L2:1), (7) 500 (M2:1), and (8) 1000 (H2:1) mg/kg per day. All six combined treatments significantly lowered body weight gain, plasma triglycerides, and lipid peroxidation levels after the 12-week treatment period. Additionally, all combined treatments suppressed hepatic fat accumulation and reduced epididymal adipocyte size. These improvements were accompanied by decreases in protein levels of proinflammatory markers (tumor necrosis factor-alpha, C-reactive protein, interleukin-1, inducible nitric oxide synthase, and phospho-nuclear factor-kappa B inhibitor alpha) and oxidative stress markers (heme oxygenase-1 and manganese superoxide dismutase). M2:1 was the most effective ratio and dose for the improvements in obesity, inflammation, and oxidative stress. These results demonstrate that a combined MLE and MFE treatment ameliorated obesity and obesity-related metabolic stressors and suggest that it can be used as a means to prevent and/or treat obesity.

  15. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
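
The effect of a weakly informative prior is easiest to see as a penalty added to the ordinary logistic log-likelihood, with the maximum a posteriori estimate replacing the maximum likelihood estimate. A minimal sketch assuming a Normal(0, 2.5²) prior on each coefficient and synthetic data in place of the study's DRG audit records:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = rng.uniform(size=200) < 1 / (1 + np.exp(-(X @ beta_true)))

def neg_log_posterior(beta, scale=2.5):
    z = X @ beta
    # Logistic log-likelihood, written stably via logaddexp.
    log_lik = np.sum(y * z - np.logaddexp(0, z))
    # Weakly informative Normal(0, scale^2) prior on each coefficient.
    log_prior = -0.5 * np.sum(beta ** 2) / scale ** 2
    return -(log_lik + log_prior)

map_fit = minimize(neg_log_posterior, x0=np.zeros(3))
print(map_fit.x)
```

The prior term keeps the coefficients finite even under separation, which is one source of the parameter stability the study reports.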

  16. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.

  17. Lod scores for gene mapping in the presence of marker map uncertainty.

    PubMed

    Stringham, H M; Boehnke, M

    2001-07-01

    Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
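
Numerically, the proposed statistic replaces "lod under the single best map" with the log of a posterior-weighted mixture of likelihoods across marker orders. A minimal sketch with hypothetical likelihood-ratio curves and posterior order probabilities (all numbers invented for illustration):

```python
import numpy as np

positions = np.linspace(0.0, 50.0, 11)   # candidate disease-locus positions (cM)

# Hypothetical likelihood ratios L(x; order) / L0 for three plausible
# marker orders; each order peaks at a different locus position.
lr = np.array([np.exp(-((positions - 20) / 8) ** 2) * 30 + 1,
               np.exp(-((positions - 30) / 8) ** 2) * 25 + 1,
               np.exp(-((positions - 25) / 8) ** 2) * 5 + 1])
post = np.array([0.6, 0.3, 0.1])         # posterior probabilities of the orders

weighted_lod = np.log10(post @ lr)       # weighted mixture, then log10
print(positions[np.argmax(weighted_lod)], weighted_lod.max())
```

Using only the best map corresponds to keeping a single row of `lr`, which is exactly what the authors argue discards information when competing orders are nearly as likely.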

  18. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  19. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  20. Microbial community analysis in the autotrophic denitrification process using spent sulfidic caustic by denaturing gradient gel electrophoresis of PCR-amplified genes.

    PubMed

    Lee, J-H; Lee, S-M; Choi, G-C; Park, H-S; Kang, D-H; Park, J-J

    2011-01-01

    Spent sulfidic caustic (SSC) produced from petrochemical plants contains a high concentration of hydrogen sulfide and alkalinity, and some almost non-biodegradable organic compounds such as benzene, toluene, ethylbenzene and xylenes (BTEX). SSC is mainly incinerated with auxiliary fuel, leading to secondary pollution problems. The reuse of this waste is becoming increasingly important from economic and environmental viewpoints. To denitrify wastewater with a low COD/N ratio, additional carbon sources are required. Thus, autotrophic denitrification has attracted increasing attention. In this study, SSC was injected as an electron donor for sulfur-based autotrophic denitrification in the modified Ludzack-Ettinger (MLE) process. The efficiencies of nitrification, COD removal, and total nitrogen (TN) removal were evaluated with varying SSC dosage. Adequate SSC injection exhibited stable autotrophic denitrification, and no BTEX were detected in the effluent. To analyse the microbial community of the MLE process, PCR-DGGE based on 16S rDNA with EUB primers, TD primers and the nirK gene with nirK primers was performed in order to elucidate the application of the MLE process to SSC.

  1. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  2. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
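
A point-by-point Gaussian maximum-likelihood classifier of the kind compared here fits one multivariate normal per class from training pixels and assigns each pixel to the class with the highest log-likelihood. A minimal sketch with synthetic two-band data standing in for Image-100 imagery; the class names and statistics are invented:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Training pixels for two hypothetical cover classes (two spectral bands).
train = {"wheat": rng.normal([80, 120], 6, size=(100, 2)),
         "soil":  rng.normal([110, 90], 8, size=(100, 2))}

# Fit a mean vector and covariance matrix per class.
stats = {c: (s.mean(axis=0), np.cov(s.T)) for c, s in train.items()}

def classify(pixel):
    """Assign the class with the highest Gaussian log-likelihood."""
    return max(stats, key=lambda c: multivariate_normal.logpdf(
        pixel, mean=stats[c][0], cov=stats[c][1]))

print(classify([85, 115]), classify([108, 93]))
```

Creating homogeneous spectral subclasses, as the abstract recommends, amounts to fitting several such Gaussians per crop rather than one.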

  3. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
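
The NDMMF's score equations are model-specific, but the computational pattern the report describes, repeated Newton-Raphson iterations on the likelihood equations, can be shown on a one-parameter problem. A minimal sketch for the shape parameter of a gamma distribution (an illustrative model, not the NDMMF):

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(0)
x = rng.gamma(shape=3.0, scale=2.0, size=1000)

# With the scale profiled out, the ML shape a solves
#   log(a) - digamma(a) = log(mean(x)) - mean(log(x)).
s = np.log(x.mean()) - np.log(x).mean()
a = 0.5 / s                               # rough moment-based starting value
for _ in range(20):
    g = np.log(a) - digamma(a) - s        # score-equation residual
    g_prime = 1.0 / a - polygamma(1, a)   # its derivative
    a -= g / g_prime                      # Newton-Raphson step
print(a, x.mean() / a)                    # shape near 3, scale near 2
```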

  4. Estimating a Logistic Discrimination Function When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.

  5. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  6. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    DTIC Science & Technology

    1982-04-01

    [OCR-garbled DTIC abstract; the only recoverable fragment concerns the bias in the maximum likelihood estimates of item parameters.]

  7. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  8. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
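
    For intuition, the Firth method maximizes the log-likelihood plus half the log-determinant of the Fisher information (a Jeffreys-prior penalty), which keeps estimates finite even in small or separated samples. The sketch below applies the penalty to ordinary unconditional logistic regression; the study itself worked with conditional Poisson regression under the SCCS design.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        def firth_logistic(X, y):
            """Firth-penalized logistic regression: maximize
            loglik(beta) + 0.5 * logdet(X' W X) with W = diag(p(1-p))."""
            def neg_penalized(beta):
                p = np.clip(expit(X @ beta), 1e-12, 1 - 1e-12)
                loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
                _, logdet = np.linalg.slogdet((X * (p * (1 - p))[:, None]).T @ X)
                return -(loglik + 0.5 * logdet)

            return minimize(neg_penalized, np.zeros(X.shape[1]), method="BFGS").x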

  9. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  10. Vector-Host Interactions of Culiseta melanura in a Focus of Eastern Equine Encephalitis Virus Activity in Southeastern Virginia.

    PubMed

    Molaei, Goudarz; Armstrong, Philip M; Abadam, Charles F; Akaratovic, Karen I; Kiser, Jay P; Andreadis, Theodore G

    2015-01-01

    Eastern equine encephalitis virus (EEEV) causes a highly pathogenic mosquito-borne zoonosis that is responsible for sporadic outbreaks of severe illness in humans and equines in the eastern USA. Culiseta (Cs.) melanura is the primary vector of EEEV in most geographic regions but its feeding patterns on specific avian and mammalian hosts are largely unknown in the mid-Atlantic region. The objectives of our study were to: 1) identify avian hosts of Cs. melanura and evaluate their potential role in enzootic amplification of EEEV, 2) assess spatial and temporal patterns of virus activity during a season of intense virus transmission, and 3) investigate the potential role of Cs. melanura in epidemic/epizootic transmission of EEEV to humans and equines. Accordingly, we collected mosquitoes at 55 sites in Suffolk, Virginia in 2013, and identified the source of blood meals in engorged mosquitoes by nucleotide sequencing PCR products of the mitochondrial cytochrome b gene. We also examined field-collected mosquitoes for evidence of infection with EEEV using Vector Test, cell culture, and PCR. Analysis of 188 engorged Cs. melanura sampled from April through October 2013 indicated that 95.2%, 4.3%, and 0.5% obtained blood meals from avian, mammalian, and reptilian hosts, respectively. American Robin was the most frequently identified host for Cs. melanura (42.6% of blood meals) followed by Northern Cardinal (16.0%), European Starling (11.2%), Carolina Wren (4.3%), and Common Grackle (4.3%). EEEV was detected in 106 mosquito pools of Cs. melanura, and the number of virus positive pools peaked in late July with 22 positive pools and a Maximum Likelihood Estimation (MLE) infection rate of 4.46 per 1,000 mosquitoes. Our findings highlight the importance of Cs. melanura as a regional EEEV vector based on frequent feeding on virus-competent bird species. A small proportion of blood meals acquired from mammalian hosts suggests the possibility that this species may occasionally contribute to epidemic/epizootic transmission of EEEV.
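
    The MLE infection rate quoted for pooled mosquito testing follows from a simple binomial likelihood: a pool of m mosquitoes tests negative only if all m are uninfected. For equal-sized pools the estimate is closed-form, as in the sketch below; the pool counts shown are hypothetical, and published estimates typically come from bias-corrected pooled-testing software.

        def pooled_mle_infection_rate(n_pools, n_positive, pool_size):
            """MLE of the per-mosquito infection rate from equal-size pools.
            P(pool positive) = 1 - (1 - p)^pool_size, so the MLE solves
            n_positive / n_pools = 1 - (1 - p)^pool_size in closed form.
            Returns the rate per 1,000 mosquitoes."""
            frac_negative = 1.0 - n_positive / n_pools
            p_hat = 1.0 - frac_negative ** (1.0 / pool_size)
            return 1000.0 * p_hat

        # Hypothetical: 22 positive pools out of 100 pools of 50 mosquitoes.
        rate = pooled_mle_infection_rate(n_pools=100, n_positive=22, pool_size=50)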

  11. TIGA Tide Gauge Data Reprocessing at GFZ

    NASA Astrophysics Data System (ADS)

    Deng, Zhiguo; Schöne, Tilo; Gendt, Gerd

    2014-05-01

    To analyse tide gauge measurements for the purpose of global long-term sea-level change research, a well-defined absolute reference frame is required by the oceanographic community. To create such a frame, data from a global GNSS network located at or near tide gauges are processed. The International GNSS Service (IGS) Tide Gauge Benchmark Monitoring Working Group (TIGA-WG) is responsible for analyzing the GNSS data on a preferably continuous basis. As one of the TIGA Analysis Centers, the German Research Centre for Geosciences (GFZ) is contributing to the IGS TIGA Reprocessing Campaign. The solutions of the TIGA Reprocessing Campaign will also contribute to the 2nd IGS Data Reprocessing Campaign as the GFZ IGS reprocessing solution. After the first IGS reprocessing finished in 2010, several improvements were implemented in the latest GFZ software version, EPOS.P8: the reference frame IGb08 based on ITRF2008, the antenna calibration igs08.atx, the geopotential model EGM2008, higher-order ionospheric effects, a new a priori meteorological model (GPT2), the VMF mapping function, and other minor improvements. GPS data from the globally distributed tracking network of 794 stations for the time span from 1994 until the end of 2012 are used for the TIGA reprocessing. To handle such a large network, a new processing strategy was developed and is described in detail. In the TIGA reprocessing, the GPS@TIGA data are processed in precise point positioning (PPP) mode to clean the data, using the IGS reprocessing orbit and clock products. To validate the quality of the PPP coordinate results, the vertical movement rates of 80 GPS@TIGA stations are estimated from the PPP results using the Maximum Likelihood Estimation (MLE) method. The rates are compared with the solution of the University of La Rochelle Consortium (ULR) (named ULR5). For 56 of the 80 stations the difference in vertical velocity is below 1 mm/yr. The error bars of the PPP rates are significantly larger than those of ULR5, which indicates large time-correlated noise in the PPP solutions.

  12. Biweekly Maps of Wind Stress for the North Pacific from the ERS-1 Scatterometer

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The European Remote-sensing Satellite (ERS-1) was launched in July 1991 and contained several instruments for observing the Earth's ocean, including a wind scatterometer. The scatterometer measurements were processed by the European Space Agency (ESA) and the Jet Propulsion Laboratory (JPL). JPL reprocessed (Freilich and Dunbar, 1992) the ERS-1 backscatter measurements to produce a 'value added' data set that contained the ESA wind vector as well as a set of up to four ambiguities. These ambiguities were further processed using a maximum-likelihood estimation (MLE) and a median filter to produce a 'selected vector.' This report describes a technique developed to produce time-averaged wind field estimates with their expected errors using only scatterometer wind vectors. The processing described in this report involved extracting regions of interest from the data tapes, checking the quality, and creating the wind field estimate. This analysis also includes the derivation of biweekly average wind vectors over the North Pacific Ocean at a resolution of 0.5° x 0.5°. This was done with an optimal average algorithm temporally and an over-determined biharmonic spline spatially. There have been other attempts at creating gridded wind files from ERS-1 winds, e.g., kriging techniques (Bentamy et al., 1996) and successive correction schemes (Tang and Liu, 1996). There are several inherent problems with the ERS-1 scatterometer. Since this is a multidisciplinary mission, the satellite is flown in different orbits optimized for each phase of the mission. The scatterometer also shares several sub-systems with the Synthetic Aperture Radar (SAR) and cannot be operated while the SAR is in operation. The scatterometer is also a single-sided instrument and only measures backscatter along the right side of the satellite. The processing described here generates biweekly wind maps throughout the two-year analysis period regardless of the satellite orbit or missing data.

  13. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    NASA Astrophysics Data System (ADS)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

    Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire-size distributions are multi-scaled and likely not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
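
    The one-parameter power-law fit used here has a closed-form MLE for the exponent once a lower cutoff is chosen. A minimal sketch of that standard estimator follows (the study's other model variants and the broken-stick regressions are not reproduced):

        import numpy as np

        def powerlaw_mle(sizes, xmin):
            """Closed-form MLE of the exponent alpha of a continuous
            power law p(x) ~ x^(-alpha) for x >= xmin, with its
            asymptotic standard error."""
            x = np.asarray(sizes, dtype=float)
            x = x[x >= xmin]
            alpha_hat = 1.0 + x.size / np.sum(np.log(x / xmin))
            se = (alpha_hat - 1.0) / np.sqrt(x.size)
            return alpha_hat, se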

  14. Effects on noise properties of GPS time series caused by higher-order ionospheric corrections

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Deng, Liansheng; Li, Zhao; Zhou, Xiaohui; Liu, Hongfei

    2014-04-01

    Higher-order ionospheric (HOI) effects are one of the principal technique-specific error sources in precise global positioning system (GPS) analysis. These effects also influence the non-linear characteristics of GPS coordinate time series. In this paper, we investigate these effects on coordinate time series in terms of seasonal variations and noise amplitudes. Both power spectral techniques and maximum likelihood estimators (MLE) are used to evaluate these effects quantitatively and qualitatively. Our results show an overall improvement for the analysis of global sites if HOI effects are considered. We note that the noise spectral index that is used for the determination of the optimal noise models in our analysis ranged between -1 and 0 both with and without HOI corrections, implying that the coloured noise cannot be removed by these corrections. However, the corrections were found to have improved noise properties for global sites. After the corrections were applied, the noise amplitudes at most sites decreased, among which the white noise amplitudes decreased remarkably. The white noise amplitudes of up to 81.8% of the selected sites decreased in the up component, and the flicker noise of 67.5% of the sites decreased in the north component. Stacked periodogram results show that, no matter whether the HOI effects are considered or not, a common fundamental period of 1.04 cycles per year (cpy), together with the expected annual and semi-annual signals, can explain all peaks of the north and up components well. For the east component, however, reasonable results can be obtained only based on HOI corrections. HOI corrections are useful for better detecting the periodic signals in GPS coordinate time series. Moreover, the corrections contributed partly to the seasonal variations of the selected sites, especially for the up component. Statistically, HOI corrections reduced more than 50% and more than 65% of the annual and semi-annual amplitudes respectively at the selected sites.

  15. Adaptive framework to better characterize errors of apriori fluxes and observational residuals in a Bayesian setup for the urban flux inversions.

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.

    2017-12-01

    The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric observations of GHGs coupled to an atmospheric transport and dispersion model. The magnitude of the update depends upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman Filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman Filter is then run using these covariances along with Maximum Likelihood Estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.

  16. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  17. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study. PMID:20376286

  18. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study.

  19. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
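
    For a single operating point under the equal-variance Gaussian signal detection model, the ML estimates of the sensitivity index and decision criterion have closed forms; the sketch below covers only this one-stage building block, not the paper's full two-stage extension.

        from scipy.stats import norm

        def dprime_mle(hits, misses, false_alarms, correct_rejections):
            """ML estimates of d' and criterion c from one operating point
            under the equal-variance Gaussian model (hit and false-alarm
            rates are the binomial MLEs of the underlying probabilities)."""
            h = hits / (hits + misses)
            f = false_alarms / (false_alarms + correct_rejections)
            zh, zf = norm.ppf(h), norm.ppf(f)
            return zh - zf, -0.5 * (zh + zf)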

  20. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  1. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  2. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  3. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  4. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

  5. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  6. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  7. Sidewall GaAs tunnel junctions fabricated using molecular layer epitaxy

    PubMed Central

    Ohno, Takeo; Oyama, Yutaka

    2012-01-01

    In this article we review the fundamental properties and applications of sidewall GaAs tunnel junctions. Heavily impurity-doped GaAs epitaxial layers were prepared using molecular layer epitaxy (MLE), in which intermittent injections of precursors in ultrahigh vacuum were applied, and sidewall tunnel junctions were fabricated using a combination of device mesa wet etching of the GaAs MLE layer and low-temperature area-selective regrowth. The fabricated tunnel junctions on the GaAs sidewall with normal mesa orientation showed a record peak current density of 35 000 A cm-2. They can potentially be used as terahertz devices such as a tunnel injection transit time effect diode or an ideal static induction transistor. PMID:27877466

  8. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  9. Alkali production associated with malolactic fermentation by oral streptococci and protection against acid, oxidative, or starvation damage

    PubMed Central

    Sheng, Jiangyun; Baldeck, Jeremiah D.; Nguyen, Phuong T.M.; Quivey, Robert G.; Marquis, Robert E.

    2011-01-01

    Alkali production by oral streptococci is considered important for dental plaque ecology and caries moderation. Recently, malolactic fermentation (MLF) was identified as a major system for alkali production by oral streptococci, including Streptococcus mutans. Our major objectives in the work described in this paper were to further define the physiology and genetics of MLF of oral streptococci and its roles in protection against metabolic stress damage. l-Malic acid was rapidly fermented to l-lactic acid and CO2 by induced cells of wild-type S. mutans, but not by deletion mutants for mleS (malolactic enzyme) or mleP (malate permease). Mutants for mleR (the contiguous regulator gene) had intermediate capacities for MLF. Loss of capacity to catalyze MLF resulted in loss of capacity for protection against lethal acidification. MLF was also found to be protective against oxidative and starvation damage. The capacity of S. mutans to produce alkali from malate was greater than its capacity to produce acid from glycolysis at low pH values of 4 or 5. MLF acted additively with the arginine deiminase system for alkali production by Streptococcus sanguinis, but not with urease of Streptococcus salivarius. Malolactic fermentation is clearly a major process for alkali generation by oral streptococci and for protection against environmental stresses. PMID:20651853

  10. Alkali production associated with malolactic fermentation by oral streptococci and protection against acid, oxidative, or starvation damage.

    PubMed

    Sheng, Jiangyun; Baldeck, Jeremiah D; Nguyen, Phuong T M; Quivey, Robert G; Marquis, Robert E

    2010-07-01

    Alkali production by oral streptococci is considered important for dental plaque ecology and caries moderation. Recently, malolactic fermentation (MLF) was identified as a major system for alkali production by oral streptococci, including Streptococcus mutans. Our major objectives in the work described in this paper were to further define the physiology and genetics of MLF of oral streptococci and its roles in protection against metabolic stress damage. L-Malic acid was rapidly fermented to L-lactic acid and CO(2) by induced cells of wild-type S. mutans, but not by deletion mutants for mleS (malolactic enzyme) or mleP (malate permease). Mutants for mleR (the contiguous regulator gene) had intermediate capacities for MLF. Loss of capacity to catalyze MLF resulted in loss of capacity for protection against lethal acidification. MLF was also found to be protective against oxidative and starvation damage. The capacity of S. mutans to produce alkali from malate was greater than its capacity to produce acid from glycolysis at low pH values of 4 or 5. MLF acted additively with the arginine deiminase system for alkali production by Streptococcus sanguinis, but not with urease of Streptococcus salivarius. Malolactic fermentation is clearly a major process for alkali generation by oral streptococci and for protection against environmental stresses.

  11. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; and a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than the Carrington event), with a wide 95% confidence interval of [490, 1187] nT.
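
    The maximum-likelihood log-normal fit itself is closed-form, and an exceedance rate follows directly; a minimal sketch under assumed variable names (the published analysis adds weighting options and bootstrap confidence limits):

        import numpy as np
        from scipy.stats import norm

        def storms_per_century(dst_maxima, threshold, record_years):
            """Log-normal MLE (mu, sigma of log intensities; std with
            ddof=0 is the MLE) and the implied rate per century of storms
            whose -Dst maxima exceed `threshold`."""
            logs = np.log(np.asarray(dst_maxima, dtype=float))
            mu, sigma = logs.mean(), logs.std()
            p_exceed = norm.sf((np.log(threshold) - mu) / sigma)
            return 100.0 * (logs.size / record_years) * p_exceed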

  12. Reactions of NO2 with BaO/Pt(111) Model Catalysts: The Effects of BaO Film Thickness and NO2 Pressure on the Formation of Ba(NOx)2 Species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudiyanselage, Kumudu; Yi, Cheol-Woo; Szanyi, Janos

    2011-05-31

    The adsorption and reaction of NO2 on BaO (<1, ~3, and >20 monolayer equivalent (MLE))/Pt(111) model systems were studied with temperature programmed desorption (TPD), X-ray photoelectron spectroscopy (XPS), and infrared reflection absorption spectroscopy (IRAS) under ultra-high vacuum (UHV) as well as elevated pressure conditions. NO2 reacts with sub-monolayer BaO (<1 MLE) to form nitrites only, whereas the reaction of NO2 with BaO (~3 MLE)/Pt(111) produces mainly nitrites and a small amount of nitrates under UHV conditions (PNO2 ~ 1.0 × 10-9 Torr) at 300 K. In contrast, a thick BaO (>20 MLE) layer on Pt(111) reacts with NO2 to form nitrite-nitrate ion pairs under the same conditions. At elevated NO2 pressures (≥ 1.0 × 10-5 Torr), however, BaO layers at all these three coverages convert to amorphous barium nitrates at 300 K. Upon annealing to 500 K, these amorphous barium nitrate layers transform into crystalline phases. The thermal decomposition of the thus-formed Ba(NOx)2 species is also influenced by the coverage of BaO on the Pt(111) substrate: at low BaO coverages, these species decompose at significantly lower temperatures in comparison with those formed on thick BaO films due to the presence of the Ba(NOx)2/Pt interface where the decomposition can proceed at lower temperatures. However, the thermal decomposition of the thick Ba(NO3)2 films follows that of bulk nitrates. Results obtained from these BaO/Pt(111) model systems under UHV and elevated pressure conditions clearly demonstrate that both the BaO film thickness and the applied NO2 pressure are critical in the Ba(NOx)2 formation and subsequent thermal decomposition processes.

  13. Regulatory T Cells Promote β-Catenin–Mediated Epithelium-to-Mesenchyme Transition During Radiation-Induced Pulmonary Fibrosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Shanshan; Pan, Xiujie; Xu, Long

    Purpose: Radiation-induced pulmonary fibrosis results from thoracic radiation therapy and severely limits radiation therapy approaches. CD4+CD25+FoxP3+ regulatory T cells (Tregs) as well as epithelium-to-mesenchyme transition (EMT) cells are involved in pulmonary fibrosis induced by multiple factors. However, the mechanisms of Tregs and EMT cells in irradiation-induced pulmonary fibrosis remain unclear. In the present study, we investigated the influence of Tregs on EMT in radiation-induced pulmonary fibrosis. Methods and Materials: Mice thoraxes were irradiated (20 Gy), and Tregs were depleted by intraperitoneal injection of a monoclonal anti-CD25 antibody 2 hours after irradiation and every 7 days thereafter. Mice were treated on days 3, 7, and 14 and 1, 3, and 6 months post irradiation. The effectiveness of Treg depletion was assayed via flow cytometry. EMT and β-catenin in lung tissues were detected by immunohistochemistry. Tregs isolated from murine spleens were cultured with mouse lung epithelial (MLE) 12 cells, and short interfering RNA (siRNA) knockdown of β-catenin in MLE 12 cells was used to explore the effects of Tregs on EMT and β-catenin via flow cytometry and Western blotting. Results: Anti-CD25 antibody treatment depleted Tregs efficiently, attenuated the process of radiation-induced pulmonary fibrosis, hindered EMT, and reduced β-catenin accumulation in lung epithelial cells in vivo. The coculture of Tregs with irradiated MLE 12 cells showed that Tregs could promote EMT in MLE 12 cells and that the effect of Tregs on EMT was partially abrogated by β-catenin knockdown in vitro. Conclusions: Tregs can promote EMT in accelerating radiation-induced pulmonary fibrosis. This process is partially mediated through β-catenin. Our study suggests a new mechanism for EMT, promoted by Tregs, that accelerates radiation-induced pulmonary fibrosis.

  14. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  15. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is presented, and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  16. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  17. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  18. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.

  19. Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy

    DTIC Science & Technology

    2016-03-05

    Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets… observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to

  20. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  1. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  2. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  3. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  4. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on the maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and the received target image has each of its pixels disturbed by an independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative update of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy where accurate and stabilized optical pointing is essential.

  5. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  6. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.

  7. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  8. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
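
    Maximum likelihood logistic regression of the kind tabulated in the report can be fit by Newton-Raphson (iteratively reweighted least squares). The single winter-flow predictor and simulated data below are invented stand-ins for the report's predictors.

    ```python
    import numpy as np

    def logistic_mle(x, y, iters=25):
        """Fit a logistic regression by Newton-Raphson (IRLS)."""
        X = np.column_stack([np.ones(len(x)), x])        # intercept + predictor
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1 - p)                              # IRLS weights
            beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
        return beta

    # Hypothetical: drought probability falls as winter streamflow rises
    rng = np.random.default_rng(2)
    winter = rng.normal(size=500)
    drought = rng.random(500) < 1 / (1 + np.exp(1.0 + 1.5 * winter))
    print(logistic_mle(winter, drought))   # approximately (-1.0, -1.5)
    ```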

  9. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
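
    Of the methods named, the Maximum Likelihood criterion under Poisson noise is commonly realized as the Richardson-Lucy iteration. DeConv_Tool itself is written in IDL, so this one-dimensional Python fragment is only an illustrative sketch of that update rule, not the package's implementation.

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        """Richardson-Lucy maximum-likelihood deconvolution (1-D sketch)."""
        psf = psf / psf.sum()
        estimate = np.full(observed.shape, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode='same')
            ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
            estimate *= np.convolve(ratio, psf[::-1], mode='same')
        return estimate
    ```

    Each iteration keeps the estimate non-negative and approximately preserves total flux, which is why this update is popular for photon-limited astronomical data.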

  10. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
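
    The value of keeping non-arrival events is visible in a toy model: if the photon count per laser shot is Poisson with mean lam, the fraction of empty shots estimates exp(-lam), while discarding those shots biases the mean upward. This sketch is a deliberate simplification of the paper's estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    lam_true = 0.3                       # mean detected photons per laser shot
    counts = rng.poisson(lam_true, 10_000)

    lam_all = counts.mean()                         # MLE using every shot
    lam_nonzero = counts[counts > 0].mean()         # biased: empty shots dropped
    lam_from_zeros = -np.log((counts == 0).mean())  # rate implied by non-arrivals

    print(lam_all, lam_nonzero, lam_from_zeros)     # ~0.30, ~1.16, ~0.30
    ```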

  11. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  12. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Calibration of optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assumes that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor's Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
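
    One standard way to build such weights when both variables are noisy is the effective-variance approximation, which propagates the x-noise through the local slope of the model. The generic quadratic-response sketch below is an assumption on my part, not the VIIRS error model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_quadratic_ml(x, y, sx, sy):
        """ML-style weighted fit of y = c0 + c1*x + c2*x**2 with noisy x and y,
        using effective variance var_i = sy_i^2 + (df/dx)_i^2 * sx_i^2."""
        def negloglik(c):
            dfdx = c[1] + 2 * c[2] * x
            var = sy**2 + dfdx**2 * sx**2
            r = y - (c[0] + c[1] * x + c[2] * x**2)
            return np.sum(r**2 / var + np.log(var))   # Gaussian -2 ln L + const
        return minimize(negloglik, x0=np.zeros(3), method='Nelder-Mead').x
    ```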

  13. An Investigation into the Antiobesity Effects of Morinda citrifolia L. Leaf Extract in High Fat Diet Induced Obese Rats Using a 1H NMR Metabolomics Approach

    PubMed Central

    Gooda Sahib Jambocus, Najla; Saari, Nazamid; Ismail, Amin; Mahomoodally, Mohamad Fawzi; Abdul Hamid, Azizah

    2016-01-01

    The prevalence of obesity is increasing worldwide, with high fat diet (HFD) as one of the main contributing factors. Obesity increases the predisposition to other diseases such as diabetes through various metabolic pathways. Limited availability of antiobesity drugs and the popularity of complementary medicine have encouraged research in finding phytochemical strategies against this multifaceted disease. HFD induced obese Sprague-Dawley rats were treated with an extract of Morinda citrifolia L. leaves (MLE 60). After 9 weeks of treatment, positive effects were observed on adiposity, fecal fat content, plasma lipids, and insulin and leptin levels. The metabolic alterations associated with the induction of obesity and with MLE 60 treatment were then further elucidated using a 1H NMR based metabolomics approach. Discriminating metabolites involved were products of various metabolic pathways, including glucose metabolism and the TCA cycle (lactate, 2-oxoglutarate, citrate, succinate, pyruvate, and acetate), amino acid metabolism (alanine, 2-hydroxybutyrate), choline metabolism (betaine), creatinine metabolism (creatinine), and gut microbiome metabolism (hippurate, phenylacetylglycine, dimethylamine, and trigonelline). Treatment with MLE 60 resulted in significant improvement of the metabolic perturbations caused by obesity, as demonstrated by the proximity of the treated group to the normal group in the OPLS-DA score plot and the change in trajectory of the diseased group towards the healthy group upon treatment. PMID:26798649

  14. Voice characteristics before versus after mandibular setback surgery in patients with mandibular prognathism using nonlinear dynamics and conventional acoustic analyses.

    PubMed

    Mishima, Katsuaki; Moritani, Norifumi; Nakano, Hiroyuki; Matsushita, Asuka; Iida, Seiji; Ueyama, Yoshiya

    2013-12-01

    The purpose of this study was to explore the voice characteristics of patients with mandibular prognathism, and to investigate the effects of mandibular setback surgery on these characteristics using nonlinear dynamics and conventional acoustic analyses. Sixteen patients (8 males and 8 females) who had skeletal class III malocclusion without cleft palate, and who underwent a bilateral sagittal split ramus osteotomy (BSSRO), were enrolled. As controls, 50 healthy adults (25 males and 25 females) were enrolled. The mean first Lyapunov exponents (mLE1), computed for each one-second interval, and the fundamental frequency (F0) and the frequencies of the first and second formants (F1, F2) were calculated for each Japanese vowel. The mLE1s for /u/ in males and /o/ in females, and the F2s for /i/ and /u/ in males, changed significantly after BSSRO. Class III voice characteristics were observed in the mLE1s for /i/ in both males and females, in the F0 for /a/, /i/, /u/ and /o/ in females, in the F1 and F2 for /a/ in males, and in the F1 for /u/ and the F2 for /i/ in females. Most of these characteristics were preserved after BSSRO. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. An Investigation into the Antiobesity Effects of Morinda citrifolia L. Leaf Extract in High Fat Diet Induced Obese Rats Using a (1)H NMR Metabolomics Approach.

    PubMed

    Gooda Sahib Jambocus, Najla; Saari, Nazamid; Ismail, Amin; Khatib, Alfi; Mahomoodally, Mohamad Fawzi; Abdul Hamid, Azizah

    2016-01-01

    The prevalence of obesity is increasing worldwide, with high fat diet (HFD) as one of the main contributing factors. Obesity increases the predisposition to other diseases such as diabetes through various metabolic pathways. Limited availability of antiobesity drugs and the popularity of complementary medicine have encouraged research in finding phytochemical strategies against this multifaceted disease. HFD induced obese Sprague-Dawley rats were treated with an extract of Morinda citrifolia L. leaves (MLE 60). After 9 weeks of treatment, positive effects were observed on adiposity, fecal fat content, plasma lipids, and insulin and leptin levels. The metabolic alterations associated with the induction of obesity and with MLE 60 treatment were then further elucidated using a (1)H NMR based metabolomics approach. Discriminating metabolites involved were products of various metabolic pathways, including glucose metabolism and the TCA cycle (lactate, 2-oxoglutarate, citrate, succinate, pyruvate, and acetate), amino acid metabolism (alanine, 2-hydroxybutyrate), choline metabolism (betaine), creatinine metabolism (creatinine), and gut microbiome metabolism (hippurate, phenylacetylglycine, dimethylamine, and trigonelline). Treatment with MLE 60 resulted in significant improvement of the metabolic perturbations caused by obesity, as demonstrated by the proximity of the treated group to the normal group in the OPLS-DA score plot and the change in trajectory of the diseased group towards the healthy group upon treatment.

  16. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), together with disaggregation models, which aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel based methods such as k-nearest neighbor (k-NN), matched block bootstrap (MABB), and nonparametric disaggregation), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and (iii) hybrid models, which blend parametric and nonparametric models advantageously to model the streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated on the accuracy with which they predict the water-use characteristics, which requires a large number of trial simulations and the inspection of many plots and tables; even then, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and matched block bootstrap (MABB)) based on the explicit objectives of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and nonparametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manual selection of the hybrid model, in addition to predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model, in which the parametric and nonparametric components are explored simultaneously, yields a much better prediction of storage capacity than the MLE-based hybrid models, in which the hybrid model selection is done in two stages and probably results in a sub-optimal model. The framework can be extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.

  17. Inferring Phylogenetic Networks Using PhyloNet.

    PubMed

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  18. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for features typical of such data, including skewed distributions, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and 2-part Beta regression models provide flexible approaches to regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
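
    The single-equation estimator can be sketched directly: the Beta density is reparametrized by a mean (logit link) and a precision (log link), and the negative log-likelihood is minimized numerically. Variable names below are illustrative, and outcomes exactly at 0 or 1 require the two-part extension discussed in the article.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import beta as beta_dist

    def beta_reg_mle(x, y):
        """Beta regression MLE: logit mean link, log precision; y in (0, 1)."""
        X = np.column_stack([np.ones(len(x)), x])
        k = X.shape[1]

        def nll(theta):
            mu = 1.0 / (1.0 + np.exp(-X @ theta[:k]))   # mean in (0, 1)
            phi = np.exp(theta[k])                      # precision > 0
            return -beta_dist.logpdf(y, mu * phi, (1 - mu) * phi).sum()

        return minimize(nll, np.zeros(k + 1), method='Nelder-Mead').x
    ```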

  19. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282

  20. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
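
    Once the ensemble is superposed and a covariance estimate is in hand, the analysis reduces to an eigendecomposition of the atom-atom correlation matrix. The sketch below uses a plain sample covariance; the paper's maximum likelihood covariance estimator, which downweights uncertain coordinates, is not reproduced here.

    ```python
    import numpy as np

    def structural_pca(coords):
        """coords: (n_structures, n_atoms, 3) superposed ensemble.
        Returns eigenvalues and modes of the positional correlation matrix."""
        n, m, _ = coords.shape
        flat = coords.reshape(n, m * 3)
        cov = np.cov(flat, rowvar=False)
        d = np.sqrt(np.diag(cov))
        corr = cov / np.outer(d, d)              # covariance -> correlation
        vals, vecs = np.linalg.eigh(corr)        # ascending eigenvalues
        return vals[::-1], vecs[:, ::-1]         # descending order
    ```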

  1. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

  2. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  3. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527

  4. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.

  5. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree well with the experimental measurements. An optimal modulation index can also be found through TAP.

  6. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximately scale-invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods, assuming that it follows a power law, albeit with nonuniversal exponents that depend on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or from exponential damping.
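
    For a continuous power law above a lower cutoff x_min, the maximum-likelihood exponent has the closed form alpha = 1 + n / sum(ln(x_i / x_min)), the Hill estimator; scanning x_min and watching the fitted exponent drift is one simple diagnostic for the kind of mixing described above. A minimal sketch on synthetic data:

    ```python
    import numpy as np

    def powerlaw_alpha(x, xmin):
        """Continuous power-law MLE above a lower cutoff (Hill estimator)."""
        tail = x[x >= xmin]
        return 1.0 + tail.size / np.log(tail / xmin).sum()

    rng = np.random.default_rng(4)
    x = rng.pareto(1.5, 100_000) + 1.0           # density exponent alpha = 2.5
    for xmin in (1.0, 2.0, 5.0):
        print(xmin, powerlaw_alpha(x, xmin))     # stable near 2.5 for a pure law
    ```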

  7. Investigation on the coloured noise in GPS-derived position with time-varying seasonal signals

    NASA Astrophysics Data System (ADS)

    Gruszczynska, Marta; Klos, Anna; Bos, Machiel Simon; Bogusz, Janusz

    2016-04-01

    The seasonal signals in GPS-derived time series arise from real geophysical signals related to tidal (residual) or non-tidal effects (loadings from atmosphere, ocean and continental hydrosphere, thermoelastic strain, etc.) and from numerical artefacts, including aliasing from mismodelling at short periods and the repeatability of the GPS satellite constellation with respect to the Sun (draconitics). Singular Spectrum Analysis (SSA) is a method for investigating nonlinear dynamics, suitable for both stationary and non-stationary data series without prior knowledge of their character. The aim of SSA is to mathematically decompose the original time series into a sum of a slowly varying trend, seasonal oscillations and noise. In this presentation we explore the ability of SSA to subtract the time-varying seasonal signals in GPS-derived North-East-Up topocentric components and show the properties of the coloured noise in the residua. For this purpose we used data from globally distributed IGS (International GNSS Service) permanent stations processed by the JPL (Jet Propulsion Laboratory) in PPP (Precise Point Positioning) mode. After introducing a minimum-length threshold of 13 years, 264 stations remained, with record lengths reaching 23 years. The data were initially pre-processed for outliers, offsets and gaps. SSA was applied to the pre-processed series to estimate the time-varying seasonal signals. We adopted a 3-year window as the optimal dimension, determined with the Akaike Information Criterion (AIC). A Fisher-Snedecor test corrected for the presence of temporal correlation was used to determine the statistical significance of the reconstructed components. This procedure showed that the first four components, describing the annual and semi-annual signals, are significant at a 99.7% confidence level, corresponding to the 3-sigma criterion. We compared the non-parametric SSA approach with the commonly chosen parametric Least-Squares Estimation (LSE), which assumes constant amplitudes and phases over time. When SSA and LSE are compared, we noticed a maximum difference in seasonal oscillation of 3.5 mm and a maximum change in velocity of 0.15 mm/year for the Up component (YELL, Yellowknife, Canada). The annual signal has the greatest influence on data variability; for some stations more than 35% of the total variance is explained by the annual signal, while the semi-annual signal in the Up component contributes much less to the total variance. From the Power Spectral Densities (PSD) we showed that SSA properly subtracts the time-varying seasonal signals with almost no influence on the power-law character of the stochastic part. The modified Maximum Likelihood Estimation (MLE) in the Hector software was then applied to the SSA-filtered time series. We noticed a significant improvement in spectral indices and power-law amplitudes in comparison to those determined classically with LSE, which will be the main subject of this presentation.

  8. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method for improving an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  9. Phylogenetic place of guinea pigs: no support of the rodent-polyphyly hypothesis from maximum-likelihood analyses of multiple protein sequences.

    PubMed

    Cao, Y; Adachi, J; Yano, T; Hasegawa, M

    1994-07-01

    Graur et al.'s (1991) hypothesis that the guinea pig-like rodents have an evolutionary origin within mammals that is separate from that of other rodents (the rodent-polyphyly hypothesis) was reexamined by the maximum-likelihood method for protein phylogeny, as well as by the maximum-parsimony and neighbor-joining methods. The overall evidence does not support Graur et al.'s hypothesis, which radically contradicts the traditional view of rodent monophyly. This work demonstrates that we must be careful in choosing a proper method for phylogenetic inference and that an argument based on a small data set (with respect to the length of the sequence and especially the number of species) may be unstable.

  10. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  11. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  12. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexity are needed for constructing path (or branch) metric tables recursively. At the end, a single table remains, containing only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
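
    For contrast with the recursive scheme, the conventional Viterbi decoder is easy to sketch for the textbook constraint-length-3, rate-1/2 convolutional code with generators (7, 5) in octal; hard-decision Hamming metrics are used for brevity, and the code parameters are illustrative rather than those studied in the report.

    ```python
    G = (0b111, 0b101)              # generator taps (7, 5) octal, K = 3

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state                      # newest bit on top
            out += [bin(reg & g).count('1') & 1 for g in G]
            state = reg >> 1
        return out

    def viterbi(received):
        """ML (minimum Hamming distance) decoding over the 4-state trellis."""
        INF = 10**9
        metric = [0, INF, INF, INF]                     # start in state 0
        paths = [[], [], [], []]
        for i in range(0, len(received), 2):
            r = received[i:i + 2]
            new_metric, new_paths = [INF] * 4, [None] * 4
            for s in range(4):
                for b in (0, 1):
                    reg = (b << 2) | s
                    out = [bin(reg & g).count('1') & 1 for g in G]
                    m = metric[s] + sum(o != x for o, x in zip(out, r))
                    ns = reg >> 1
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(4), key=metric.__getitem__)]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = encode(msg + [0, 0])    # two flush bits terminate the trellis
    coded[3] ^= 1                   # inject a single channel error
    print(viterbi(coded)[:len(msg)] == msg)   # True: the error is corrected
    ```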

  13. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  14. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
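
    The contrast between the two objectives is concrete: χ²-minimization assumes Gaussian scatter, while the Poisson fit minimizes, up to a constant, the Cash-style statistic sum(m_i - d_i ln m_i) over bins. The sketch below fits a Gaussian line on a flat background to simulated low-count data; the model and counts are invented, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def poisson_nll(params, x, counts):
        """Negative Poisson log-likelihood (up to a params-free constant)."""
        amp, mu, sigma, bg = params
        model = bg + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        model = np.maximum(model, 1e-12)           # keep the log well-defined
        return np.sum(model - counts * np.log(model))

    rng = np.random.default_rng(5)
    x = np.linspace(-5, 5, 60)
    counts = rng.poisson(0.5 + 8.0 * np.exp(-0.5 * x**2))   # <20 counts per bin
    fit = minimize(poisson_nll, x0=[5.0, 0.5, 1.5, 1.0],
                   args=(x, counts), method='Nelder-Mead')
    print(fit.x)      # roughly (8, 0, 1, 0.5) despite the low counts
    ```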

  15. Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.

    2008-03-01

    Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia, on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.

  16. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    PubMed

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling in historic height data also. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  17. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  18. Testing students’ e-learning via Facebook through Bayesian structural equation modeling

    PubMed Central

    Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019

  19. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and on Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  20. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on the forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
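
    For a log-normal model the MLE is closed-form: the mean and standard deviation of the logged storm maxima. Exceedance probabilities then follow from the normal survival function. The -Dst values below are invented for illustration; they are not the study's catalogue.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lognormal_mle(x):
        """Closed-form log-normal MLE: mean and (ddof=0) std of log data."""
        logs = np.log(x)
        return logs.mean(), logs.std()

    dst_maxima = np.array([120, 95, 210, 430, 150, 310, 180, 560, 240, 130.0])
    mu, sigma = lognormal_mle(dst_maxima)

    # Per-storm probability of exceeding the Carrington-level 850 nT
    p_carrington = norm.sf((np.log(850) - mu) / sigma)
    print(mu, sigma, p_carrington)
    ```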

  1. Hunger influenced life expectancy in war-torn Sub-Saharan African countries.

    PubMed

    Uchendu, Florence N

    2018-04-27

    Malnutrition is a global public health problem especially in developing countries experiencing war/conflicts. War might be one of the socio-political factors influencing malnutrition in Sub-Saharan African (SSA) countries. This study aims at determining the influence of war on corruption, population (POP), number of population malnourished (NPU), food security and life expectancy (LE) in war-torn SSA countries (WTSSA) by comparing their malnutrition indicators. Fourteen countries in WTSSA were stratified into zones according to war incidences. Countries' secondary data on population (POP), NPU, Food Security Index (FSI), corruption perceptions index (CPI), Global Hunger Index (GHI) and LE were obtained from global published data. T test, multivariate and Pearson correlation analyses were performed to determine the relationship between CPI, POP, GHI, FSI, NPU, male LE (MLE) and female LE (FLE) in WTSSA at p < .05. Data were presented in tables, means, standard deviation and percentages. Mean NPU, CPI, GHI, POP, FSI, MLE and FLE in WTSSA were 5.0 million, 28.3%, 18.2%, 33.8 million, 30.8%, 54.7 years and 57.1 years, respectively. GHI significantly influenced LE in both male and female POP in WTSSA. NPU, CPI, FSI, GHI and FLE were not significantly different according to zones except in MLE. Malnutrition indicators were similarly affected in WTSSA. Hunger influenced life expectancy. Policies promoting good governance, equity, peaceful co-existence, respect for human right and adequate food supply will aid malnutrition eradication and prevent war occurrences in Sub-Saharan African countries.

  2. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  3. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  4. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems which are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
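
    The smallest extreme-value (Gumbel minimum) distribution referred to here is available directly in SciPy, whose generic fit method is itself a maximum likelihood estimator. The sketch below ignores the stress covariates and the censoring induced by competing failure modes that the full method handles.

    ```python
    from scipy.stats import gumbel_l

    import numpy as np

    rng = np.random.default_rng(6)
    # Synthetic log-lifetimes from a smallest extreme-value distribution
    log_life = gumbel_l.rvs(loc=4.0, scale=0.5, size=300, random_state=rng)

    loc_hat, scale_hat = gumbel_l.fit(log_life)    # MLE of location and scale
    print(loc_hat, scale_hat)   # near (4.0, 0.5); location nearly unbiased
    ```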

  5. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  6. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
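
    A stripped-down sketch of the idea behind such an adaptive ML procedure (not the UML toolbox itself, which is MATLAB-based and also tracks slope and lapse rate): maintain a log-likelihood over candidate thresholds, update it after each trial, and place the next trial at the running ML estimate. All parameter values below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pf(x, alpha, beta=1.0, lam=0.02):
        """Logistic psychometric function; the lapse rate lam also keeps
        probabilities away from 0 and 1, guarding the log below."""
        return lam / 2 + (1 - lam) / (1 + np.exp(-beta * (x - alpha)))

    true_alpha = 3.0                     # simulated observer's threshold
    grid = np.linspace(-5, 10, 301)      # candidate threshold values
    loglik = np.zeros_like(grid)

    x = 0.0                              # first stimulus placement
    for trial in range(200):
        resp = rng.random() < pf(x, true_alpha)      # simulate the observer
        p = pf(x, grid)
        loglik += np.log(p if resp else 1 - p)       # Bernoulli update
        x = grid[np.argmax(loglik)]      # next trial at current ML estimate

    print(f"ML threshold estimate after 200 trials: {x:.2f}")  # near 3.0
    ```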

  7. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.

  8. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  9. A 3D approximate maximum likelihood localization solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-09-23

    A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and vocalizing marine mammals to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives and support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
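
    Assuming independent Gaussian timing errors, the approximate ML solution reduces to nonlinear least squares on the TDOA residuals; a hedged sketch with a made-up hydrophone geometry:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 1484.0                                   # sound speed in water, m/s
    hydrophones = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                            [0, 0, 50], [100, 100, 25]], float)
    source = np.array([40.0, 60.0, 20.0])        # true (unknown) position

    # Time differences of arrival, referenced to hydrophone 0, plus noise.
    ranges = np.linalg.norm(hydrophones - source, axis=1)
    tdoa = (ranges[1:] - ranges[0]) / C
    tdoa += np.random.default_rng(2).normal(0, 1e-5, tdoa.size)

    def residuals(p):
        r = np.linalg.norm(hydrophones - p, axis=1)
        return (r[1:] - r[0]) / C - tdoa

    # With i.i.d. Gaussian errors, minimizing these residuals is the ML fit.
    fit = least_squares(residuals, x0=np.array([50.0, 50.0, 10.0]))
    print(fit.x)                                 # approximately [40, 60, 20]
    ```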

  10. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
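
    The contrast between the two estimators can be shown on a toy model far simpler than the dynamic structural model above: for a normal sample, ML has a closed form, while SMM matches simulated moments to observed moments. A sketch, with all values illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    data = rng.normal(2.0, 1.5, size=2000)

    # ML: closed form for the normal model.
    mu_ml, sigma_ml = data.mean(), data.std()

    # SMM: minimize the distance between observed and simulated moments,
    # holding the simulation draws fixed (common random numbers).
    sim_draws = rng.standard_normal(20000)
    obs_moments = np.array([data.mean(), data.var()])

    def smm_loss(theta):
        mu, sigma = theta
        sim = mu + sigma * sim_draws
        sim_moments = np.array([sim.mean(), sim.var()])
        return np.sum((obs_moments - sim_moments) ** 2)

    res = minimize(smm_loss, x0=[0.0, 1.0], method="Nelder-Mead")
    print(mu_ml, sigma_ml, res.x)       # both close to (2.0, 1.5)
    ```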

  11. Search for Point Sources of Ultra-High-Energy Cosmic Rays above 4.0 × 10^19 eV Using a Maximum Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2005-04-01

    We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.

  12. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  13. Using the β-binomial distribution to characterize forest health

    Treesearch

    S.J. Zarnoch; R.L. Anderson; R.M. Sheffield

    1995-01-01

    The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
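
    A minimal sketch of beta-binomial MLE of the kind described, with made-up plot counts standing in for the dogwood anthracnose data (the log-likelihood uses the standard beta-function form of the pmf):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import betaln, gammaln

    # k_i infected stems out of n_i examined on each plot (illustrative).
    k = np.array([0, 2, 5, 1, 9, 3, 0, 7])
    n = np.array([10, 12, 15, 10, 14, 11, 9, 16])

    def log_comb(n, k):
        return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

    def neg_loglik(params):
        a, b = np.exp(params)            # exp transform enforces a, b > 0
        return -np.sum(log_comb(n, k) + betaln(k + a, n - k + b) - betaln(a, b))

    res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    a_hat, b_hat = np.exp(res.x)
    print(f"alpha = {a_hat:.2f}, beta = {b_hat:.2f}")
    ```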

  14. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  15. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  16. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  17. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  18. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  19. People, Parks and Rainforests.

    ERIC Educational Resources Information Center

    Singer, Judith Y.

    1992-01-01

    The MLE Learning Center, a publicly funded day care center and after-school program in Brooklyn, New York, helps children develop awareness of a global community by using local resources to teach the children about the rainforest. (LB)

  20. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    PubMed

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
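
    As a hedged illustration of the underlying univariate CBM likelihood described above (CORCBM itself is the bivariate R implementation fitted to paired ratings), one can fit μ and α to simulated continuous ratings:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Univariate contaminated-binormal sketch: nondiseased ratings ~ N(0,1);
    # diseased ratings ~ alpha*N(mu,1) + (1-alpha)*N(0,1), where alpha is the
    # fraction of diseased cases in which the disease was visible.
    rng = np.random.default_rng(4)
    nondis = rng.standard_normal(200)
    visible = rng.random(200) < 0.7          # true alpha = 0.7
    dis = np.where(visible, rng.normal(2.0, 1.0, 200), rng.standard_normal(200))

    def neg_loglik(params):
        mu = params[0]
        alpha = 1 / (1 + np.exp(-params[1]))  # logistic keeps alpha in (0,1)
        lik_dis = alpha * norm.pdf(dis, mu, 1) + (1 - alpha) * norm.pdf(dis)
        return -(np.sum(np.log(lik_dis)) + np.sum(norm.logpdf(nondis)))

    res = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
    print(res.x[0], 1 / (1 + np.exp(-res.x[1])))   # near 2.0 and 0.7
    ```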

  1. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.

  2. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  3. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  4. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  5. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  6. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  7. Master teachers' responses to twenty literacy and science/mathematics practices in deaf education.

    PubMed

    Easterbrooks, Susan R; Stephenson, Brenda; Mertens, Donna

    2006-01-01

    Under a grant to improve outcomes for students who are deaf or hard of hearing awarded to the Association of College Educators--Deaf/Hard of Hearing, a team identified content that all teachers of students who are deaf and hard of hearing must understand and be able to teach. Also identified were 20 practices associated with content standards (10 each, literacy and science/mathematics). Thirty-seven master teachers identified by grant agents rated the practices on a Likert-type scale indicating the maximum benefit of each practice and maximum likelihood that they would use the practice, yielding a likelihood-impact analysis. The teachers showed strong agreement on the benefits and likelihood of use of the rated practices. Concerns about implementation of many of the practices related to time constraints and mixed-ability classrooms were themes of the reviews. Actions for teacher preparation programs were recommended.

  8. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  9. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282

  10. Multilocus sequence typing and pulsed-field gel electrophoresis analysis of Oenococcus oeni from different wine-producing regions of China.

    PubMed

    Wang, Tao; Li, Hua; Wang, Hua; Su, Jing

    2015-04-16

    The present study established a typing method with NotI-based pulsed-field gel electrophoresis (PFGE) and stress response gene schemed multilocus sequence typing (MLST) for 55 Oenococcus oeni strains isolated from six individual regions in China and two model strains PSU-1 (CP000411) and ATCC BAA-1163 (AAUV00000000). Seven stress response genes, cfa, clpL, clpP, ctsR, mleA, mleP and omrA, were selected for MLST testing, and positive selective pressure was detected for these genes. Furthermore, both methods separated the strains into two clusters. The PFGE clusters are correlated with the region, whereas the sequence types (STs) formed by the MLST confirm the two clusters identified by PFGE. In addition, the population structure was a mixture of evolutionary pathways, and the strains exhibited both clonal and panmictic characteristics. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. The Fastrack Suborbital Platform for Microgravity Applications

    NASA Technical Reports Server (NTRS)

    Levine, H. G.; Ball, J. E.; Shultz, D.; Odyssey, A.; Wells, H. W.; Soler, R. R.; Albino, S.; Meshberger, R. J.; Murdoch, T.

    2009-01-01

    The FASTRACK suborbital experiment platform has been developed to provide a capability for utilizing 2.5-5 minute microgravity flight opportunities anticipated from the commercial suborbital fleet (currently in development) for science investigations, technology development and hardware testing. It also provides "express rack" functionality to deliver payloads to ISS. FASTRACK fits within a 24" x 24" x 36" (61 cm x 61 cm x 91.4 cm) envelope and is capable of supporting either two single Middeck Locker Equivalents (MLE) or one double MLE configuration. Its overall mass is 300 lbs (136 kg), of which 160 lbs (72 kg) is reserved for experiments. FASTRACK operates using 28 VDC power or batteries. A support drawer located at the bottom of the structure contains all ancillary electrical equipment (including batteries, a conditioned power system and a data collection system) as well as a front panel that contains all switches (including remote cut-off), breakers and warning LEDs.

  12. Conceptualization of an R&D Based Learning-to-Innovate Model for Science Education

    NASA Astrophysics Data System (ADS)

    Lai, Oiki Sylvia

    The purpose of this research was to conceptualize an R & D based learning-to-innovate (LTI) model. The problem to be addressed was the lack of a theoretical LTI model which would inform science pedagogy. The absorptive capacity (ACAP) lens was adopted to untangle the R & D LTI phenomenon into four learning processes: problem-solving via knowledge acquisition, incremental improvement via knowledge participation, scientific discovery via knowledge creation, and product design via knowledge productivity. The four knowledge factors were the latent factors, and each factor had seven manifest elements as measured variables. The key objectives of the non-experimental quantitative survey were to measure the relative importance of the identified elements and to explore the underlying structure of the variables. A questionnaire was prepared and administered to more than 155 R & D professionals from four sectors: business, academic, government, and nonprofit. The results showed that every identified element was important to the R & D professionals in terms of improving the related type of innovation. The most important elements were highlighted to serve as building blocks for elaboration. In search of patterns in the data matrix, exploratory factor analysis (EFA) was performed. Principal component analysis was the first phase of EFA, used to extract factors, while maximum likelihood estimation (MLE) was used to estimate the model. EFA yielded the finding of two aspects in each kind of knowledge. Logical names were assigned to represent the nature of the subsets: problem and knowledge under knowledge acquisition, planning and participation under knowledge participation, exploration and discovery under knowledge creation, and construction and invention under knowledge productivity. These two constructs, within each kind of knowledge, added structure to the vague R & D based LTI model. The research questions and hypotheses were addressed using correlation analysis. The alternative hypotheses that there were positive relationships between knowledge factors and their corresponding types of innovation were accepted. In-depth study of each process is recommended in both research and application. Experimental tests are needed in order to ultimately present the LTI model to enhance the scientific knowledge absorptive capacity of learners and facilitate their innovation performance.

  13. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    NASA Astrophysics Data System (ADS)

    zhang, L.

    2011-12-01

    Copula theory has become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g. the Gumbel-Hougaard copula, Cook-Johnson copula and Frank copula, and the meta-elliptical copulas, e.g. the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals, in which the values are assumed to be independent, identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered i.i.d. random variables due to the periodicity present in the data structure. The stationarity assumption is also under question due to climate change and land use and land cover (LULC) change in the past years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. Likewise, for the study of the dependence structure of hydrological time series, the assumption of a common type of univariate distribution also needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through a nonstationary time series analysis approach. The dependence structure of the multivariate monthly hydrological time series will be studied through copula theory. For parameter estimation, maximum likelihood estimation (MLE) will be applied. To illustrate the method, the univariate time series model and the dependence structure will be determined and tested using the monthly discharge time series of the Cuyahoga River Basin.
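
    A sketch of the copula-fitting step described here, reduced to a bivariate Gaussian copula whose correlation is estimated by maximum likelihood from rank-based pseudo-observations; the two synthetic series below are stand-ins for deseasonalized monthly discharges:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    x = rng.gamma(2.0, 1.0, 400)                # synthetic stand-ins for two
    y = 0.6 * x + rng.gamma(2.0, 1.0, 400)      # dependent monthly series

    def pseudo_obs(v):
        # ranks scaled to (0, 1): (rank + 1) / (n + 1)
        return (np.argsort(np.argsort(v)) + 1.0) / (len(v) + 1.0)

    u1, u2 = norm.ppf(pseudo_obs(x)), norm.ppf(pseudo_obs(y))

    def neg_loglik(rho):
        # negative log-density of the bivariate Gaussian copula at (u1, u2)
        return -np.sum(-0.5 * np.log(1 - rho**2)
                       - (rho**2 * (u1**2 + u2**2) - 2 * rho * u1 * u2)
                       / (2 * (1 - rho**2)))

    res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
    print(f"copula correlation estimate: {res.x:.2f}")
    ```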

  14. Does the Integration of Haptic and Visual Cues Reduce the Effect of a Biased Visual Reference Frame on the Subjective Head Orientation?

    PubMed Central

    Gueguen, Marc; Vuillerme, Nicolas; Isableu, Brice

    2012-01-01

    Background: The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre-of-mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modify reliance on either the visual or egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect. Methodology/Principal Findings: Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FOR were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas no haptic settings alteration was observed, whether due to the egocentric FOR alteration or the tilted visual frame. These results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, response modality enrichment appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or Unweighted Mean (UWM) rules, seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule. This was observed particularly in the visually dependent subjects. Conclusions: Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks. PMID:22509295
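
    The MLE and UWM rules compared above have simple closed forms; a worked sketch with hypothetical single-cue noise levels shows why MLE weighting reduces the variance of the combined estimate while UWM does not:

    ```python
    import numpy as np

    # MLE cue combination weights each cue by its inverse variance; the
    # unweighted mean (UWM) averages the cues equally. Noise levels are
    # illustrative, not taken from the study.
    sigma_v, sigma_h = 2.0, 4.0              # visual and haptic SDs (deg)
    rng = np.random.default_rng(6)
    vis = rng.normal(0.0, sigma_v, 10000)    # single-cue estimates of a
    hap = rng.normal(0.0, sigma_h, 10000)    # true head orientation of 0

    w_v = sigma_h**2 / (sigma_v**2 + sigma_h**2)   # MLE weight on vision
    mle = w_v * vis + (1 - w_v) * hap
    uwm = 0.5 * (vis + hap)

    theory = np.sqrt(sigma_v**2 * sigma_h**2 / (sigma_v**2 + sigma_h**2))
    print(f"MLE SD: {mle.std():.2f} (theory {theory:.2f})")   # about 1.79
    print(f"UWM SD: {uwm.std():.2f}")                         # about 2.24
    ```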

  15. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small and larger scale components of the increment fields' probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has never before been conducted with atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This novel approach can provide a method of characterizing increment fields with the sole use of only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including the wind energy and optical wave propagation fields.
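
    SciPy exposes the NIG family directly, so the distribution-fitting step can be sketched in a few lines; the synthetic heavy-tailed sample below stands in for a measured increment field:

    ```python
    import numpy as np
    from scipy import stats

    # Fit the four parameters of a normal inverse Gaussian (NIG) model by
    # maximum likelihood; parameter values are illustrative only.
    rng = np.random.default_rng(7)
    increments = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=0.0, scale=1.0,
                                        size=5000, random_state=rng)

    a, b, loc, scale = stats.norminvgauss.fit(increments)
    print(f"a={a:.2f}, b={b:.2f}, loc={loc:.2f}, scale={scale:.2f}")
    ```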

  16. A tree island approach to inferring phylogeny in the ant subfamily Formicinae, with especial reference to the evolution of weaving.

    PubMed

    Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H

    2003-11-01

    The ant subfamily Formicinae is a large assemblage (2458 species (J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk and in which the metapleural gland-the ancestrally defining ant character-has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.

  17. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population, and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information, in addition to the data themselves, must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.

  18. DSN telemetry system performance using a maximum likelihood convolutional decoder

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Kemp, R. P.

    1977-01-01

    Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7, and special test software. The test results confirm the superiority of the rate 1/3 code over the rate 1/2 code. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.

  19. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  20. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the (72,36;15) box code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  1. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  2. Minimum distance classification in remote sensing

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1972-01-01

    The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.

  3. Maximum likelihood conjoint measurement of lightness and chroma.

    PubMed

    Rogers, Marie; Knoblauch, Kenneth; Franklin, Anna

    2016-03-01

    Color varies along dimensions of lightness, hue, and chroma. We used maximum likelihood conjoint measurement to investigate how lightness and chroma influence color judgments. Observers judged lightness and chroma of stimuli that varied in both dimensions in a paired-comparison task. We modeled how changes in one dimension influenced judgment of the other. An additive model best fit the data in all conditions except for judgment of red chroma where there was a small but significant interaction. Lightness negatively contributed to perception of chroma for red, blue, and green hues but not for yellow. The method permits quantification of lightness and chroma contributions to color appearance.

  4. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives: Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods: Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results: Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions: The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  5. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
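
    The core of the weighted framework is that each observation's log-likelihood contribution is weighted by the inverse of its selection probability; a hedged sketch with hypothetical two-stratum food-sample data:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical lognormal concentrations from two sampling strata, one
    # oversampled and one undersampled; weights rebalance the strata.
    rng = np.random.default_rng(8)
    hi = rng.lognormal(2.0, 1.0, 300)      # high-risk lots, oversampled
    lo = rng.lognormal(0.5, 1.0, 100)      # low-risk lots, undersampled
    y = np.log(np.concatenate([hi, lo]))
    pi = np.concatenate([np.full(300, 0.30), np.full(100, 0.02)])  # P(select)
    w = 1.0 / pi                           # design weights

    def neg_wll(theta):
        mu, log_sigma = theta
        return -np.sum(w * norm.logpdf(y, mu, np.exp(log_sigma)))

    res = minimize(neg_wll, x0=[0.0, 0.0])
    print(res.x[0], np.exp(res.x[1]))      # population-level mu and sigma
    ```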

  6. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets > or =4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak

  7. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  8. Determining crop residue type and class using satellite acquired data. M.S. Thesis Progress Report, Jun. 1990

    NASA Technical Reports Server (NTRS)

    Zhuang, Xin

    1990-01-01

    LANDSAT Thematic Mapper (TM) data for March 23, 1987, with accompanying ground truth data for the study area in Miami County, IN, were used to determine crop residue type and class. Principal components and spectral ratioing transformations were applied to the LANDSAT TM data. One geographic information system (GIS) layer of land ownership was added to each original image as the eighth band of data in an attempt to improve classification. Maximum likelihood, minimum distance, and neural network classifiers were used to classify the original, transformed, and GIS-enhanced remotely sensed data. Crop residues could be separated from one another and from bare soil and other biomass. Two types of crop residue and four classes were identified from each LANDSAT TM image. The maximum likelihood classifier performed the best classification for each original image without need of any transformation. The neural network classifier was able to improve the classification by incorporating a GIS layer of land ownership as an eighth band of data. The maximum likelihood classifier was unable to consider this eighth band of data, and thus its results could not be improved by its consideration.
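
    A minimal sketch of the Gaussian maximum likelihood classifier used in such studies: fit a mean and covariance per class, then assign each pixel to the class with the highest log-likelihood. The seven "bands" and training arrays below are synthetic placeholders.

    ```python
    import numpy as np

    def fit(classes):
        """classes: dict label -> (n_pixels, n_bands) training array."""
        model = {}
        for label, X in classes.items():
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            model[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return model

    def classify(model, pixels):
        labels, scores = list(model), []
        for mu, cov_inv, logdet in model.values():
            d = pixels - mu
            # per-pixel Gaussian log-likelihood up to a shared constant
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, cov_inv, d)))
        return np.array(labels)[np.argmax(scores, axis=0)]

    rng = np.random.default_rng(9)
    train = {"residue": rng.normal(5, 1, (200, 7)),    # 7 synthetic TM bands
             "bare_soil": rng.normal(8, 1, (200, 7))}
    model = fit(train)
    print(classify(model, rng.normal(5, 1, (5, 7))))   # mostly "residue"
    ```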

  9. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
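
    The two-stage logic can be sketched directly: stage 1 obtains saturated ML estimates of the item-level mean vector and covariance matrix from the incomplete data (below via a bare-bones EM algorithm for the multivariate normal), and stage 2 analyzes the composites implied by those estimates, such as the variance a'Σa of a unit-weighted scale score. This is a toy illustration of the general normal-theory two-stage idea under MCAR missingness, not the authors' estimator or its corrected standard errors:

        import numpy as np

        def em_mvnorm(X, n_iter=200):
            """Saturated ML estimates (mu, Sigma) of a multivariate normal
            from data with missing entries (np.nan), via the EM algorithm."""
            X = np.asarray(X, dtype=float)
            n, p = X.shape
            mu = np.nanmean(X, axis=0)
            sigma = np.diag(np.nanvar(X, axis=0))
            for _ in range(n_iter):
                M, S = np.zeros(p), np.zeros((p, p))
                for i in range(n):
                    obs = ~np.isnan(X[i])
                    mis = ~obs
                    x_hat = np.where(obs, X[i], 0.0)
                    C = np.zeros((p, p))
                    if mis.all():                  # nothing observed in this row
                        x_hat, C = mu.copy(), sigma.copy()
                    elif mis.any():                # E-step: condition on observed items
                        B = sigma[np.ix_(mis, obs)] @ np.linalg.inv(sigma[np.ix_(obs, obs)])
                        x_hat[mis] = mu[mis] + B @ (X[i, obs] - mu[obs])
                        C[np.ix_(mis, mis)] = sigma[np.ix_(mis, mis)] - B @ sigma[np.ix_(obs, mis)]
                    M += x_hat
                    S += np.outer(x_hat, x_hat) + C
                mu = M / n                          # M-step
                sigma = S / n - np.outer(mu, mu)
            return mu, sigma

        # stage 2: the composite (scale-score) variance implied by stage 1
        rng = np.random.default_rng(1)
        cov = np.array([[1.0, 0.5, 0.5], [0.5, 1.0, 0.5], [0.5, 0.5, 1.0]])
        X = rng.multivariate_normal(np.zeros(3), cov, size=500)
        X[rng.random(X.shape) < 0.2] = np.nan         # 20% item-level missingness
        mu, sigma = em_mvnorm(X)
        a = np.ones(3)                                # unit-weighted scale score
        print("scale-score variance:", a @ sigma @ a) # population value: 6.0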

  10. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
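
    The distinction between the two decoders can be reproduced exactly on a small instance, with no annealer required: maximum likelihood decoding returns the sign pattern of the minimum-energy state, while finite-temperature (maximum entropy) decoding thresholds the Boltzmann-averaged magnetisation of each bit. A brute-force sketch on a random Ising model in a field (sizes and couplings are made up for illustration):

        import itertools
        import numpy as np

        rng = np.random.default_rng(2)
        N = 10
        J = np.triu(rng.normal(0, 1, (N, N)), 1)   # random pairwise couplings
        h = rng.normal(0, 1, N)                     # random local fields

        def energy(s):
            return -(s @ J @ s + h @ s)

        states = np.array(list(itertools.product([-1, 1], repeat=N)))
        E = np.array([energy(s) for s in states])

        # ML decoding: sign pattern of the ground state
        ml_bits = states[E.argmin()]

        # maximum-entropy decoding at inverse temperature beta: threshold
        # the Boltzmann-averaged magnetisation <s_i> of each bit
        beta = 1.0
        w = np.exp(-beta * (E - E.min()))           # shifted for numerical stability
        m = (w[:, None] * states).sum(axis=0) / w.sum()
        me_bits = np.where(m >= 0, 1, -1)

        print("ML    :", ml_bits)
        print("MaxEnt:", me_bits)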

  11. Genotoxic Evaluation of Mikania laevigata Extract on DNA Damage Caused by Acute Coal Dust Exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, T.P.; Heuser, V.D.; Tavares, P.

    2009-06-15

    We report data on the possible antigenotoxic activity of Mikania laevigata extract (MLE) after acute intratracheal instillation of coal dust, using the comet assay in peripheral blood, bone marrow, and liver cells and the micronucleus test in peripheral blood of Wistar rats. The animals were pretreated for 2 weeks with saline solution (groups 1 and 2) or MLE (100 mg/kg) (groups 3 and 4). On day 15, the animals were anesthetized with ketamine (80 mg/kg) and xylazine (20 mg/kg), and gross mineral coal dust (3 mg/0.3 mL saline) (groups 2 and 4) or saline solution (0.3 mL) (groups 1 and 3) was administered directly into the lung by intratracheal administration. Fifteen days after coal dust or saline instillation, the animals were sacrificed, and the femur, liver, and peripheral blood were removed. The results showed a general increase in DNA damage values at 8 hours for all treatment groups, probably related to surgical procedures that had stressed the animals. Also, liver cells from rats treated with coal dust, whether or not pretreated with MLE, showed statistically higher comet assay values compared to the control group at 14 days after exposure. These results could be expected because the liver metabolizes a variety of organic compounds to more polar by-products. On the other hand, the micronucleus assay results did not show significant differences among groups. Therefore, our data do not support an antimutagenic activity of M. laevigata as a modulator of DNA damage after acute coal dust instillation.

  12. Comparing inversion techniques for constraining CO2 fluxes in the Brazilian Amazon Basin with aircraft observations

    NASA Astrophysics Data System (ADS)

    Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.

    2012-12-01

    The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry-to-wet and wet-to-dry transition seasons in November 2008 and May 2009, respectively, and resulted in ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With these data we attempt to estimate a carbon budget for the BAB, to determine whether regional aircraft experiments can provide strong constraints for such a budget, and to compare inversion frameworks for optimizing flux estimates. We use a Lagrangian particle dispersion model (LPDM) to integrate satellite, aircraft, and surface data with mesoscale meteorological fields, linking bottom-up and top-down models to provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model, driven by meteorological fields from BRAMS, ECMWF, and WRF, is coupled to a biosphere model, the Vegetation Photosynthesis and Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere-Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB by using time-averaged drivers (shortwave radiation and temperature) from high-temporal-resolution runs of BRAMS, ECMWF, and WRF and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using land-use scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements. The aircraft mixing ratios are applied as a top-down constraint in Maximum Likelihood Estimation (MLE) and Bayesian inversion frameworks that solve for the parameters controlling the flux. Posterior parameter estimates are used to estimate the carbon budget of the BAB. Preliminary results show that the STILT-VPRM model simulates the net emission of CO2 during both transition periods reasonably well. There is significant enhancement from biomass burning in the November 2008 profiles and some from fossil fuel combustion during the May 2009 flights. ΔCO/ΔCO2 emission ratios are used in combination with continuous observations of CO to remove the CO2 contributions of biomass burning and fossil fuel combustion from the observed CO2 measurements, resulting in better agreement between observed and modeled aircraft data. Comparing column calculations for each of the vertical profiles shows that our model represents the variability in the diurnal cycle. The high-altitude CO2 values from above 3500 m are similar to the lateral boundary conditions from CarbonTracker 2010 and GEOS-Chem, indicating little influence from surface fluxes at these levels. The MLE inversion provides scaling factors for GEE and R for each of the 8 vegetation types, and a Bayesian inversion is being conducted. Our initial inversion results suggest the BAB represents a small net source of CO2 during both of the BARCA intensives.
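
    In a linear setting, the inversion step described above reduces to a standard Gaussian estimation problem: observed mixing-ratio enhancements y are modeled as K s, where the columns of the Jacobian K hold the transport-model response to unit scaling factors s (here, GEE and R scalings per vegetation type). The MLE is then generalized least squares, and the Bayesian posterior adds a prior that regularizes the estimate. A schematic sketch with synthetic numbers, not the BARCA data or STILT-VPRM Jacobians:

        import numpy as np

        rng = np.random.default_rng(3)
        n_obs, n_par = 120, 16          # e.g. GEE and R scalings for 8 vegetation types
        K = rng.normal(0, 1, (n_obs, n_par))          # synthetic transport Jacobian
        s_true = 1 + 0.3 * rng.normal(size=n_par)     # "true" scaling factors
        R = 0.5**2 * np.eye(n_obs)                    # observation-error covariance
        y = K @ s_true + rng.multivariate_normal(np.zeros(n_obs), R)

        Rinv = np.linalg.inv(R)

        # MLE: generalized least squares, s = (K'R^-1K)^-1 K'R^-1 y
        s_mle = np.linalg.solve(K.T @ Rinv @ K, K.T @ Rinv @ y)

        # Bayesian: a Gaussian prior N(s_prior, B) shrinks the estimate
        s_prior = np.ones(n_par)                      # a priori scaling of 1.0
        B = 0.5**2 * np.eye(n_par)
        Binv = np.linalg.inv(B)
        A = K.T @ Rinv @ K + Binv                     # posterior precision
        s_post = np.linalg.solve(A, K.T @ Rinv @ y + Binv @ s_prior)

        print("MLE error  :", np.linalg.norm(s_mle - s_true))
        print("Bayes error:", np.linalg.norm(s_post - s_true))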

  13. Phylogenetically marking the limits of the genus Fusarium for post-Article 59 usage

    USDA-ARS?s Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most important and systematically challenging groups of mycotoxigenic, plant pathogenic, and human pathogenic fungi. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial nucleotide sequences of genes encod...

  14. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
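
    The algebra is easiest to see in a fully informative design such as a backcross (the paper works out the analogous expressions for the F2 sub-populations): with n individuals, k recombinants, and recombination-fraction MLE p_hat = k/n, the maximum lod score is n[p_hat log10(2 p_hat) + (1 - p_hat) log10(2(1 - p_hat))], so requiring a lod threshold Z immediately yields a lower bound on n. A sketch under this simplified design:

        import math

        def min_sample_size(p_hat: float, z_threshold: float = 3.0) -> int:
            """Smallest n such that the maximum lod score reaches z_threshold
            in a backcross, given an anticipated recombination-fraction MLE
            p_hat = k/n. Per-individual lod contribution:
                c = p*log10(2p) + (1-p)*log10(2(1-p)),  so  n >= Z / c."""
            if not 0.0 < p_hat < 0.5:
                raise ValueError("recombination fraction must lie in (0, 0.5)")
            c = (p_hat * math.log10(2 * p_hat)
                 + (1 - p_hat) * math.log10(2 * (1 - p_hat)))
            return math.ceil(z_threshold / c)

        # a tight marker (p_hat = 0.1) needs far fewer individuals than a
        # loose one (p_hat = 0.3) to reach the usual threshold Z = 3
        print(min_sample_size(0.10), min_sample_size(0.30))   # 19, 84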

  15. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull-distribution parameters; (2) data for contour plots of relative likelihood for the two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
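
    The central computation, the ML fit itself under type-I censoring, fits in a few lines: failed units contribute the Weibull log-density and suspended units contribute the log of the survival function. A sketch in Python rather than the program's Fortran, with illustrative data:

        import numpy as np
        from scipy.optimize import minimize

        def weibull_mle(times, failed):
            """ML estimates of Weibull shape k and scale lam with type-I
            censoring: failures use log pdf, suspensions use log survival."""
            t = np.asarray(times, float)
            d = np.asarray(failed, bool)

            def nll(params):
                log_k, log_lam = params           # optimize on log scale (positivity)
                k, lam = np.exp(log_k), np.exp(log_lam)
                z = t / lam
                log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k
                log_surv = -z**k
                return -(log_pdf[d].sum() + log_surv[~d].sum())

            res = minimize(nll, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
            return np.exp(res.x)

        # fatigue-like toy data: 20 failures plus 5 tests suspended at t = 3.0
        rng = np.random.default_rng(4)
        fail_times = rng.weibull(2.0, 20) * 2.5   # shape 2.0, scale 2.5
        times = np.concatenate([fail_times, np.full(5, 3.0)])
        failed = np.concatenate([np.ones(20, bool), np.zeros(5, bool)])
        print(weibull_mle(times, failed))          # roughly [2.0, 2.5]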

  16. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
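
    The key property is that superposing the per-note processes yields another Poisson process whose intensity is the sum of the per-note intensities, so the likelihood of detected peaks f_1..f_m under a hypothesized set of fundamentals has the standard point-process form log L = sum_i log lambda(f_i) - integral of lambda. A toy sketch with Gaussian bumps at the harmonics; the intensity shape and all parameter values are illustrative assumptions, not the authors' model:

        import numpy as np

        def chord_intensity(f, fundamentals, n_harm=8, width=2.0, strength=5.0):
            """Summed Poisson intensity over notes and harmonics: a Gaussian
            bump at each harmonic of each hypothesized fundamental (Hz)."""
            lam = np.full_like(f, 1e-3)            # small baseline for spurious peaks
            for f0 in fundamentals:
                for h in range(1, n_harm + 1):
                    lam += strength * np.exp(-0.5 * ((f - h * f0) / width) ** 2)
            return lam

        def log_likelihood(peaks, fundamentals, f_grid):
            """Poisson point-process log-likelihood of detected peak
            frequencies: sum of log-intensities minus integrated intensity."""
            lam_peaks = chord_intensity(np.asarray(peaks, float), fundamentals)
            lam_grid = chord_intensity(f_grid, fundamentals)
            return np.log(lam_peaks).sum() - lam_grid.sum() * (f_grid[1] - f_grid[0])

        # peaks consistent with a C4+E4 dyad (262 Hz and 330 Hz fundamentals)
        peaks = [262, 330, 524, 660, 786, 990]
        grid = np.linspace(20, 3000, 6000)
        for hypo in ([262], [262, 330], [262, 294]):
            print(hypo, log_likelihood(peaks, hypo, grid))   # [262, 330] wins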

  17. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
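
    The coupling of segmentation and classification can be sketched as a likelihood comparison: score a candidate region as one chromosome (all pixels under a single best class) versus two chromosomes (each sub-region under its own best class), and keep the hypothesis with the higher maximised log-likelihood. A simplified Gaussian sketch with made-up classes, not the authors' full system; note that a real test adds a model-complexity penalty and the conventional shape criteria, since splitting can only increase the maximised likelihood:

        import numpy as np

        def best_class_loglik(pixels, class_params):
            """Max over classes of the summed Gaussian log-likelihood of a
            region's multispectral pixels (covariances assumed known)."""
            best = -np.inf
            for mu, cov_inv, logdet in class_params:
                d = pixels - mu
                ll = -0.5 * (len(pixels) * logdet
                             + np.einsum('ij,jk,ik->', d, cov_inv, d))
                best = max(best, ll)
            return best

        def prefer_split(region_a, region_b, class_params):
            """Does classifying the two sub-regions separately beat
            classifying their union as a single chromosome?"""
            joint = best_class_loglik(np.vstack([region_a, region_b]), class_params)
            split = best_class_loglik(region_a, class_params) + \
                    best_class_loglik(region_b, class_params)
            return split > joint

        # two 5-channel (M-FISH-like) classes with identity covariance
        mus = [np.zeros(5), np.full(5, 3.0)]
        class_params = [(mu, np.eye(5), 0.0) for mu in mus]
        rng = np.random.default_rng(5)
        a = rng.normal(mus[0], 1, (50, 5))   # pixels from class 0
        b = rng.normal(mus[1], 1, (50, 5))   # pixels from class 1 (touching blob)
        print(prefer_split(a, b, class_params))   # -> True: split wins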

  18. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum- likelihood method gets entangled in

  19. Lateral stability and control derivatives of a jet fighter airplane extracted from flight test data by utilizing maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1972-01-01

    A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.
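
    In the standard output-error formulation, maximising the likelihood under Gaussian measurement noise amounts to choosing the derivatives that minimise the weighted mismatch between the measured response and the simulated model output. A minimal single-axis sketch (pure roll dynamics, hypothetical numbers, not the fighter data of this report):

        import numpy as np
        from scipy.optimize import least_squares

        dt, n = 0.02, 400
        t = np.arange(n) * dt
        aileron = np.where((t > 1) & (t < 1.5), 5.0, 0.0)   # pulse input

        def simulate(params, u):
            """Roll-rate response of  p_dot = Lp*p + Lda*da  (Euler integration)."""
            Lp, Lda = params
            p = np.zeros(n)
            for i in range(n - 1):
                p[i + 1] = p[i] + dt * (Lp * p[i] + Lda * u[i])
            return p

        true = (-2.0, 8.0)                   # "true" roll damping and aileron power
        rng = np.random.default_rng(6)
        measured = simulate(true, aileron) + rng.normal(0, 0.05, n)  # noisy "flight data"

        # ML with fixed-variance Gaussian noise = least squares on the residuals
        fit = least_squares(lambda th: simulate(th, aileron) - measured, x0=[-1.0, 1.0])
        print(fit.x)                          # roughly [-2.0, 8.0]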

  20. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even with considerable reductions in sampling rate and/or record length. Small-amplitude pulse maneuvers showed greater degradation of the derivative estimates than large-amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
