Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including resorting either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
A conditional probability analysis (CPA) approach has been developed for identifying biological thresholds of impact for use in the development of geographic-specific water quality criteria for protection of aquatic life. This approach expresses the threshold as the likelihood ...
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
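The step-size condition can be seen in a minimal sketch (a toy normal-mean problem, not the paper's mixture setting; all names below are illustrative): scaling the score by the inverse Fisher information gives an update that contracts toward the MLE exactly when the step size lies strictly between 0 and 2.

```python
import numpy as np

def scaled_gradient_ascent(x, mu0, step, iters=200):
    # MLE of a normal mean: the score is n * (xbar - mu) / sigma^2.
    # Scaling it by the inverse Fisher information (sigma^2 / n) yields the
    # deflected-gradient update mu <- mu + step * (xbar - mu), which
    # contracts toward the MLE xbar exactly when |1 - step| < 1,
    # i.e. when 0 < step < 2.
    xbar = x.mean()
    mu = mu0
    for _ in range(iters):
        mu = mu + step * (xbar - mu)
    return mu

rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=500)
# Any step size in (0, 2) reaches the MLE; a step of 2.1 would oscillate and diverge.
estimates = [scaled_gradient_ascent(x, 0.0, s) for s in (0.5, 1.0, 1.9)]
```

Each iteration multiplies the error by (1 - step), which is why every step size in (0, 2) converges and step sizes above 2 do not.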
Communicating uncertainty in circulation aspects of climate change
NASA Astrophysics Data System (ADS)
Shepherd, Ted
2017-04-01
The usual way of representing uncertainty in climate change is to define a likelihood range of possible futures, conditioned on a particular pathway of greenhouse gas concentrations (RCPs). Typically these likelihood ranges are derived from multi-model ensembles. However, there is no obvious basis for treating such ensembles as probability distributions. Moreover, for aspects of climate related to atmospheric circulation, such an approach generally leads to large uncertainty and low confidence in projections. Yet this does not mean that the associated climate risks are small. We therefore need to develop suitable ways of communicating climate risk whilst acknowledging the uncertainties. This talk will outline an approach based on conditioning the purely thermodynamic aspects of climate change, concerning which there is comparatively high confidence, on circulation-related aspects, and treating the latter through non-probabilistic storylines.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.
2013-01-01
An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
2010-03-03
obtainable, while for the free-decay problem we simply have to include the initial conditions as random variables to be predicted. A different approach that... important and useful properties of MLEs is that, under regularity conditions, they are asymptotically unbiased and possess the minimum possible... becomes p_L(z|θ, σ_G^2, M_i) (i.e., the likelihood is conditional on the specified model). However, in this work we will only consider a single model and drop the
Leaché, Adam D.; Banbury, Barbara L.; Felsenstein, Joseph; Nieto-Montes de Oca, Adrián; Stamatakis, Alexandros
2015-01-01
Single nucleotide polymorphisms (SNPs) are useful markers for phylogenetic studies owing in part to their ubiquity throughout the genome and ease of collection. Restriction site associated DNA sequencing (RADseq) methods are becoming increasingly popular for SNP data collection, but an assessment of the best practices for using these data in phylogenetics is lacking. We use computer simulations, and new double digest RADseq (ddRADseq) data for the lizard family Phrynosomatidae, to investigate the accuracy of RAD loci for phylogenetic inference. We compare the two primary ways RAD loci are used during phylogenetic analysis, including the analysis of full sequences (i.e., SNPs together with invariant sites), or the analysis of SNPs on their own after excluding invariant sites. We find that using full sequences rather than just SNPs is preferable from the perspectives of branch length and topological accuracy, but not of computational time. We introduce two new acquisition bias corrections for dealing with alignments composed exclusively of SNPs, a conditional likelihood method and a reconstituted DNA approach. The conditional likelihood method conditions on the presence of variable characters only (the number of invariant sites that are unsampled but known to exist is not considered), while the reconstituted DNA approach requires the user to specify the exact number of unsampled invariant sites prior to the analysis. Under simulation, branch length biases increase with the amount of missing data for both acquisition bias correction methods, but branch length accuracy is much improved in the reconstituted DNA approach compared to the conditional likelihood approach.
Phylogenetic analyses of the empirical data using concatenation or a coalescent-based species tree approach provide strong support for many of the accepted relationships among phrynosomatid lizards, suggesting that RAD loci contain useful phylogenetic signal across a range of divergence times despite the presence of missing data. Phylogenetic analysis of RAD loci requires careful attention to model assumptions, especially if downstream analyses depend on branch lengths. PMID:26227865
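The effect of an acquisition-bias correction of the conditional-likelihood type can be illustrated with a deliberately simplified one-parameter analogue (binomial site counts rather than a phylogenetic likelihood; the estimator and all names below are illustrative, not from the paper): keeping only variable sites biases the naive MLE, and conditioning each retained site on being variable removes the bias.

```python
import math
import numpy as np

def mle(hist, n, corrected):
    # Grid-search MLE of a per-chromosome allele frequency theta from
    # variable sites only. hist[k-1] = number of sites with k derived alleles.
    # With corrected=True, each site's binomial likelihood is conditioned on
    # the ascertainment event 0 < k < n by dividing by
    # P(variable) = 1 - theta^n - (1 - theta)^n.
    ks = np.arange(1, n)
    log_binom = np.array([math.log(math.comb(n, k)) for k in ks])
    best_theta, best_ll = None, -np.inf
    for theta in np.linspace(0.01, 0.99, 981):
        site_ll = log_binom + ks * math.log(theta) + (n - ks) * math.log(1 - theta)
        ll = float((hist * site_ll).sum())
        if corrected:
            p_var = 1.0 - theta ** n - (1.0 - theta) ** n
            ll -= hist.sum() * math.log(p_var)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Simulate sites at true theta = 0.2 and keep only the variable ones.
rng = np.random.default_rng(1)
n, theta_true = 10, 0.2
k = rng.binomial(n, theta_true, size=5000)
k = k[(k > 0) & (k < n)]                      # ascertainment: discard invariant sites
hist = np.bincount(k, minlength=n)[1:n]       # counts for k = 1..n-1

theta_naive = mle(hist, n, corrected=False)   # ignores the ascertainment, biased up
theta_corrected = mle(hist, n, corrected=True)
```

Dropping the k = 0 sites inflates the naive estimate; the conditional likelihood restores consistency, mirroring why SNP-only alignments need a correction before branch lengths can be trusted.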
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
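A minimal sketch of an FGM pairwise composite likelihood, on uniform margins with a single common dependence parameter (an assumption for illustration only; the paper models the parameter as a function of pairwise distances and works with survival margins):

```python
import numpy as np

def fgm_pair_logdensity(u, v, alpha):
    # Farlie-Gumbel-Morgenstern copula density on uniform margins:
    # c(u, v) = 1 + alpha * (1 - 2u) * (1 - 2v), with |alpha| <= 1.
    return np.log(1.0 + alpha * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))

def composite_mle(u, v, grid=np.linspace(-0.99, 0.99, 199)):
    # Pairwise composite likelihood: sum the log pair densities over the
    # observed pairs and maximize over the dependence parameter alpha.
    scores = [fgm_pair_logdensity(u, v, a).sum() for a in grid]
    return grid[int(np.argmax(scores))]

# Simulate FGM pairs at alpha = 0.8 via the conditional-inverse method:
# V | U = u has CDF v + a*(v - v^2) with a = alpha*(1 - 2u); invert the quadratic.
rng = np.random.default_rng(2)
n, alpha_true = 4000, 0.8
u = rng.uniform(size=n)
w = rng.uniform(size=n)
a = alpha_true * (1.0 - 2.0 * u)
v = np.where(np.abs(a) < 1e-9, w,
             ((1.0 + a) - np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * a))

alpha_hat = composite_mle(u, v)
```

Each pair contributes one log-density term, so the composite score is a sum over pairs rather than a full joint likelihood, which is exactly what makes the approach tractable for spatially correlated data.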
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
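The CPO/LPML computation described above needs only a sample from the posterior; a hedged sketch for a toy normal model (the "posterior draws" below are simulated stand-ins, not output of an actual MCMC run, and the function names are illustrative):

```python
import numpy as np

def cpo_lpml(y, draws, pointwise_loglik):
    # CPO_i is the harmonic mean, over posterior draws, of the pointwise
    # likelihood of observation i, computed stably in log space:
    # log CPO_i = log S - logsumexp_s(-loglik(y_i, theta_s)).
    # LPML = sum_i log CPO_i.
    log_cpo = np.empty(len(y))
    for i, yi in enumerate(y):
        neg = -pointwise_loglik(yi, draws)     # one value per posterior draw
        m = neg.max()
        log_cpo[i] = np.log(len(neg)) - (m + np.log(np.exp(neg - m).sum()))
    return log_cpo, float(log_cpo.sum())

def normal_loglik(yi, mus):
    # N(mu, 1) log-density of a single observation, vectorized over draws.
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * (yi - mus) ** 2

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=200)

# Stand-in "posterior" draws for two models: one centered near the data,
# one badly mis-specified. LPML rewards pointwise predictive fit.
draws_good = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000)
draws_bad = rng.normal(5.0, 1.0 / np.sqrt(len(y)), size=2000)

_, lpml_good = cpo_lpml(y, draws_good, normal_loglik)
_, lpml_bad = cpo_lpml(y, draws_bad, normal_loglik)
```

The site-specific log-CPO values can also be inspected individually, which is the exploratory use the abstract describes.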
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown dependence structure of the gap times. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitute a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
Public acceptance of wildland fire and fuel management: Panel responses in seven locations
Eric Toman; Bruce Shindler; Sarah McCaffrey; James Bennett
2014-01-01
Wildland fire affects both public and private resources throughout the United States. A century of fire suppression has contributed to changing ecological conditions and accumulated fuel loads. Managers have used a variety of approaches to address these conditions and reduce the likelihood of wildland fires that may result in adverse ecological impacts and threaten...
We have previously developed a statistical method to identify gene sets enriched with condition-specific genetic dependencies. The method constructs gene dependency networks from bootstrapped samples in one condition and computes the divergence between distributions of network likelihood scores from different conditions. It was shown to be capable of sensitive and specific identification of pathways with phenotype-specific dysregulation, i.e., rewiring of dependencies between genes in different conditions.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. 
The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
A Bayesian Approach to More Stable Estimates of Group-Level Effects in Contextual Studies.
Zitzmann, Steffen; Lüdtke, Oliver; Robitzsch, Alexander
2015-01-01
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood
NASA Technical Reports Server (NTRS)
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 x 30 m resolution predictions for more than 38,000 sq km of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Automatic optimism: the affective basis of judgments about the likelihood of future events.
Lench, Heather C
2009-05-01
People generally judge that the future will be consistent with their desires, but the reason for this desirability bias is unclear. This investigation examined whether affective reactions associated with future events are the mechanism through which desires influence likelihood judgments. In 4 studies, affective reactions were manipulated for initially neutral events. Compared with a neutral condition, events associated with positive reactions were judged as likely to occur, and events associated with negative reactions were judged as unlikely to occur. Desirability biases were reduced when participants could misattribute affective reactions to a source other than future events, and the relationship between affective reactions and judgments was influenced when approach and avoidance motivations were independently manipulated. Together, these findings demonstrate that positive and negative affective reactions to potential events cause the desirability bias in likelihood judgments and suggest that this effect occurs because of a tendency to approach positive possibilities and avoid negative possibilities. (c) 2009 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
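The contrast between standard Monte Carlo and GLUE can be sketched on a toy one-parameter model (the likelihood measure and the behavioural threshold below are illustrative choices; GLUE leaves both to the analyst, and nothing here reproduces the study's groundwater model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed heads": a one-parameter toy model y = a * x.
x_obs = np.linspace(1.0, 5.0, 10)
y_obs = 2.0 * x_obs + rng.normal(0.0, 0.3, size=10)

# Standard Monte Carlo: sample the uncertain parameter from its prior and
# propagate every realization to the prediction point.
a = rng.uniform(0.5, 4.0, size=5000)
pred = a * 6.0                               # model prediction at x = 6

# GLUE: score each realization against the observations with a likelihood
# measure (a goodness-of-fit score), keep only "behavioural" realizations,
# and weight their predictions by normalized likelihood.
sse = ((a[:, None] * x_obs - y_obs) ** 2).sum(axis=1)
like = np.exp(-sse / sse.min())              # one of many possible measures
keep = like >= np.quantile(like, 0.90)       # top 10% treated as behavioural
w = like[keep] / like[keep].sum()

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return float(values[order][np.searchsorted(cum, q)])

glue_lo = weighted_quantile(pred[keep], w, 0.05)
glue_hi = weighted_quantile(pred[keep], w, 0.95)
mc_lo, mc_hi = np.quantile(pred, [0.05, 0.95])
```

Because GLUE conditions on the head data through the likelihood measure, its 5-95% prediction band is a strict subset of the plain Monte Carlo band, which is the qualitative result the abstract reports for the capture zones.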
Dziak, John J.; Bray, Bethany C.; Zhang, Jieting; Zhang, Minqiang; Lanza, Stephanie T.
2016-01-01
Several approaches are available for estimating the relationship of latent class membership to distal outcomes in latent profile analysis (LPA). A three-step approach is commonly used, but has problems with estimation bias and confidence interval coverage. Proposed improvements include the correction method of Bolck, Croon, and Hagenaars (BCH; 2004), Vermunt’s (2010) maximum likelihood (ML) approach, and the inclusive three-step approach of Bray, Lanza, & Tan (2015). These methods have been studied in the related case of latent class analysis (LCA) with categorical indicators, but are less well studied for LPA with continuous indicators. We investigated the performance of these approaches in LPA with normally distributed indicators, under different conditions of distal outcome distribution, class measurement quality, relative latent class size, and strength of association between latent class and the distal outcome. The modified BCH implemented in Latent GOLD had excellent performance. The maximum likelihood and inclusive approaches were not robust to violations of distributional assumptions. These findings broadly agree with and extend the results presented by Bakk and Vermunt (2016) in the context of LCA with categorical indicators. PMID:28630602
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS
Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...
Rasch, Elizabeth K.; Huynh, Minh; Ho, Pei-Shu; Heuser, Aaron; Houtenville, Andrew; Chan, Leighton
2014-01-01
Background: Given the complexity of the adjudication process and volume of applications to Social Security Administration’s (SSA) disability programs, many individuals with serious medical conditions die while awaiting an application decision. Limitations of traditional survival methods called for a new empirical approach to identify conditions resulting in rapid mortality. Objective: To identify health conditions associated with significantly higher mortality than a key reference group among applicants for SSA disability programs. Research design: We identified mortality patterns and generated a survival surface for a reference group using conditions already designated for expedited processing. We identified conditions associated with significantly higher mortality than the reference group and prioritized them by the expected likelihood of death during the adjudication process. Subjects: Administrative records of 29 million Social Security disability applicants, who applied for benefits from 1996 to 2007, were analyzed. Measures: We computed survival spells from time of onset of disability to death, and from date of application to death. Survival data were organized by entry cohort. Results: In our sample, we observed that approximately 42,000 applicants died before a decision was made on their disability claims. We identified 24 conditions with survival profiles comparable to the reference group. Applicants with these conditions were not likely to survive adjudication. Conclusions: Our approach facilitates ongoing revision of the conditions SSA designates for expedited awards and has applicability to other programs where survival profiles are a consideration. PMID:25310524
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
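The competing-modes likelihood factorizes by failure mode. As a hedged sketch of the idea, the example below uses constant-hazard (exponential) modes rather than the smallest extreme-value distributions treated in the paper; for exponential modes the MLE of each rate has the closed form (failures of mode k) / (total time on test):

```python
import random

def competing_risks_mle(times, modes, n_modes):
    """Closed-form MLEs for exponential competing risks:
    lambda_k = (# failures of mode k) / (total time on test)."""
    total_time = sum(times)
    return [sum(1 for m in modes if m == k) / total_time
            for k in range(n_modes)]

# Each unit fails at the minimum of two latent exponential mode clocks.
random.seed(42)
true_rates = [0.5, 1.5]
times, modes = [], []
for _ in range(5000):
    t = [random.expovariate(r) for r in true_rates]
    times.append(min(t))
    modes.append(t.index(min(t)))

est = competing_risks_mle(times, modes, 2)
```

With 5000 simulated units the rate estimates land close to the true values; the extreme-value case in the paper requires numerical maximization instead of this closed form.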
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
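As a hedged sketch of the SCCS idea itself (not of the bias-correction methods evaluated above): with a single risk window, individual baseline rates cancel from the conditional likelihood, and the conditional MLE of the incidence rate ratio reduces to a ratio of event rates across windows. The rates, window lengths, and simplified simulation below are illustration values only:

```python
import math, random

def poisson(mu):
    # Knuth's inversion sampler; adequate for small mu.
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(7)
true_irr = 3.0
tau_risk, tau_base = 30.0, 335.0     # hypothetical window lengths (days)
y_risk = y_base = 0
for _ in range(2000):                # 2000 simulated individuals
    lam = 0.01 * random.lognormvariate(0.0, 0.5)   # per-day baseline rate
    y_base += poisson(lam * tau_base)
    y_risk += poisson(true_irr * lam * tau_risk)

# Conditional MLE of the incidence rate ratio: the individual baseline
# rates cancel, leaving a ratio of event rates across windows.
irr_hat = (y_risk / tau_risk) / (y_base / tau_base)
```

With pooled counts this large the estimator is close to the true ratio of 3; the small-sample bias studied in the paper appears when the case count, not the event count per case, is small.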
Evaluation of dynamic coastal response to sea-level rise modifies inundation likelihood
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments [1, 2], making assessments of SLR-induced hazards essential for informed decision making [3]. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km2 of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
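One ingredient of the inundation likelihood can be caricatured as the probability that projected SLR exceeds a cell's vertically adjusted elevation. The sketch below assumes a Gaussian SLR projection purely for illustration; the study's actual model is a probabilistic network over elevation, land cover, and SLR, not this simple threshold:

```python
import math, random

def p_inundate_exact(elev_m, slr_mean, slr_sd):
    # P(SLR > elevation) for a Gaussian SLR projection.
    z = (elev_m - slr_mean) / slr_sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def p_inundate_mc(elev_m, slr_mean, slr_sd, n=100000):
    # Monte Carlo version of the same probability.
    random.seed(1)
    hits = sum(1 for _ in range(n) if random.gauss(slr_mean, slr_sd) > elev_m)
    return hits / n

# Cell at 0.8 m adjusted elevation; hypothetical SLR projection N(0.5 m, 0.25 m).
p_ex = p_inundate_exact(0.8, 0.5, 0.25)
p_mc = p_inundate_mc(0.8, 0.5, 0.25)
```

The two estimates agree closely; conditioning on land-cover type, as the paper does, would then downweight cells with dynamic-response capacity.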
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating direction method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
Kashuba, Roxolana; McMahon, Gerard; Cuffney, Thomas F.; Qian, Song; Reckhow, Kenneth; Gerritsen, Jeroen; Davies, Susan
2012-01-01
In realization of the aforementioned advantages, a Bayesian network model was constructed to characterize the effect of urban development on aquatic macroinvertebrate stream communities through three simultaneous, interacting ecological pathways affecting stream hydrology, habitat, and water quality across watersheds in the Northeastern United States. This model incorporates both empirical data and expert knowledge to calculate the probabilities of attaining desired aquatic ecosystem conditions under different urban stress levels, environmental conditions, and management options. Ecosystem conditions are characterized in terms of standardized Biological Condition Gradient (BCG) management endpoints. This approach to evaluating urban development-induced perturbations in watersheds integrates statistical and mechanistic perspectives, different information sources, and several ecological processes into a comprehensive description of the system that can be used to support decision making. The completed model can be used to infer which management actions would lead to the highest likelihood of desired BCG tier achievement. For example, if best management practices (BMP) were implemented in a highly urbanized watershed to reduce flashiness to medium levels and specific conductance to low levels, the stream would have a 70-percent chance of achieving BCG Tier 3 or better, relative to a 24-percent achievement likelihood for unmanaged high urban land cover. Results are reported probabilistically to account for modeling uncertainty that is inherent in sources such as natural variability and model simplification error.
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
ERIC Educational Resources Information Center
DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.
2008-01-01
A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…
USDA-ARS?s Scientific Manuscript database
Today’s peanut drying processes utilize decision support software based on modeling and require substantial human interaction for moisture sampling. These conditions increase the likelihood of peanuts being overdried or underdried. This research addresses the need for an automated controller with re...
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods, such as composite likelihood and Palm likelihood, usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
A novel description of FDG excretion in the renal system: application to metformin-treated models
NASA Astrophysics Data System (ADS)
Garbarino, S.; Caviglia, G.; Sambuceti, G.; Benvenuto, F.; Piana, M.
2014-05-01
This paper introduces a novel compartmental model describing the excretion of 18F-fluoro-deoxyglucose (FDG) in the renal system and a numerical method based on the maximum likelihood for its reduction. This approach accounts for variations in FDG concentration due to water re-absorption in renal tubules and the increase of the bladder’s volume during the FDG excretion process. From the computational viewpoint, the reconstruction of the tracer kinetic parameters is obtained by solving the maximum likelihood problem iteratively, using a non-stationary, steepest descent approach that explicitly accounts for the Poisson nature of nuclear medicine data. The reliability of the method is validated against two sets of synthetic data realized according to realistic conditions. Finally we applied this model to describe FDG excretion in the case of animal models treated with metformin. In particular we show that our approach allows the quantitative estimation of the reduction of FDG de-phosphorylation induced by metformin.
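A standard iterative ML scheme for Poisson-distributed data of this kind is the multiplicative EM-type (MLEM) update. The sketch below illustrates it on a toy linear system; it is not the paper's non-stationary steepest-descent scheme or its renal compartmental model, just a minimal example of Poisson-likelihood maximization:

```python
def mlem(A, y, n_iter=2000):
    """Multiplicative EM-style (MLEM) updates maximizing the Poisson
    log-likelihood of y ~ Poisson(A x) over nonnegative x:
        x_j <- x_j * [sum_i A_ij * y_i/(Ax)_i] / [sum_i A_ij]"""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    for _ in range(n_iter):
        ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        x = [x[j] * sum(A[i][j] * y[i] / ax[i] for i in range(m))
                  / sum(A[i][j] for i in range(m))
             for j in range(n)]
    return x

# Toy system with noise-free "measurements" so the ML solution is x_true.
A = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]
x_true = [2.0, 4.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]
x_hat = mlem(A, y)
```

The multiplicative form keeps the iterates nonnegative, which is why EM-type updates are popular for nuclear medicine data; gradient-based schemes like the paper's must enforce nonnegativity separately.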
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Assessing the Likelihood of Rare Medical Events in Astronauts
NASA Technical Reports Server (NTRS)
Myers, Jerry G., Jr.; Leandowski, Beth E.; Brooker, John E.; Weaver, Aaron S.
2011-01-01
Despite over half a century of manned space flight, the space flight community is only now coming to fully assess the short and long term medical dangers of exposure to reduced gravity environments. Further, as new manned spacecraft are designed and with the advent of commercial flight capabilities to the general public, a full understanding of medical risk becomes even more critical for maintaining and understanding mission safety and crew health. To address these critical issues, the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) has begun to address the medical hazards with a formalized risk management approach by effectively identifying and attempting to mitigate acute and chronic medical risks to manned space flight. This paper describes NASA Glenn Research Center's (GRC) efforts to develop a systematic methodology to assess the likelihood of in-flight medical conditions. Using a probabilistic approach, medical risks are assessed using well established and accepted biomedical and human performance models in combination with fundamentally observed data that defines the astronauts' physical conditions, environment and activity levels. Two different examples of space flight risk are used to show the versatility of our approach and how it successfully integrates disparate information to provide HRP decision makers with a valuable source of information which is otherwise lacking.
NASA Astrophysics Data System (ADS)
Zhou, X.; Albertson, J. D.
2016-12-01
Natural gas is considered a bridge fuel towards clean energy because of its potentially lower greenhouse gas emissions compared with other fossil fuels. Despite numerous efforts, an efficient and cost-effective approach to monitor fugitive methane emissions along the natural gas production-supply chain has not yet been developed. Recently, mobile methane measurement has been introduced, which applies a Bayesian approach to probabilistically infer methane emission rates and update estimates recursively when new measurements become available. However, the likelihood function, especially the error term which determines the shape of the estimate uncertainty, is not rigorously defined and evaluated with field data. To address this issue, we performed a series of near-source (< 30 m) controlled methane release experiments using a specialized vehicle mounted with fast response methane analyzers and a GPS unit. Methane concentrations were measured at two different heights along mobile traversals downwind of the sources, and concurrent wind and temperature data were recorded by nearby 3-D sonic anemometers. With known methane release rates, the measurements were used to determine the functional form and the parameterization of the likelihood function in the Bayesian inference scheme under different meteorological conditions.
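The recursive Bayesian update this approach relies on can be sketched on a grid: each new concentration measurement multiplies the posterior over candidate emission rates by a likelihood term. The linear-Gaussian measurement model and all constants below are hypothetical stand-ins for the dispersion physics and the empirically calibrated likelihood the study is after:

```python
import math, random

# Grid posterior for a source emission rate q under a hypothetical
# linear-Gaussian measurement model c = a*q + noise.
random.seed(3)
a, sigma, q_true = 0.4, 0.2, 2.5
grid = [i * 0.01 for i in range(601)]        # candidate rates 0.00..6.00
log_post = [0.0] * len(grid)                 # flat prior
for _ in range(50):                          # recursive Bayesian updates
    c = a * q_true + random.gauss(0.0, sigma)
    for k, q in enumerate(grid):
        log_post[k] += -0.5 * ((c - a * q) / sigma) ** 2
q_map = grid[log_post.index(max(log_post))]
```

The posterior concentrates around the true rate as measurements accumulate; misspecifying sigma (the error term the paper seeks to characterize) would distort exactly this concentration.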
Forecasting conditional climate-change using a hybrid approach
Esfahani, Akbar Akbari; Friedel, Michael J.
2014-01-01
A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale, provided that self-similarity exists.
Heumann, Benjamin W.; Walsh, Stephen J.; Verdery, Ashton M.; McDaniel, Phillip M.; Rindfuss, Ronald R.
2012-01-01
Understanding the pattern-process relations of land use/land cover change is an important area of research that provides key insights into human-environment interactions. The suitability or likelihood of occurrence of land use such as agricultural crop types across a human-managed landscape is a central consideration. Recent advances in niche-based, geographic species distribution modeling (SDM) offer a novel approach to understanding land suitability and land use decisions. SDM links species presence-location data with geospatial information and uses machine learning algorithms to develop non-linear and discontinuous species-environment relationships. Here, we apply the MaxEnt (Maximum Entropy) model for land suitability modeling by adapting niche theory to a human-managed landscape. In this article, we use data from an agricultural district in Northeastern Thailand as a case study for examining the relationships between the natural, built, and social environments and the likelihood of crop choice for the commonly grown crops that occur in the Nang Rong District – cassava, heavy rice, and jasmine rice, as well as an emerging crop, fruit trees. Our results indicate that while the natural environment (e.g., elevation and soils) is often the dominant factor in crop likelihood, the likelihood is also influenced by household characteristics, such as household assets and conditions of the neighborhood or built environment. Furthermore, the shape of the land use-environment curves illustrates the non-continuous and non-linear nature of these relationships. This approach demonstrates a novel method of understanding non-linear relationships between land and people. The article concludes with a proposed method for integrating the niche-based rules of land use allocation into a dynamic land use model that can address both allocation and quantity of agricultural crops. PMID:24187378
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly predict the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One remedy for separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in binary probit regression models under the MLE method versus Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that for small sample sizes, the chance of separation is higher under the MLE method than under Firth's approach. For larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
Analysis of case-only studies accounting for genotyping error.
Cheng, K F
2007-03-01
The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
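The core case-only estimator is simply the G-E odds ratio among cases, which under gene-environment independence in the source population estimates the multiplicative interaction. The counts below are hypothetical, and no genotyping-error adjustment (the paper's contribution) is attempted:

```python
import math

# Hypothetical case-only 2x2 table: genotype G by exposure E among cases.
cases = {(1, 1): 60, (1, 0): 40,    # G = 1: exposed, unexposed
         (0, 1): 90, (0, 0): 160}   # G = 0: exposed, unexposed

# With G and E independent in the source population, the case-only
# odds ratio estimates the multiplicative G-E interaction.
or_case_only = (cases[(1, 1)] * cases[(0, 0)]) / (cases[(1, 0)] * cases[(0, 1)])

# Standard error of the log odds ratio (Woolf's formula).
se = math.sqrt(sum(1.0 / n for n in cases.values()))
```

Misclassified genotypes shift the table counts and hence this odds ratio, which is the bias the internal-validation likelihood above is designed to correct.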
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...
2014-10-16
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data.
This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial-knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Sources of Biased Inference in Alcohol and Drug Services Research: An Instrumental Variable Approach
Schmidt, Laura A.; Tam, Tammy W.; Larson, Mary Jo
2012-01-01
Objective: This study examined the potential for biased inference due to endogeneity when using standard approaches for modeling the utilization of alcohol and drug treatment. Method: Results from standard regression analysis were compared with those that controlled for endogeneity using instrumental variables estimation. Comparable models predicted the likelihood of receiving alcohol treatment based on the widely used Aday and Andersen medical care–seeking model. Data were from the National Epidemiologic Survey on Alcohol and Related Conditions and included a representative sample of adults in households and group quarters throughout the contiguous United States. Results: Findings suggested that standard approaches for modeling treatment utilization are prone to bias because of uncontrolled reverse causation and omitted variables. Compared with instrumental variables estimation, standard regression analyses produced downwardly biased estimates of the impact of alcohol problem severity on the likelihood of receiving care. Conclusions: Standard approaches for modeling service utilization are prone to underestimating the true effects of problem severity on service use. Biased inference could lead to inaccurate policy recommendations, for example, by suggesting that people with milder forms of substance use disorder are more likely to receive care than is actually the case. PMID:22152672
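The contrast between standard regression and instrumental-variables estimation can be sketched with two-stage least squares on synthetic data. Everything here (variable names, effect sizes, the single instrument z) is invented for illustration and is not from the NESARC analysis; the sketch only reproduces the qualitative finding that OLS is biased downward when an omitted variable both raises problem severity and suppresses treatment receipt.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data with endogeneity: unobserved confounder u drives both
# problem severity (the endogenous regressor) and treatment receipt.
u = rng.normal(size=n)                    # omitted variable
z = rng.normal(size=n)                    # instrument: affects severity only
severity = 0.8 * z + u + rng.normal(size=n)
treatment = 1.0 * severity - 1.5 * u + rng.normal(size=n)  # true effect = 1.0

X = np.column_stack([np.ones(n), severity])
Z = np.column_stack([np.ones(n), z])

# Naive OLS: biased because severity is correlated with the error (-1.5*u).
b_ols = np.linalg.lstsq(X, treatment, rcond=None)[0]

# Two-stage least squares: project X onto the instrument space, then regress.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first-stage fitted values
b_iv = np.linalg.lstsq(X_hat, treatment, rcond=None)[0]

print(b_ols[1], b_iv[1])  # OLS slope is biased downward; IV slope is near 1.0
```

The validity of the sketch rests on the usual IV assumptions: z influences treatment only through severity and is independent of u.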
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
Copula based flexible modeling of associations between clustered event times.
Geerdens, Candida; Claeskens, Gerda; Janssen, Paul
2016-07-01
Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
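The paper's device of approximating copula derivatives by finite differences can be illustrated on a simple case. The bivariate Clayton copula below is an assumed stand-in (the paper works with mixtures of max-infinitely divisible copulas); the sketch checks a mixed central difference of the copula CDF against the known analytic density.

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_density(u, v, theta):
    """Analytic Clayton density, used here to check the approximation."""
    return ((1.0 + theta) * (u * v) ** (-theta - 1.0)
            * (u ** -theta + v ** -theta - 1.0) ** (-2.0 - 1.0 / theta))

def density_fd(u, v, theta, h=1e-5):
    """Mixed second partial d2C/(du dv) via a central finite difference."""
    return (clayton_cdf(u + h, v + h, theta) - clayton_cdf(u + h, v - h, theta)
            - clayton_cdf(u - h, v + h, theta)
            + clayton_cdf(u - h, v - h, theta)) / (4.0 * h * h)

u, v, theta = 0.3, 0.7, 2.0
print(density_fd(u, v, theta), clayton_density(u, v, theta))
```

In the likelihood for censored clusters, it is exactly such high-order mixed partials of the copula that appear, which is why finite differences relieve the derivative bookkeeping.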
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
Liu, Fang; Eugenio, Evercita C
2018-04-01
Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
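A minimal sketch of the likelihood-based approach for the beta-regression core (without zero/one inflation) may help fix ideas. It assumes a logit link for the mean and a constant precision phi, simulates data, and maximizes the beta log-likelihood with a general-purpose optimizer; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)

# Simulate beta-distributed outcomes with a logit mean model:
# mu = expit(b0 + b1*x), shape parameters a = mu*phi, b = (1-mu)*phi.
b0_true, b1_true, phi_true = 0.5, -1.0, 10.0
mu = expit(b0_true + b1_true * x)
y = rng.beta(mu * phi_true, (1.0 - mu) * phi_true)

def negloglik(params):
    b0, b1, log_phi = params
    phi = np.exp(log_phi)                 # keep the precision positive
    m = expit(b0 + b1 * x)
    a, b = m * phi, (1.0 - m) * phi
    # Beta log-density: logGamma(a+b) - logGamma(a) - logGamma(b)
    #                   + (a-1) log y + (b-1) log(1-y)
    ll = (gammaln(a + b) - gammaln(a) - gammaln(b)
          + (a - 1.0) * np.log(y) + (b - 1.0) * np.log1p(-y))
    return -ll.sum()

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, phi_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(b0_hat, b1_hat, phi_hat)
```

A zoib model adds separate (e.g., logistic) components for the point masses at 0 and/or 1 on top of this continuous part; the abstract's warning is that collapsing those masses into the beta part by nudging observations away from the boundary biases the fit.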
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
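The Huber-type M-estimation idea can be sketched as iteratively reweighted least squares on a toy moderation model. This is an illustrative implementation under assumed simplifications (single-level data, MAD scale estimate, the conventional tuning constant c = 1.345), not the two-level procedure of the article.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)

# Moderation model: the effect of x on y depends on z through the x*z term.
beta_true = np.array([1.0, 2.0, 0.5, 1.5])    # intercept, x, z, x*z
X = np.column_stack([np.ones(n), x, z, x * z])
err = rng.standard_t(df=3, size=n)            # heavy-tailed errors
y = X @ beta_true + err

def huber_irls(X, y, c=1.345, n_iter=50):
    """M-estimation with Huber-type weights via iteratively reweighted LS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)                   # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_hat = huber_irls(X, y)
print(beta_hat)
```

Under the heavy-tailed errors simulated here, the downweighting of large residuals is what protects the interaction coefficient, which is the quantity of interest in a moderation analysis.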
An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification
Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos
2015-01-01
This paper studies the multi-label classification problem in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Assessing Individual Weather Risk-Taking and Its Role in Modeling Likelihood of Hurricane Evacuation
NASA Astrophysics Data System (ADS)
Stewart, A. E.
2017-12-01
This research focuses upon measuring an individual's level of perceived risk of different severe and extreme weather conditions using a new self-report measure, the Weather Risk-Taking Scale (WRTS). For 32 severe and extreme situations in which people could perform an unsafe behavior (e.g., remaining outside with lightning striking close by, driving over roadways covered with water, not evacuating ahead of an approaching hurricane, etc.), people rated: (1) their likelihood of performing the behavior, (2) the perceived risk of performing the behavior, (3) the expected benefits of performing the behavior, and (4) whether the behavior has actually been performed in the past. Initial development research with the measure using 246 undergraduate students examined its psychometric properties and found that it was internally consistent (Cronbach's α ranged from .87 to .93 for the four scales) and that the scales possessed good temporal (test-retest) reliability (r's ranged from .84 to .91). A second regression study involving 86 undergraduate students found that taking weather risks was associated with having taken similar risks in one's past and with the personality trait of sensation-seeking. Being more attentive to the weather and perceiving its risks when it became extreme was associated with lower likelihoods of taking weather risks (overall regression model, R²adj = 0.60). A third study involving 334 people examined the contributions of weather risk perceptions and risk-taking in modeling the self-reported likelihood of complying with a recommended evacuation ahead of a hurricane. Here, higher perceptions of hurricane risks and lower perceived benefits of risk-taking along with fear of severe weather and hurricane personal self-efficacy ratings were all statistically significant contributors to the likelihood of evacuating ahead of a hurricane. Psychological rootedness and attachment to one's home also tend to predict lack of evacuation.
This research highlights the contributions that a psychological approach can offer in understanding preparations for severe weather. This approach also suggests that a great deal of individual variation exists in weather-protective behaviors, which may explain in part why some people take weather-related risks despite receiving warnings for severe weather.
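The internal-consistency statistic reported above can be computed directly from an item-response matrix. The sketch below uses synthetic items driven by one common factor (not the WRTS data) and the standard variance-ratio form of Cronbach's α.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

# Synthetic scale: 8 items, each the common factor plus independent noise.
rng = np.random.default_rng(3)
factor = rng.normal(size=(400, 1))
items = factor + 0.8 * rng.normal(size=(400, 8))
print(round(cronbach_alpha(items), 2))
```

With this signal-to-noise ratio the coefficient lands in roughly the range the abstract reports for the WRTS scales.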
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
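A much-simplified relative of the sequential procedures discussed here is Wald's two-hypothesis SPRT, which likewise accumulates a log-likelihood ratio until it crosses a boundary and stops with a data-dependent number of observations. The Gaussian hypotheses and error targets below are illustrative assumptions; the paper's multialternative, conditionally optimal procedure is more general.

```python
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).
    Accumulates the log-likelihood ratio until it crosses a threshold."""
    upper = np.log((1.0 - beta) / alpha)      # accept H1 above this
    lower = np.log(beta / (1.0 - alpha))      # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log f1(x) - log f0(x) for equal-variance Gaussians
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

rng = np.random.default_rng(4)
decision, n_used = sprt(rng.normal(loc=1.0, scale=1.0, size=1000))
print(decision, n_used)
```

The attraction, as in the abstract, is the expected sample size: the sequential test typically terminates after far fewer observations than a fixed-length test with the same error probabilities.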
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
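As a hedged illustration of the ingredients (not the authors' likelihood itself), the sketch below clusters synthetic correlated time series using only their Pearson correlation matrix, with 1 − correlation as a dissimilarity and ordinary average-linkage hierarchical clustering standing in for the likelihood maximization. Group structure and noise levels are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
n_series, n_obs = 30, 500

# Three groups of synthetic "return" series, each sharing a group factor.
labels_true = np.repeat([0, 1, 2], n_series // 3)
factors = rng.normal(size=(3, n_obs))
data = factors[labels_true] + rng.normal(size=(n_series, n_obs))

corr = np.corrcoef(data)                 # the only input: Pearson coefficients
dist = 1.0 - corr                        # correlation -> dissimilarity
np.fill_diagonal(dist, 0.0)

condensed = squareform(dist, checks=False)
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

As in the paper, the raw series never enter the algorithm after the correlation matrix is computed; clustering operates entirely on that matrix.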
Approaches to defining reference regimes for river restoration planning
NASA Astrophysics Data System (ADS)
Beechie, T. J.
2014-12-01
Reference conditions or reference regimes can be defined using three general approaches: historical analysis, contemporary reference sites, and theoretical or empirical models. For large features (e.g., floodplain channels and ponds) historical data and maps are generally reliable. For smaller features (e.g., pools and riffles in small tributaries), field data from contemporary reference sites are a reasonable surrogate for historical data. Models are generally used for features that have no historical information or present day reference sites (e.g., beaver pond habitat). Each of these approaches contributes to a watershed-wide understanding of current biophysical conditions relative to potential conditions, which helps create not only a guiding vision for restoration, but also helps quantify and locate the largest or most important restoration opportunities. Common uses of geomorphic and biological reference conditions include identifying key areas for habitat protection or restoration, and informing the choice of restoration targets. Examples of use of each of these three approaches to define reference regimes in western USA illustrate how historical information and current research highlight key restoration opportunities, focus restoration effort in areas that can produce the largest ecological benefit, and contribute to estimating restoration potential and assessing likelihood of achieving restoration goals.
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.
Vermunt, Neeltje P C A; Westert, Gert P; Olde Rikkert, Marcel G M; Faber, Marjan J
2018-03-01
To assess the impact of patient characteristics, patient-professional engagement, communication and context on the probability that healthcare professionals will discuss goals or priorities with older patients. Secondary analysis of cross-sectional data from the 2014 Commonwealth Fund International Health Policy Survey of Older Adults. 11 western countries. Community-dwelling adults, aged 55 or older. Assessment of goals and priorities. The final sample consisted of 17,222 respondents, 54% of whom reported an assessment of their goals and priorities (AGP) by healthcare professionals. In logistic regression model 1, which was used to analyse the entire population, the determinants found to have moderate to large effects on the likelihood of AGP were information exchange on stress, diet or exercise, or both. Country (living in Sweden) and continuity of care (no regular professional or organisation) had moderate to large negative effects on the likelihood of AGP. In model 2, which focussed on respondents who experienced continuity of care, country and information exchange on stress and lifestyle were the main determinants of AGP, with comparable odds ratios to model 1. Furthermore, a professional asking questions also increased the likelihood of AGP. Continuity of care and information exchange are associated with a higher probability of AGP, while people living in Sweden are less likely to experience these assessments. Further study is required to determine whether increasing information exchange and professionals asking more questions may improve goal setting with older patients. Key points: A patient goal-oriented approach can be beneficial for older patients with chronic conditions or multimorbidity; however, discussing goals with these patients is not a common practice. The likelihood of discussing goals varies by country, occurring most commonly in the USA, and least often in Sweden.
Country-level differences in continuity of care and questions asked by a regularly visited professional affect the goal discussion probability. Patient characteristics, including age, have less impact than expected on the likelihood of sharing goals.
Multiscale analysis of restoration priorities for marine shoreline planning.
Diefenderfer, Heida L; Sobocinski, Kathryn L; Thom, Ronald M; May, Christopher W; Borde, Amy B; Southard, Susan L; Vavrinec, John; Sather, Nichole K
2009-10-01
Planners are being called on to prioritize marine shorelines for conservation status and restoration action. This study documents an approach to determining the management strategy most likely to succeed based on current conditions at local and landscape scales. The conceptual framework based in restoration ecology pairs appropriate restoration strategies with sites based on the likelihood of producing long-term resilience given the condition of ecosystem structures and processes at three scales: the shorezone unit (site), the drift cell reach (nearshore marine landscape), and the watershed (terrestrial landscape). The analysis is structured by a conceptual ecosystem model that identifies anthropogenic impacts on targeted ecosystem functions. A scoring system, weighted by geomorphic class, is applied to available spatial data for indicators of stress and function using geographic information systems. This planning tool augments other approaches to prioritizing restoration, including historical conditions and change analysis and ecosystem valuation.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate were considered. These equations suggest certain successive approximations iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
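The flavor of these successive-approximation procedures can be shown on a toy mixture problem: a relaxed EM-style fixed-point iteration, p ← p + ω(M(p) − p), for the mixing weight of a two-component Gaussian mixture with known component densities. The model and step sizes are illustrative assumptions, not the paper's setting; the point is that step sizes ω other than 1 still converge to the same maximum-likelihood fixed point, as long as ω stays in the admissible range.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n, p_true = 5000, 0.3
comp = rng.random(n) < p_true
x = np.where(comp, rng.normal(2.0, 1.0, n), rng.normal(-1.0, 1.0, n))

def em_map(p):
    """One EM step for the mixing weight of N(2,1) vs N(-1,1), means known."""
    f1 = norm.pdf(x, 2.0, 1.0)
    f0 = norm.pdf(x, -1.0, 1.0)
    post = p * f1 / (p * f1 + (1.0 - p) * f0)   # posterior memberships
    return post.mean()

def steepest_ascent(p0=0.5, omega=1.0, n_iter=200):
    """Relaxed successive approximation: p <- p + omega * (M(p) - p)."""
    p = p0
    for _ in range(n_iter):
        p = p + omega * (em_map(p) - p)
        p = min(max(p, 1e-6), 1.0 - 1e-6)       # keep p in (0, 1)
    return p

p_plain = steepest_ascent(omega=1.0)     # ordinary EM (step size 1)
p_relaxed = steepest_ascent(omega=1.5)   # over-relaxed step
print(p_plain, p_relaxed)
```

Both iterations share the same fixed points (stationary points of the likelihood), so the relaxed variant trades per-step contraction rate against step size rather than changing the answer.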
Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.
Adams, Dean C; Collyer, Michael L
2018-01-01
Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. 
We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons. © The Author(s) 2017. Published by Oxford University Press on behalf of the Systematic Biology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
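The ill-conditioning argument is easy to demonstrate numerically: the condition number of an estimated covariance matrix explodes as the number of trait dimensions approaches the number of species. The sketch uses i.i.d. normal traits rather than phylogenetically structured data, so it only illustrates the dimensionality effect, not the model misspecification results above.

```python
import numpy as np

rng = np.random.default_rng(7)
n_species = 50    # analogous to the number of tips in the phylogeny

# Condition number of an estimated trait covariance matrix as the number of
# trait dimensions p grows toward the number of species.
conds = {}
for p in (2, 10, 25, 45):
    traits = rng.normal(size=(n_species, p))   # idealized independent traits
    V = np.cov(traits, rowvar=False)
    conds[p] = np.linalg.cond(V)
    print(p, round(conds[p], 1))
```

Since the likelihood requires inverting (or solving with) this matrix, the growing condition number translates directly into unstable likelihood calculations at high trait dimensionality.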
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" (a set of similar model parametrizations) replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
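The likelihood ratio (score-function) method referred to here estimates parametric sensitivities from a single simulation by weighting outputs with the score, rather than re-running the simulation at perturbed parameters as finite differences require. A minimal non-KMC illustration, for an exponential waiting time with a known analytic answer (the rate constant and output function are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
lam = 2.0
x = rng.exponential(scale=1.0 / lam, size=200000)

# Likelihood ratio (score function) estimator of d/d(lambda) E[X] for
# X ~ Exp(lambda): d/d(lambda) E[f(X)] = E[f(X) * d/d(lambda) log p(X; lambda)].
score = 1.0 / lam - x                  # d/d(lambda) of log(lambda) - lambda*x
lr_grad = np.mean(x * score)

print(lr_grad, -1.0 / lam**2)          # analytic value: -1/lambda^2 = -0.25
```

The appeal for stiff KMC networks is the same as here: one trajectory yields sensitivities to every rate constant at once, provided steady-state sampling is adequate.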
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
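For concreteness, a hard-decision Viterbi decoder for a standard rate-1/2, constraint-length-3 code (generators 7 and 5 octal) is sketched below; over a binary symmetric channel, maximum-likelihood decoding reduces to finding the minimum-Hamming-distance path through the trellis. The specific code and message are illustrative choices, not taken from the paper, and no trellis termination (tail bits) is applied.

```python
import numpy as np

# Rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal.
def encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]      # generator polynomials 111 and 101
        s1, s2 = b, s1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding: maximum likelihood over a BSC reduces
    to the path of minimum Hamming distance through the trellis."""
    n_states = 4
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)      # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metric[state] == INF:
                continue
            s1, s2 = state >> 1, state & 1
            for b in (0, 1):
                o = [b ^ s1 ^ s2, b ^ s2]        # branch output bits
                nxt = (b << 1) | s1              # next trellis state
                m = metric[state] + (o[0] != r[0]) + (o[1] != r[1])
                if m < new_metric[nxt]:          # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
code = encode(msg)
code[3] ^= 1                                  # flip one transmitted bit
print(viterbi(code) == msg)                   # the single error is corrected
```

The complexity point in the abstract is visible here: the decoder does the same work per symbol whether the channel introduced zero errors or several, which is what motivates the syndrome-based alternative for good channels.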
Studying parents and grandparents to assess genetic contributions to early-onset disease.
Weinberg, Clarice R
2003-02-01
Suppose DNA is available from affected individuals, their parents, and their grandparents. Particularly for early-onset diseases, maternally mediated genetic effects can play a role, because the mother determines the prenatal environment. The proposed maximum-likelihood approach for the detection of apparent transmission distortion treats the triad consisting of the affected individual and his or her two parents as the outcome, conditioning on grandparental mating types. Under a null model in which the allele under study does not confer susceptibility, either through linkage or directly, and when there are no maternally mediated genetic effects, conditional probabilities for specific triads are easily derived. A log-linear model permits a likelihood-ratio test (LRT) and allows the estimation of relative penetrances. The proposed approach is robust against genetic population stratification. Missing-data methods permit the inclusion of incomplete families, even if the missing person is the affected grandchild, as is the case when an induced abortion has followed the detection of a malformation. When screening multiple markers, one can begin by genotyping only the grandparents and the affected grandchildren. LRTs based on conditioning on grandparental mating types (i.e., ignoring the parents) have asymptotic relative efficiencies that are typically >150% (per family), compared with tests based on parents. A test for asymmetry in the number of copies carried by maternal versus paternal grandparents yields an LRT specific to maternal effects. One can then genotype the parents for only the genes that passed the initial screen. Conditioning on both the grandparents' and the affected grandchild's genotypes, a third log-linear model captures the remaining information, in an independent LRT for maternal effects.
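The likelihood-ratio machinery underlying the proposed log-linear tests can be illustrated in its simplest form: a test for transmission distortion of a single allele, comparing a free transmission probability against the Mendelian null p = 0.5. The counts are invented and the model is far simpler than the triad and grandparent log-linear models of the article.

```python
import numpy as np
from scipy.stats import chi2

# Toy likelihood-ratio test for transmission distortion: of n transmissions
# from heterozygous parents, k passed the variant allele.
# H0: p = 0.5 (Mendelian transmission); H1: p unrestricted.
k, n = 70, 100
p_hat = k / n                               # MLE under H1

def binom_loglik(p):
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

lrt = 2.0 * (binom_loglik(p_hat) - binom_loglik(0.5))
p_value = chi2.sf(lrt, df=1)                # 1 free parameter under H1
print(round(lrt, 2), round(p_value, 6))
```

The article's LRTs have the same shape (twice the log-likelihood difference, referred to a chi-squared distribution), but with conditional triad probabilities given grandparental mating types in place of the binomial.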
The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.
ERIC Educational Resources Information Center
Baldwin, Beatrice
The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data, such as left- and right-eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of the marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Moreover, this latter odds ratio-based model does not provide any easy interpretation of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
Lewis, Melissa A; Rhew, Isaac C; Fairlie, Anne M; Swanson, Alex; Anderson, Judyth; Kaysen, Debra
2018-03-06
The purpose of this study was to evaluate personalized feedback intervention (PFI) framing with two web-delivered PFIs aimed to reduce young adult alcohol-related risky sexual behavior (RSB). Combined PFIs typically use an additive approach whereby independent components on drinking and components on RSB are presented without the discussion of the influence of alcohol on RSB. In contrast, an integrated PFI highlights the RSB-alcohol connection by presenting integrated alcohol and RSB components that focus on the role of intoxication as a barrier to risk reduction in sexual situations. In a randomized controlled trial, 402 (53.98% female) sexually active young adults aged 18-25 were randomly assigned to a combined PFI, an integrated PFI, or attention control. All assessment and intervention procedures were web-based. At the 1-month follow-up, those randomly assigned to the integrated condition had a lower likelihood of having any casual sex partners compared to those in the control group. At the 6-month follow-up, the combined condition had a lower likelihood of having any casual sex partners compared to those in the control group. When examining alcohol-related RSB, at the 1-month follow-up, both interventions showed a lower likelihood of any drinking prior to sex compared to the control group. When examining alcohol-related sexual consequences, results showed a reduction in the non-zero count of consequences in the integrated condition compared to the control at the 1-month follow-up. For typical drinks per week, those in the combined condition showed a greater reduction in the non-zero count of drinks than those in the control condition at the 1-month follow-up. While there were no significant differences between the two interventions, the current findings highlight the utility of two efficacious web-based alcohol and RSB interventions among a national sample of at-risk young adults.
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure over all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
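The two assessment strategies can be sketched in a few lines; the event likelihoods and conditional failure probabilities below are made-up numbers for illustration, not values from the study.

```python
# Illustrative event likelihoods and conditional failure probabilities
# (hypothetical numbers, not values from the study).
p_event = [0.5, 0.3, 0.2]           # likelihood each event occurs
p_fail_given = [0.01, 0.05, 0.20]   # probability of failure given the event

# First approach: every event contributes to the overall failure probability.
p_fail_total = sum(pe * pf for pe, pf in zip(p_event, p_fail_given))

# Second approach: take the maximum failure probability over the events.
p_fail_max = max(p_fail_given)
```

With these numbers the two approaches disagree (0.06 vs 0.20), mirroring the disagreement between the two methods reported in the abstract.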
ERIC Educational Resources Information Center
Krings, Franciska; Facchin, Stephanie
2009-01-01
This study demonstrated relations between men's perceptions of organizational justice and increased sexual harassment proclivities. Respondents reported higher likelihood to sexually harass under conditions of low interactional justice, suggesting that sexual harassment likelihood may increase as a response to perceived injustice. Moreover, the…
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information, whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale and on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyzing gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models.
In an example it also improved the power of tests to identify differential expression. PMID:12659637
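The postulated quadratic variance structure can be sketched as follows. The variance model Var(y) = sigma2 + c·mu² and the moment-based regression used to recover its parameters are assumptions standing in for the full extended quasi-likelihood fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed variance structure for intensities: Var(y) = sigma2 + c * mu^2.
sigma2_true, c_true = 4.0, 0.01
mu = rng.uniform(10, 1000, size=200)                  # gene-level means
sd = np.sqrt(sigma2_true + c_true * mu**2)
y = rng.normal(mu, sd, size=(50, 200))                # 50 replicates per gene

means = y.mean(axis=0)
varis = y.var(axis=0, ddof=1)

# Moment-based stand-in for the quasi-likelihood fit: regress the empirical
# variances on mean^2 to recover (sigma2, c).
X = np.column_stack([np.ones_like(means), means**2])
sigma2_hat, c_hat = np.linalg.lstsq(X, varis, rcond=None)[0]
```

In the quasi-likelihood framework the same variance function enters the quasi-score equations directly; the regression above only illustrates that the quadratic structure is recoverable from replicated data.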
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
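A minimal sketch of the idea behind a weakly informative prior on logistic regression coefficients: maximum a posteriori estimation with a Normal(0, 2.5²) prior, fitted by gradient ascent. The data are simulated, not the DRG episodes analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated episodes: intercept plus three standardized predictors.
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([-1.0, 0.8, -0.5, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

tau2 = 2.5**2                    # weakly informative Normal(0, 2.5^2) prior
beta = np.zeros(p + 1)
for _ in range(3000):            # gradient ascent on the log posterior
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu) - beta / tau2   # likelihood score minus prior pull
    beta += 0.05 * grad / n
```

The prior term `-beta / tau2` shrinks the coefficients gently toward zero, which is what stabilizes the parameter estimates relative to plain maximum likelihood when predictors are sparse or nearly separable.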
NASA Technical Reports Server (NTRS)
Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.
Depression and Chronic Health Conditions Among Latinos: The Role of Social Networks.
Soto, Sandra; Arredondo, Elva M; Villodas, Miguel T; Elder, John P; Quintanar, Elena; Madanat, Hala
2016-12-01
The purpose of this study was to examine the "buffering hypothesis" of social network characteristics in the association between chronic conditions and depression among Latinos. Cross-sectional self-report data from the San Diego Prevention Research Center's community survey of Latinos were used (n = 393). Separate multiple logistic regression models tested the role of chronic conditions and social network characteristics in the likelihood of moderate-to-severe depressive symptoms. Having a greater proportion of the network comprised of friends increased the likelihood of depression among those with high cholesterol. Having a greater proportion of women in the social network was directly related to the increased likelihood of depression, regardless of the presence of chronic health conditions. Findings suggest that network characteristics may play a role in the link between chronic conditions and depression among Latinos. Future research should explore strategies targeting the social networks of Latinos to improve health outcomes.
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
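The weighted-sum-of-likelihoods statistic can be sketched as follows; the posterior probabilities of the marker orders and the likelihood values are made-up numbers for illustration.

```python
import math

# Illustrative posterior probabilities for three plausible marker orders and
# the corresponding maximum likelihoods under the disease-locus (alt) and
# no-linkage (null) models; all numbers are hypothetical.
posterior = {"order_A": 0.6, "order_B": 0.3, "order_C": 0.1}
lik_alt = {"order_A": 1e-3, "order_B": 5e-4, "order_C": 1e-4}
lik_null = {"order_A": 1e-5, "order_B": 1e-5, "order_C": 1e-5}

# Weight each order's likelihood by its posterior probability, then form the
# lod score from the weighted sums.
num = sum(posterior[o] * lik_alt[o] for o in posterior)
den = sum(posterior[o] * lik_null[o] for o in posterior)
weighted_lod = math.log10(num / den)
```

Using only the single best maximum likelihood map would correspond to dropping the sums and keeping one order, which discards the information carried by the other plausible orders.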
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. 
The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
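The ANOVA likelihood ratio idea for nested Gaussian models can be sketched like this; the three simulated conditions stand in for the NMR profiles, and the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three simulated conditions; the third has a shifted mean level.
groups = [rng.normal(m, 1.0, size=30) for m in (5.0, 5.2, 7.0)]
y = np.concatenate(groups)
n = y.size

rss0 = np.sum((y - y.mean())**2)                       # null: one common mean
rss1 = sum(np.sum((g - g.mean())**2) for g in groups)  # alt: per-group means

lrt = n * np.log(rss0 / rss1)   # -2 log likelihood ratio under Gaussian errors
df = 2                          # two extra mean parameters under the alternative
reject = lrt > 5.991            # chi-square(2) 95th percentile
```

In the paper the nested models are nonlinear mixed-effects logistic fits rather than group means, but the test statistic is formed the same way: twice the difference in maximized log-likelihoods, referred to a chi-square distribution with degrees of freedom equal to the number of extra parameters.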
Climate change and disaster management.
O'Brien, Geoff; O'Keefe, Phil; Rose, Joanne; Wisner, Ben
2006-03-01
Climate change, although a natural phenomenon, is accelerated by human activities. Disaster policy response to climate change is dependent on a number of factors, such as readiness to accept the reality of climate change, institutions and capacity, as well as willingness to embed climate change risk assessment and management in development strategies. These conditions do not yet exist universally. A focus that neglects to enhance capacity-building and resilience as a prerequisite for managing climate change risks will, in all likelihood, do little to reduce vulnerability to those risks. Reducing vulnerability is a key aspect of reducing climate change risk. To do so requires a new approach to climate change risk and a change in institutional structures and relationships. A focus on development that neglects to enhance governance and resilience as a prerequisite for managing climate change risks will, in all likelihood, do little to reduce vulnerability to those risks.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
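The conditional likelihood that such a program maximizes can be illustrated for the simplest 1:1 matched design with a single binary exposure; the pair counts below are hypothetical.

```python
import math

# Conditional likelihood contribution of one 1:1 matched pair with a single
# binary exposure x (1 = exposed) and log odds ratio beta.
def pair_likelihood(beta, x_case, x_control):
    num = math.exp(beta * x_case)
    return num / (num + math.exp(beta * x_control))

# Concordant pairs contribute no information; for discordant pairs the
# conditional MLE of the odds ratio is the ratio of discordant counts.
pairs_case_exposed_only = 30      # case exposed, control unexposed
pairs_control_exposed_only = 10   # control exposed, case unexposed
or_hat = pairs_case_exposed_only / pairs_control_exposed_only
beta_hat = math.log(or_hat)
```

The unconditional (unmatched) analysis instead fits an ordinary logistic likelihood across all subjects, which is why a program offering both estimation methods lets the matched and unmatched relative risk estimates be compared directly.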
Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C
2016-05-01
The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports. © 2015 Society for Risk Analysis.
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
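A rough sketch of the eigenvalue-shrinkage form such an estimator takes: clip the sample eigenvalues into an interval [u, κu] so the condition number is at most κ, choosing u by likelihood. The grid search below stands in for the paper's exact solution path, and κ is a user-chosen bound.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Large p, small n": the sample covariance is singular and ill-conditioned.
p, n, kappa = 20, 15, 10.0
X = rng.normal(size=(n, p))
S = X.T @ X / n
lam = np.linalg.eigvalsh(S)

def neg_loglik(u):
    # Gaussian negative log-likelihood (up to constants) of the clipped
    # eigenvalues against the sample eigenvalues.
    lam_reg = np.clip(lam, u, kappa * u)
    return np.sum(np.log(lam_reg) + lam / lam_reg)

grid = np.linspace(1e-3, lam.max(), 2000)
u_best = grid[np.argmin([neg_loglik(u) for u in grid])]
lam_reg = np.clip(lam, u_best, kappa * u_best)
cond = lam_reg.max() / lam_reg.min()
```

Reassembling the clipped eigenvalues with the sample eigenvectors yields an invertible estimator whose condition number is bounded by κ by construction.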
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
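Whittle's approximation, the starting point of the approach above, can be sketched for a single series: the periodogram ordinates are compared to a candidate spectral density across Fourier frequencies. A white-noise series and constant candidate spectrum are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated white-noise series; under this periodogram normalization its
# spectral density is the constant sigma^2 = 1.
n = 512
x = rng.normal(0.0, 1.0, n)

fourier = np.fft.rfft(x)[1:n // 2]        # drop the mean and Nyquist terms
periodogram = np.abs(fourier)**2 / n

def whittle_loglik(spec):
    # Whittle approximation: -sum_j (log f_j + I_j / f_j)
    return -np.sum(np.log(spec) + periodogram / spec)

c_hat = periodogram.mean()                # constant-spectrum MLE
```

In the paper the candidate spectrum is not a constant but a sum over factor levels of Bernstein-polynomial densities with Dirichlet-process weights; the Whittle term above is what those priors are combined with to form the posterior.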
Modelling Wind Turbine Failures based on Weather Conditions
NASA Astrophysics Data System (ADS)
Reder, Maik; Melero, Julio J.
2017-11-01
A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches these costs can be decreased significantly. Here, wind turbine (WT) failure models in particular can help to understand the components’ degradation processes and enable operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, recent research shows that the on-site weather conditions also affect turbine failure behaviour significantly. This study presents a novel approach to modelling WT failures based on the environmental conditions to which the turbines are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than the map-based template removal of foregrounds done in previous Planck CIB analyses. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first approach is based on a simple parametric model which describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second approach is based on physical models for galaxy clustering and the evolution of infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of χ2ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, some tables are constructed for different environments, such as single decrement, double decrement, and multiple decrement. There exist two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article aims to discuss the statistical approach to mortality table construction. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, they do not use the complete mortality data, whereas maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation. An extension to double decrement is introduced, and a simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
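Under the constant-force assumption the MLE has a simple closed form, the classical occurrence/exposure estimator; the death and exposure counts below are illustrative.

```python
import math

# Illustrative counts for one age interval (hypothetical data).
deaths = 12
exposure = 480.0          # total person-years lived in the interval

# Constant-force MLE: occurrence/exposure estimator of the force of mortality.
mu_hat = deaths / exposure
# Implied one-year death probability under constant force.
q_hat = 1 - math.exp(-mu_hat)
```

Under UDD the analogous moment-style estimator divides deaths by the initial exposed-to-risk adjusted for partial-year withdrawals; both reduce to simple ratios, which is why the moment approach is easy to manipulate, while more elaborate decrement models lead to MLE equations requiring numerical solution.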
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
ERIC Educational Resources Information Center
Criss, Amy H.; McClelland, James L.
2006-01-01
The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image-reconstruction-from-projections problem, and in this way to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel-beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sun, Zhichao; Mukherjee, Bhramar; Estes, Jason P; Vokonas, Pantel S; Park, Sung Kyun
2017-08-15
Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for gene-environment (G × E) interactions. To date, literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data. In this work, we propose stratified sampling based on summaries of individual exposures and outcome trajectories and develop a full conditional likelihood approach for estimation that adjusts for the biased sample. We compare the performance of our proposed design and analysis with combinations of different sampling designs and estimation approaches via simulation. We observe that the full conditional likelihood provides improved estimates for the G × E interaction and joint exposure effects over uncorrected complete-case analysis, and the exposure enriched outcome trajectory dependent design outperforms other designs in terms of estimation efficiency and power for detection of the G × E interaction. We also illustrate our design and analysis using data from the Normative Aging Study, an ongoing longitudinal cohort study initiated by the Veterans Administration in 1963. Copyright © 2017 John Wiley & Sons, Ltd.
Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions
NASA Astrophysics Data System (ADS)
Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.
2015-07-01
The technological evolution in terms of computational capacity, data acquisition systems, numerical modelling and operational oceanography is providing opportunities for designing and building holistic approaches and complex tools for new and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time- and space-variable shoreline risk levels from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with vessel tracking automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels along time. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to produce risk estimates that are properly sensitive to dynamic metocean conditions and to oil transport behaviour. The integration of meteo-oceanic and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and the decision support model, providing a more realistic approach in the assessment of shoreline impacts.
The risk assessment from historical data can help identify typical risk patterns and "hot spots", or support sensitivity analyses of specific conditions, whereas real-time risk levels can be used to prioritize individual ships and geographical areas, position tugs strategically, and implement dynamic risk-based vessel traffic monitoring.
NASA Astrophysics Data System (ADS)
Beyene, Mussie T.; Jain, Shaleen
2018-06-01
El Niño-Southern Oscillation (ENSO) teleconnection-induced wintertime surface air temperature (SAT) anomalies over North America show inter-event variability, asymmetry, and nonlinearity. This diagnostic study appraises the assumption that ENSO-induced teleconnections are adequately characterized as symmetric shifts in the SAT probability distributions for North American locations. To this end, a new conditional quantile estimation approach presented here incorporates: (a) the detailed nature of the location and amplitude of SST anomalies - in particular the Eastern Pacific (EP) and Central Pacific (CP) ENSO events - based on their two leading principal components, and (b) characterization of the differential sensitivity to ENSO over the entire range of SATs. Statistical significance is assessed using a wild bootstrap approach. Conditional risk of upper- and lower-quartile SATs conditioned on archetypical ENSO states is derived. There is marked asymmetry in ENSO effects on the likelihood of upper- and lower-quartile winter SATs for most North American regions. CP El Niño patterns show a 20-80% decrease in the likelihood of lower-quartile SATs for Canada and the US west coast and a 20-40% increase across the southeastern US. However, the upper-quartile SAT for large swathes of Canada shows no sensitivity to CP El Niño. Similarly, EP El Niño is linked to a 40-80% increase in the probability of upper-quartile winter SATs for Canada and the northern US and a 20% decrease for the southern US and northern Mexico, but little or no change in the risk of lower-quartile winter temperatures for southern parts of North America. Localized estimates of ENSO-related risk are also presented.
Perceptions of Socratic and non-Socratic presentation of information in cognitive behaviour therapy.
Heiniger, Louise E; Clark, Gavin I; Egan, Sarah J
2018-03-01
Socratic Method is a style of inquiry used in cognitive behavioural therapy (CBT) that encourages clients to reflect on their problems and draw conclusions from newly gained insights. However, assumptions about the superior efficacy of Socratic Method over non-Socratic (didactic) approaches remain largely untested. The aim of this study was to compare the perceived helpfulness of therapists' questions, autonomy supportiveness, likelihood of engaging in therapeutic tasks, and preference for Socratic Method versus a didactic approach using a video analogue and ratings of lay observers. The mediating effects of therapeutic alliance and empathy were also examined. Participants (N = 144, mean age = 37, SD = 13) completed an online survey in which they rated two videoed therapy analogues. Socratic Method received higher mean ratings than didactic presentation on perceived helpfulness of therapists' questions, autonomy supportiveness, likelihood of engaging in therapeutic tasks, and preference. Perceived helpfulness and preference ratings remained higher for Socratic Method after accounting for potential confounders. Perceived therapeutic alliance and empathy both mediated the effect of therapy condition on autonomy and engagement. The findings support the use of Socratic Method in CBT. Copyright © 2017 Elsevier Ltd. All rights reserved.
The DNA database search controversy revisited: bridging the Bayesian-frequentist gap.
Storvik, Geir; Egeland, Thore
2007-09-01
Two different quantities have been suggested for quantification of evidence in cases where a suspect is found by a search through a database of DNA profiles. The likelihood ratio, typically motivated from a Bayesian setting, is preferred by most experts in the field. The so-called np rule has been motivated through frequentist arguments and advocated by the American National Research Council and by Stockmarr (1999, Biometrics 55, 671-677). The two quantities differ substantially and have given rise to the DNA database search controversy. Although several authors have criticized the different approaches, a full explanation of why these differences appear is still lacking. In this article we show that a P-value in a frequentist hypothesis setting is approximately equal to the result of the np rule. We argue, however, that a more reasonable procedure in this case is to use conditional testing, in which case a P-value directly related to posterior probabilities and the likelihood ratio is obtained. This way of viewing the problem bridges the gap between the Bayesian and frequentist approaches. At the same time it indicates that the np rule should not be used to quantify evidence.
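The claim that a frequentist P-value approximately equals the np rule can be seen in a minimal sketch: the chance that a search of n unrelated profiles yields at least one spurious match is 1 - (1 - p)^n, which for small np is close to np. The database size and match probability below are hypothetical.

```python
def database_match_pvalue(n, p):
    """Probability that searching n unrelated profiles yields at least
    one match by chance, each with random-match probability p."""
    return 1.0 - (1.0 - p) ** n

n, p = 10_000, 1e-6           # hypothetical database size and match probability
exact = database_match_pvalue(n, p)
approx = n * p                # the "np rule" approximation
```

For np well below 1, the two values agree to within a fraction of a percent, which is the sense in which the article links the P-value and the np rule.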
Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.
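The fit-versus-complexity comparison via the Bayesian Information Criterion can be sketched in a few lines. The log-likelihoods, parameter counts, and sample size below are hypothetical, not values from the study.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower values indicate a better
    balance between model fit and model complexity."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits to 200 yes-no detection responses:
# a 6-parameter computational model vs. a 2-parameter logistic model.
bic_model = bic(log_likelihood=-95.0, n_params=6, n_obs=200)
bic_logistic = bic(log_likelihood=-110.0, n_params=2, n_obs=200)
# The model with the lower BIC is preferred despite its extra parameters.
```

The penalty term n_params * log(n_obs) is what lets a richer model win only when its likelihood gain outweighs the added parameters.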
Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter
2015-01-01
In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time, spatial smoothing, and the handling of missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data originating from the HUS outbreak and on control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided that good traceability is available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat products, before a more general conclusion about its applicability can be drawn. PMID:26237468
Chatterji, Pinka; Brandon, Peter; Markowitz, Sara
2016-07-01
We examine the effects of the 2010 Patient Protection and Affordable Care Act's (ACA) prohibition of preexisting conditions exclusions for children on job mobility among parents. We use a difference-in-difference approach, comparing pre-post policy changes in job mobility among privately-insured parents of children with chronic health conditions vs. privately-insured parents of healthy children. Data come from the 2004 and 2008 Survey of Income and Program Participation (SIPP). Among married fathers, the policy change is associated with about a 0.7 percentage point, or 35 percent increase, in the likelihood of leaving an employer voluntarily. We find no evidence that the policy change affected job mobility among married and unmarried mothers. Copyright © 2016 Elsevier B.V. All rights reserved.
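The 0.7 percentage-point figure above comes from a difference-in-difference design; the arithmetic can be sketched with hypothetical group means (these numbers are illustrative, not the SIPP estimates).

```python
# Mean job-exit rates (hypothetical) by group and period.
pre_treated, post_treated = 0.020, 0.027   # parents of chronically ill children
pre_control, post_control = 0.025, 0.025   # parents of healthy children

def diff_in_diff(pre_t, post_t, pre_c, post_c):
    """Difference-in-difference estimate: the treated group's pre-post
    change net of the control group's pre-post change."""
    return (post_t - pre_t) - (post_c - pre_c)

effect = diff_in_diff(pre_treated, post_treated, pre_control, post_control)
# effect = 0.007, i.e. a 0.7 percentage-point increase
```

Netting out the control group's change is what removes period-wide shocks common to both groups, the core identifying idea of the design.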
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
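The "map cells in order of decreasing likelihood, stop at a threshold" idea can be sketched in two dimensions with a priority queue. This is a minimal illustration of the principle only, not the Snake implementation; it assumes the start cell is at (or near) the likelihood peak, and the toy Gaussian log-likelihood is hypothetical.

```python
import heapq

def snake_explore(loglike, start, step, threshold):
    """Map a likelihood grid cell by cell in order of decreasing
    log-likelihood, stopping once every remaining frontier cell lies
    more than `threshold` below the peak (2-D sketch of the idea)."""
    frontier = [(-loglike(start), start)]
    visited = {start}
    mapped = {}
    peak = loglike(start)
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)
        ll = -neg_ll
        if ll < peak - threshold:
            break  # heap is ordered, so everything left is negligible
        mapped[cell] = ll
        x, y = cell
        for nb in ((x + step, y), (x - step, y), (x, y + step), (x, y - step)):
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (-loglike(nb), nb))
    return mapped

# Toy 2-D Gaussian log-likelihood peaked at the origin; cells outside the
# threshold region are pushed once but never expanded, so the full grid
# is never evaluated.
ll = lambda c: -0.5 * (c[0] ** 2 + c[1] ** 2)
grid = snake_explore(ll, start=(0.0, 0.0), step=0.5, threshold=10.0)
```

Only the cells inside the threshold contour are mapped, which is exactly how disregarding negligible cells blunts the curse of dimensionality.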
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach.
We study the performance of these two methods using actual and computer generated data.
On the Existence and Uniqueness of JML Estimates for the Partial Credit Model
ERIC Educational Resources Information Center
Bertoli-Barsotti, Lucio
2005-01-01
A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
ERIC Educational Resources Information Center
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
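A profile-likelihood CI can be illustrated on a model simple enough to need no IRT machinery: it is the set of parameter values whose deviance 2*(logL(mle) - logL(theta)) stays below the chi-square(1) critical value, with endpoints found by bisection. The Poisson data below are hypothetical.

```python
import math

def poisson_loglik(lam, counts):
    # log-likelihood up to an additive constant (the log x! terms drop out)
    return sum(x * math.log(lam) - lam for x in counts)

def profile_ci(counts, crit=3.841, tol=1e-8):
    """95% profile-likelihood CI for a Poisson mean: all lambda whose
    deviance 2*(logL(mle) - logL(lambda)) is below the chi-square(1)
    critical value; endpoints located by bisection."""
    mle = sum(counts) / len(counts)
    lmax = poisson_loglik(mle, counts)
    inside = lambda lam: 2.0 * (lmax - poisson_loglik(lam, counts)) <= crit

    def boundary(out, inn):
        # `out` lies outside the CI, `inn` inside; bisect to the crossing
        while abs(inn - out) > tol:
            mid = 0.5 * (out + inn)
            out, inn = (out, mid) if inside(mid) else (mid, inn)
        return 0.5 * (out + inn)

    hi = mle
    while inside(hi):   # bracket the upper endpoint
        hi *= 2.0
    return boundary(1e-9, mle), boundary(hi, mle)

lo, hi = profile_ci([3, 5, 4, 6, 2])   # hypothetical counts, MLE = 4.0
```

Unlike a Wald interval, this interval is asymmetric about the MLE, which is the property that makes PL CIs attractive for bounded or transformed parameters.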
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, and Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
Alloy, Lauren B.; Urošević, Snežana; Abramson, Lyn Y.; Jager-Hyman, Shari; Nusslock, Robin; Whitehouse, Wayne G.; Hogan, Michael
2011-01-01
Little longitudinal research has examined progression to more severe bipolar disorders in individuals with “soft” bipolar spectrum conditions. We examine rates and predictors of progression to bipolar I and II diagnoses in a non-patient sample of college-age participants (n = 201) with high General Behavior Inventory scores and childhood or adolescent onset of “soft” bipolar spectrum disorders followed longitudinally for 4.5 years from the Longitudinal Investigation of Bipolar Spectrum (LIBS) project. Of 57 individuals with initial cyclothymia or bipolar disorder not otherwise specified (BiNOS) diagnoses, 42.1% progressed to a bipolar II diagnosis and 10.5% progressed to a bipolar I diagnosis. Of 144 individuals with initial bipolar II diagnoses, 17.4% progressed to a bipolar I diagnosis. Consistent with hypotheses derived from the clinical literature and the Behavioral Approach System (BAS) model of bipolar disorder, and controlling for relevant variables (length of follow-up, initial depressive and hypomanic symptoms, treatment-seeking, and family history), high BAS sensitivity (especially BAS Fun Seeking) predicted a greater likelihood of progression to bipolar II disorder, whereas early age of onset and high impulsivity predicted a greater likelihood of progression to bipolar I (high BAS sensitivity and Fun-Seeking also predicted progression to bipolar I when family history was not controlled). The interaction of high BAS and high Behavioral Inhibition System (BIS) sensitivities also predicted greater likelihood of progression to bipolar I. We discuss implications of the findings for the bipolar spectrum concept, the BAS model of bipolar disorder, and early intervention efforts. PMID:21668080
The Genetics of Major Depression
Flint, Jonathan; Kendler, Kenneth S.
2014-01-01
Major depression is the commonest psychiatric disorder and in the U.S. has the greatest impact of all biomedical diseases on disability. Here we review evidence of the genetic contribution to disease susceptibility and the current state of molecular approaches. Genome-wide association and linkage results provide constraints on the allele frequencies and effect sizes of susceptibility loci, which we use to interpret the voluminous candidate gene literature. We consider evidence for the genetic heterogeneity of the disorder and the likelihood that subtypes exist that represent more genetically homogenous conditions than have hitherto been analyzed. PMID:24507187
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
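The two-level LR construction can be sketched in one dimension: a Gaussian within-source density, a GMM for the between-source density, and numerical integration over candidate source means. The mixture weights, means, spreads, and measurements below are all hypothetical, chosen only to illustrate the calculation.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def gmm_pdf(m, weights, mus, sds):
    # between-source density modelled as a Gaussian mixture
    return sum(w * normal_pdf(m, mu, sd) for w, mu, sd in zip(weights, mus, sds))

def likelihood_ratio(x, y, sigma_within, weights, mus, sds, grid):
    """LR for two 1-D measurements x and y: the numerator integrates
    f(x|m) f(y|m) g(m) over source means m (same-source hypothesis);
    the denominator is the product of the marginals (different sources).
    Simple rectangle-rule integration over `grid`."""
    dm = grid[1] - grid[0]
    fx = [normal_pdf(x, m, sigma_within) for m in grid]
    fy = [normal_pdf(y, m, sigma_within) for m in grid]
    g = [gmm_pdf(m, weights, mus, sds) for m in grid]
    num = sum(a * b * c for a, b, c in zip(fx, fy, g)) * dm
    px = sum(a * c for a, c in zip(fx, g)) * dm
    py = sum(b * c for b, c in zip(fy, g)) * dm
    return num / (px * py)

# Hypothetical between-source GMM (two clusters of sources), narrow
# within-source spread, and a grid covering both clusters.
grid = [i * 0.01 for i in range(-500, 1501)]
lr_same = likelihood_ratio(5.0, 5.1, 0.2, [0.6, 0.4], [0.0, 5.0], [1.0, 1.0], grid)
lr_diff = likelihood_ratio(0.0, 5.0, 0.2, [0.6, 0.4], [0.0, 5.0], [1.0, 1.0], grid)
```

Close measurements yield LR > 1 (support for same source) and distant ones LR << 1; replacing the mixture with a fitted KDF changes only `gmm_pdf`, which is the swap the paper studies.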
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions (NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED)) are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability while small errors around zero are roughly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
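The NSE-Gaussian equivalence in step (1) rests on both criteria being monotone in the sum of squared errors, so they rank candidate simulations identically. A minimal sketch with hypothetical discharge series:

```python
import math

obs = [2.0, 3.5, 4.0, 5.5, 6.0]          # observed discharges (hypothetical)
sim_a = [2.1, 3.4, 4.2, 5.3, 6.1]        # candidate simulation A
sim_b = [2.5, 3.0, 4.8, 5.0, 6.6]        # candidate simulation B

def nse(sim, obs):
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def gauss_loglik(sim, obs, sigma=1.0):
    # log-likelihood under iid Gaussian residuals with fixed sigma
    n = len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    return -0.5 * n * math.log(2 * math.pi * sigma ** 2) - sse / (2 * sigma ** 2)

# Both criteria are monotone decreasing in SSE, so maximizing NSE and
# maximizing the Gaussian log-likelihood select the same simulation.
```

This is also why the NSE approach inherits the Gaussian assumption's weakness on baseflow noted in the abstract.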
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
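The weak-correlation case, where the memory functions reduce to ratios of correlation functions, can be sketched for a binary sequence: the conditional probability is the mean occupation plus first-order memory-function corrections. The persistent toy sequence below is hypothetical.

```python
def correlation(seq, r):
    """Binary autocovariance C(r) = <a_i * a_{i+r}> - <a>^2."""
    n = len(seq) - r
    mean = sum(seq) / len(seq)
    return sum(seq[i] * seq[i + r] for i in range(n)) / n - mean ** 2

def additive_markov_prob(seq, history, order):
    """Additive-Markov approximation of P(next = 1 | history): the mean
    occupation plus memory-function corrections, using the weak-correlation
    estimate F(r) ~ C(r) / C(0)."""
    mean = sum(seq) / len(seq)
    c0 = correlation(seq, 0)
    p = mean
    for r in range(1, order + 1):
        f_r = correlation(seq, r) / c0
        p += f_r * (history[-r] - mean)
    return min(max(p, 0.0), 1.0)

# Toy persistent binary sequence (hypothetical): alternating runs of
# five 1s and five 0s, so the next symbol tends to repeat the last one.
seq = ([1] * 5 + [0] * 5) * 40
p_after_one = additive_markov_prob(seq, history=[1], order=1)
p_after_zero = additive_markov_prob(seq, history=[0], order=1)
```

Higher-order terms in the paper's decomposition add multilinear monomials of deeper history; this sketch keeps only the linear (first-order) part.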
Fu, P; Panneerselvam, A; Clifford, B; Dowlati, A; Ma, P C; Zeng, G; Halmos, B; Leidner, R S
2015-12-01
It is well known that non-small cell lung cancer (NSCLC) is a heterogeneous group of diseases. Previous studies have demonstrated genetic variation among ethnic groups in the epidermal growth factor receptor (EGFR) in NSCLC. Research by our group and others has recently shown a lower frequency of EGFR mutations in African Americans with NSCLC than in their White counterparts. In this study, we use our original study data on EGFR pathway genetics in African American NSCLC as an example to illustrate that univariate analyses based on aggregation versus partition of the data lead to contradictory results, in order to emphasize the importance of controlling for statistical confounding. We further investigate analytic approaches in logistic regression for data with separation, as is the case in our example data set, and apply appropriate methods to identify predictors of EGFR mutation. Our simulation shows that, with separated or nearly separated data, penalized maximum likelihood (PML) produces estimates with the smallest bias and approximately maintains the nominal level, with statistical power equal to or better than that of maximum likelihood and exact conditional likelihood methods. Application of the PML method to our example data set shows that race and EGFR-FISH are independently significant predictors of EGFR mutation. © The Author(s) 2011.
Keegan, Theresa H M; Li, Qian; Steele, Amy; Alvarez, Elysia M; Brunson, Ann; Flowers, Christopher R; Glaser, Sally L; Wun, Ted
2018-06-01
Hodgkin lymphoma (HL) survivors experience high risks of second cancers and cardiovascular disease, but no studies have considered whether the occurrence of these and other medical conditions differ by sociodemographic factors in adolescent and young adult (AYA) survivors. Data for 5,085 patients aged 15-39 when diagnosed with HL during 1996-2012 and surviving ≥ 2 years were obtained from the California Cancer Registry and linked to hospitalization data. We examined the impact of race/ethnicity, neighborhood socioeconomic status (SES), and health insurance on the occurrence of medical conditions (≥ 2 years after diagnosis) and the impact of medical conditions on survival using multivariable Cox proportional hazards regression. Twenty-six percent of AYAs experienced at least one medical condition and 15% had ≥ 2 medical conditions after treatment for HL. In multivariable analyses, Black HL survivors had a higher likelihood (vs. non-Hispanic Whites) of endocrine [hazard ratio (HR) = 1.37, 95% confidence interval (CI) 1.05-1.78] and circulatory system diseases (HR = 1.58, CI 1.17-2.14); Hispanics had a higher likelihood of endocrine diseases [HR = 1.24 (1.04-1.48)]. AYAs with public or no insurance (vs. private/military) had higher likelihood of circulatory system diseases, respiratory system diseases, chronic kidney disease/renal failure, liver disease, and endocrine diseases. AYAs residing in low SES neighborhoods (vs. high) had higher likelihood of respiratory system and endocrine diseases. AYAs with these medical conditions or second cancers had an over twofold increased risk of death. Strategies to improve health care utilization for surveillance and secondary prevention among AYA HL survivors at increased risk of medical conditions may improve outcomes.
Likelihood Ratios for the Emergency Physician.
Peng, Paul; Coyle, Andrew
2018-04-26
The concept of likelihood ratios was introduced more than 40 years ago, yet this powerful metric has still not seen wider application or discussion in the medical decision-making process. There is concern that clinicians-in-training are still being taught an over-simplified approach to diagnostic test performance and have limited exposure to likelihood ratios. Even those familiar with likelihood ratios might perceive them as mathematically cumbersome to apply, if not difficult to determine for a particular disease process.
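The arithmetic is compact: LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity, and post-test odds = pre-test odds x LR. A short illustration with made-up test characteristics (90% sensitivity, 80% specificity):

```python
def positive_lr(sensitivity, specificity):
    # Likelihood ratio of a positive result: P(+ | disease) / P(+ | no disease)
    return sensitivity / (1.0 - specificity)

def negative_lr(sensitivity, specificity):
    # Likelihood ratio of a negative result: P(- | disease) / P(- | no disease)
    return (1.0 - sensitivity) / specificity

def post_test_probability(pre_test_prob, lr):
    # Convert probability to odds, multiply by the likelihood ratio,
    # then convert back to a probability.
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

lr_pos = positive_lr(0.90, 0.80)             # ≈ 4.5
p = post_test_probability(0.20, lr_pos)      # 20% pre-test prob rises to ≈ 53%
```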
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement in noisy environments, it still suffers from performance degradation due to overfitting when test data are insufficient. To address this issue, the proposed histogram equalization technique employs Bayesian estimation of the test cumulative distribution function. A previous study on the Aurora-4 task reported that the proposed approach provided substantial performance gains in speech recognition systems based on Gaussian mixture model-hidden Markov model acoustic modeling. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
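For orientation, the basic (non-class-based, non-Bayesian) form of histogram equalization maps each test feature through its empirical CDF and then through the inverse empirical CDF of a reference (training) distribution. A minimal sketch assuming scalar, per-dimension features, not the paper's Bayesian estimator:

```python
import numpy as np

def histogram_equalize(test_feats, ref_feats):
    # ranks/argsort twice gives each value's rank; (rank + 0.5)/n is its
    # empirical CDF value, which is then pushed through the inverse
    # empirical CDF (sorted order statistics) of the reference data.
    ranks = np.argsort(np.argsort(test_feats))
    u = (ranks + 0.5) / len(test_feats)
    ref_sorted = np.sort(ref_feats)
    idx = np.clip((u * len(ref_feats)).astype(int), 0, len(ref_feats) - 1)
    return ref_sorted[idx]
```

After this mapping, the test features follow the reference distribution regardless of the (monotone) distortion in the test environment.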
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. The estimator is consistent, converging to the true parameter values as the sample size increases to infinity, and is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood have the smallest asymptotic variance among standard estimators as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
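In practice the mixture likelihood is usually maximized with the EM algorithm. A generic two-component Gaussian sketch on synthetic data (the paper's rubber-price and exchange-rate series are not reproduced here):

```python
import numpy as np

def em_two_gaussian(x, iters=200):
    # EM for a two-component Gaussian mixture: alternate posterior
    # responsibilities (E-step) with weighted moment updates (M-step).
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = p1 / (p1 + p2)
        # M-step: update mixing weight, means and standard deviations
        w = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r)) + 1e-9
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r)) + 1e-9
    return w, mu1, mu2, s1, s2
```

Initializing the means at the sample extremes is a simple way to separate the components; production code would restart from several initial values, since the mixture likelihood is multimodal.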
Clavel, Julien; Aristide, Leandro; Morlon, Hélène
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework for high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
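The core numerical difficulty is that the p x p sample covariance matrix is singular whenever p >= n. A minimal sketch of one simple penalty, linear shrinkage toward a diagonal target, which restores invertibility; the paper's phylogenetically informed penalties are more elaborate:

```python
import numpy as np

def ridge_covariance(X, lam):
    # Linear shrinkage of the sample covariance toward a diagonal target.
    # For any lam > 0 the result is positive definite even when the
    # number of traits p exceeds the sample size n.
    S = np.cov(X, rowvar=False)
    target = np.diag(np.diag(S))
    return (1.0 - lam) * S + lam * target

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 20))           # n = 10 samples, p = 20 traits
S = np.cov(X, rowvar=False)
Sp = ridge_covariance(X, 0.5)
```

The shrinkage intensity `lam` plays the role of the penalty intensity selected by cross-validation or GIC in the penalized likelihood framework.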
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean, variance, and cumulative distribution functions of responses are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
NASA Astrophysics Data System (ADS)
Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.
2017-10-01
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ~21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
Christensen, Ole F
2012-12-03
Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. 
Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates and by resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem, presenting the sensory information in a format that is easy for decision makers to interpret and that is well suited to fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution: each sensor data stream is transformed into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
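Under independence across sensors, a likelihood map is the pointwise sum of per-sensor log-likelihoods over a spatial grid; the peak locates the target. A toy sketch for range-only sensors with Gaussian noise (the paper's multimodal sensors and gossip-based distributed computation are not modeled):

```python
import numpy as np

def likelihood_map(grid_x, grid_y, sensors):
    # Combine independent sensor readings into a spatial log-likelihood map:
    # each sensor contributes a Gaussian range likelihood for a hypothetical
    # target at every grid cell (x, y).
    X, Y = np.meshgrid(grid_x, grid_y)
    logL = np.zeros_like(X)
    for (sx, sy, measured_range, sigma) in sensors:
        d = np.hypot(X - sx, Y - sy)
        logL += -0.5 * ((d - measured_range) / sigma) ** 2
    return logL

# Three sensors report ranges consistent with a target near (3, 4).
gx = np.arange(0.0, 10.01, 0.1)
gy = np.arange(0.0, 10.01, 0.1)
L = likelihood_map(gx, gy, [(0.0, 0.0, 5.0, 0.5),
                            (10.0, 0.0, 8.062, 0.5),
                            (0.0, 10.0, 6.708, 0.5)])
iy, ix = np.unravel_index(np.argmax(L), L.shape)
```

Adding another modality (e.g. bearing or acoustic intensity) would simply add another log-likelihood term per sensor, which is what makes the map representation convenient for fusion.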
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable quantitative analysis of the underlying mechanisms. However, such model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach to estimating model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull-distribution parameters; (2) data for contour plots of relative likelihood for the two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by the likelihood-ratio method.
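The censored Weibull likelihood at the heart of such a program is straightforward: failures contribute the density, suspensions (runouts) contribute the survival function. A sketch in Python rather than Fortran, on synthetic data, not the program described above:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, failed):
    # Two-parameter Weibull negative log-likelihood with type-I (right)
    # censoring: failures use the log-density, censored units the
    # log-survival function. Parameters are optimized on the log scale
    # to keep shape and scale positive.
    shape, scale = np.exp(params)
    z = t / scale
    log_pdf = np.log(shape / scale) + (shape - 1.0) * np.log(z) - z ** shape
    log_surv = -z ** shape
    return -np.sum(np.where(failed, log_pdf, log_surv))

def fit_weibull(t, failed):
    res = minimize(weibull_negloglik,
                   x0=np.array([0.5, np.log(t.mean())]),
                   args=(t, failed), method="Nelder-Mead")
    return np.exp(res.x)   # (shape, scale) maximum-likelihood estimates
```

Likelihood-ratio confidence intervals, as in the program, would then profile this same function over one parameter while re-maximizing over the other.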
Ahn, Jaeil; Mukherjee, Bhramar; Banerjee, Mousumi; Cooney, Kathleen A.
2011-01-01
The stereotype regression model for categorical outcomes, proposed by Anderson (1984), is nested between the baseline-category logits model and the adjacent-category logits model with proportional odds structure. The stereotype model is more parsimonious than the ordinary baseline-category (or multinomial logistic) model due to a product representation of the log odds-ratios in terms of a common parameter corresponding to each predictor and category-specific scores. The model can be used for both ordered and unordered outcomes. For ordered outcomes, the stereotype model allows more flexibility than the popular proportional odds model in capturing highly subjective ordinal scaling that does not result from categorization of a single latent variable but is inherently multidimensional in nature. As pointed out by Greenland (1994), an additional advantage of the stereotype model is that it provides unbiased and valid inference under outcome-stratified sampling, as in case-control studies. In addition, for matched case-control studies, the stereotype model is amenable to the classical conditional likelihood principle, whereas there is no reduction due to sufficiency under the proportional odds model. In spite of these attractive features, the model has been applied less often, as there are issues with maximum likelihood estimation and likelihood-based testing approaches due to non-linearity and lack of identifiability of the parameters. We present a comprehensive Bayesian inference and model comparison procedure for this class of models as an alternative to the classical frequentist approach. We illustrate our methodology by analyzing data from The Flint Men's Health Study, a case-control study of prostate cancer in African-American men aged 40 to 79 years. We use clinical staging of prostate cancer in terms of Tumors, Nodes and Metastasis (TNM) as the categorical response of interest. PMID:19731262
An analytic modeling and system identification study of rotor/fuselage dynamics at hover
NASA Technical Reports Server (NTRS)
Hong, Steven W.; Curtiss, H. C., Jr.
1993-01-01
A combination of analytic modeling and system identification methods has been used to develop an improved dynamic model describing the response of articulated-rotor helicopters to control inputs. A high-order linearized model of coupled rotor/body dynamics, including flap and lag degrees of freedom and inflow dynamics with literal coefficients, is compared to flight-test data from single-rotor helicopters in the near-hover trim condition. The identification problem was formulated using the maximum likelihood function in the time domain. The dynamic model with literal coefficients was used to generate the model states, and the model was parametrized in terms of physical constants of the aircraft rather than the stability derivatives, resulting in a significant reduction in the number of quantities to be identified. The likelihood function was optimized using the genetic algorithm approach. This method proved highly effective in producing an estimated model from flight-test data that included coupled fuselage/rotor dynamics. Using this approach it has been shown that blade flexibility is a significant contributing factor to the discrepancies between theory and experiment shown in previous studies. Adding flexible modes, and properly incorporating the constraint due to the lag dampers, results in excellent agreement between flight test and theory, especially in the high-frequency range.
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology that combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression-and-optimization approach, the BMCML results are more comprehensive and performed relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable to those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and it is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer within the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001), but the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for non-flexible piecewise exponential models). We found no major differences between the correction methods; however, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
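A quick diagnostic in the same spirit as the score test is the Pearson dispersion statistic: Pearson chi-square divided by the residual degrees of freedom, which should be near 1 for a well-specified Poisson model and substantially above 1 under overdispersion. A minimal sketch on simulated counts (not the cancer registry data; the fitted means are taken as known here for illustration):

```python
import numpy as np

def pearson_dispersion(y, mu, n_params):
    # Pearson chi-square / residual degrees of freedom. Under a correct
    # Poisson model Var(y) = mu, so each term has expectation ~1.
    chi2 = np.sum((y - mu) ** 2 / mu)
    return chi2 / (len(y) - n_params)
```

In quasi-likelihood estimation this same quantity scales the model-based standard errors, which is why it is a natural first check before moving to negative binomial or flexible piecewise models.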
Kim, D.; Burge, J.; Lane, T.; Pearlson, G. D; Kiehl, K. A; Calhoun, V. D.
2008-01-01
We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge et al., 2007) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge and Lane, 2005). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, 1991). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal and frontal cortices, plus cerebellum during an auditory paradigm. PMID:18602482
Maximum likelihood phase-retrieval algorithm: applications.
Nahrstedt, D A; Southwell, W H
1984-12-01
The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
The effect of women's suggestive clothing on men's behavior and judgment: a field study.
Guéguen, Nicolas
2011-10-01
Numerous studies have shown that men overestimate the sexual intent of women based on their clothing style; however, this hypothesis has not been assessed empirically in a natural setting. This small field study measured the time it took for men to approach two female confederates sitting in a tavern, one wearing suggestive clothes and one wearing more conservative clothes. The behavior of 108 men was observed over 54 periods on 16 different nights in two different taverns. The time it took for the men to approach after initial eye contact was significantly shorter in the suggestive clothing condition. The men were also asked by male confederates to rate the likelihood of having a date with the women, and having sex on the first date. The men rated their chances to have a date and to have sex significantly higher in the suggestive clothing condition. Results are discussed with respect to men's possible misinterpretation that women's clothing indicates sexual interest, and the risks associated with the misinterpretation.
O'Connell, Megan E; Tuokko, Holly; Voll, Stacey; Simard, Martine; Griffith, Lauren E; Taler, Vanessa; Wolfson, Christina; Kirkland, Susan; Raina, Parminder
We detail a new approach to the creation of normative data for neuropsychological tests. The traditional approach to normative data creation is to make demographic adjustments based on observed correlations between single neuropsychological tests and selected demographic variables. We argue, however, that this does not describe the implications for clinical practice, such as an increased likelihood of misclassification of cognitive impairment, nor does it elucidate the impact on decision-making with a neuropsychological battery. We propose base rate analyses, specifically differential base rates of impaired scores between theoretical and actual values, as the basis for decisions to create demographic adjustments within normative data. Differential base rates empirically describe the potential clinical implications of failing to create an appropriate normative group. We demonstrate this approach with data from a short telephone-administered neuropsychological battery given to a large, neurologically healthy sample aged 45-85 years. We explored whether adjustments for age and medical conditions were warranted based on differential base rates of spuriously impaired scores. Theoretical base rates underestimated the frequency of impaired scores in older adults and overestimated it in younger adults, providing an evidence base for the creation of age-corrected normative data. In contrast, the number of medical conditions (numerous cardiovascular, hormonal, and metabolic conditions) was not related to differential base rates of impaired scores. Despite a small correlation between the number of medical conditions and each neuropsychological variable, normative adjustments for the number of medical conditions do not appear warranted. Implications for the creation of normative data are discussed.
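The theoretical-versus-actual base rate comparison can be sketched in a few lines. Assuming independent tests, the theoretical chance of at least one score below the 5th percentile across k tests is 1 - 0.95^k; the empirical rate is the observed fraction of people with at least one flagged score. The battery, scores, and cutoffs below are hypothetical.

```python
def theoretical_base_rate(k, cutoff_pct=0.05):
    """P(>= 1 impaired score among k independent tests)."""
    return 1 - (1 - cutoff_pct) ** k

def empirical_base_rate(score_matrix, cutoffs):
    """Fraction of people with >= 1 score at/below its test-specific cutoff."""
    flagged = sum(
        any(s <= c for s, c in zip(person, cutoffs)) for person in score_matrix
    )
    return flagged / len(score_matrix)

# Hypothetical battery of 4 tests for 5 people (z-scores), cutoff z = -1.645.
scores = [
    [0.2, -1.7, 0.5, 1.0],
    [1.1, 0.3, -0.2, 0.8],
    [-1.9, -2.1, 0.0, 0.4],
    [0.6, 0.9, 1.2, 0.1],
    [0.3, -0.5, -1.8, 0.7],
]
cutoffs = [-1.645] * 4
print(round(theoretical_base_rate(4), 3))   # → 0.185
print(empirical_base_rate(scores, cutoffs)) # → 0.6
```

A large gap between the two rates in some demographic stratum is the kind of evidence the authors use to justify normative adjustments.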
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.
2017-01-01
In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore an example is provided to illustrate the application of the proposed method, together with a comparison analysis.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
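As a tiny illustration of Fisher scoring as a direct likelihood maximizer (not the tomography algorithm itself), consider a one-parameter Poisson log-rate model y_i ~ Poisson(exp(beta)), whose ML solution is beta = log(mean(y)). The scoring update adds (score / expected information) at each step.

```python
import math

def fisher_scoring_poisson(y, beta=0.0, tol=1e-10, max_iter=50):
    """Fisher scoring for y_i ~ Poisson(exp(beta))."""
    for _ in range(max_iter):
        mu = math.exp(beta)
        score = sum(yi - mu for yi in y)   # d loglik / d beta
        info = len(y) * mu                 # expected (Fisher) information
        step = score / info
        beta += step
        if abs(step) < tol:
            break
    return beta

y = [3, 5, 2, 4, 6]                        # hypothetical counts
beta_hat = fisher_scoring_poisson(y)
print(round(math.exp(beta_hat), 6))        # → 4.0, the sample mean
```

For this exponential-family model the expected information equals the negative Hessian, so Fisher scoring coincides with Newton's method; the Jacobi and Gauss-Seidel schemes in the paper apply the same idea coordinate-wise to many parameters.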
Empirical likelihood-based tests for stochastic ordering
BARMI, HAMMOU EL; MCKEAGUE, IAN W.
2013-01-01
This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
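The unbinned idea can be sketched for an ideal polarimeter (the paper's contribution is handling nonideal instruments, which is not reproduced here): maximize the summed log of p(phi) = (1/2pi)(1 + mu cos 2(phi - phi0)) over the modulation amplitude mu and angle phi0, here by coarse grid search on simulated angles.

```python
import math, random

def neg_loglik(phis, mu, phi0):
    return -sum(
        math.log((1 + mu * math.cos(2 * (p - phi0))) / (2 * math.pi))
        for p in phis
    )

def fit(phis, n_grid=40):
    """Grid-search MLE over mu in [0, 0.99] and phi0 in [0, pi]."""
    best = None
    for i in range(n_grid):
        for j in range(n_grid):
            mu = 0.99 * i / (n_grid - 1)
            phi0 = math.pi * j / (n_grid - 1)
            nll = neg_loglik(phis, mu, phi0)
            if best is None or nll < best[0]:
                best = (nll, mu, phi0)
    return best[1], best[2]

# Simulate azimuthal scattering angles with true mu = 0.5, phi0 = pi/4
# (rejection sampling from the modulation curve).
random.seed(1)
true_mu, true_phi0 = 0.5, math.pi / 4
phis = []
while len(phis) < 1000:
    p = random.uniform(0, 2 * math.pi)
    if random.uniform(0, 1 + true_mu) <= 1 + true_mu * math.cos(2 * (p - true_phi0)):
        phis.append(p)

mu_hat, phi0_hat = fit(phis)
print(round(mu_hat, 2), round(phi0_hat, 2))
```

Because no histogram binning is involved, every event contributes its exact angle to the likelihood, which is where the MDP gain over the sinusoid fit comes from.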
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…
New Assumptions to Guide SETI Research
NASA Technical Reports Server (NTRS)
Colombano, S. P.
2018-01-01
The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach that combines Bayesian inference with physics-model-based signal processing, in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval discriminator on the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions, signifying that the EMS either is or is not identified as the target radionuclide; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
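A generic Wald sequential probability ratio test (SPRT), sketched below on exponential interarrival times, illustrates the "cross one of two thresholds" decision structure; this is the textbook test, not the patented processing chain, and the rates and thresholds are hypothetical.

```python
import math, random

def sprt(times, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald SPRT between interarrival rates lam0 (H0) and lam1 (H1)."""
    upper = math.log((1 - beta) / alpha)    # cross above: accept H1
    lower = math.log(beta / (1 - alpha))    # cross below: accept H0
    llr = 0.0
    for n, t in enumerate(times, start=1):
        # log-likelihood ratio increment for an Exponential(lam) interarrival
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(times)

random.seed(7)
stream = [random.expovariate(5.0) for _ in range(500)]  # true rate: 5 events/s
decision, n_used = sprt(stream, lam0=1.0, lam1=5.0)
print(decision, n_used)
```

The test typically terminates after only a handful of events when the true rate matches one hypothesis well, which is what makes the sequential formulation attractive for low-count measurements.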
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; a statistical extension of the methods; and an application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
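For reference, the baseline these methods adjust is the independence Bayes calculation: posttest odds equal pretest odds multiplied by each test's likelihood ratio, which overstates the update when tests are correlated. A minimal sketch with hypothetical numbers:

```python
# Independence-Bayes updating with likelihood ratios.
# All probabilities and LRs below are hypothetical.

def posttest_probability(pretest_p, lrs):
    odds = pretest_p / (1 - pretest_p)
    for lr in lrs:
        odds *= lr
    return odds / (1 + odds)

# e.g. 20% pretest probability of obstructive airways disease, a positive
# history finding (LR 3.5) and a positive exam finding (LR 2.0):
p = posttest_probability(0.20, [3.5, 2.0])
print(round(p, 3))   # → 0.636
```

Albert-style adjustment would shrink the combined LR for dependent test pairs (e.g. via an offset term in a logistic model) rather than multiplying the marginal LRs naively.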
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty
Baele, Guy; Lemey, Philippe; Suchard, Marc A.
2016-01-01
Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
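The conditional-versus-marginal point can be illustrated with a small simulation of a two-stage design (stop at N = n if the first-stage mean exceeds a threshold, else continue to N = 2n); the design parameters below are made up. Conditional on N, the sample averages deviate in opposite directions, while marginally the deviations largely cancel and shrink as n grows.

```python
import random, statistics

random.seed(42)
mu, sigma, n, reps, threshold = 0.0, 1.0, 25, 20000, 0.0
stopped_early, continued, overall = [], [], []

for _ in range(reps):
    stage1 = [random.gauss(mu, sigma) for _ in range(n)]
    m1 = statistics.fmean(stage1)
    if m1 > threshold:                      # stop: N = n
        est = m1
        stopped_early.append(est)
    else:                                   # continue: N = 2n
        stage2 = [random.gauss(mu, sigma) for _ in range(n)]
        est = statistics.fmean(stage1 + stage2)
        continued.append(est)
    overall.append(est)

print(round(statistics.fmean(stopped_early), 3),   # biased upward given N = n
      round(statistics.fmean(continued), 3),       # biased downward given N = 2n
      round(statistics.fmean(overall), 4))         # much smaller marginal deviation
```

This is exactly the "false impression of bias" the authors describe: each conditional stratum looks badly biased even though the estimator behaves well marginally.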
Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions
NASA Astrophysics Data System (ADS)
Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.
2016-02-01
The technological evolution in terms of computational capacity, data acquisition systems, numerical modelling and operational oceanography is supplying opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time and space variable individual vessel accident risk levels and shoreline contamination risk from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with vessel tracking automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching shoreline and its environmental and socio-economic vulnerabilities. The oil reaching shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels along time, or as an alternative, a correction factor based on vessel distance from coast. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to estimate the risk properly sensitive to dynamic metocean conditions and to oil transport behaviour. The integration of meteo-oceanic + oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances the maritime situational awareness and the decision support model, providing a more realistic approach in the assessment of shoreline impacts. 
The risk assessment from historical data can help identify typical risk patterns ("hot spots") or support sensitivity analyses of specific conditions, whereas real-time risk levels can be used to prioritize individual ships and geographical areas, to position tugs strategically, and to implement dynamic risk-based vessel traffic monitoring.
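The likelihood-times-consequence structure of the rating can be reduced to a minimal skeleton, with a crude exponential distance-to-coast attenuation standing in for the oil-drift model (the paper's alternative to running the full spill simulation). All weights, probabilities, and vessel figures below are hypothetical.

```python
import math

def vessel_risk(spill_probability, oil_tonnes, shoreline_vulnerability,
                distance_km, e_folding_km=20.0):
    """Risk = likelihood x consequence, with a distance-based reach factor."""
    reach_factor = math.exp(-distance_km / e_folding_km)  # proxy for drift model
    consequence = oil_tonnes * reach_factor * shoreline_vulnerability
    return spill_probability * consequence

# Two hypothetical vessels: a laden tanker far offshore vs. a cargo ship
# close to a vulnerable stretch of coast.
tanker = vessel_risk(2e-5, 5000, shoreline_vulnerability=0.8, distance_km=100)
cargo  = vessel_risk(1e-5, 300, shoreline_vulnerability=0.8, distance_km=5)
print(cargo > tanker)   # → True: proximity can dominate cargo size
```

In the operational tool the reach factor is replaced by virtual spill simulations driven by metocean forecasts, and the spill probability varies with weather and accident statistics.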
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
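The "translating ratings" approach mentioned above can be sketched under a 2PL model: invert each item's response function P(theta) = 1/(1 + exp(-a(theta - b))) at the panelist's Angoff rating p, giving theta = b + logit(p)/a, then aggregate. Item parameters and ratings below are made up.

```python
import math

def theta_from_rating(p, a, b):
    """Invert the 2PL item response function at probability p."""
    return b + math.log(p / (1 - p)) / a

items = [  # (discrimination a, difficulty b, Angoff rating p) - hypothetical
    (1.2, -0.5, 0.80),
    (0.8,  0.0, 0.65),
    (1.5,  0.7, 0.40),
]
thetas = [theta_from_rating(p, a, b) for a, b, p in items]
cut_theta = sum(thetas) / len(thetas)
print(round(cut_theta, 3))   # → 0.62 on the latent trait scale
```

The ML, EAP, MAP, and WML variants discussed in the article differ in how the ratings are combined into a single cut score rather than in this per-item inversion step.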
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
Martell, R F; Desmet, A L
2001-12-01
This study departed from previous research on gender stereotyping in the leadership domain by adopting a more comprehensive view of leadership and using a diagnostic-ratio measurement strategy. One hundred fifty-one managers (95 men and 56 women) judged the leadership effectiveness of male and female middle managers by providing likelihood ratings for 14 categories of leader behavior. As expected, the likelihood ratings for some leader behaviors were greater for male managers, whereas for other leader behaviors, the likelihood ratings were greater for female managers or did not differ. Leadership ratings revealed some evidence of a same-gender bias. Providing explicit verification of managerial success had only a modest effect on gender stereotyping. The merits of adopting a probabilistic approach in examining the perception and treatment of stigmatized groups are discussed.
Hamilton, Jane E; Desai, Pratikkumar V; Hoot, Nathan R; Gearing, Robin E; Jeong, Shin; Meyer, Thomas D; Soares, Jair C; Begley, Charles E
2016-11-01
Behavioral health-related emergency department (ED) visits have been linked with ED overcrowding, an increased demand on limited resources, and a longer length of stay (LOS) due in part to patients being admitted to the hospital but waiting for an inpatient bed. This study examines factors associated with the likelihood of hospital admission for ED patients with behavioral health conditions at 16 hospital-based EDs in a large urban area in the southern United States. Using Andersen's Behavioral Model of Health Service Use for guidance, the study examined the relationship between predisposing (characteristics of the individual, i.e., age, sex, race/ethnicity), enabling (system or structural factors affecting healthcare access), and need (clinical) factors and the likelihood of hospitalization following ED visits for behavioral health conditions (n = 28,716 ED visits). In the adjusted analysis, a logistic fixed-effects model with blockwise entry was used to estimate the relative importance of predisposing, enabling, and need variables added separately as blocks while controlling for variation in unobserved hospital-specific practices across hospitals and time in years. Significant predisposing factors associated with an increased likelihood of hospitalization following an ED visit included increasing age, while African American race was associated with a lower likelihood of hospitalization. Among enabling factors, arrival by emergency transport and a longer ED LOS were associated with a greater likelihood of hospitalization while being uninsured and the availability of community-based behavioral health services within 5 miles of the ED were associated with lower odds. 
Among need factors, a discharge diagnosis of schizophrenia/psychotic spectrum disorder, an affective disorder, a personality disorder, dementia, or an impulse control disorder, as well as secondary diagnoses of suicidal ideation and/or suicidal behavior, increased the likelihood of hospitalization following an ED visit, suggesting an opportunity to improve the efficiency of ED care through the provision of psychiatric services to stabilize and treat patients with serious mental illness. The block of enabling factors was the strongest predictor of hospitalization following an ED visit compared to predisposing and need factors. Our findings also provide evidence of disparities in hospitalization of the uninsured and racial and ethnic minority patients with ED visits for behavioral health conditions. Thus, improved access to community-based behavioral health services and an increased capacity for inpatient psychiatric hospitals for treating indigent patients may be needed to improve the efficiency of ED services in our region for patients with behavioral health conditions. © 2016 by the Society for Academic Emergency Medicine.
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.
Zierke, Stephanie; Bakos, Jason D
2010-04-12
Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
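The core PLF update that the co-processor pipelines can be stated in a few lines: the conditional likelihood vector at an internal node is the elementwise product, over its two children, of (branch transition matrix times child vector). A toy 4-state (A, C, G, T) example with a made-up Jukes-Cantor-like matrix:

```python
def node_likelihood(P_left, L_left, P_right, L_right):
    """One PLF step: combine two child conditional-likelihood vectors."""
    def matvec(P, v):
        return [sum(P[i][j] * v[j] for j in range(4)) for i in range(4)]
    a, b = matvec(P_left, L_left), matvec(P_right, L_right)
    return [x * y for x, y in zip(a, b)]

# Hypothetical transition matrix for some branch length.
P = [[0.91, 0.03, 0.03, 0.03],
     [0.03, 0.91, 0.03, 0.03],
     [0.03, 0.03, 0.91, 0.03],
     [0.03, 0.03, 0.03, 0.91]]
# Leaves observed as 'A' and 'C' (one-hot conditional likelihoods).
leaf_A = [1.0, 0.0, 0.0, 0.0]
leaf_C = [0.0, 1.0, 0.0, 0.0]

L = node_likelihood(P, leaf_A, P, leaf_C)
print([round(x, 4) for x in L])   # → [0.0273, 0.0273, 0.0009, 0.0009]
```

Each alignment site performs this same fixed sequence of multiply-accumulates with no branching, which is why the loop maps so well onto a deeply pipelined FPGA datapath.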
Sobotta, Svantje; Raue, Andreas; Huang, Xiaoyun; Vanlier, Joep; Jünger, Anja; Bohl, Sebastian; Albrecht, Ute; Hahnel, Maximilian J.; Wolf, Stephanie; Mueller, Nikola S.; D'Alessandro, Lorenza A.; Mueller-Bohl, Stephanie; Boehm, Martin E.; Lucarelli, Philippe; Bonefas, Sandra; Damm, Georg; Seehofer, Daniel; Lehmann, Wolf D.; Rose-John, Stefan; van der Hoeven, Frank; Gretz, Norbert; Theis, Fabian J.; Ehlting, Christian; Bode, Johannes G.; Timmer, Jens; Schilling, Marcel; Klingmüller, Ursula
2017-01-01
IL-6 is a central mediator of the immediate induction of hepatic acute phase proteins (APP) in the liver during infection and after injury, but increased IL-6 activity has been associated with multiple pathological conditions. In hepatocytes, IL-6 activates JAK1-STAT3 signaling that induces the negative feedback regulator SOCS3 and expression of APPs. While different inhibitors of IL-6-induced JAK1-STAT3 signaling have been developed, understanding their precise impact on signaling dynamics requires a systems biology approach. Here we present a mathematical model of IL-6-induced JAK1-STAT3 signaling that quantitatively links physiological IL-6 concentrations to the dynamics of IL-6-induced signal transduction and expression of target genes in hepatocytes. The mathematical model consists of coupled ordinary differential equations (ODE); the model parameters were estimated by a maximum likelihood approach, and identifiability of the dynamic model parameters was ensured by the profile likelihood. Using model simulations coupled with experimental validation we could optimize the long-term impact of the JAK inhibitor Ruxolitinib, a therapeutic compound that is quickly metabolized. Model-predicted doses and treatment timings help reduce inflammatory APP gene expression in primary mouse hepatocytes to levels close to those observed under regenerative conditions. The concept of improved efficacy of the inhibitor through multiple treatments at optimized time intervals was confirmed in primary human hepatocytes. Thus, combining quantitative data generation with mathematical modeling suggests that repetitive treatment with Ruxolitinib is required to effectively target excessive inflammatory responses without exceeding doses recommended by the clinical guidelines. PMID:29062282
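The workflow of ODE-model maximum likelihood estimation plus a profile-likelihood check, as described above, can be sketched on a deliberately tiny one-parameter decay model. This is an illustrative toy, not the paper's JAK1-STAT3 model; the decay model, noise level, and grid optimizer are all assumptions for the sketch:

```python
import math, random

random.seed(1)

def model(k, x0, times):
    # Closed-form solution of the toy ODE dx/dt = -k*x, standing in for
    # one observable of a larger ODE signaling model.
    return [x0 * math.exp(-k * t) for t in times]

times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
true_k, x0, sigma = 0.7, 10.0, 0.2
data = [x + random.gauss(0.0, sigma) for x in model(true_k, x0, times)]

def nll(k):
    # Gaussian negative log-likelihood (up to a constant) for decay rate k.
    pred = model(k, x0, times)
    return sum((d - p) ** 2 for d, p in zip(data, pred)) / (2.0 * sigma ** 2)

# Maximum likelihood by dense grid search; a real analysis would use a
# gradient-based optimizer and fit all parameters jointly.
grid = [i / 1000.0 for i in range(1, 3000)]
k_hat = min(grid, key=nll)

# Profile-likelihood 95% interval: all k whose NLL lies within
# chi2(1, 0.95)/2 ≈ 1.92 of the minimum.
profile = [k for k in grid if nll(k) - nll(k_hat) < 1.92]
ci = (profile[0], profile[-1])
```

A narrow, bounded interval indicates the parameter is practically identifiable; an interval hitting the grid edge would signal non-identifiability.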
Schrepf, Andrew; Bradley, Catherine S.; O'Donnell, Michael; Luo, Yi; Harte, Steven E.; Kreder, Karl; Lutgendorf, Susan
2015-01-01
Background Interstitial Cystitis/Bladder Pain Syndrome (IC/BPS) is a condition characterized by pelvic pain and urinary symptoms. Some IC/BPS patients have pain confined to the pelvic region, while others suffer widespread pain. Inflammatory processes have previously been linked to pelvic pain in IC/BPS, but their association with widespread pain in IC/BPS has not been characterized. Methods Sixty-six women meeting criteria for IC/BPS completed self-report measures of pain as part of the Multidisciplinary Approach to the Study of Chronic Pelvic Pain (MAPP), collected 3 days of saliva for cortisol assays, and provided blood samples. Peripheral blood mononuclear cells (PBMCs) were stimulated with Toll-Like Receptor (TLR) 2 and 4 agonists and cytokines were measured in supernatant; IL-6 was also measured in plasma. Associations between inflammatory variables and the likelihood of endorsing extra-pelvic pain, or the presence of a comorbid syndrome, were tested by logistic regression and general linear models, respectively. A subset of patients (n=32) completed Quantitative Sensory Testing. Results A one standard deviation increase in TLR-4 inflammatory response was associated with a 1.59-fold greater likelihood of endorsing extra-pelvic pain (p = .019). Participants with comorbid syndromes also had higher inflammatory responses to TLR-4 stimulation in PBMCs (p = .016). Lower pressure pain thresholds were marginally associated with higher TLR-4 inflammatory responses (p = .062), and significantly associated with higher IL-6 in plasma (p = .031). Conclusions TLR-4 inflammatory responses in PBMCs are a marker of widespread pain in IC/BPS, and should be explored in other conditions characterized by medically unexplained pain. PMID:25771510
Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.
2012-01-01
A Monte Carlo simulation was conducted to investigate the robustness of four latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of non-normality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly non-normal. When the violation of non-normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly non-normal conditions, the GAPI and UPI approaches with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type-I error rates for both the Wald and likelihood ratio tests of interaction effect at N ≥ 500. An empirical example illustrated the use of the four approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance. PMID:23457417
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
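The Laplace device used above — replacing an intractable integral (here, over latent volatility) with a second-order expansion of the log-integrand around its mode — can be illustrated on a one-dimensional integral. This is a generic sketch of the approximation, not the paper's multifractal random walk likelihood; the grid-based mode search and finite-difference curvature are simplifications:

```python
import math

def laplace_log_integral(log_f, x_grid):
    """Laplace approximation to log ∫ exp(log_f(x)) dx: locate the mode
    (grid search here), estimate the curvature by finite differences, and
    integrate the resulting Gaussian analytically."""
    x_star = max(x_grid, key=log_f)
    h = 1e-4
    curv = (log_f(x_star + h) - 2.0 * log_f(x_star) + log_f(x_star - h)) / h ** 2
    return log_f(x_star) + 0.5 * math.log(2.0 * math.pi / -curv)

# For a Gaussian integrand exp(-(x-1)^2 / (2*0.3^2)) the approximation is
# exact: the true integral is sqrt(2*pi) * 0.3.
log_f = lambda x: -(x - 1.0) ** 2 / (2.0 * 0.3 ** 2)
grid = [i / 1000.0 for i in range(-2000, 4000)]
approx = laplace_log_integral(log_f, grid)
exact = math.log(math.sqrt(2.0 * math.pi) * 0.3)
```

For non-Gaussian integrands the approximation carries an error that shrinks as the integrand becomes more sharply peaked, which is the regime exploited in latent-variable likelihoods.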
Attenuation correction in emission tomography using the emission data—A review
Berker, Yannick; Li, Yusheng
2016-01-01
The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) was considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in recent years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area, especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. 
These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging. PMID:26843243
Pseudo and conditional score approach to joint analysis of current count and current status data.
Wen, Chi-Chung; Chen, Yi-Hau
2018-04-17
We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censorship, which are also known in the literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent with rates of square and cubic roots of the sample size, respectively. Asymptotic normality with closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to data from a fracture-osteoporosis survey to identify risk factors jointly for fracture and osteoporosis in elders, while accounting for association between the two events within a subject. © 2018, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.
1979-01-01
The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.
Bayesian model comparison and parameter inference in systems biology using nested sampling.
Pullen, Nick; Morris, Richard J
2014-01-01
Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
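The evidence computation that nested sampling performs can be sketched in a minimal one-dimensional form: repeatedly discard the worst live point, credit it with the geometrically shrinking prior mass, and replace it with a prior draw satisfying the hard likelihood constraint. This is a bare-bones illustration of Skilling's scheme; production implementations use far smarter constrained samplers than the naive rejection loop assumed here:

```python
import math, random

random.seed(0)

def log_like(x):
    # Unnormalized Gaussian likelihood; with a uniform prior on [-5, 5]
    # the true evidence is sqrt(2*pi)/10 ≈ 0.2507.
    return -0.5 * x * x

def nested_sampling(n_live=300, n_iter=2000):
    """Minimal 1-D nested sampling estimate of Z = ∫ L(x) π(x) dx."""
    live = [random.uniform(-5.0, 5.0) for _ in range(n_live)]
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(live, key=log_like)
        l_star = log_like(worst)
        X = math.exp(-i / n_live)          # expected remaining prior mass
        Z += math.exp(l_star) * (X_prev - X)
        X_prev = X
        while True:                        # naive rejection step
            cand = random.uniform(-5.0, 5.0)
            if log_like(cand) > l_star:
                live[live.index(worst)] = cand
                break
    # credit the surviving live points with the leftover prior mass
    Z += X_prev * sum(math.exp(log_like(x)) for x in live) / n_live
    return Z

Z = nested_sampling()
```

Running the same routine under two competing models and taking the ratio of the two evidence estimates yields the Bayes factor used for model comparison.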
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
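One concrete member of the quasi-likelihood family discussed above is the Poisson case, whose multiplicative updates are the classic Lee-Seung rules for the generalized Kullback-Leibler divergence D(V || WH). The sketch below shows that single family member, not the paper's general framework or its signal-dependent-noise algorithms:

```python
import random

random.seed(0)

def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

def nmf_kl(V, rank, n_iter=1000):
    """Lee-Seung multiplicative updates for D(V || WH), the Poisson
    (generalized KL) member of the quasi-likelihood family. Positivity of
    W and H is preserved automatically by the multiplicative form."""
    n, m = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(n_iter):
        WH = matmul(W, H)
        for k in range(rank):
            wsum = sum(W[i][k] for i in range(n))
            for j in range(m):
                H[k][j] *= sum(W[i][k] * V[i][j] / WH[i][j] for i in range(n)) / wsum
        WH = matmul(W, H)
        for k in range(rank):
            hsum = sum(H[k][j] for j in range(m))
            for i in range(n):
                W[i][k] *= sum(H[k][j] * V[i][j] / WH[i][j] for j in range(m)) / hsum
    return W, H

# Exactly rank-2 non-negative data: the factorization should fit closely.
W0 = [[random.random() + 0.1 for _ in range(2)] for _ in range(6)]
H0 = [[random.random() + 0.1 for _ in range(5)] for _ in range(2)]
V = matmul(W0, H0)
W, H = nmf_kl(V, 2)
err = max(abs(v - wh) for rv, rwh in zip(V, matmul(W, H)) for v, wh in zip(rv, rwh))
```

Other noise models in the family (e.g. Gaussian or gamma) swap in different update rules derived from the corresponding quasi-likelihood.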
Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi
2016-10-07
Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that in fact there is no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine the vegetation-related drought risk over China from a perspective based on joint probability. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide. PMID:27713530
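The core idea above — the likelihood of vegetation-related drought conditioned on a climate scenario — can be illustrated with a purely empirical conditional probability on synthetic data. The Gaussian toy series, the 30th-percentile drought definition, and the coefficients are all assumptions for the sketch; the paper's actual joint probability model is not reproduced here:

```python
import random

random.seed(3)

# Synthetic growing-season series: NDVI partly driven by precipitation
# (a Gaussian stand-in for satellite observations).
n = 2000
precip = [random.gauss(0.0, 1.0) for _ in range(n)]
ndvi = [0.6 * p + 0.8 * random.gauss(0.0, 1.0) for p in precip]

def quantile(xs, q):
    s = sorted(xs)
    return s[int(q * (len(s) - 1))]

def cond_prob(event, given):
    """Empirical conditional probability P(event | given) from indicators."""
    joint = sum(1 for e, g in zip(event, given) if e and g)
    return joint / sum(given)

# "Vegetation-related drought" = NDVI below its 30th percentile;
# condition on a dry scenario (precipitation below its 30th percentile).
p30, v30 = quantile(precip, 0.3), quantile(ndvi, 0.3)
dry = [p <= p30 for p in precip]
veg_drought = [v <= v30 for v in ndvi]
p_uncond = sum(veg_drought) / n
p_cond = cond_prob(veg_drought, dry)
```

Because NDVI and precipitation are positively dependent, the conditional drought likelihood exceeds the unconditional one, which is the kind of scenario contrast the joint probability model quantifies.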
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
Optimal Methods for Classification of Digitally Modulated Signals
2013-03-01
Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. … blind demodulation to develop classification algorithms for a wider set of signal types. Two methodologies were used: the likelihood ratio test and the KL divergence.
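The KL-based decision idea — pick the candidate modulation whose model distribution is closest in KL divergence to the empirical distribution of the received samples — can be sketched on a toy one-dimensional example. The signal model (real part of noisy symbols, two hypothetical constellations, histogram binning) is an assumption for illustration, not the report's algorithm:

```python
import math, random

random.seed(2)

def kl(p, q):
    """Discrete Kullback-Leibler divergence D(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def histogram(samples, edges):
    counts = [0] * (len(edges) - 1)
    for s in samples:
        for b in range(len(counts)):
            if edges[b] <= s < edges[b + 1]:
                counts[b] += 1
                break
    n = len(samples)
    return [(c + 1e-9) / n for c in counts]  # tiny smoothing avoids log(0)

edges = [-2.5 + 0.25 * i for i in range(21)]

def model_hist(levels, sigma=0.3, n=20000):
    """Monte Carlo reference histogram for symbol levels + Gaussian noise."""
    return histogram([random.choice(levels) + random.gauss(0.0, sigma)
                      for _ in range(n)], edges)

# Candidate models: BPSK (levels ±1) vs a 4-level PAM-like constellation.
bpsk = model_hist([-1.0, 1.0])
pam4 = model_hist([-1.0, -1.0 / 3, 1.0 / 3, 1.0])

# Received signal actually BPSK: classify by minimum KL divergence.
rx = [random.choice([-1.0, 1.0]) + random.gauss(0.0, 0.3) for _ in range(2000)]
obs = histogram(rx, edges)
decision = "BPSK" if kl(obs, bpsk) < kl(obs, pam4) else "PAM4"
```

Unlike a likelihood ratio test, this rule needs only sampled reference distributions, which is convenient when closed-form likelihoods are unavailable.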
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
NASA Astrophysics Data System (ADS)
Staley, Dennis; Negri, Jacquelyn; Kean, Jason
2016-04-01
Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations consider rainfall a unique independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. 
This method allows for local characterization of the likelihood that a debris flow will occur at a given rainfall intensity, direct calculation of the rainfall rates that will result in a given likelihood, and calculation of spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flows.
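The synthesis described above — rainfall intensity multiplying each basin covariate inside the logit — makes the likelihood-threshold relation invertible in closed form. The sketch below uses hypothetical coefficients and covariates chosen only to illustrate the algebra; it is not the calibrated USGS model:

```python
import math

def likelihood(intensity, x, beta0, betas):
    """Debris-flow likelihood when rainfall intensity multiplies every
    basin covariate inside the logit. As intensity -> 0 the likelihood
    tends to sigmoid(beta0), near zero for negative beta0."""
    z = beta0 + intensity * sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

def threshold_intensity(p_star, x, beta0, betas):
    """Invert the logit: rainfall intensity that yields likelihood p*."""
    logit = math.log(p_star / (1.0 - p_star))
    return (logit - beta0) / sum(b * xi for b, xi in zip(betas, x))

# Hypothetical basin covariates (steepness, burn severity, surface) and
# hypothetical fitted coefficients.
x = [0.6, 0.8, 0.3]
beta0, betas = -3.6, [0.05, 0.04, 0.02]
i50 = threshold_intensity(0.5, x, beta0, betas)  # intensity giving p = 0.5
```

Because the covariates differ basin by basin, the inverted threshold `i50` is automatically spatially explicit, which is the property the combined approach exploits.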
Model-based tomographic reconstruction of objects containing known components.
Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H
2012-10-01
The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J
2013-08-01
The assumption that the occurrence of outcome event must not alter subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence which could be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.
The Effect of Occupational Growth on Labor Force Task Characteristics.
ERIC Educational Resources Information Center
Szafran, Robert F.
1996-01-01
Examination of changes in 495 occupations from 1950-1990 shows an increased likelihood of tasks with high levels of complexity and social interaction, decreased likelihood of fine or gross motor skills or harsh climatic conditions. There is evidence that jobs have become polarized on the need for fine motor skills and level of social interaction.…
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
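The k-step Newton idea summarized above — start from a √n-consistent estimator and take a fixed number of Newton steps on the log-likelihood — can be sketched for the Cauchy location family, where the MLE has no closed form. This is a minimal illustration, not code from the paper; the sample median serves as the consistent starting estimator:

```python
def score(theta, xs):
    """First derivative of the Cauchy location log-likelihood."""
    return sum(2 * (x - theta) / (1 + (x - theta) ** 2) for x in xs)

def hessian(theta, xs):
    """Second derivative of the Cauchy location log-likelihood."""
    return sum(2 * ((x - theta) ** 2 - 1) / (1 + (x - theta) ** 2) ** 2 for x in xs)

def k_step_newton(xs, k=3):
    """Take k Newton steps toward the MLE, starting from the sample median."""
    xs_sorted = sorted(xs)
    n = len(xs_sorted)
    theta = (xs_sorted[(n - 1) // 2] + xs_sorted[n // 2]) / 2  # median start
    for _ in range(k):
        theta -= score(theta, xs) / hessian(theta, xs)
    return theta
```

Under Cramer-type regularity conditions, a few such steps already bring the estimator asymptotically as close to the efficient MLE as full iteration would.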
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
Robbins, L G
2000-01-01
Graduate school programs in genetics have become so full that courses in statistics have often been eliminated. In addition, typical introductory statistics courses for the "statistics user" rather than the nascent statistician are laden with methods for analysis of measured variables while genetic data are most often discrete numbers. These courses are often seen by students and genetics professors alike as largely irrelevant cookbook courses. The powerful methods of likelihood analysis, although commonly employed in human genetics, are much less often used in other areas of genetics, even though current computational tools make this approach readily accessible. This article introduces the MLIKELY.PAS computer program and the logic of do-it-yourself maximum-likelihood statistics. The program itself, course materials, and expanded discussions of some examples that are only summarized here are available at http://www.unisi.it/ricerca/dip/bio_evol/sitomlikely/mlikely.html. PMID:10628965
Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.
Merrick, Jason R W; Leclerc, Philip
2016-04-01
Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm to seek the rational decision, but we use prospect theory to solve for the attacker's decision to descriptively model the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision, whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
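The two descriptive ingredients used for the attacker model — loss aversion in the value function and likelihood insensitivity in the probability weighting function — can be sketched with the standard Tversky-Kahneman functional forms. The parameter values below are the conventional published estimates, not those of this article, and the decision weights are applied per outcome rather than cumulatively:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value: risk-averse over gains, loss-averse over losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    if p in (0.0, 1.0):
        return p
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes):
    """Simple (non-rank-dependent) prospect value of [(probability, payoff), ...]."""
    return sum(weight(p) * value(x) for p, x in outcomes)
```

Full cumulative prospect theory would rank-order outcomes before assigning decision weights; the simplification here only illustrates the two behavioral deviations named in the abstract.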
Modeling of the Terminal Velocities of the Dust Ejected Material by the Impact
NASA Astrophysics Data System (ADS)
Rengel, M.; Küppers, M.; Keller, H. U.; Gutiérrez, P.
We compute the distribution of velocities of the particles ejected by the impact of the projectile released from NASA's Deep Impact spacecraft onto the nucleus of comet 9P/Tempel 1 during the 20 h following the collision. This is performed by the development and use of an ill-conditioned inverse problem approach, whose main ingredients are a set of observations taken by the Narrow Angle Camera (NAC) of OSIRIS onboard the Rosetta spacecraft and a set of simple models of the expansion of the dust ejecta plume for different velocities. Terminal velocities are derived using a maximum likelihood estimator.
Chen, Feng; Chen, Suren; Ma, Xiaoxiang
2018-06-01
Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models, developed at large temporal scales, struggle to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving environmental information on crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic condition, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that the traffic speed, traffic volume, curvature and chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that there are a number of factors related to crash likelihood on I-25. Specifically, weekend indicator, November indicator, low speed limit and long remaining service life of rutting indicator are found to increase crash likelihood, while 5-am indicator and number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by the real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengel, S.K.; Morrison, D.B.
1985-01-01
Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, landcover, hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-11-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
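The core computation behind such a fit — the likelihood of an interspike-interval series under a hidden Markov model — is given by the forward algorithm. A minimal sketch with exponential interval densities per state follows; the parameters are illustrative only, and the paper's actual emission model and EM fitting are not reproduced here:

```python
import math

def exp_density(x, rate):
    """Exponential density for an interspike interval in a given state."""
    return rate * math.exp(-rate * x)

def forward_loglik(intervals, init, trans, rates):
    """Scaled forward algorithm: log-likelihood of the interval series."""
    n_states = len(init)
    alpha = [init[s] * exp_density(intervals[0], rates[s]) for s in range(n_states)]
    c = sum(alpha)              # scaling keeps alpha from underflowing
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for x in intervals[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n_states))
                 * exp_density(x, rates[j]) for j in range(n_states)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik
```

Maximizing this quantity over the transition matrix and state-wise interval densities (e.g., by EM) yields the state count, densities, and transition probabilities of the kind reported in the paper.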
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity, manipulated in the within and between levels of a two-level confirmatory factor analysis, by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Quantum state estimation when qubits are lost: a no-data-left-behind approach
Williams, Brian P.; Lougovski, Pavel
2017-04-06
We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.
Overcoming Barriers to Firewise Actions by Residents. Final Report to Joint Fire Science Program
James D. Absher; Jerry J. Vaske; Katie M. Lyon
2013-01-01
Encouraging the public to take action (e.g., creating defensible space) that can reduce the likelihood of wildfire damage and decrease the likelihood of injury is a common approach to increasing wildfire safety and damage mitigation. This study was designed to improve our understanding of both individual and community actions that homeowners currently do or might take...
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAXML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization (APECM) algorithm that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.
Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia
2017-04-01
Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts in flood risk assessment. This study develops an integrated framework for estimating spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), geographic information system, and remote sensing. The north part of Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inferring and spatial mapping. It is superior to an existing spatial naive Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
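The entropy weighting step can be sketched as follows: indices whose values vary more across cells carry more information and receive larger weights, and those weights then exponentiate the per-index likelihoods in the weighted naive Bayes score. This is a schematic illustration; the index values and class-conditional probabilities below are invented, not taken from the study:

```python
import math

def entropy_weights(index_columns):
    """Entropy method: weight_j proportional to 1 - e_j, where e_j is the
    normalized entropy of index j across cells (uniform columns get ~0 weight)."""
    n = len(index_columns[0])
    degrees = []
    for col in index_columns:
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(q * math.log(q) for q in p if q > 0) / math.log(n)
        degrees.append(1 - e)
    s = sum(degrees)
    return [d / s for d in degrees]

def wnb_score(likelihoods, weights, prior):
    """Weighted naive Bayes log-score: log prior + sum_j w_j * log P(x_j | class)."""
    return math.log(prior) + sum(w * math.log(l) for w, l in zip(weights, likelihoods))
```

Comparing `wnb_score` across the flood/no-flood classes cell by cell produces the kind of spatial likelihood surface the abstract describes.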
Lonnie H. Williams
1980-01-01
Fewer houses are being built with crawl spaces and more houses have central heating and air-conditioning, so the number of anobiid beetle infestations should decline. The likelihood of lyctid infestations in domestic hardwoods has been decreased by improved processing and marketing, but increased imports of tropical hardwoods likely will increase the frequency of...
Medical home services for children with behavioral health conditions.
Sheldrick, Radley C; Perrin, Ellen C
2010-01-01
Whether medical services received by children and youth with behavioral health conditions are consistent with a Medical Home has not been systematically studied. The objectives of this study were to examine the variation among four behavioral health conditions in regard to services related to the Medical Home. Cross-sectional analyses of the 2003 National Survey of Children's Health were conducted. Multiple logistic regression analyses tested the impact of behavioral health conditions on medical needs, on Medical Home components, and on likelihood of having a Medical Home overall. Autism, Depression/Anxiety, and Behavior/Conduct problems were associated with reduced likelihood of having a Medical Home, whereas Attention-Deficit Hyperactivity Disorder was associated with increased likelihood. All health conditions predicted increased access to a primary care physician (PCP) and a preventive visit in the past year. However, all were also associated with higher needs for specialty care and all behavioral health conditions except Attention-Deficit Hyperactivity Disorder were associated with difficulties accessing this care. A detailed examination of the receipt of services among children and youth with behavioral health conditions reveals two primary reasons why such care is less likely to be consistent with a Medical Home model: (1) parents are more likely to report needing specialty care; and (2) these needs are less likely to be met. These data suggest that the reason why services received by children and youth with behavioral health conditions are not consistent with the Medical Home has more to do with difficulty accessing specialty care than with problems accessing quality primary care.
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The knowledge of the system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty of parameters of the HEC-HMS model in Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), Nash-Sutcliffe efficiency (NS), normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between the traditional least squares fitting and the Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, sensitivities of the parameters strongly depend on the likelihood function, and vary for different likelihood functions. Most of the parameters were better defined by formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions depict almost a similar effect on sensitivity of parameters. 95% total prediction uncertainty bounds bracketed most of the observed data.
Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides the posterior distribution of model parameters credibly and therefore can be employed for further applications.
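Three of the informal likelihood functions named above have simple closed forms over an observed and simulated discharge series. A minimal sketch with the standard textbook definitions follows; NAE has several variants in the literature, so the normalization by the mean observation below is an assumption:

```python
def nash_sutcliffe(obs, sim):
    """NS = 1 - SSE / variance of observations about their mean (1 is perfect)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1 - sse / sum((o - mean_obs) ** 2 for o in obs)

def normalized_absolute_error(obs, sim):
    """NAE: mean absolute error normalized by the mean observation (one common form)."""
    mean_obs = sum(obs) / len(obs)
    mae = sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)
    return mae / mean_obs

def index_of_agreement(obs, sim):
    """Willmott's IOA: 1 for a perfect match, bounded below by 0."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2 for o, s in zip(obs, sim))
    return 1 - num / den
```

Used inside a sampler such as DREAM(ZS), these scores rank parameter sets but, unlike the formal likelihoods L5-L7, do not follow from an explicit residual-error model.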
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. 
Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
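The weighted least-squares step at the heart of this calibration — fitting DN = c0 + c1·L + c2·L² with per-point weights — can be sketched via the normal equations. This is a generic illustration with synthetic, noise-free data; the actual VIIRS weight model combining count noise, digitization error, and model error is far more involved:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def weighted_quadratic_fit(L, dn, w):
    """Minimize sum_i w_i * (dn_i - c0 - c1*L_i - c2*L_i**2)**2 via normal equations."""
    X = [[1.0, li, li * li] for li in L]
    XtWX = [[sum(wi * xi[r] * xi[c] for wi, xi in zip(w, X)) for c in range(3)]
            for r in range(3)]
    XtWy = [sum(wi * xi[r] * yi for wi, xi, yi in zip(w, X, dn)) for r in range(3)]
    return solve3(XtWX, XtWy)
```

With weights 1/σᵢ² this is exactly the maximum-likelihood fit under independent Gaussian errors in DN; the abstract's point is that when the independent variable and the model itself also carry error, such simple weights misstate the coefficient uncertainties.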
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
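The predictive-performance criterion can be illustrated outside phylogenetics with a toy version: choose between two candidate likelihoods for count data by leave-one-out cross-validation, scoring each model by its log predictive density on the held-out point. This is an illustrative sketch only; real phylogenetic cross-validation folds over sites or taxa and fits far richer models:

```python
import math

def poisson_logpmf(k, lam):
    """Log pmf of a Poisson(lam) count."""
    return k * math.log(lam) - lam - math.log(math.factorial(k))

def geometric_logpmf(k, p):
    """Log pmf of a geometric distribution on {0, 1, 2, ...} with success prob p."""
    return math.log(p) + k * math.log(1 - p)

def fit_poisson(xs):
    return sum(xs) / len(xs)          # MLE: lambda = sample mean

def fit_geometric(xs):
    return 1 / (1 + sum(xs) / len(xs))  # MLE: p = 1 / (1 + mean)

def loo_cv_score(data, fit, logpmf):
    """Mean held-out log predictive density under leave-one-out CV."""
    score = 0.0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        theta = fit(train)
        score += logpmf(data[i], theta)
    return score / len(data)
```

The model with the higher cross-validation score is preferred, paralleling how the study ranks molecular clock and demographic models by predictive performance rather than marginal likelihood.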
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
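The Gaussian special case makes the score-matching idea in the abstract above concrete: because the scoring rule never touches the normalizing constant, the precision matrix of a Gaussian MRF can be estimated in closed form. The sketch below is illustrative only — the 3-node graph, sample size, and all names are invented for the example, and this is not the thesis's regularized estimator:

```python
import numpy as np
from scipy.linalg import solve_lyapunov  # solves A X + X A^T = Q

rng = np.random.default_rng(0)

# True precision matrix of a 3-node Gaussian MRF (zeros = missing edges).
K_true = np.array([[2.0, 0.6, 0.0],
                   [0.6, 2.0, 0.6],
                   [0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(K_true), size=50000)
S = X.T @ X / len(X)  # sample covariance

# For x ~ N(0, K^{-1}) the score-matching objective
#   J(K) = -tr(K) + 0.5 * tr(K^2 S)
# is minimized where K S + S K = 2 I, a Lyapunov equation --
# the normalizing constant is never computed.
K_hat = solve_lyapunov(S, 2.0 * np.eye(3))
print(np.round(K_hat, 1))
```

For SPD S the unique solution of K S + S K = 2I is S^{-1}, so in the unpenalized Gaussian case score matching coincides with the MLE; adding the sparsity regularizer discussed in the abstract is what turns this into the regularized score matching program.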
Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.
2012-01-01
Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems on its own. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
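A minimal sketch may help fix ideas: Firth's bias-reducing adjustment modifies the score by a hat-value term, and a ridge term is added on top. Everything below (the function name, the toy separated data, the choice `lam=0.5`) is a hypothetical illustration of the double-penalty idea, not the authors' estimator:

```python
import numpy as np

def double_penalized_logistic(X, y, lam=1.0, n_iter=200):
    """Firth-type penalized logistic regression with an added ridge term
    (a sketch of the double-penalty idea; lam is the ridge parameter)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        A = X.T @ (W[:, None] * X) + lam * np.eye(p)
        # hat-matrix diagonal used in Firth's bias-reducing score adjustment
        H = np.linalg.solve(A, X.T * W)
        h = np.einsum('ij,ji->i', X, H)
        # modified score: Firth term h*(0.5-mu) plus ridge shrinkage -lam*beta
        score = X.T @ (y - mu + h * (0.5 - mu)) - lam * beta
        step = np.linalg.solve(A, score)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

# Perfectly separated data: plain MLE diverges, the double penalty keeps
# the estimate finite.
X = np.column_stack([np.ones(6), [-3., -2., -1., 1., 2., 3.]])
y = np.array([0., 0., 0., 1., 1., 1.])
beta = double_penalized_logistic(X, y, lam=0.5)
print(beta)  # finite coefficients despite complete separation
```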
Hospital mergers and market overlap.
Brooks, G R; Jones, V G
1997-01-01
OBJECTIVE: To address two questions: What are the characteristics of hospitals that affect the likelihood of their being involved in a merger? What characteristics of particular pairs of hospitals affect the likelihood of the pair engaging in a merger? DATA SOURCES/STUDY SETTING: Hospitals in the 12 county region surrounding the San Francisco Bay during the period 1983 to 1992 were the focus of the study. Data were drawn from secondary sources, including the Lexis/Nexis database, the American Hospital Association, and the Office of Statewide Health Planning and Development of the State of California. STUDY DESIGN: Seventeen hospital mergers during the study period were identified. A random sample of pairs of hospitals that did not merge was drawn to establish a statistically efficient control set. Models constructed from hypotheses regarding hospital and market characteristics believed to be related to merger likelihood were tested using logistic regression analysis. DATA COLLECTION: See Data Sources/Study Setting. PRINCIPAL FINDINGS: The analysis shows that the likelihood of a merger between a particular pair of hospitals is positively related to the degree of market overlap that exists between them. Furthermore, market overlap and performance difference interact in their effect on merger likelihood. In an analysis of individual hospitals, conditions of rivalry, hospital market share, and hospital size were not found to influence the likelihood that a hospital will engage in a merger. CONCLUSIONS: Mergers between hospitals are not driven directly by considerations of market power or efficiency as much as by the existence of specific merger opportunities in the hospitals' local markets. Market overlap is a condition that enables a merger to occur, but other factors, such as the relative performance levels of the hospitals in question and their ownership and teaching status, also play a role in influencing the likelihood that a merger will in fact take place. 
PMID:9018212
... a heart-lung condition or a depressed immune system, may be given the medication palivizumab (Synagis) to decrease the likelihood of RSV infections. By Mayo Clinic Staff.
ATAC Autocuer Modeling Analysis.
1981-01-01
The analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
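The core of a parametric likelihood approximation inside MCMC can be sketched in a few lines: simulate replicates at a proposed parameter, fit a Gaussian to the simulated summary statistic, and score the observed summary under that Gaussian. Everything below — the stand-in simulator, the single summary statistic, the tuning constants — is a toy assumption, not the FORMIND setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n_rep=200):
    """Toy stochastic simulator stand-in: returns a summary statistic
    (a noisy location measure) for n_rep replicate runs."""
    return rng.normal(theta, 1.0, size=n_rep) + rng.normal(0, 0.5, size=n_rep)

def synthetic_loglik(theta, obs_summary):
    # Parametric likelihood approximation: fit a Gaussian to the
    # simulated summaries and score the observed summary under it.
    sims = simulate(theta)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (obs_summary - mu)**2 / sd**2

obs = 3.0                      # "field data" summary statistic
theta, ll = 0.0, synthetic_loglik(0.0, obs)
chain = []
for _ in range(2000):          # random-walk Metropolis on the approximation
    prop = theta + rng.normal(0, 0.5)
    ll_prop = synthetic_loglik(prop, obs)
    if np.log(rng.uniform()) < ll_prop - ll:   # flat prior
        theta, ll = prop, ll_prop
    chain.append(theta)

posterior = np.array(chain[500:])
print(posterior.mean())        # concentrates near the observed summary
```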
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity odd signals TB and EB are found to be consistent with zero.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
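Under conditional independence, the empirical likelihood ratio model reduces to multiplying per-predictor ratios, which a toy grid illustrates. The data and the add-one smoothing below are invented for the example; the study's actual predictors are 30-m DEM derivatives and lithology grids:

```python
import numpy as np

# Toy grids: slope class and lithology class per cell, plus known slides.
slope = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
litho = np.array([0, 1, 1, 0, 1, 1, 0, 1, 0, 1])
slide = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])

def likelihood_ratios(var, slide, n_classes):
    """Empirical likelihood ratio per class: P(class|slide)/P(class|no slide),
    with add-one smoothing to avoid zero divisions."""
    lr = np.empty(n_classes)
    for c in range(n_classes):
        p1 = (np.sum((var == c) & (slide == 1)) + 1) / (slide.sum() + n_classes)
        p0 = (np.sum((var == c) & (slide == 0)) + 1) / ((slide == 0).sum() + n_classes)
        lr[c] = p1 / p0
    return lr

lr_slope = likelihood_ratios(slope, slide, 3)
lr_litho = likelihood_ratios(litho, slide, 2)

# Conditional independence: the joint ratio is the product of the
# per-predictor ratios, giving a relative hazard score per cell.
hazard = lr_slope[slope] * lr_litho[litho]
print(np.round(hazard, 2))
```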
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
A thirteen-year comparison in patterns of attitudes toward counseling.
Rule, W R; Gandy, G L
1994-01-01
Two comparable samples of college students were administered the same survey of attitudes toward counseling in 1976 and 1989. Ratings were obtained for (1) likelihood of seeking counseling, (2) likelihood of seeking help from professional and nonprofessional helpers, (3) likelihood of seeking help for differing types of problems, (4) degree of responsibility the professional should assume, and (5) preferences for five of the major counseling approaches (Adlerian, Behavioral, Gestalt, Person-Centered, Rational-Emotive). Consistencies and changing patterns were noted within each year and between years. Findings are discussed in relation to existing research as well as to possible gender and societal determinants.
Mimosa: Mixture Model of Co-expression to Detect Modulators of Regulatory Interaction
NASA Astrophysics Data System (ADS)
Hansen, Matthew; Everett, Logan; Singh, Larry; Hannenhalli, Sridhar
Functionally related genes tend to be correlated in their expression patterns across multiple conditions and/or tissue-types. Thus co-expression networks are often used to investigate functional groups of genes. In particular, when one of the genes is a transcription factor (TF), the co-expression-based interaction is interpreted, with caution, as a direct regulatory interaction. However, any particular TF, and more importantly, any particular regulatory interaction, is likely to be active only in a subset of experimental conditions. Moreover, the subset of expression samples where the regulatory interaction holds may be marked by the presence or absence of a modifier gene, such as an enzyme that post-translationally modifies the TF. Such subtlety of regulatory interactions is overlooked when one computes an overall expression correlation. Here we present a novel mixture modeling approach where a TF-Gene pair is presumed to be significantly correlated (with unknown coefficient) in an (unknown) subset of expression samples. The parameters of the model are estimated using a Maximum Likelihood approach. The estimated mixture of expression samples is then mined to identify genes potentially modulating the TF-Gene interaction. We have validated our approach using synthetic data and on three biological cases in cow and in yeast. While limited in some ways, as discussed, the work represents a novel approach to mine expression data and detect potential modulators of regulatory interactions.
The Hypothesis-Driven Physical Examination.
Garibaldi, Brian T; Olson, Andrew P J
2018-05-01
The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of physical examinations in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the post-test probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency, while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
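The likelihood-ratio arithmetic behind this approach is a one-liner: convert the pre-test probability to odds, multiply by the maneuver's likelihood ratio, and convert back. A minimal sketch (the LR+ = 8 and the 20% pre-test probability are hypothetical numbers for illustration):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert pre-test probability to post-test probability via odds:
    post-odds = pre-odds * LR (Bayes' theorem in odds form)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A positive finding with LR+ = 8 raises a 20% pre-test probability:
p = post_test_probability(0.20, 8.0)
print(round(p, 2))  # 0.67
```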
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
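The gradient-plus-stochastic-restarts strategy described above can be sketched on a toy exponential-decay model. The model, the synthetic "experimental" data, and the number of restarts are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy model: y = a * exp(-b * t), with noisy "experimental" data.
t = np.linspace(0, 5, 40)
y_obs = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)

def sse(params):
    # Sum of squared errors between data and model prediction.
    a, b = params
    return np.sum((y_obs - a * np.exp(-b * t)) ** 2)

# Gradient-based local search from several random starting points --
# a simple stand-in for the stochastic strategies in the text, which
# avoids getting trapped in a single local minimum.
best = min((minimize(sse, rng.uniform(0, 5, 2)) for _ in range(10)),
           key=lambda r: r.fun)
print(np.round(best.x, 2))  # close to the true (a, b) = (2.0, 0.7)
```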
Norman, Cameron D; Haresign, Helen; Mehling, Christine; Bloomberg, Honey
2016-01-01
A changing and cluttered information landscape has put pressure on health organizations to produce consumer information materials that are not only factual but high quality and engaging to audiences. User-centered design methods can be useful in obtaining feedback from consumers; however, they are labor intensive and slow, which is not responsive to the fast-paced communication landscape influenced by social media. EatRight Ontario (ERO), a provincial nutrition and health support program of Dietitians of Canada, develops evidence-based resources for consumers and sought to increase user-centered design activities by exploring whether the standard approach to feedback could be replicated online. While online feedback has been used in marketing research, few examples are available in health promotion and public health to guide programming and policy. This study compared a traditional in-person approach for recruitment and feedback using paper surveys with an Internet-based approach using Facebook as a recruitment tool and collecting user feedback via the Web. The purpose of the proof-of-concept study was to explore the feasibility of the approach and compare an online versus traditional approach in terms of recruitment issues and response. An exploratory, two-group comparative trial was conducted using a convenience and purposive sampling. Participants reviewed a handout on healthy eating and then completed an 18-item survey with both forced-choice items and open-ended responses. One group viewed a hard-copy prototype and completed a paper survey and the other viewed a PDF prototype via Web links and completed a Web survey. The total days required to fulfill the sample for each group were used as the primary method of efficiency calculation. In total, 44 participants (22 per condition) completed the study, consisting of 42 women and 2 men over the age of 18. Few significant differences were detected between the groups. 
Statistically significant (P≤.05) differences were detected on four attitudinal variables related to the document reviewed and include perceived length of the document, perceived attractiveness, likelihood of contacting ERO for food and nutrition questions in the future, and likelihood of recommending ERO to a friend. In all cases, the responses were more favorable to the document or ERO with the online group. All other variables showed no difference between them. A content review of the qualitative feedback found relative consistency in word use and number of words used, indicating relative parity in the amount of data generated between conditions. The online condition achieved its sampling target in 9 days, while the in-person method took 79 days to achieve the target. An online process of recruitment through Facebook and solicitation of online feedback is a feasible model that yields comparable response levels to in-person methods for user feedback. The online approach appears to be a faster and less resource-intensive approach than traditional in-person methods for feedback generation.
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. 
We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically, and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate for representing the information content of the hydrograph.
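A stripped-down ABC rejection scheme on a flow signature can illustrate the mechanics: simulate under a proposed parameter, compare the simulated flow duration curve to the observed one, and keep parameters that come close. The recession "model", the FDC distance, and the tolerance below are toy assumptions, not the study's hydrological model or its streamflow-derived likelihood:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, n=365):
    """Toy streamflow simulator: exponential recession with rate k
    driven by random 'rain' pulses (a stand-in for a hydrological model)."""
    q, out = 1.0, []
    for _ in range(n):
        q = q * np.exp(-k) + rng.exponential(0.1)
        out.append(q)
    return np.array(out)

def fdc(q):
    # Flow duration curve signature: flows sorted largest to smallest.
    return np.sort(q)[::-1]

obs_fdc = fdc(model(0.3))   # synthetic "observed" signature, true k = 0.3

# ABC rejection: keep parameters whose simulated FDC is close to the
# observed one; the retained sample approximates the posterior.
accepted = []
for _ in range(3000):
    k = rng.uniform(0.05, 1.0)           # prior draw
    dist = np.mean(np.abs(fdc(model(k)) - obs_fdc))
    if dist < 0.08:
        accepted.append(k)

print(len(accepted), np.mean(accepted))  # posterior mass near the true k
```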
ERIC Educational Resources Information Center
Kehoe, E. James; Ludvig, Elliot A.; Sutton, Richard S.
2013-01-01
Rabbits were classically conditioned using compounds of tone and light conditioned stimuli (CSs) presented with either simultaneous onsets (Experiment 1) or serial onsets (Experiment 2) in a delay conditioning paradigm. Training with the simultaneous compound reduced the likelihood of a conditioned response (CR) to the individual CSs ("mutual…
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
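The distributed-likelihood loop is simple to sketch: each device evaluates the log-likelihood of its private data at a proposed parameter value, and only scalar values travel to the central optimizer. A toy single-parameter version (normal model with known variance; the data, function names, and 30-device setup are invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Each "participant device" holds its own private data and exposes only
# a log-likelihood evaluation for a proposed parameter value.
private_data = [rng.normal(5.0, 1.0, size=20) for _ in range(30)]

def local_loglik(data, mu):
    # Computed on-device: N(mu, 1) log-likelihood of the private sample
    # (up to a constant); the raw data never leave the device.
    return -0.5 * np.sum((data - mu) ** 2)

def aggregate_negloglik(mu):
    # The central optimizer only ever sees these scalar values.
    return -sum(local_loglik(d, mu) for d in private_data)

result = minimize_scalar(aggregate_negloglik, bounds=(0, 10), method='bounded')
print(round(result.x, 2))  # close to the true mean of 5.0
```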
NASA Astrophysics Data System (ADS)
Sproles, Eric A.; Roth, Travis R.; Nolin, Anne W.
2017-02-01
In the Pacific Northwest, USA, the extraordinarily low snowpacks of winters 2013-2014 and 2014-2015 stressed regional water resources and the social-environmental system. We introduce two new approaches to better understand how seasonal snow water storage during these two winters would compare to snow water storage under warmer climate conditions. The first approach calculates a spatial-probabilistic metric representing the likelihood that the snow water storage of 2013-2014 and 2014-2015 would occur under +2 °C perturbed climate conditions. We computed snow water storage (basin-wide and across elevations) and the ratio of snow water equivalent to cumulative precipitation (across elevations) for the McKenzie River basin (3041 km2), a major tributary to the Willamette River in Oregon, USA. We applied these computations to calculate the occurrence probability for similarly low snow water storage under climate warming. Results suggest that, relative to +2 °C conditions, basin-wide snow water storage during winter 2013-2014 would be above average, while that of winter 2014-2015 would be far below average. Snow water storage on 1 April corresponds to a 42 % (2013-2014) and 92 % (2014-2015) probability of being met or exceeded in any given year. The second approach introduces the concept of snow analogs to improve the anticipatory capacity of climate change impacts on snow-derived water resources. The use of a spatial-probabilistic approach and snow analogs provide new methods of assessing basin-wide snow water storage in a non-stationary climate and are readily applicable in other snow-dominated watersheds.
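The spatial-probabilistic metric amounts to an exceedance probability evaluated against an ensemble of warmer-climate outcomes, which a toy version shows. The synthetic ensemble and the two test values below are invented; the study derives its distributions from perturbed-climate snow simulations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ensemble of 1 April basin snow water storage (arbitrary
# units) under +2 degrees C perturbed climate conditions.
warm_swe = rng.normal(1.0, 0.35, size=1000)

def exceedance_probability(value, ensemble):
    """Probability that storage meets or exceeds `value` in a given year,
    estimated empirically from the ensemble."""
    return np.mean(ensemble >= value)

# An observed year compared against the warmer-climate ensemble:
print(exceedance_probability(1.05, warm_swe))  # near-average year
print(exceedance_probability(0.5, warm_swe))   # a year this low is rare
```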
Thompson, Ronald G; Alonzo, Dana; Hasin, Deborah S
2013-01-01
This study examined the influences of parental divorce and maternal-paternal histories of alcohol problems on adult offspring lifetime alcohol dependence using data from the 2001-2002 National Epidemiological Survey on Alcohol and Related Conditions (NESARC). Parental divorce and maternal-paternal alcohol problems interacted to differentially influence the likelihood of offspring lifetime alcohol dependence. Experiencing parental divorce and either maternal or paternal alcohol problems doubled the likelihood of alcohol dependence. Divorce and history of alcohol problems for both parents tripled the likelihood. Offspring of parental divorce may be more vulnerable to developing alcohol dependence, particularly when one or both parents have alcohol problems.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property of Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
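The key property mentioned above — immunity to unknown initial conditions — comes from integration by parts against a modulating function that vanishes at both ends of the record. The sketch below shows the idea on a scalar model x'(t) = -a x(t) + b u(t); the model, signal, and the particular Fourier-type functions phi_k(t) = 1 - cos(2*pi*k*t/T) are illustrative choices, not the paper's aircraft equations. Since phi_k(0) = phi_k(T) = 0, integrating phi_k * x' by parts gives -∫ phi_k' x dt = -a ∫ phi_k x dt + b ∫ phi_k u dt, which is linear in (a, b) with no boundary terms.

```python
import numpy as np

a_true, b_true, T = 2.0, 3.0, 5.0
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]
u = np.ones_like(t)                                   # unit-step input
x = (b_true / a_true) * (1.0 - np.exp(-a_true * t))   # exact response, x(0) = 0

def integ(f):
    # trapezoidal rule on the uniform grid
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

rows, rhs = [], []
for k in range(1, 6):
    phi = 1.0 - np.cos(2.0 * np.pi * k * t / T)       # phi(0) = phi(T) = 0
    dphi = (2.0 * np.pi * k / T) * np.sin(2.0 * np.pi * k * t / T)
    # one linear equation in (a, b) per modulating function:
    #   -integ(dphi * x) = -a * integ(phi * x) + b * integ(phi * u)
    rows.append([-integ(phi * x), integ(phi * u)])
    rhs.append(-integ(dphi * x))

(a_hat, b_hat), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(round(a_hat, 3), round(b_hat, 3))   # recovers a = 2, b = 3 closely
```

Note that x(0) never enters the equations; the same least-squares setup works when x is noisy, which is where the paper's adaptive weighting comes in.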
NASA Technical Reports Server (NTRS)
Glaze, L. S.; Baloga, S. M.
2014-01-01
Pahoehoe lavas are recognized as an important landform on Earth, Mars and Io. Observations of such flows on Earth (e.g., Figure 1) indicate that the emplacement process is dominated by random effects. Existing models for lobate 'a'a lava flows that assume viscous fluid flow on an inclined plane are not appropriate for dealing with the numerous random factors present in pahoehoe emplacement. Thus, interpretation of emplacement conditions for pahoehoe lava flows on Mars requires fundamentally different models. A new model that implements a simulation approach has recently been developed that allows exploration of a variety of key influences on pahoehoe lobe emplacement (e.g., source shape, confinement, slope). One important factor that has an impact on the final topographic shape and morphology of a pahoehoe lobe is the volumetric flow rate of lava, where cooling of lava on the lobe surface influences the likelihood of subsequent breakouts.
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
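The direct Monte Carlo baseline that the MCTS method is compared against can be sketched with a toy black-box simulator. Everything here is illustrative: the random-walk dynamics, the event threshold (a stand-in for a near mid-air collision), and the seed-based control of stochasticity. The sketch preserves the paper's key interface assumptions — the searcher sees only a seeded stochastic step and an event check, never the internal state — and scores each sampled trajectory by the log-likelihood of its disturbance sequence.

```python
import math
import random

def rollout(seed, steps=50):
    """Run the hidden-state toy simulator once; return (event?, log-likelihood)."""
    rng = random.Random(seed)               # seed controls the stochasticity
    x, loglik = 0.0, 0.0
    for _ in range(steps):
        w = rng.gauss(0.0, 1.0)             # disturbance drawn by the simulator
        loglik += -0.5 * math.log(2 * math.pi) - 0.5 * w * w
        x += w                              # hidden toy dynamics
        if x >= 10.0:                       # "event" reached
            return True, loglik
    return False, loglik

# direct Monte Carlo search: sample many seeds, keep the most likely failure
best_seed, best_loglik = None, -math.inf
for seed in range(20000):
    hit, loglik = rollout(seed)
    if hit and loglik > best_loglik:
        best_seed, best_loglik = seed, loglik

print(best_seed is not None)   # True: a failure trajectory was found
```

MCTS improves on this by reusing promising prefixes of the disturbance sequence instead of resampling whole trajectories independently.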
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-01-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581
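The likelihood at the heart of such a fit can be evaluated with the forward algorithm. The following hedged sketch (not the paper's code) uses a two-state HMM in which each interspike interval is drawn from a state-specific exponential density; the rates, transition probabilities, and the exponential interval model are all illustrative choices.

```python
import math
import random

def _logsumexp(v):
    m = max(v)
    return m + math.log(sum(math.exp(x - m) for x in v))

def forward_loglik(intervals, trans, rates, init):
    """Log-likelihood of an interspike-interval series under a 2-state HMM
    with exponential emission densities, via the forward algorithm."""
    def logdens(x, rate):                   # log of the exponential density
        return math.log(rate) - rate * x
    logalpha = [math.log(init[s]) + logdens(intervals[0], rates[s])
                for s in range(2)]
    for x in intervals[1:]:
        logalpha = [logdens(x, rates[s]) + _logsumexp(
                        [logalpha[r] + math.log(trans[r][s]) for r in range(2)])
                    for s in range(2)]
    return _logsumexp(logalpha)

# simulate a "bursting" neuron: short intervals in state 0, long in state 1
random.seed(0)
trans = [[0.9, 0.1], [0.2, 0.8]]
rates = [10.0, 1.0]                         # within-state firing rates (1/s)
state, intervals = 0, []
for _ in range(500):
    intervals.append(random.expovariate(rates[state]))
    state = 0 if random.random() < trans[state][0] else 1

ll_true = forward_loglik(intervals, trans, rates, init=[0.5, 0.5])
ll_wrong = forward_loglik(intervals, [[0.5, 0.5]] * 2, [5.0, 5.0], [0.5, 0.5])
print(ll_true > ll_wrong)   # True: the two-state structure fits better
```

Maximum-likelihood fitting, as in the paper, would wrap this evaluation in an optimizer (e.g. Baum-Welch) over the rates and transition probabilities.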
Owners' direct gazes increase dogs' attention-getting behaviors.
Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi
2016-04-01
This study examined whether dogs gain information about humans' attention via their gazes and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in duration of pawing and likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, and this sensitivity may act as an attachment signal to humans and may contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
The risk of predation favors cooperation among breeding prey
Krama, Tatjana; Berzins, Arnis; Rantala, Markus J
2010-01-01
Empirical studies have shown that animals often focus on short-term benefits under conditions of predation risk, which reduces the likelihood that they will cooperate with others. However, some theoretical studies predict that animals in adverse conditions should not avoid cooperation with their neighbors since it may decrease individual risks and increase long-term benefits of reciprocal help. We experimentally tested these two alternatives to find out whether increased predation risk enhances or diminishes the occurrence of cooperation in mobbing, a common anti-predator behavior, among breeding pied flycatchers, Ficedula hypoleuca. Our results show that birds attended mobs initiated by their neighbors more often, approached the stuffed predator significantly more closely, and mobbed it at a higher intensity in areas where the perceived risk of predation was experimentally increased. This study demonstrates a positive impact of predation risk on cooperation in breeding songbirds, which might help to explain the emergence and evolution of cooperation. PMID:20714404
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2002-01-01
As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
On the possibility of Earth-type habitable planets in the 55 Cancri system.
von Bloh, W; Cuntz, M; Franck, S; Bounama, C
2003-01-01
We discuss the possibility of Earth-type planets in the planetary system of 55 Cancri, a nearby G8 V star, which is host to two, possibly three, giant planets. We argue that Earth-type planets around 55 Cancri are in principle possible. Several conditions are necessary. First, Earth-type planets must have formed despite the existence of the close-in giant planet(s). In addition, they must be orbitally stable in the region of habitability considering that the stellar habitable zone is relatively close to the star compared to the Sun because of 55 Cancri's low luminosity and may therefore be affected by the close-in giant planet(s). We estimate the likelihood of Earth-type planets around 55 Cancri based on the integrated system approach previously considered, which provides a way of assessing the long-term possibility of photosynthetic biomass production under geodynamic conditions.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to yield shorter confidence intervals on average and to produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Becker, Sara J; Squires, Daniel D; Strong, David R; Barnett, Nancy P; Monti, Peter M; Petry, Nancy M
2016-01-01
Few prospective studies have evaluated theory-driven approaches to the implementation of evidence-based opioid treatment. This study compared the effectiveness of an implementation model (Science to Service Laboratory; SSL) to training as usual (TAU) in promoting the adoption of contingency management across a multisite opioid addiction treatment program. We also examined whether the SSL affected putative mediators of contingency management adoption (perceived innovation characteristics and organizational readiness to change). Sixty treatment providers (39 SSL, 21 TAU) from 15 geographically diverse satellite clinics (7 SSL, 8 TAU) participated in the 12-month study. Both conditions received didactic contingency management training and those in the predetermined experimental region received 9 months of SSL-enhanced training. Contingency management adoption was monitored biweekly, whereas putative mediators were measured at baseline, 3 months, and 12 months. Relative to providers in the TAU region, treatment providers in the SSL region had comparable likelihood of contingency management adoption in the first 20 weeks of the study, and then significantly higher likelihood of adoption (odds ratios = 2.4-13.5) for the remainder of the study. SSL providers also reported higher levels of one perceived innovation characteristic (Observability) and one aspect of organizational readiness to change (Adequacy of Training Resources), although there was no evidence that the SSL affected these putative mediators over time. Results of this study indicate that a fully powered randomized trial of the SSL is warranted. Considerations for a future evaluation are discussed.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
A water balance approach to enhance national (GB) Daily Landslide Hazard Assessments
NASA Astrophysics Data System (ADS)
Dijkstra, Tom; Reeves, Helen; Freeborough, Katy; Dashwood, Claire; Pennington, Catherine; Jordan, Hannah; Hobbs, Peter; Richardson, Jennifer; Banks, Vanessa; Cole, Steven; Wells, Steven; Moore, Robert
2017-04-01
The British Geological Survey (BGS) is a member of the Natural Hazards Partnership (NHP) and delivers a national (GB) daily landslide hazard assessment (DLHA). The DLHA is based largely on expert-driven evaluations of the likelihood of landslides in response to antecedent ground conditions, adverse weather and reported landslide events. It concentrates on shallow translational slides and debris flows - events that most frequently have societal consequences by disrupting transport infrastructure and affecting buildings. Considerable experience with the issuing of DLHAs has been gained since 2012. However, it remains very difficult to appropriately assess changing ground conditions throughout GB even when good quality precipitation forecasts are available. Soil moisture sensors are available, but the network is sparse and not yet capable of covering GB to the detail required to underpin the forecasts. Therefore, we developed an approach where temporal and spatial variations in soil moisture can be obtained from a water balance model, representing processes in the near-surface and configured on a relatively coarse grid of 1 km2. Model outputs are not intended to be relevant to the slope scale. The assumption is that the likelihood of landslides being triggered by rainfall is dependent upon the soil moisture conditions of the near-surface, in combination with how much rain is forecast to occur for the following day. These variables form the basis for establishing thresholds to guide the issuing of DLHA and early warnings. The main aim is to obtain an insight into regional patterns of change and threshold exceedance. The BGS water balance model is still in its infancy and it requires substantial work to fine-tune and validate it. To test the performance of the BGS model we focused on an analysis of Scottish landslides (2004-2015) comprising translational slides and debris flows where the BGS model is conditionally evaluated against the Grid-to-Grid (G2G) model.
G2G is a physical-conceptual distributed hydrological model developed by the Centre for Ecology & Hydrology, also an NHP member. G2G is especially suited to simulate river flows over ungauged areas and has the capability to forecast fluvial river flows at any location across a gridded model domain. This is achieved by using spatial datasets on landscape properties - terrain, land-cover, soil and geology - in combination with gridded time-series of rainfall to shape a rainfall pattern into a river flow response over the model domain. G2G is operational on a 1 km2 grid over GB and outputs soil moisture estimates that take some account of terrain slope in its water balance calculation. This research is part of an evolutionary process where capabilities of establishing the likelihood of landslides will develop as datasets are becoming increasingly detailed (and accessible) and the representation of hydrogeological and geotechnical processes continues to develop.
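The screening logic described above — update a near-surface water balance per grid cell, then flag cells where soil moisture and forecast rain jointly exceed a threshold — can be sketched as a toy bucket model. The capacities, rates, and thresholds here are invented for illustration and are not the calibrated BGS or G2G values.

```python
def step(soil, rain, evap=1.0, capacity=120.0, drain_frac=0.05):
    """Advance the daily near-surface water balance (mm) for one grid cell."""
    soil = soil + rain - evap - drain_frac * soil
    return min(max(soil, 0.0), capacity)    # bounded bucket store

def landslide_alert(soil, forecast_rain, soil_thresh=100.0, rain_thresh=30.0):
    # joint threshold: wet antecedent conditions AND heavy forecast rain
    return soil >= soil_thresh and forecast_rain >= rain_thresh

soil = 60.0                                 # antecedent soil moisture (mm)
daily_rain = [5, 0, 25, 40, 35, 10, 50]     # a wet week (mm/day)
for rain in daily_rain:
    soil = step(soil, rain)

print(round(soil, 1), landslide_alert(soil, forecast_rain=45.0))   # 120.0 True
```

An operational version would run this per 1 km2 cell with gridded rainfall, which is exactly the role the G2G soil moisture fields play in the comparison.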
The effects of auditive and visual settings on perceived restoration likelihood
Jahncke, Helena; Eriksson, Karolina; Naula, Sanna
2015-01-01
Research has so far paid little attention to how environmental sounds might affect restorative processes. The aim of the present study was to investigate the effects of auditive and visual stimuli on perceived restoration likelihood and attitudes towards varying environmental resting conditions. Assuming a condition of cognitive fatigue, all participants (N = 40) were presented with images of an open plan office and urban nature, each under four sound conditions (nature sound, quiet, broadband noise, office noise). After the presentation of each setting/sound combination, the participants assessed it according to restorative qualities, restoration likelihood and attitude. The results mainly showed predicted effects of the sound manipulations on the perceived restorative qualities of the settings. Further, significant interactions between auditive and visual stimuli were found for all measures. Both nature sounds and quiet more positively influenced evaluations of the nature setting compared to the office setting. When office noise was present, both settings received poor evaluations. The results agree with expectations that nature sounds and quiet areas support restoration, while office noise and broadband noise (e.g. ventilation, traffic noise) do not. The findings illustrate the significance of environmental sound for restorative experience. PMID:25599752
Berg, Gregory D; Leary, Fredric; Medina, Wendie; Donnelly, Shawn; Warnick, Kathleen
2015-02-01
The objective was to estimate clinical metric and medication persistency impacts of a care management program. The data sources were Medicaid administrative claims for a sample population of 32,334 noninstitutionalized Medicaid-only aged, blind, or disabled patients with diagnosed conditions of asthma, coronary artery disease, chronic obstructive pulmonary disease, diabetes, or heart failure between 2005 and 2009. Multivariate regression analysis was used to test the hypothesis that exposure to a care management intervention increased the likelihood of having the appropriate medication or procedures performed, as well as increased medication persistency. Statistically significant clinical metric improvements occurred in each of the 5 conditions studied. Increased medication persistency was found for beta-blocker medication for members with coronary artery disease, angiotensin-converting enzyme inhibitor/angiotensin receptor blocker and diuretic medications for members with heart failure, bronchodilator and corticosteroid medications for members with chronic obstructive pulmonary disease, and aspirin/antiplatelet medications for members with diabetes. This study demonstrates that a care management program increases the likelihood of having an appropriate medication dispensed and/or an appropriate clinical test performed, as well as the likelihood of medication persistency, in people with chronic conditions.
Fan, Ming; Thongsri, Tepwitoon; Axe, Lisa; Tyson, Trevor A
2005-06-01
A probabilistic approach was applied in an ecological risk assessment (ERA) to characterize risk and address uncertainty employing Monte Carlo simulations for assessing parameter and risk probabilistic distributions. This simulation tool (ERA) includes a Windows-based interface, an interactive and modifiable database management system (DBMS) that addresses a food web at trophic levels, and a comprehensive evaluation of exposure pathways. To illustrate this model, ecological risks from depleted uranium (DU) exposure at the US Army Yuma Proving Ground (YPG) and Aberdeen Proving Ground (APG) were assessed and characterized. Probabilistic distributions showed that at YPG, a reduction in plant root weight is considered likely to occur (98% likelihood) from exposure to DU; for most terrestrial animals, likelihood for adverse reproduction effects ranges from 0.1% to 44%. However, for the lesser long-nosed bat, the effects are expected to occur (>99% likelihood) through the reduction in size and weight of offspring. Based on available DU data for the firing range at APG, DU uptake will not likely affect survival of aquatic plants and animals (<0.1% likelihood). Based on field and laboratory studies conducted at APG and YPG on pocket mice, kangaroo rat, white-throated woodrat, deer, and milfoil, body burden concentrations observed fall into the distributions simulated at both sites.
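The Monte Carlo step of such a probabilistic ERA can be sketched simply: sample the uncertain exposure parameters from their distributions, compute a hazard quotient for each draw, and report the fraction of draws exceeding 1 as the likelihood of an adverse effect. The distributions, parameter names, and reference dose below are hypothetical, not the YPG/APG site data.

```python
import random

random.seed(42)

def hazard_quotient():
    """One Monte Carlo draw of a (hypothetical) exposure pathway."""
    soil_conc = random.lognormvariate(3.0, 0.8)   # DU in soil (mg/kg)
    intake_frac = random.uniform(0.005, 0.02)     # ingested fraction per day
    toxicity_ref = 1.5                            # reference dose (mg/kg-day)
    dose = soil_conc * intake_frac
    return dose / toxicity_ref

n = 100_000
exceed = sum(hazard_quotient() > 1.0 for _ in range(n))
print(f"likelihood of adverse effect: {100 * exceed / n:.1f}%")
```

Reporting a likelihood rather than a single worst-case hazard quotient is what distinguishes this probabilistic approach from a deterministic screening-level ERA.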
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
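The object of study — a general BDP with arbitrary state-dependent rates — is easy to simulate exactly with the Gillespie algorithm, and discrete-time snapshots of such paths are the kind of data the EM algorithms above take as input. The rate functions below (a logistic-style model with a carrying capacity of 75) are illustrative, not from the paper.

```python
import random

def simulate_bdp(birth, death, x0, t_end, rng):
    """Exact (Gillespie) simulation: particle count at time t_end for a BDP
    with arbitrary total rates birth(n) and death(n)."""
    t, n = 0.0, x0
    while True:
        total = birth(n) + death(n)
        if total == 0.0:
            return n                          # absorbed (e.g. extinction)
        t += rng.expovariate(total)           # time to the next event
        if t > t_end:
            return n
        if rng.random() < birth(n) / total:   # choose birth vs. death
            n += 1
        else:
            n -= 1

rng = random.Random(7)
# logistic-style rates: per-particle births decline with crowding
birth = lambda n: 2.0 * n * max(0.0, 1.0 - n / 100.0)
death = lambda n: 0.5 * n
sample = [simulate_bdp(birth, death, 10, 5.0, rng) for _ in range(200)]
print(sum(sample) / len(sample))   # hovers near the carrying capacity, 75
```

The paper's contribution is to avoid exactly this kind of costly simulation inside the E-step, replacing it with convolutions of transition probabilities computed via continued-fraction Laplace transforms.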
Cabin location and the likelihood of motion sickness in cruise ship passengers.
Gahlinger, P M
2000-01-01
The prevalence of motion sickness approaches 100% on rough seas. Some previous studies have reported a strong association between location on a ship and the risk of motion sickness, whereas other studies found no association. This study was undertaken to determine if there is a statistical association between the location of the passenger cabin on a ship and the risk of motion sickness in unadapted passengers. Data were collected on 260 passengers on an expedition ship traversing the Drake Passage between South America and Antarctica, during rough sea conditions. A standard scale was employed to record motion sickness severity. The risk of motion sickness was found to be statistically associated with age and sex. However, no association was found with the location of the passenger cabin. Previous research reporting a strong association between motion sickness and passenger location on a ship studied passengers in the seated position. Passengers who are able to lie in a supine position are at considerably reduced risk of motion sickness. Expedition or cruise ships that provide ready access to berths allow passengers to avoid the most nauseogenic positions. The location of the passenger cabin does not appear to be related to the likelihood of seasickness.
Preserving Flow Variability in Watershed Model Calibrations
Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. 
Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
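Stepping-stone sampling itself can be illustrated on a toy problem where everything is checkable in closed form. In the hedged sketch below (not phylogenetic code), the data are N(theta, 1) with a N(0, 1) prior on theta, so each power posterior p_beta(theta) ∝ L(theta)^beta * p(theta) is an exact normal that can be sampled directly, and the true log marginal likelihood is analytic. The estimator telescopes the marginal likelihood as a product of ratios z(beta_{k+1})/z(beta_k), each estimated by averaging L(theta)^(beta_{k+1}-beta_k) over draws from the beta_k power posterior.

```python
import math
import random

random.seed(3)
data = [random.gauss(0.7, 1.0) for _ in range(20)]
n, s1, s2 = len(data), sum(data), sum(x * x for x in data)

def loglik(theta):
    # log of prod_i N(x_i | theta, 1), using sufficient statistics s1, s2
    return (-0.5 * n * math.log(2 * math.pi)
            - 0.5 * (s2 - 2.0 * theta * s1 + n * theta * theta))

def sample_power_posterior(beta, size):
    # conjugacy: L^beta * prior gives theta ~ N(beta*s1/(beta*n+1), 1/(beta*n+1))
    prec = beta * n + 1.0
    return [random.gauss(beta * s1 / prec, prec ** -0.5) for _ in range(size)]

def logmeanexp(v):
    m = max(v)
    return m + math.log(sum(math.exp(x - m) for x in v)) - math.log(len(v))

betas = [(k / 32) ** 3 for k in range(33)]    # denser rungs near the prior
log_ml = 0.0
for b0, b1 in zip(betas, betas[1:]):
    draws = sample_power_posterior(b0, 2000)
    log_ml += logmeanexp([(b1 - b0) * loglik(t) for t in draws])

# closed-form log marginal likelihood for the normal-normal model
exact = (-0.5 * n * math.log(2 * math.pi) - 0.5 * math.log(n + 1)
         - 0.5 * (s2 - s1 * s1 / (n + 1)))
print(round(log_ml, 2), round(exact, 2))   # the two values agree closely
```

The direct Bayes factor variants discussed in the abstract apply the same telescoping idea along a path between two models rather than between prior and posterior of a single model.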
2013-01-01
Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. 
Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
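The stepping-stone idea can be sketched on a conjugate toy model whose marginal likelihood is known exactly, so the estimator can be checked against the truth. Everything below (beta-binomial model, temperature schedule, sample sizes) is an illustrative assumption, not the phylogenetic setting of the study:

```python
import math
import random

random.seed(7)

# Toy model: s successes in n Bernoulli trials, Beta(a0, b0) prior on theta.
a0, b0 = 1.0, 1.0
n, s = 50, 18

def loglik(theta):
    return s * math.log(theta) + (n - s) * math.log(1.0 - theta)

def log_beta_fn(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

# Exact log marginal likelihood (binomial coefficient omitted; it cancels
# in any Bayes factor between models for the same data).
exact_logZ = log_beta_fn(a0 + s, b0 + n - s) - log_beta_fn(a0, b0)

def stepping_stone(K=32, N=2000):
    """Stepping-stone estimate of the log marginal likelihood."""
    # Temperatures clustered near 0, where the power posterior changes fastest.
    betas = [(k / K) ** 3 for k in range(K + 1)]
    logZ = 0.0
    for bk, bk1 in zip(betas, betas[1:]):
        # Conjugacy makes the power posterior at temperature bk another Beta,
        # so we can draw from it exactly instead of running MCMC.
        draws = [random.betavariate(a0 + bk * s, b0 + bk * (n - s))
                 for _ in range(N)]
        terms = [(bk1 - bk) * loglik(t) for t in draws]
        m = max(terms)  # log-sum-exp for numerical stability
        logZ += m + math.log(sum(math.exp(t - m) for t in terms) / N)
    return logZ

estimate = stepping_stone()
```

A direct Bayes factor estimator of the kind discussed above would instead run the path between the two competing models, so the per-model marginal likelihoods never need to be computed separately.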
NASA Astrophysics Data System (ADS)
Liu, Jie; Wang, Wilson; Ma, Fai
2011-07-01
System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
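The regularization idea can be sketched in isolation (step 1 only, not the full RAPF): multinomial resampling collapses particle diversity, and redrawing from a kernel-smoothed density, here a Gaussian kernel with a Silverman-style bandwidth, restores it. The scalar state model, observation, and bandwidth rule are assumptions of this sketch:

```python
import math
import random

random.seed(3)

N = 500
# Prior particles for a scalar state, and one sharp observation.
particles = [random.gauss(0.0, 2.0) for _ in range(N)]
y_obs, obs_std = 1.0, 0.2

# Importance weights from a Gaussian likelihood (peaked relative to the prior).
logw = [-0.5 * ((y_obs - x) / obs_std) ** 2 for x in particles]
m = max(logw)
w = [math.exp(lw - m) for lw in logw]

# Plain multinomial resampling: many duplicates when the likelihood is peaked.
resampled = random.choices(particles, weights=w, k=N)

# Regularization: jitter each resampled particle with a Gaussian kernel whose
# bandwidth follows Silverman's rule of thumb (an assumption of this sketch).
mean = sum(resampled) / N
std = math.sqrt(sum((x - mean) ** 2 for x in resampled) / N)
h = 1.06 * std * N ** (-1 / 5)
regularized = [x + h * random.gauss(0.0, 1.0) for x in resampled]

duplicates_before = N - len(set(resampled))
duplicates_after = N - len(set(regularized))
```

The jitter leaves the posterior mean essentially unchanged while eliminating the duplicated particles that plain resampling produces.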
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This has become vital for interpreting physicochemical data, as rapidly developing analytical techniques now deliver a wealth of information in a single measurement run. This applies especially to spectra, which are frequently the subject of comparative analysis in, for example, the forensic sciences. In the presented study, microtraces collected from the scenes of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the interpretation schemes acknowledged in the forensic sciences, using the likelihood ratio (LR) approach. However, for proper construction of LR models for highly multivariate data such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied to minimise the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of the data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach can solve the comparison problem for highly multivariate and correlated data after proper extraction of the most relevant features and of the variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
Inferring epidemiological parameters from phylogenies using regression-ABC: A comparative study
Gascuel, Olivier
2017-01-01
Inferring epidemiological parameters such as the R0 from time-scaled phylogenies is a timely challenge. Most current approaches rely on likelihood functions, which raise specific issues that range from computing these functions to finding their maxima numerically. Here, we present a new regression-based Approximate Bayesian Computation (ABC) approach, which we base on a large variety of summary statistics intended to capture the information contained in the phylogeny and its corresponding lineage-through-time plot. The regression step involves the Least Absolute Shrinkage and Selection Operator (LASSO) method, which is a robust machine learning technique. It allows us to readily deal with the large number of summary statistics, while avoiding resorting to Markov Chain Monte Carlo (MCMC) techniques. To compare our approach to existing ones, we simulated target trees under a variety of epidemiological models and settings, and inferred parameters of interest using the same priors. We found that, for large phylogenies, the accuracy of our regression-ABC is comparable to that of likelihood-based approaches involving birth-death processes implemented in BEAST2. Our approach even outperformed these when inferring the host population size with a Susceptible-Infected-Removed epidemiological model. It also clearly outperformed a recent kernel-ABC approach when assuming a Susceptible-Infected epidemiological model with two host types. Lastly, by re-analyzing data from the early stages of the recent Ebola epidemic in Sierra Leone, we showed that regression-ABC provides more realistic estimates for the duration parameters (latency and infectiousness) than the likelihood-based method. Overall, ABC based on a large variety of summary statistics and a regression method able to perform variable selection and avoid overfitting is a promising approach to analyze large phylogenies. PMID:28263987
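The regression-ABC mechanics can be sketched on a toy problem. The example below uses rejection ABC with an ordinary least-squares adjustment (standing in for LASSO, since there is a single summary statistic here) rather than the phylogenetic pipeline of the study; the model and all settings are assumptions:

```python
import random

random.seed(11)

# "Observed" data: 100 draws from Normal(mu_true = 2.0, sd = 1).
mu_true = 2.0
obs = [random.gauss(mu_true, 1.0) for _ in range(100)]
s_obs = sum(obs) / len(obs)  # summary statistic: the sample mean

# Simulate from the prior mu ~ Uniform(-5, 5) and record (mu, summary).
sims = []
for _ in range(20000):
    mu = random.uniform(-5.0, 5.0)
    data = [random.gauss(mu, 1.0) for _ in range(100)]
    sims.append((mu, sum(data) / 100))

# Rejection step: keep the simulations whose summary is closest to s_obs.
sims.sort(key=lambda ms: abs(ms[1] - s_obs))
accepted = sims[:500]

# Regression adjustment: mu' = mu - b * (s - s_obs), with b fitted by
# least squares on the accepted pairs (Beaumont-style correction).
mus = [mu for mu, _ in accepted]
ss = [s for _, s in accepted]
mbar, sbar = sum(mus) / len(mus), sum(ss) / len(ss)
b = (sum((s - sbar) * (mu - mbar) for mu, s in accepted)
     / sum((s - sbar) ** 2 for s in ss))
adjusted = [mu - b * (s - s_obs) for mu, s in accepted]

posterior_mean = sum(adjusted) / len(adjusted)
```

The adjustment projects each accepted parameter onto the value it would have taken had its summary matched the observed one exactly, which sharpens the approximate posterior without any MCMC.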
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, an evolution carried out iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling step. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
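A minimal nested sampler in this spirit can be sketched for a 1D problem whose evidence is essentially 1, so log Z should come out near 0. Plain rejection from the prior stands in for the local sampling step (where the abstract proposes M-H or DREAMzs moves); the test problem and all settings are assumptions for illustration:

```python
import math
import random

random.seed(5)

# Prior: theta ~ Uniform(0, 1).  Likelihood: Normal(theta; 0.5, 0.1) density.
# The evidence Z = integral of L over the prior is ~1 here (the Gaussian mass
# outside [0, 1] is negligible).
mu, sigma = 0.5, 0.1

def loglik(theta):
    return (-0.5 * ((theta - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2.0 * math.pi)))

n_live = 200
live = [random.random() for _ in range(n_live)]
live_ll = [loglik(t) for t in live]

logZ_terms = []
log_width = math.log(1.0 - math.exp(-1.0 / n_live))  # first shell's prior mass
for i in range(8 * n_live):  # shrink the prior mass down to ~exp(-8)
    worst = min(range(n_live), key=lambda j: live_ll[j])
    threshold = live_ll[worst]
    logZ_terms.append(threshold + log_width)
    # "Local sampling": draw from the prior until the constraint L > L_min
    # holds. Fine in 1D; realistic models need M-H/DREAM-style moves here.
    while True:
        t = random.random()
        if loglik(t) > threshold:
            break
    live[worst], live_ll[worst] = t, loglik(t)
    log_width -= 1.0 / n_live

# Add the final live points' contribution, then log-sum-exp all shell terms.
final_width = math.log(math.exp(-8.0) / n_live)
logZ_terms += [ll + final_width for ll in live_ll]
m = max(logZ_terms)
logZ = m + math.log(sum(math.exp(x - m) for x in logZ_terms))
```

The statistical error of nested sampling scales roughly as sqrt(H/n_live), where H is the prior-to-posterior information gain, which is why the local sampler's quality dominates performance in harder problems.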
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows smooth or abrupt switching among different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation-Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated and real-world data compared it with two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach was applied to the modeling and classification of time series representing condition measurements acquired during switch operations.
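A minimal EM sketch in this spirit fits a two-component mixture of linear regressions to a series with two regimes. It is a stand-in for the paper's logistic hidden process (the gating is a plain mixture here, and the data, initial values and iteration count are assumed), but it shows the E step (responsibilities) and M step (weighted least squares) at work:

```python
import math
import random

random.seed(2)

# Simulated series: y = 2x for x < 1, then y = 10 - x, with small noise,
# giving two well-separated regimes an EM fit should recover.
xs = [i / 100 for i in range(200)]
ys = [(2 * x if x < 1 else 10 - x) + random.gauss(0, 0.05) for x in xs]

# Per-component intercept/slope, mixing weights, and a shared noise sd.
params = [(0.0, 0.0), (9.0, 0.0)]  # deliberately rough initial guesses
pis, sd = [0.5, 0.5], 1.0

def dens(y, x, a, b, sd):
    z = (y - a - b * x) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

for _ in range(50):
    # E step: responsibility of each component for each point.
    r = []
    for x, y in zip(xs, ys):
        p = [pis[k] * dens(y, x, *params[k], sd) for k in range(2)]
        tot = sum(p) or 1e-300
        r.append([pk / tot for pk in p])
    # M step: weighted least squares per component, then weights and sd.
    new_params = []
    for k in range(2):
        w = [ri[k] for ri in r]
        sw = sum(w)
        mx = sum(wi * x for wi, x in zip(w, xs)) / sw
        my = sum(wi * y for wi, y in zip(w, ys)) / sw
        b = (sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
             / sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)))
        new_params.append((my - b * mx, b))
    params = new_params
    pis = [sum(ri[k] for ri in r) / len(r) for k in range(2)]
    sd = math.sqrt(sum(ri[k] * (y - params[k][0] - params[k][1] * x) ** 2
                       for ri, x, y in zip(r, xs, ys) for k in range(2))
                   / len(r))

slopes = sorted(p[1] for p in params)
```

In the paper's model, the mixing weights pis would themselves be logistic functions of time, fitted by IRLS in the M step, which is what produces smooth or abrupt regime transitions.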
A closer look at self-pay segmentation.
Franklin, David; Ingramn, Coy; Levin, Steve
2010-09-01
Successful scoring approaches for self-pay accounts have three common characteristics: thoughtful selection of a scoring model and segmentation approach; deployment of workflows (either segmented or account-prioritized) consistent with a hospital's capabilities and the likelihood of collection; and ongoing performance monitoring.
Kelsall, H L; Sim, M R; Forbes, A B; Glass, D C; McKenzie, D P; Ikin, J F; Abramson, M J; Blizzard, L; Ittak, P
2004-12-01
To investigate whether Australian Gulf War veterans have a higher than expected prevalence of recent symptoms and medical conditions that were first diagnosed in the period following the 1991 Gulf War; and if so, whether these effects were associated with exposures and experiences that occurred in the Gulf War. Cross-sectional study of 1456 Australian Gulf War veterans and a comparison group who were in operational units at the time of the Gulf War, but were not deployed to that conflict (n = 1588). A postal questionnaire was administered and the likelihood of the diagnosis of self-reported medical conditions was assessed and rated by a medical practitioner. Gulf War veterans had a higher prevalence of all self-reported health symptoms than the comparison group, and more of the Gulf War veterans had severe symptoms. Increased symptom reporting was associated with several exposures, including having more than 10 immunisations, pyridostigmine bromide tablets, anti-biological warfare tablets, pesticides, insect repellents, reportedly being in a chemical weapons area, and stressful military service experiences in a strong dose-response relation. Gulf War veterans reported psychological (particularly post-traumatic stress disorder), skin, eye, and sinus conditions first diagnosed in 1991 or later more commonly than the comparison group. Over 90% of medical conditions reported by both study groups were rated by a medical practitioner as having a high likelihood of diagnosis. More than 10 years after the 1991 Gulf War, Australian veterans self-report all symptoms and some medical conditions more commonly than the comparison group. Further analysis of the severity of symptoms and likelihood of the diagnosis of medical conditions suggested that these findings are not due to over-reporting or to participation bias.
PMID:15550607
Liyanage, Harshana; Luzi, Daniela; De Lusignan, Simon; Pecoraro, Fabrizio; McNulty, Richard; Tamburis, Oscar; Krause, Paul; Rigby, Michael; Blair, Mitch
2016-04-18
Background: Modelling is an important part of information science. Models are abstractions of reality. We use models in the following contexts: (1) to describe the data and information flows in clinical practice to information scientists, (2) to compare health systems and care pathways, (3) to understand how clinical cases are recorded in record systems and (4) to model health care business models. Asthma is an important condition associated with substantial mortality and morbidity. However, there are difficulties in determining who has the condition, making both its incidence and prevalence uncertain. Objective: To demonstrate an approach for modelling complexity in health using asthma prevalence and incidence as an exemplar. Method: The four steps in our process are: (1) drawing a rich picture, following Checkland's soft systems methodology; (2) constructing data flow diagrams (DFDs); (3) creating Unified Modelling Language (UML) use case diagrams to describe the interaction of the key actors with the system; and (4) drawing activity diagrams, either UML activity diagrams or business process modelling notation diagrams. Results: Our rich picture flagged the complexity of factors that might impact on asthma diagnosis. There was consensus that the principal issue was that there were undiagnosed and misdiagnosed cases as well as correctly diagnosed ones. Genetic predisposition to atopy; exposure to environmental triggers; the impact of respiratory health on earnings or the ability to attend education or participate in sport; and charities, pressure groups and the pharmaceutical industry all increased the likelihood of a diagnosis of asthma. Stigma and some factors within the health system diminished the likelihood of a diagnosis. The DFDs and other elements focused on better case finding. Conclusions: This approach flagged the factors that might impact on the reported prevalence or incidence of asthma. The models suggested that applying selection criteria may improve the specificity of new or confirmed diagnoses.
Kim, D; Burge, J; Lane, T; Pearlson, G D; Kiehl, K A; Calhoun, V D
2008-10-01
We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge, J., Lane, T., Link, H., Qiu, S., Clark, V.P., 2007. Discrete dynamic Bayesian network analysis of fMRI data. Hum Brain Mapp.) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge, J., Lane, T., 2005. Learning Class-Discriminative Dynamic Bayesian Networks. Proceedings of the International Conference on Machine Learning, Bonn, Germany, pp. 97-104.). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, A., 1991. Probability, random variables, and stochastic processes. McGraw-Hill, New York.). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. 
Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal, frontal, and cerebellar regions during an auditory paradigm.
Management of ticks and tick-borne diseases
Ginsberg, H.S.; Stafford, K.C.; Goodman, J.L.; Dennis, D.T.; Sonenshine, D.E.
2005-01-01
The mainstays of tick management and protection from tick-borne diseases have traditionally been personal precautions and the application of acaricides. These techniques maintain their value, and current innovations hold considerable promise for future improvement in effective targeting of materials for tick control. Furthermore, an explosion of research in the past few decades has resulted in the development and expansion of several novel and potentially valuable approaches to tick control, including vaccination against tick-borne pathogen transmission and against tick attachment, host management, use of natural enemies (especially entomopathogenic fungi), and pheromone-based techniques. The situations that require tick management are diverse, and occur under varied ecological conditions. Therefore, the likelihood of finding a single 'magic bullet' for tick management is low. In practical terms, the approach to tick management or to management of tick-borne disease must be tailored to the specific conditions at hand. One area that needs increased attention is the decision-making process in applying IPM to tick control. Further development of novel tick control measures, and increased efficiency in their integration and application to achieve desired goals, holds great promise for effective future management of ticks and tick-borne diseases.
Improving graduated driver licensing systems: a conceptual approach and its implications.
Foss, Robert D
2007-01-01
Graduated driver licensing (GDL) is a concept for how to transform non-drivers into reasonably safe drivers while minimizing the risks as they learn. Several state GDL programs can be improved by moving their structures closer to an adequate implementation of that concept. The learner stage of a GDL system needs to be long enough for beginners to obtain a thorough introduction to the vagaries of driving. The second or intermediate stage needs to effectively limit exposure to known high risk conditions as novices adapt to being fully in charge of the vehicle. The benefits of GDL to date are due almost entirely to the risk-reducing conditions it implements. To improve the functioning of GDL will probably require a better understanding of teen driving than we presently have. The likelihood of further gains will be enhanced by efforts to learn more about the actual causes of teen crashes, the nature and type of teen driver exposures, and what parents do with their teens during the supervised driving stage of GDL. Without a better understanding of these, and other, phenomena it will be difficult to further reduce crashes among young beginning drivers, whether through GDL enhancements or with other approaches.
Hierarchical Probabilistic Inference of Cosmic Shear
NASA Astrophysics Data System (ADS)
Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Estimating residual fault hitting rates by recapture sampling
NASA Technical Reports Server (NTRS)
Lee, Larry; Gupta, Rajan
1988-01-01
For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.
Participation motives in physical education: an expectancy-value approach.
Goudas, Marios; Dermitzaki, Irini
2004-12-01
This study applied an expectancy-value approach in examining the participation motives of students in physical education. As predicted, outcome expectancy, a variable formed by the combination of outcome value and outcome likelihood, correlated significantly more highly with motivational indices than did these two factors individually.
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
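The stated equivalence of the time- and frequency-domain cost functions is, for a quadratic output-error cost, an instance of Parseval's theorem, which can be checked numerically. The naive DFT and random residual sequence below are illustrative assumptions:

```python
import cmath
import random

random.seed(4)

# A residual sequence, as would enter an output-error cost function.
N = 64
r = [random.gauss(0.0, 1.0) for _ in range(N)]

# Naive DFT (O(N^2); enough for a demonstration).
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

R = dft(r)

# Parseval: the sum of squared residuals in the time domain equals the sum of
# squared DFT magnitudes divided by N, so minimising one cost minimises the other.
time_cost = sum(v * v for v in r)
freq_cost = sum(abs(Rk) ** 2 for Rk in R) / N
```

This is why the two estimation approaches are complementary rather than contradictory: the same minimiser is found whichever domain the residuals are accumulated in.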
1990-11-01
(Q + aa')^{-1} = Q^{-1} - Q^{-1}aa'Q^{-1} / (1 + a'Q^{-1}a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and ... [Contents: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative ...] ... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and ...
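The rank-one case of Woodbury's formula mentioned in the fragment above (the Sherman-Morrison identity) can be checked numerically; a minimal sketch with an assumed 2x2 example:

```python
# Check the Sherman-Morrison identity, the rank-one case of Woodbury's
# formula: (Q + a a')^{-1} = Q^{-1} - Q^{-1} a a' Q^{-1} / (1 + a' Q^{-1} a).
Q = [[4.0, 1.0],
     [1.0, 3.0]]  # symmetric, so a'Q^{-1} = (Q^{-1}a)'
a = [1.0, 2.0]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

Qi = inv2(Q)

# Left side: invert Q + a a' directly.
Qaa = [[Q[i][j] + a[i] * a[j] for j in range(2)] for i in range(2)]
lhs = inv2(Qaa)

# Right side: the Sherman-Morrison update of Q^{-1}.
Qa = [sum(Qi[i][k] * a[k] for k in range(2)) for i in range(2)]  # Q^{-1} a
denom = 1.0 + sum(a[i] * Qa[i] for i in range(2))                # 1 + a'Q^{-1}a
rhs = [[Qi[i][j] - Qa[i] * Qa[j] / denom for j in range(2)] for i in range(2)]

max_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

The identity is what makes a rank-one update of an already-inverted covariance matrix cheap, which is its usual role in likelihood evaluations for time series models.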
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
NASA Astrophysics Data System (ADS)
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique of LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion of varying extent is unavoidable. To compensate for the scan distortion caused by the motion of the UGV, we integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects, such as the walking pedestrians often present in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
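The strategy of increasing or decreasing each grid cell's probability after a scan matching can be sketched with the standard log-odds occupancy update. The increment values below are assumed for illustration, not taken from the paper:

```python
import math

# Log-odds occupancy update: a cell's probability is nudged up when a beam
# endpoint hits it and down when a beam passes through it.
L_HIT, L_MISS, L_PRIOR = 0.9, -0.4, 0.0

def update(logodds, hit):
    return logodds + (L_HIT if hit else L_MISS) - L_PRIOR

def prob(logodds):
    # Convert log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# A cell on a wall: hit by 10 consecutive scans.
wall = 0.0
for _ in range(10):
    wall = update(wall, hit=True)

# A cell crossed by a walking pedestrian: 2 hits, then 8 misses as later
# scans see through the now-empty space, so the map "forgets" the object.
transient = 0.0
for _ in range(2):
    transient = update(transient, hit=True)
for _ in range(8):
    transient = update(transient, hit=False)

wall_p, transient_p = prob(wall), prob(transient)
```

Static structure therefore saturates toward probability 1 while transient objects decay back toward free space, which is what reduces the influence of pedestrians on subsequent scan matching.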
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
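The criterion-based side of these techniques converts information-criterion values into model weights via w_i proportional to exp(-delta_IC/2). A minimal sketch, with AIC values and predictions that are assumed numbers for illustration only:

```python
import math

# Hypothetical AIC values for four alternative conceptual models.
aic = {"model_A": 212.4, "model_B": 215.1, "model_C": 212.9, "model_D": 230.0}

# Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
# where delta_i is the AIC difference from the best (lowest-AIC) model.
best = min(aic.values())
raw = {m: math.exp(-0.5 * (v - best)) for m, v in aic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# A model-averaged prediction combines each model's prediction by its weight.
preds = {"model_A": 3.1, "model_B": 2.7, "model_C": 3.0, "model_D": 5.2}
averaged = sum(weights[m] * preds[m] for m in aic)
```

Swapping AIC for BIC or KIC changes only the criterion values fed in, yet, as the abstract notes, that alone can reorder the weights substantially because each criterion penalizes model complexity differently.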
Overweight and obesity in India: policy issues from an exploratory multi-level analysis.
Siddiqui, Md Zakaria; Donato, Ronald
2016-06-01
This article analyses a nationally representative household dataset, the National Family Health Survey (NFHS-3) conducted in 2005-2006, to examine factors influencing the prevalence of overweight/obesity in India. The dataset was disaggregated into four sub-population groups (urban and rural females and males), and multi-level logit regression models were used to estimate the impact of particular covariates on the likelihood of overweight/obesity. The multi-level modelling approach aimed to identify individual and macro-level contextual factors influencing this health outcome. In contrast to most studies on low-income developing countries, the findings reveal that, beyond a particular level of educational attainment, female education exhibits a negative relationship with the likelihood of overweight/obesity. This relationship was not observed for males. Muslim females and all Sikh sub-populations have a higher likelihood of overweight/obesity, suggesting the importance of socio-cultural influences. The results also show that the relationship between wealth and the probability of overweight/obesity is stronger for males than for females, highlighting the differential impact of increasing socio-economic status by gender. Multi-level analysis reveals that states exerted an independent influence on the likelihood of overweight/obesity beyond individual-level covariates, reflecting the importance of spatially related contextual factors. While this study does not disentangle macro-level 'obesogenic' environmental factors from socio-cultural network influences, the results highlight the need to refrain from adopting a 'one size fits all' policy approach to the overweight/obesity epidemic facing India. Instead, policy implementation requires a more nuanced and targeted approach that incorporates the growing recognition of socio-cultural and spatial contextual factors impacting on healthy behaviours. © The Author 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook is re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.
Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik
2014-12-01
Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation.
Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches.
Driving safety in elderly individuals.
Marottoli, R A
1993-05-01
Driving safety in elderly individuals is becoming an increasingly important issue in geriatrics and in medical practice. The number of elderly drivers is increasing as the population ages, and especially as current generations of female drivers age. Concern is raised about their safe operation of motor vehicles because of the increasing likelihood with advancing age of developing conditions that may adversely affect the visual, cognitive, and motor abilities integral to driving. But this issue is not only a medical one, since there are social and political components as well. This discussion will describe the background of this issue, focus on the changes that may occur with aging and their potential relationship to driving ability, and, finally, will outline an approach that physicians may employ in their practice.
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Learning quadratic receptive fields from neural responses to natural stimuli.
Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper
2013-07-01
Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
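As a rough numerical illustration of the covariance-based inference this review surveys, the classic spike-triggered covariance (STC) estimator recovers a quadratic feature from the difference between the spike-conditional and raw stimulus covariances. This sketch assumes a white Gaussian stimulus (the simple case; the Bayesian and information-theoretic methods discussed above generalize to naturalistic inputs), and the model neuron, feature vector, and spike-probability scale are all invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n = 8, 60000
stim = rng.standard_normal((n, dim))            # white Gaussian stimulus

# Model neuron: spike probability depends on the squared projection onto a
# single quadratic feature v (an "energy"-type unit), with no linear term.
v = np.zeros(dim); v[2] = 1.0
drive = (stim @ v) ** 2
p_spike = np.clip(0.05 * drive, 0.0, 1.0)
spikes = rng.random(n) < p_spike

# Spike-triggered covariance: eigenvectors of the change in covariance
# between the spike-conditional and raw ensembles reveal the feature.
C_spike = np.cov(stim[spikes].T)
C_prior = np.cov(stim.T)
eigval, eigvec = np.linalg.eigh(C_spike - C_prior)
v_hat = eigvec[:, np.argmax(np.abs(eigval))]    # most-changed direction

alignment = abs(v_hat @ v)                      # near 1 when recovered
print(alignment)
```

Because the spike probability grows with the squared projection, the spike-triggered ensemble has inflated variance along `v` and unchanged variance elsewhere, so the top eigenvector of the covariance difference aligns with the feature (up to sign, handled by the absolute value).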
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Robb, L A; Doverspike, D
2001-02-01
The interaction between the likelihood of males engaging in sexual harassment and the effectiveness of a 1-hr. sexual harassment-prevention training was explored in a laboratory study. An interaction of scores on the Likelihood to Sexually Harass Scale and training condition for 90 undergraduate men was found, such that sexual harassment-prevention training had a small negative effect on the attitudes of males with a high proclivity to harass.
Oviedo-Trespalacios, Oscar; Haque, Md Mazharul; King, Mark; Washington, Simon
2018-05-29
This study investigated how situational characteristics typically encountered in the transport system influence drivers' perceived likelihood of engaging in mobile phone multitasking. The impacts of mobile phone tasks, perceived environmental complexity/risk, and drivers' individual differences were evaluated as relevant individual predictors within the behavioral adaptation framework. An innovative questionnaire, which includes randomized textual and visual scenarios, was administered to collect data from a sample of 447 drivers in South East Queensland-Australia (66% females; n = 296). The likelihood of engaging in a mobile phone task across various scenarios was modeled by a random parameters ordered probit model. Results indicated that drivers who are female, are frequent users of phones for texting/answering calls, have less favorable attitudes towards safety, and are highly disinhibited were more likely to report stronger intentions of engaging in mobile phone multitasking. However, more years with a valid driving license, self-efficacy toward self-regulation in demanding traffic conditions and police enforcement, texting tasks, and demanding traffic conditions were negatively related to self-reported likelihood of mobile phone multitasking. The unobserved heterogeneity warned of riskier groups among female drivers and participants who need a lot of convincing to believe that multitasking while driving is dangerous. This research concludes that behavioral adaptation theory is a robust framework explaining self-regulation of distracted drivers. © 2018 Society for Risk Analysis.
NASA Technical Reports Server (NTRS)
Kelly, D. A.; Fermelia, A.; Lee, G. K. F.
1990-01-01
An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
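A minimal one-dimensional sketch of the spectral idea described above, assuming a standard Gaussian prior as the reference density, a conjugate Gaussian likelihood (the observation y = 1 is invented for illustration), and probabilists' Hermite polynomials as the orthogonal basis. The expansion coefficients are fitted by linear least squares on prior samples, and the evidence and posterior mean then fall out of the first two coefficients semi-analytically:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
# Prior: theta ~ N(0,1). Likelihood of one observation y = 1 with unit noise.
y, sigma = 1.0, 1.0
L = lambda t: np.exp(-0.5 * ((y - t) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Spectral likelihood expansion: L(theta) ~= sum_k c_k He_k(theta), fitted by
# linear least squares on prior samples -- no Markov chain Monte Carlo needed.
theta = rng.standard_normal(4000)
degree = 10
Phi = hermevander(theta, degree)          # probabilists' Hermite basis
c, *_ = np.linalg.lstsq(Phi, L(theta), rcond=None)

# Orthogonality under N(0,1): E[He_k] = 0 for k > 0 and E[He_1^2] = 1, so the
# evidence and first posterior moment are read off the coefficients directly.
evidence = c[0]            # model evidence = zeroth coefficient
post_mean = c[1] / c[0]    # E[theta | y] = c_1 / c_0

print(evidence, post_mean)  # analytic targets: N(1; 0, 2) density ~= 0.2197, mean 0.5
```

For this conjugate toy problem the exact evidence is the N(0, 2) density at y = 1 and the exact posterior mean is 0.5, so the accuracy of the truncated expansion can be checked directly.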
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood) that is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as by information criteria; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009).
Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
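The core mechanism above, fitting a proposal to posterior samples and importance sampling against it to estimate the evidence, can be sketched on a one-dimensional conjugate toy problem. This is a deliberate simplification: a single Gaussian proposal stands in for the paper's Gaussian mixture, direct draws from the known posterior stand in for DREAM output, and the two observations are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([0.5, 1.5])            # two observations, unit noise
prior = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)
lik = lambda t: np.prod(
    np.exp(-0.5 * (y[:, None] - t) ** 2) / np.sqrt(2 * np.pi), axis=0)

# Stand-in for MCMC (e.g. DREAM) output: samples from the known conjugate
# posterior N(2/3, 1/3) of this toy problem.
post = rng.normal(2 / 3, np.sqrt(1 / 3), size=5000)

# Fit a (single-component) Gaussian proposal to the posterior samples, with
# slightly inflated variance so its tails cover the posterior.
m, s = post.mean(), 1.5 * post.std()
prop = rng.normal(m, s, size=20000)
q = np.exp(-0.5 * ((prop - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Importance-sampling identity: Z = E_q[ prior * likelihood / q ].
Z = np.mean(prior(prop) * lik(prop) / q)
print(Z)   # analytic evidence: N(y; 0, I + 11^T) density ~= 0.0513
```

Because the proposal is fitted to the posterior, the importance weights are nearly constant and the evidence estimate has low variance; the mixture used by GMIS serves the same purpose for multimodal or high-dimensional posteriors.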
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Disclosure of Medical Errors: What Factors Influence How Patients Respond?
Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H
2006-01-01
BACKGROUND: Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE: This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS: Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN: Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES: Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS: Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS: This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770
Dissociating response conflict and error likelihood in anterior cingulate cortex.
Yeung, Nick; Nieuwenhuis, Sander
2009-11-18
Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region is negligible in terms of cosmological parameters for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
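The factorization at the heart of this construction can be checked numerically: when each variable depends only on its immediate predecessor, the full joint equals the product of consecutive bivariate marginals divided by the interior univariate marginals. The discrete three-state Markov chain below is invented purely to verify the identity:

```python
import numpy as np

# A 3-state chain: x_{i+1} depends only on x_i (the "banded" assumption).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])   # transition matrix
pi0 = np.array([0.5, 0.3, 0.2])   # marginal of x_1

# Univariate marginals p(x_i) and bivariate marginals p(x_i, x_{i+1}).
n = 5
uni = [pi0]
for _ in range(n - 1):
    uni.append(uni[-1] @ P)
biv = [np.diag(uni[i]) @ P for i in range(n - 1)]  # biv[i][a,b] = p(x_i=a, x_{i+1}=b)

def joint_from_marginals(path):
    """p(x_1..x_n) = prod_i p(x_i, x_{i+1}) / prod over interior i of p(x_i)."""
    num = np.prod([biv[i][path[i], path[i + 1]] for i in range(n - 1)])
    den = np.prod([uni[i][path[i]] for i in range(1, n - 1)])
    return num / den

def joint_chain_rule(path):
    """Reference value from the ordinary chain rule of the Markov chain."""
    p = pi0[path[0]]
    for i in range(n - 1):
        p *= P[path[i], path[i + 1]]
    return p

path = [0, 1, 1, 2, 0]
print(joint_from_marginals(path), joint_chain_rule(path))  # identical values
```

The identity holds because each bivariate marginal carries one copy of its left endpoint's univariate marginal, and dividing by the interior marginals cancels every duplicate, leaving exactly the chain-rule product.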
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doğa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without introducing reconstruction artifacts that would impact the interpretation of results.
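The general shape of a penalized Poisson maximum-likelihood reconstruction can be sketched on an invented one-dimensional toy problem. This is not the beamline pipeline or its actual forward operator: the blur matrix, penalty weight, and object are all made up, and a one-step-late EM iteration (a standard multiplicative scheme for penalized Poisson likelihoods) stands in for the paper's optimizer:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
x_true = np.zeros(n); x_true[6:14] = 5.0                       # piecewise-constant object
A = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)   # toy system blur
y = rng.poisson(A @ x_true).astype(float)                      # observed photon counts

D = np.diff(np.eye(n), axis=0)      # first-difference operator for the penalty
beta = 0.05                         # local-continuity (smoothness) weight
sens = A.T @ np.ones(n)             # detector sensitivity term

# One-step-late EM for the penalized Poisson negative log-likelihood
# sum_i (lam_i - y_i log lam_i) + beta * ||D x||^2, with lam = A x.
# The multiplicative update keeps all intensity estimates nonnegative.
x = np.ones(n)
for _ in range(500):
    lam = A @ x + 1e-12             # guard against 0/0 in empty regions
    x = x * (A.T @ (y / lam)) / (sens + 2 * beta * (D.T @ D @ x))

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)                          # small relative error: object recovered
```

The penalty gradient enters only the denominator (the "one-step-late" device), so each update stays a cheap elementwise rescaling while still discouraging rough solutions.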
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
NASA Astrophysics Data System (ADS)
Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation in unconstrained videos. We first compute both the actual edges and motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as a motion-based likelihood, and a color-based likelihood is also adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1974-01-01
Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film recorded on either positive or negative 9 1/2 in. film at the scale desired. Prints of the film strips are then mosaicked and photographed to produce a land use map in the format desired. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offer even better accuracy.
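The decision rule underlying this kind of supervised classification, assigning each multispectral pixel to the class whose Gaussian likelihood is largest, can be sketched as follows. The class signatures, band covariance, and class names are invented stand-ins for statistics that the real pipeline would estimate from ERTS training fields (the table look-up mentioned above is a speed optimization of this same rule, precomputing decisions for quantized pixel values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 4-band class signatures (hypothetical "forest" and "cropland" classes).
means = np.array([[40., 60., 55., 30.],     # class 0
                  [80., 90., 70., 65.]])    # class 1
cov = np.diag([25., 25., 16., 16.])         # shared band covariance, for brevity

def ml_classify(pixels):
    """Assign each pixel to the class with the largest Gaussian log-likelihood."""
    inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
    scores = []
    for mu in means:
        d = pixels - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.argmax(scores, axis=0)

# Simulate labeled pixels and check the decision rule recovers their classes.
px0 = rng.multivariate_normal(means[0], cov, 200)
px1 = rng.multivariate_normal(means[1], cov, 200)
labels = ml_classify(np.vstack([px0, px1]))
accuracy = np.mean(labels == np.repeat([0, 1], 200))
print(accuracy)
```

With equal covariances, comparing log-likelihoods reduces to comparing Mahalanobis distances to the class means, which is why well-separated spectral signatures classify almost perfectly.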
Experimental Design for Parameter Estimation of Gene Regulatory Networks
Timmer, Jens
2012-01-01
Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723
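As an illustrative aside, the profile-likelihood idea used above for identifiability analysis can be sketched on a toy exponential-decay model (not one of the DREAM6 GRN models): the nuisance amplitude is re-optimized at each fixed value of the rate parameter, and an approximate confidence interval is read off where the profile drops by the chi-square threshold.

```python
import numpy as np

# Toy model y = a * exp(-b * t) + Gaussian noise; amplitude a is profiled out
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)

def profile_rss(b):
    """For fixed rate b, the amplitude enters linearly, so its conditional
    optimum is ordinary least squares; return the residual sum of squares."""
    x = np.exp(-b * t)
    a_hat = (x @ y) / (x @ x)
    return np.sum((y - a_hat * x) ** 2)

b_grid = np.linspace(0.3, 1.2, 181)
rss = np.array([profile_rss(b) for b in b_grid])
# Gaussian-error profile log-likelihood is -n/2 * log(RSS); points within
# 1.92 = chi2_1(0.95)/2 of the maximum form an approximate 95% interval
n = t.size
pll = -0.5 * n * np.log(rss)
inside = pll > pll.max() - 1.92
b_hat = b_grid[np.argmax(pll)]
ci = (b_grid[inside].min(), b_grid[inside].max())
```

A flat (non-parabolic) profile over the grid would indicate practical non-identifiability, which is precisely the diagnostic the authors exploit to select informative experiments.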
Agadjanian, Victor; Yao, Jing; Hayford, Sarah R.
2017-01-01
CONTEXT Although institutional coverage of childbirth is increasing in the developing world, a substantial minority of births in rural Mozambique still occur outside of health facilities. Identifying the remaining barriers to safe professional delivery services can aid in achieving universal coverage. METHODS Survey data collected in 2009 from 1,373 women in Gaza, Mozambique, were used in combination with spatial, meteorological and health facility data to examine patterns in place of delivery. Geographic information system–based visualization and mapping and exploratory spatial data analysis were used to outline the spatial distribution of home deliveries. Multilevel logistic regression models were constructed to identify associations between individual, spatial and other characteristics and whether women’s most recent delivery took place at home. RESULTS Spatial analysis revealed high- and low-prevalence clusters of home births. In multivariate analyses, women with a higher number of clinics within 10 kilometers of their home had a reduced likelihood of home delivery, but those living closer to urban centers had an increased likelihood. Giving birth during the rainy, high agricultural season was positively associated with home delivery, while household wealth was negatively associated with home birth. No associations were evident for measures of exposure to and experience with health institutions. CONCLUSIONS The results suggest the need for a comprehensive approach to expansion of professional delivery services. Such an approach should complement measures facilitating physical access to health institutions for residents of harder-to-reach areas with community-based interventions aimed at improving rural women’s living conditions and opportunities, while also taking into account seasonal and other variables. PMID:28770025
Using scenarios to assess possible future impacts of invasive species in the Laurentian Great Lakes
Lauber, T. Bruce; Stedman, Richard C.; Connelly, Nancy A; Rudstam, Lars G.; Ready, Richard C; Poe, Gregory L; Bunnell, David B.; Hook, Tomas O.; Koops, Marten A.; Ludsin, Stuart A.; Rutherford, Edward S; Wittmann, Marion E.
2016-01-01
The expected impacts of invasive species are key considerations in selecting policy responses to potential invasions. But predicting the impacts of invasive species is daunting, particularly in large systems threatened by multiple invasive species, such as North America’s Laurentian Great Lakes. We developed and evaluated a scenario-building process that relied on an expert panel to assess possible future impacts of aquatic invasive species on recreational fishing in the Great Lakes. To maximize its usefulness to policy makers, this process was designed to be implemented relatively rapidly and consider a range of species. The expert panel developed plausible, internally-consistent invasion scenarios for 5 aquatic invasive species, along with subjective probabilities of those scenarios. We describe these scenarios and evaluate this approach for assessing future invasive species impacts. The panel held diverse opinions about the likelihood of the scenarios, and only one scenario with impacts on sportfish species was considered likely by most of the experts. These outcomes are consistent with the literature on scenario building, which advocates for developing a range of plausible scenarios in decision making because the uncertainty of future conditions makes the likelihood of any particular scenario low. We believe that this scenario-building approach could contribute to policy decisions about whether and how to address the possible impacts of invasive species. In this case, scenarios could allow policy makers to narrow the range of possible impacts on Great Lakes fisheries they consider and help set a research agenda for further refining invasive species predictions.
Becker, Sara J.; Squires, Daniel D.; Strong, David R.; Barnett, Nancy P.; Monti, Peter M.; Petry, Nancy M.
2016-01-01
Background Few prospective studies have evaluated theory-driven approaches to the implementation of evidence-based opioid treatment. This study compared the effectiveness of an implementation model (Science to Service Laboratory; SSL) to training as usual (TAU) in promoting the adoption of contingency management across a multi-site opiate addiction treatment program. We also examined whether the SSL affected putative mediators of contingency management adoption (perceived innovation characteristics and organizational readiness to change). Methods Sixty treatment providers (39 SSL, 21 TAU) from 15 geographically diverse satellite clinics (7 SSL, 8 TAU) participated in the 12-month study. Both conditions received didactic contingency management training and those in the pre-determined experimental region received 9 months of SSL-enhanced training. Contingency management adoption was monitored biweekly, while putative mediators were measured at baseline, 3-, and 12-months. Results Relative to providers in the TAU region, treatment providers in the SSL region had comparable likelihood of contingency management adoption in the first 20 weeks of the study, and then significantly higher likelihood of adoption (odds ratios = 2.4-13.5) for the remainder of the study. SSL providers also reported higher levels of one perceived innovation characteristic (Observability) and one aspect of organizational readiness to change (Adequacy of Training Resources), although there was no evidence that the SSL affected these putative mediators over time. Conclusions Results of this study indicate that a fully powered randomized trial of the SSL is warranted. Considerations for a future evaluation are discussed. PMID:26682582
Brown, Stephen L; Whiting, Demian
2014-04-01
Distressing health promotion advertising involves the elicitation of negative emotion to increase the likelihood that health messages will stimulate audience members to adopt healthier behaviors. Irrespective of its effectiveness, distressing advertising risks harming audience members who do not consent to the intervention and are unable to withdraw from it. Further, the use of these approaches may increase the potential for unfairness or stigmatization toward those targeted, or be considered unacceptable by some sections of the public. We acknowledge and discuss these concerns, but, using the public health ethics literature as a guide, argue that distressing advertising can be ethically defensible if conditions of effectiveness, proportionality, necessity, least infringement, and public accountability are satisfied. We do not take a broad view as to whether distressing advertising is ethical or unethical, because we see the evidence for both the effectiveness of distressing approaches and their potential to generate iatrogenic effects to be inconclusive. However, we believe it possible to use the current evidence base to make informed estimates of the likely consequences of specific message presentations. Messages can be pre-tested and monitored to identify and deal with potential problems. We discuss how advertisers can approach the problems of deciding on the appropriate intensity of ethical review, and evaluating prospective distressing advertising campaigns against the conditions outlined. © 2013 International Union of Psychological Science.
Childhood Obesity and Medical Neglect
Varness, Todd; Allen, David B.; Carrel, Aaron L.; Fost, Norman
2011-01-01
The incidence of childhood obesity has increased dramatically, including severe childhood obesity and obesity-related comorbid conditions. Cases of severe childhood obesity have prompted the following question: does childhood obesity ever constitute medical neglect? In our opinion, removal of a child from the home is justified when all 3 of the following conditions are present: (1) a high likelihood that serious imminent harm will occur; (2) a reasonable likelihood that coercive state intervention will result in effective treatment; and (3) the absence of alternative options for addressing the problem. It is not the mere presence or degree of obesity but rather the presence of comorbid conditions that is critical for the determination of serious imminent harm. All 3 criteria are met in very limited cases, that is, the subset of obese children who have serious comorbid conditions and for whom all alternative options have been exhausted. In these limited cases, a trial of enforced treatment outside the home may be indicated, to protect the child from irreversible harm. PMID:19117907
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez, Tommy Robert; Romero, Philbert Roland; Garcia, Samuel Anthony
During low voltage electrical equipment maintenance, a bad breaker was identified. The breaker was racked out from the substation cubicle without following the hazardous energy control process identified in the Integrated Work Document (IWD). The IWD required the substation to be in an electrically safe work condition prior to racking the breaker. Per NFPA 70E requirements, electrical equipment shall be put into an electrically safe work condition before an employee performs work on or interacts with equipment in a manner that increases the likelihood of creating an arc flash. Racking in or out a breaker on an energized bus may increase the likelihood of creating an arc flash dependent on equipment conditions. A thorough risk assessment must be performed prior to performing such a task. The risk assessment determines the risk control measures to be put in place prior to performing the work. Electrical Safety Officers (ESO) can assist in performing risk assessments and incorporating risk control measures.
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
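As a toy illustration of the advocated approach, archival posterior samples of an exterior parameter can be repurposed as an approximative likelihood: under a flat prior the posterior density is proportional to the likelihood, so a density estimate of the samples can stand in for it. All numbers here are hypothetical, and this sketch ignores the multi-dimensional spacetime-parameter structure of the actual analyses.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical archival posterior samples of a stellar radius (km),
# assumed to have been obtained under a (flat) uniform prior, so that
# a KDE of the samples approximates the likelihood L(R) up to a constant.
rng = np.random.default_rng(9)
archival_samples = rng.normal(12.0, 0.5, 5000)
approx_loglik = gaussian_kde(archival_samples).logpdf

r_grid = np.linspace(10, 14, 401)
r_best = r_grid[np.argmax(approx_loglik(r_grid))]
```

If the archival prior is not flat, the KDE must be divided by that prior density before being used as a likelihood, which is the subtlety the authors highlight.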
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator currents data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
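For context, the core of MUSIC-type spectral estimation can be sketched in one dimension (the paper's multi-dimensional variant jointly estimates several frequencies): eigendecompose a sample correlation matrix, keep the noise subspace, and locate frequencies where candidate steering vectors are nearly orthogonal to it. The signal and frequencies below are synthetic, not the paper's stator-current data.

```python
import numpy as np

# 1-D MUSIC sketch: locate one spectral line (e.g. a fault frequency) in noise
rng = np.random.default_rng(2)
fs, f0, n = 1000.0, 120.0, 512
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)

m = 32  # correlation-matrix order
# Sample autocorrelation matrix from overlapping snapshots
snaps = np.lib.stride_tricks.sliding_window_view(x, m)
R = snaps.T @ snaps / snaps.shape[0]

# Noise subspace: all eigenvectors beyond the 2 signal dimensions
# (a real cosine spans two complex exponentials at +/- f0)
w, v = np.linalg.eigh(R)
noise = v[:, :-2]  # eigh sorts ascending; drop the 2 largest

freqs = np.linspace(0, fs / 2, 2000)
k = np.arange(m)
steering = np.exp(-2j * np.pi * np.outer(freqs / fs, k))
# Pseudospectrum: reciprocal of the projection onto the noise subspace
p = 1.0 / np.sum(np.abs(steering @ noise) ** 2, axis=1)
f_est = freqs[np.argmax(p)]
```

The pseudospectrum peaks sharply where a true spectral line lies, which is what gives MUSIC its resolution advantage over the periodogram for closely spaced fault sidebands.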
On parametrised cold dense matter equation of state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-04-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
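For reference, the Sérsic surface-brightness profile that GALPHAT fits has a simple closed form; this is a generic sketch of the profile itself (with a standard approximation for the normalization constant), not GALPHAT's cumulative image-table machinery:

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).
    The b_n approximation (Ciotti & Bertin 1999) makes r_e the
    half-light radius; it is accurate for n > 0.5."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.01, 10, 500)
profile = sersic(r, I_e=1.0, r_e=2.0, n=4.0)  # de Vaucouleurs-like case
```

By construction I(r_e) = I_e, and larger Sérsic index n concentrates more light toward the center, which is one source of the parameter covariance with r_e discussed above.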
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
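The prediction step described above amounts to conditioning a joint normal distribution of (log) fit factors on the observed initial tests; a minimal numpy sketch follows, with a purely hypothetical covariance structure (within-day correlation higher than between-visit correlation), not the fitted mixed-model estimates from the study:

```python
import numpy as np

def conditional_normal(mu, cov, idx_obs, x_obs):
    """Mean and covariance of the unobserved components of a multivariate
    normal, conditional on observed components x_obs at positions idx_obs."""
    n = len(mu)
    idx_new = [i for i in range(n) if i not in idx_obs]
    s_oo = cov[np.ix_(idx_obs, idx_obs)]
    s_no = cov[np.ix_(idx_new, idx_obs)]
    s_nn = cov[np.ix_(idx_new, idx_new)]
    w = s_no @ np.linalg.inv(s_oo)
    mu_c = np.asarray(mu)[idx_new] + w @ (x_obs - np.asarray(mu)[idx_obs])
    cov_c = s_nn - w @ s_no.T
    return mu_c, cov_c

# Hypothetical log fit factors at 4 tests (2 initial, 2 at ~6 months),
# with within-day correlation (0.25) above between-visit correlation (0.15)
mu = np.array([2.0, 2.0, 2.0, 2.0])
cov = np.array([[0.30, 0.25, 0.15, 0.15],
                [0.25, 0.30, 0.15, 0.15],
                [0.15, 0.15, 0.30, 0.25],
                [0.15, 0.15, 0.25, 0.30]])
# A worker scores above average on both initial tests:
mu_future, cov_future = conditional_normal(mu, cov, [0, 1], np.array([2.6, 2.4]))
```

The conditional mean shifts toward the worker's observed results and the conditional variance shrinks, which is what allows an initial-test criterion value to be calibrated against a desired future protection probability.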
8 CFR 241.14 - Continued detention of removable aliens on account of special circumstances.
Code of Federal Regulations, 2013 CFR
2013-01-01
... to a mental condition or personality disorder and behavior associated with that condition or disorder... personality disorder and behavior associated with that condition or disorder, the alien is likely to engage in... personality disorder; (iv) The likelihood that the alien will engage in acts of violence in the future; and (v...
8 CFR 241.14 - Continued detention of removable aliens on account of special circumstances.
Code of Federal Regulations, 2012 CFR
2012-01-01
... to a mental condition or personality disorder and behavior associated with that condition or disorder... personality disorder and behavior associated with that condition or disorder, the alien is likely to engage in... personality disorder; (iv) The likelihood that the alien will engage in acts of violence in the future; and (v...
8 CFR 241.14 - Continued detention of removable aliens on account of special circumstances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... to a mental condition or personality disorder and behavior associated with that condition or disorder... personality disorder and behavior associated with that condition or disorder, the alien is likely to engage in... personality disorder; (iv) The likelihood that the alien will engage in acts of violence in the future; and (v...
Perspective-Taking Judgments Among Young Adults, Middle-Aged, and Elderly People
ERIC Educational Resources Information Center
Ligneau-Herve, Catherine; Mullet, Etienne
2005-01-01
Perspective-taking judgments among young adults, middle-aged, and elderly people were examined. In 1 condition, participants were instructed to judge the likelihood of acceptance of a painkiller as a function of 3 cues: severity of the condition, potential side effects, and level of trust in the health care provider. In the other condition,…
A Game Theoretical Approach to Hacktivism: Is Attack Likelihood a Product of Risks and Payoffs?
Bodford, Jessica E; Kwan, Virginia S Y
2018-02-01
The current study examines hacktivism (i.e., hacking to convey a moral, ethical, or social justice message) through a general game theoretic framework, that is, as a product of costs and benefits. Given the inherent risk of carrying out a hacktivist attack (e.g., legal action, imprisonment), it would be rational for the user to weigh these risks against perceived benefits of carrying out the attack. As such, we examined computer science students' estimations of risks, payoffs, and attack likelihood through a game theoretic design. Furthermore, this study aims at constructing a descriptive profile of potential hacktivists, exploring two predicted covariates of attack decision making, namely, peer prevalence of hacking and sex differences. Contrary to expectations, results suggest that participants' estimations of attack likelihood stemmed solely from expected payoffs, rather than subjective risks. Peer prevalence significantly predicted increased payoffs and attack likelihood, suggesting an underlying descriptive norm in social networks. Notably, we observed no sex differences in the decision to attack, nor in the factors predicting attack likelihood. Implications for policymakers and the understanding and prevention of hacktivism are discussed, as are the possible ramifications of widely communicated payoffs over potential risks in hacking communities.
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has frequency above 0.2 and the number of variants is above five.
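For reference, the Pearson chi-square comparator used in the simulations above is straightforward to compute on a cases-by-haplotypes count table; the counts in this sketch are hypothetical, not the Leiden Thrombophilia Study data:

```python
import numpy as np
from scipy.stats import chi2

def pearson_chi2(table):
    """Pearson chi-square statistic and p-value for an r x c count table."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    stat = np.sum((table - expected) ** 2 / expected)
    dof = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, chi2.sf(stat, dof)

# Hypothetical haplotype counts (rows: cases, controls; columns: 3 haplotypes),
# with the first haplotype enriched among cases
table = [[60, 30, 10],
         [40, 40, 20]]
stat, p = pearson_chi2(table)
```

Note the degrees of freedom grow with the number of haplotypes, which is exactly why a one degree of freedom test targeting a single associated haplotype can gain power.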
Healthchecks and Sustainable Livelihoods: A Case Study from Kent.
ERIC Educational Resources Information Center
Butcher, Catherine; McDonald, Brian; Westhorp, Victoria
2003-01-01
Comparison of the sustainable livelihoods approach of a development agency and the British government's "healthcheck" approach to town regeneration found common emphasis on participation and community-led action. However, the sustainable livelihoods approach emphasized a focus on poverty, building on strengths, and social capital as an asset.…
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
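The numerical profile-likelihood idea used for the standard errors above can be illustrated generically: the curvature of the profile log-likelihood at its maximum, estimated by finite differences, gives the observed information for the parameter of interest. This toy sketch uses a normal model with the variance profiled out in closed form, not the paper's Cox/EM setting:

```python
import numpy as np

# Numerical profile-likelihood standard error on a toy normal model
# (mean mu of interest, variance profiled out analytically)
rng = np.random.default_rng(3)
x = rng.normal(5.0, 2.0, 200)
n = x.size

def profile_loglik(mu):
    """Log-likelihood at mu with sigma^2 set to its conditional MLE."""
    s2 = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

mu_hat = x.mean()
h = 1e-4
# Second-difference estimate of the observed information at the maximum
info = -(profile_loglik(mu_hat + h) - 2 * profile_loglik(mu_hat)
         + profile_loglik(mu_hat - h)) / h ** 2
se = 1.0 / np.sqrt(info)
```

In this toy case the numerical result reproduces the textbook standard error s/sqrt(n); in the nested case-control setting each profile evaluation instead requires an inner EM maximization over the nuisance baseline hazard.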
Cluver, Lucie; Orkin, Mark
2009-10-01
Research shows that AIDS-orphaned children are more likely to experience clinical-range psychological problems. Little is known about possible interactions between factors mediating these high distress levels. We assessed how food insecurity, bullying, and AIDS-related stigma interacted with each other and with likelihood of experiencing clinical-range disorder. In South Africa, 1025 adolescents completed standardised measures of depression, anxiety and post-traumatic stress. 52 potential mediators were measured, including AIDS-orphanhood status. Logistic regressions and hierarchical log-linear modelling were used to identify interactions among significant risk factors. Food insecurity, stigma and bullying all independently increased likelihood of disorder. Poverty and stigma were found to interact strongly, and with both present, likelihood of disorder rose from 19% to 83%. Similarly, bullying interacted with AIDS-orphanhood status, and with both present, likelihood of disorder rose from 12% to 76%. Approaches to alleviating psychological distress amongst AIDS-affected children must address cumulative risk effects.
NASA Technical Reports Server (NTRS)
Switzer, Eric Ryan; Watts, Duncan J.
2016-01-01
The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.
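The difference-map null test described above can be sketched in a toy white-noise setting (real analyses weight by the full pixel-pixel noise covariance): the common sky signal cancels in the half-difference of two subset maps, so its variance should be consistent with noise alone.

```python
import numpy as np

# Toy null test: the half-difference of two subset maps of the same sky
# should contain only noise; a chi-square against the expected noise
# variance flags excess variance (contamination).
rng = np.random.default_rng(4)
npix = 500
sky = rng.standard_normal(npix)  # common signal in both subsets
sigma1, sigma2 = 0.3, 0.5        # per-subset noise levels (assumed known)
m1 = sky + rng.normal(0, sigma1, npix)
m2 = sky + rng.normal(0, sigma2, npix)

diff = 0.5 * (m1 - m2)           # signal cancels, noise remains
var_null = 0.25 * (sigma1 ** 2 + sigma2 ** 2)
chi2_stat = np.sum(diff ** 2) / var_null  # ~ chi-square with npix dof
```

A statistic far above the number of pixels would indicate variance not described by the noise model, the kind of excess the joint pixel-space likelihood is designed to expose.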
Parkhill, Michele R; Norris, Jeanette; Gilmore, Amanda K; Hessler, Danielle M; George, William H; Davis, Kelly Cue; Zawacki, Tina
Assertive resistance to sexual assault can decrease the likelihood of completed rape and its subsequent aftermath; however, this relationship may be influenced by situational characteristics. This study examined how 2 manipulated variables, level of consensual sex during an encounter and acute alcohol intoxication, along with sexual victimization history, affected women's responses to a hypothetical sexual assault scenario. Female participants were assigned to a drink condition (alcohol/control) and to a consent history condition (low/high). Path analysis found that women who were previously victimized, consumed alcohol, and who were in the high consent condition endorsed greater immobility intentions during the assault; only level of consent predicted likelihood of assertive resistance. Resistance strategies were related to subsequent responding. Results suggest that interventions should seek to decrease negative consequences by empowering women to assertively resist unwanted sexual advances.
Assessment of Medical Risks and Optimization of their Management using Integrated Medical Model
NASA Technical Reports Server (NTRS)
Fitts, Mary A.; Madurai, Siram; Butler, Doug; Kerstman, Eric; Risin, Diana
2008-01-01
The Integrated Medical Model (IMM) Project is a software-based technique that will identify and quantify the medical needs and health risks of exploration crew members during space flight and evaluate the effectiveness of potential mitigation strategies. The IMM Project employs an evidence-based approach that will quantify probability and consequences of defined in-flight medical risks, mitigation strategies, and tactics to optimize crew member health. Using stochastic techniques, the IMM will ultimately inform decision makers at both programmatic and institutional levels and will enable objective assessment of crew health and optimization of mission success using data from relevant cohort populations and from the astronaut population. The objectives of the project include: 1) identification and documentation of conditions that may occur during exploration missions (Baseline Medical Conditions List [BMCL]), 2) assessment of the likelihood of conditions in the BMCL occurring during exploration missions (incidence rate), 3) determination of the risk associated with these conditions, quantified in terms of end states (Loss of Crew, Loss of Mission, Evacuation), 4) optimization of in-flight hardware mass, volume, power, bandwidth and cost for a given level of risk or uncertainty, and 5) validation of the methodologies used.
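The stochastic technique described can be illustrated with a minimal Monte Carlo sketch: draw condition occurrences from assumed incidence rates and propagate each to an end state. The conditions, rates, and escalation probabilities below are entirely hypothetical placeholders, not values from the IMM's BMCL.

```python
import numpy as np

# Monte Carlo sketch: per-mission probability that any condition escalates
# to an evacuation, from assumed incidence rates (events per person-year)
rng = np.random.default_rng(5)
incidence = {"renal_stone": 0.01, "appendicitis": 0.002}      # hypothetical
p_evac_given_event = {"renal_stone": 0.3, "appendicitis": 0.8}  # hypothetical
crew, years, n_sim = 4, 1.0, 50_000

evac = 0
for _ in range(n_sim):
    mission_evac = False
    for cond, rate in incidence.items():
        n_events = rng.poisson(rate * crew * years)
        # P(at least one of n_events escalates) = 1 - (1 - p)^n_events
        if n_events and rng.random() < 1 - (1 - p_evac_given_event[cond]) ** n_events:
            mission_evac = True
    evac += mission_evac
p_evac = evac / n_sim
```

With these toy numbers the analytic answer is 1 - exp(-sum of rate * crew * p) ≈ 0.018, and the simulation converges to it; the real model additionally tracks Loss of Crew and Loss of Mission end states and resource constraints.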
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2017-01-01
The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
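For background, the reason FP-ARX estimation is simpler than FP-ARMAX can be seen in the plain (non-pooled) case: for an ARX model the prediction-error criterion is quadratic in the parameters, so least squares gives the global optimum in one step, whereas the moving-average terms of ARMAX make the criterion non-quadratic. A minimal ARX sketch with synthetic data:

```python
import numpy as np

# Prediction-error (least-squares) estimation of an ARX(2,1) model:
# y[t] + a1*y[t-1] + a2*y[t-2] = b1*u[t-1] + e[t]
rng = np.random.default_rng(6)
n = 2000
u = rng.standard_normal(n)
a1, a2, b1 = -1.5, 0.7, 1.0   # stable true system (poles inside unit circle)
y = np.zeros(n)
for t in range(2, n):
    y[t] = -a1 * y[t - 1] - a2 * y[t - 2] + b1 * u[t - 1] + 0.1 * rng.standard_normal()

# Regressor matrix; the PE criterion is quadratic for ARX, so ordinary
# least squares is the exact prediction-error estimate
phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1]])
theta = np.linalg.lstsq(phi, y[2:], rcond=None)[0]  # estimates [a1, a2, b1]
```

In the functionally pooled setting the parameters are additionally expanded on basis functions of the scheduling variable, but the quadratic-vs-non-quadratic distinction between ARX and ARMAX carries over.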
Krank, Marvin D; O'Neill, Susan; Squarey, Kyna; Jacob, Jackie
2008-02-01
Many theories of addictive behavior propose that cues signaling drug administration influence the likelihood of drug-taking and drug-seeking behavior. We investigated the behavioral impact of cues associated with unsweetened ethanol and their interaction with responding maintained by ethanol self-administration. Our goal was to establish the influence of such cues on ethanol seeking. The experiment used a matching contingency and saccharin-fading procedure to establish equal levels of responding to two spatially distinct levers using unsweetened 10% ethanol solution. After ethanol self-administration was established, a brief cue light located alternately over each lever location was either paired or unpaired (control) with the opportunity to consume the same ethanol solution. Finally, self-administration was re-established, and the effect of the cue was measured in a transfer design. The reaction to lights paired with the opportunity to ingest unsweetened ethanol had three main effects: (1) induction of operant behavior reinforced by ethanol, (2) stimulation of ethanol-seeking behavior (drinker entries), and (3) cue-directed approach and contact behavior (i.e. autoshaping or sign-tracking). Cue-directed behavior to the light interacted with choice behavior in a manner predicted by the location of the cue light, enhancing responding only when the approach response did not interfere with the operant response. These findings replicate and extend the effects of Pavlovian conditioning on ethanol-seeking and support conditioned incentive theories of addictive behavior. Signals for ethanol influence spatial choice behavior and may be relevant to attentional bias shown to alcohol-associated stimuli in humans.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
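The abstract's point about Poisson statistics being appropriate for small numbers of detected pulsars can be sketched with a binned Poisson log-likelihood. The bin counts and model predictions below are hypothetical, a minimal illustration rather than the code's actual parameter comparison:

```python
import math

def poisson_loglike(detected, simulated):
    """Poisson log-likelihood of observed bin counts given model-predicted counts.

    Suited to bins with few detected pulsars, where a chi-square comparison
    of distributions breaks down.
    """
    ll = 0.0
    for d, s in zip(detected, simulated):
        s = max(s, 1e-12)  # guard against empty model bins
        ll += d * math.log(s) - s - math.lgamma(d + 1)
    return ll

# Hypothetical binned period distributions: observed pulsars vs. two model runs.
detected = [3, 7, 12, 5, 1]
model_a  = [2.5, 8.0, 11.0, 6.0, 1.5]
model_b  = [9.0, 9.0, 9.0, 9.0, 9.0]

# The model whose predicted counts better match the data has the higher likelihood;
# a grid or MCMC search over model parameters then maximizes this quantity.
assert poisson_loglike(detected, model_a) > poisson_loglike(detected, model_b)
```

In a full population-synthesis code, each parameter set would generate the `simulated` counts via Monte Carlo before evaluating this likelihood.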
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
Rude, Tope L; Donin, Nicholas M; Cohn, Matthew R; Meeks, William; Gulig, Scott; Patel, Samir N; Wysock, James S; Makarov, Danil V; Bjurlin, Marc A
2018-06-07
To define the rates of common Hospital Acquired Conditions (HACs) in patients undergoing major urological surgery over a period of time encompassing the implementation of the Hospital Acquired Condition Reduction program, and to evaluate whether implementation of the HAC reimbursement penalties in 2008 was associated with a change in the rate of HACs. Using American College of Surgeons National Surgical Quality Improvement Program (NSQIP) data, we determined rates of HACs in patients undergoing major inpatient urological surgery from 2005 to 2012. Rates were stratified by procedure type and approach (open vs. laparoscopic/robotic). Multivariable logistic regression was used to determine the association between year of surgery and HACs. We identified 39,257 patients undergoing major urological surgery, of whom 2300 (5.9%) had at least one hospital acquired condition. Urinary tract infection (UTI, 2.6%) was the most common, followed by surgical site infection (SSI, 2.5%) and venous thrombotic events (VTE, 0.7%). Multivariable logistic regression analysis demonstrated that open surgical approach, diabetes, congestive heart failure, chronic obstructive pulmonary disease, weight loss, and ASA class were among the variables associated with higher likelihood of HAC. We observed a non-significant secular trend of decreasing rates of HAC from 7.4% to 5.8% during the study period, which encompassed the implementation of the Hospital Acquired Condition Reduction Program. HACs occurred at a rate of 5.9% after major urological surgery, and are significantly affected by procedure type and patient health status. The rate of HAC appeared unaffected by the national reduction program in this cohort. Better understanding of the factors associated with HACs is critical in developing effective reduction programs. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Thelen, Brian T.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
With all of the new remote sensing modalities available, and with ever increasing capabilities and frequency of collection, there is a desire to fundamentally understand/quantify the information content in the collected image data relative to various exploitation goals, such as detection/classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have significantly flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in the form of both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate, maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for the larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification theory is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. All of this framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies, and of utilizing the pseudo-likelihood methods.
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closer approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases are also discussed. A simulation study is used to evaluate the methods developed in this study.
Scott, David; Burke, Karena; Williams, Susan; Happell, Brenda; Canoy, Doreen; Ronan, Kevin
2012-10-01
To compare chronic physical health disorder prevalence amongst Australian adults with and without mental illness. A total of n=1,716 participants (58% female), with a mean age of 52 ± 13 years (range: 18 to 89 years), completed an online survey of Australian adults in 2010. Outcome measures, including prevalence of chronic physical conditions and self-reported body mass index (BMI), in the n=387 (23%) participants with a self-reported mental illness diagnosis were compared with those of respondents without mental illness. A significantly higher proportion of participants with mental illness were obese (BMI ≥ 30; 31 vs 24%, p=0.005). Adjusted odds ratios (OR) for coronary heart disease, diabetes, chronic bronchitis or emphysema, asthma, irritable bowel syndrome, and food allergies or intolerances (OR range: 1.54-3.19) demonstrated that chronic physical disorders were significantly more common in participants with a mental illness. Australian adults with a diagnosis of mental illness have a significantly increased likelihood of demonstrating chronic physical health disorders compared to persons without mental illness. Health professionals must be alert to the increased likelihood of comorbid chronic physical disorders in persons with a mental illness and should consider the adoption of holistic approaches when treating those with either a mental or physical illness. © 2012 The Authors. ANZJPH © 2012 Public Health Association of Australia.
PepMapper: a collaborative web tool for mapping epitopes from affinity-selected peptides.
Chen, Wenhan; Guo, William W; Huang, Yanxin; Ma, Zhiqiang
2012-01-01
Epitope mapping from affinity-selected peptides has become popular in epitope prediction, and correspondingly many Web-based tools have been developed in recent years. However, the performance of these tools varies in different circumstances. To address this problem, we employed an ensemble approach to incorporate two popular Web tools, MimoPro and Pep-3D-Search, together to take advantage of the strengths of both methods and give users more options for their specific purposes of epitope-peptide mapping. The combined operation of Union finds as many associated peptides as possible from both methods, which increases sensitivity in finding potential epitopic regions on a given antigen surface. The combined operation of Intersection achieves a degree of mutual verification between the two methods and hence increases the likelihood of locating the genuine epitopic region on a given antigen in relation to the interacting peptides. The Consistency between Intersection and Union is an indirect sufficient condition to assess the likelihood of successful peptide-epitope mapping. On average, across 27 tests, the combined operations of PepMapper outperformed either MimoPro or Pep-3D-Search alone. Therefore, PepMapper is another multipurpose mapping tool for epitope prediction from affinity-selected peptides. The Web server can be freely accessed at: http://informatics.nenu.edu.cn/PepMapper/
NASA Astrophysics Data System (ADS)
Cobden, Laura; Mosca, Ilaria; Trampert, Jeannot; Ritsema, Jeroen
2012-11-01
Recent experimental studies indicate that perovskite, the dominant lower mantle mineral, undergoes a phase change to post-perovskite at high pressures. However, it has been unclear whether this transition occurs within the Earth's mantle, due to uncertainties in both the thermochemical state of the lowermost mantle and the pressure-temperature conditions of the phase boundary. In this study we compare the relative fit to global seismic data of mantle models which do and do not contain post-perovskite, following a statistical approach. Our data comprise more than 10,000 Pdiff and Sdiff travel-times, global in coverage, from which we extract the global distributions of dln VS and dln VP near the core-mantle boundary (CMB). These distributions are sensitive to the underlying lateral variations in mineralogy and temperature even after seismic uncertainties are taken into account, and are ideally suited for investigating the likelihood of the presence of post-perovskite. A post-perovskite-bearing CMB region provides a significantly closer fit to the seismic data than a post-perovskite-free CMB region on both a global and regional scale. These results complement previous local seismic reflection studies, which have shown a consistency between seismic observations and the physical properties of post-perovskite inside the deep Earth.
A Two-Stage Approach to Missing Data: Theory and Application to Auxiliary Variables
ERIC Educational Resources Information Center
Savalei, Victoria; Bentler, Peter M.
2009-01-01
A well-known ad-hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a…
NASA Astrophysics Data System (ADS)
Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini
2014-12-01
In professional sporting events, rating competitors before tournament start is a well-known approach to distinguish the favorite team from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round robin simulation, and previous league rank position. The tournament metric we used to evaluate the different rating approaches is the tournament outcome characteristic measure, defined as the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on simulation results, we found that different rating approaches produce different effects on the teams. Our simulation results show that, of the eight teams participating in a standard-seeded knockout tournament, Perak has the highest probability of winning a tournament that uses the least squares rating approach, PKNS has the highest probability of winning using the maximum likelihood strength ratio and the large round robin simulation approaches, while Perak has the highest probability of winning a tournament using the previous league rank approach.
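A maximum likelihood strength ratio of the kind mentioned above is commonly computed with the Bradley-Terry model. Below is a minimal sketch using the classical MM (Zermelo) iteration; the match results are made up for illustration, not the paper's Malaysian league data:

```python
def bradley_terry(wins, n_iter=200):
    """Maximum-likelihood team strengths under the Bradley-Terry model,
    P(i beats j) = s_i / (s_i + s_j), fitted by the classical MM (Zermelo)
    iteration. wins[i][j] = number of times team i beat team j.
    """
    n = len(wins)
    s = [1.0] * n
    for _ in range(n_iter):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of team i
            denom = sum((wins[i][j] + wins[j][i]) / (s[i] + s[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom > 0 else s[i])
        total = sum(new)
        s = [v * n / total for v in new]  # normalize to remove scale ambiguity
    return s

# Hypothetical round-robin results among three teams (each pair met 4 times).
wins = [[0, 3, 4],
        [1, 0, 3],
        [0, 1, 0]]
s = bradley_terry(wins)
# Team 0 dominated, so its ML strength should be largest.
assert s[0] > s[1] > s[2]
```

The fitted ratios s[i]/s[j] are the strength ratios; plugging them into a knockout bracket simulation yields the tournament outcome probabilities studied in the paper.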
Weaving a Stronger Fabric for Improved Outcomes
ERIC Educational Resources Information Center
Lobry de Bruyn, Lisa; Prior, Julian; Lenehan, Jo
2014-01-01
Purpose: To explain how training and education events (TEEs) can be designed to increase the likelihood of achieving behavioural objectives. Approach: The approach combined both a quantitative review of evaluation surveys undertaken at the time of the TEE, and qualitative telephone interviews with selected attendees (20-25% of the total population…
USDA-ARS?s Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
A Statistical Approach to Passive Target Tracking.
1981-04-01
a fixed heading of 90 degrees. [Reference: F. A. Graybill, An Introduction to Linear Statistical Models, Vol. 1, New York: John Wiley & Sons, Inc. (1961).] ...likelihood estimators. (NCSC TM 311-81) The adjustment for a changing error variance is easy using the linear model approach; i.e., use weighted
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
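The paper's central claim, that non-singular affine transformations leave Gaussian maximum-likelihood classification unchanged, can be checked numerically. The toy two-class "spectra" and the transformation below are illustrative assumptions, not AVIRIS data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class training "spectra" (5 bands) and unlabeled test pixels.
X0 = rng.normal(0.0, 1.0, size=(200, 5))
X1 = rng.normal(1.5, 1.0, size=(200, 5))
Xtest = rng.normal(0.8, 1.2, size=(50, 5))

def ml_score_diff(c0, c1, test):
    """Difference of Gaussian ML discriminant scores (class 1 minus class 0),
    with each class's mean and covariance estimated from its training samples."""
    scores = []
    for cls in (c0, c1):
        mu = cls.mean(axis=0)
        cov = np.cov(cls, rowvar=False)
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = test - mu
        # log-likelihood up to an additive constant shared by both classes
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return scores[1] - scores[0]

# A non-singular affine transformation (linear map plus offset), standing in
# for a correction such as the empirical line approach.
A = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)
b = rng.normal(size=5)
T = lambda X: X @ A.T + b

# The per-class log|det A| contributions cancel, so discriminant differences
# (and hence ML class labels) are unchanged by the transformation.
assert np.allclose(ml_score_diff(X0, X1, Xtest),
                   ml_score_diff(T(X0), T(T := T, X1)[0] if False else T(X1), T(Xtest)))
```

(The transformed sample mean is A·mu + b and the transformed sample covariance is A·cov·Aᵀ exactly, which is why the invariance holds with estimated as well as true parameters.)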
Hoyland, Meredith A; Rowatt, Wade C; Latendresse, Shawn J
2017-01-01
Prior research has demonstrated that adolescent delinquency and depression are prospectively related to adult alcohol use and that adolescent religiosity may influence these relationships. However, such associations have not been investigated using person-centered approaches that provide nuanced explorations of these constructs. Using data from the National Longitudinal Study of Adolescent to Adult Health, we examined whether adolescent delinquency and depression differentiated typologies of adult alcohol users and whether these relationships varied across religiosity profiles. Three typologies of self-identified Christian adolescents and 4 types of adult alcohol users were identified via latent profile analysis. Delinquency and depression were related to increased likelihood of membership in heavy drinking or problematic alcohol use profiles, but this relationship was most evident among those likely to be involved in religious practices. These results demonstrate the importance of person-centered approaches in characterizing the influences of internalizing and externalizing behaviors on subsequent patterns of alcohol use. PMID:28469423
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
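The objective Bayesian simulation the authors recommend can be sketched in a few lines. The counts below and the Jeffreys Beta(1/2, 1/2) priors are illustrative assumptions, not the article's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def ppv_interval(tp, fn, tn, fp, d_pos, d_neg, n_sim=100_000, level=0.95):
    """Simulation interval for the positive predictive value (posttest probability).

    An objective-Bayes sketch: Jeffreys Beta(1/2, 1/2) priors on sensitivity,
    specificity, and pretest probability, each updated by its empirical counts;
    every posterior draw is pushed through Bayes' rule.
    """
    sens = rng.beta(tp + 0.5, fn + 0.5, n_sim)
    spec = rng.beta(tn + 0.5, fp + 0.5, n_sim)
    prev = rng.beta(d_pos + 0.5, d_neg + 0.5, n_sim)
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    tail = 100 * (1 - level) / 2
    lo, hi = np.percentile(ppv, [tail, 100 - tail])
    return lo, hi

# Hypothetical small-sample counts: 18/20 diseased test positive,
# 45/50 healthy test negative, pretest data 20 diseased out of 70.
lo, hi = ppv_interval(tp=18, fn=2, tn=45, fp=5, d_pos=20, d_neg=50)
assert 0.0 < lo < hi < 1.0
```

The same computation can be done in a spreadsheet, as the authors note, by filling columns with Beta-distributed draws and applying the PPV formula row by row.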
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. 
The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
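For a concrete sense of the approach: with independent Gaussian noise on the circuit-element signals, maximum-likelihood position estimation reduces to matching the observed signals against mean detector response functions. The detector geometry and numbers below are hypothetical, a minimal grid-search sketch rather than the Center's actual processing chain:

```python
import math

def ml_position(signals, mdrf, noise_sd=1.0):
    """Maximum-likelihood estimate of an interaction position from sensor signals.

    With independent Gaussian noise of common standard deviation, the ML
    estimate is the grid position whose mean detector response function (MDRF)
    best matches the observed signals. mdrf[pos] holds the mean sensor
    outputs for an event at that position.
    """
    best_pos, best_ll = None, -math.inf
    for pos, means in mdrf.items():
        ll = sum(-(s - m) ** 2 / (2 * noise_sd ** 2)
                 for s, m in zip(signals, means))
        if ll > best_ll:
            best_pos, best_ll = pos, ll
    return best_pos

# Hypothetical 1-D detector: three candidate positions, four photosensors.
mdrf = {0.0: [9.0, 5.0, 2.0, 1.0],
        1.0: [5.0, 9.0, 5.0, 2.0],
        2.0: [2.0, 5.0, 9.0, 5.0]}
assert ml_position([4.5, 8.7, 5.6, 2.2], mdrf) == 1.0
```

Real detectors use finer grids, calibrated MDRFs, and non-Gaussian (e.g. Poisson or correlated) noise models, but the likelihood-maximization structure is the same.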
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
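The ABC rejection step at the heart of the method can be sketched as follows. The exponential inter-event-time model, uniform prior, and mean summary statistic are toy assumptions standing in for the paper's epidemic and macroparasite models:

```python
import random

random.seed(2)

def abc_rejection(observed_mean, n_draws=20000, tol=0.1):
    """ABC rejection: keep prior draws whose simulated summary statistic is
    close to the observed one; no likelihood evaluation is ever required.

    Toy model: 20 exponential inter-event times with unknown rate; the
    summary statistic is their sample mean.
    """
    accepted = []
    for _ in range(n_draws):
        rate = random.uniform(0.1, 5.0)  # draw from the prior
        sim = [random.expovariate(rate) for _ in range(20)]
        if abs(sum(sim) / 20 - observed_mean) < tol:
            accepted.append(rate)
    return accepted

post = abc_rejection(observed_mean=1.0)  # data consistent with rate ~ 1.0
est = sum(post) / len(post)
assert 0.7 < est < 1.4
```

In the design setting of the paper, simulations like these are pre-computed once, and the precision of the resulting ABC posterior serves as the utility to be optimized over candidate designs.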
Struchen, R; Vial, F; Andersson, M G
2017-04-26
Delayed reporting of health data may hamper the early detection of infectious diseases in surveillance systems. Furthermore, combining multiple data streams, e.g. aiming at improving a system's sensitivity, can be challenging. In this study, we used a Bayesian framework where the result is presented as the value of evidence, i.e. the likelihood ratio for the evidence under outbreak versus baseline conditions. Based on a historical data set of routinely collected cattle mortality events, we evaluated outbreak detection performance (sensitivity, time to detection, in-control run length) under the Bayesian approach among three scenarios: presence of delayed data reporting, but not accounting for it; presence of delayed data reporting accounted for; and absence of delayed data reporting (i.e. an ideal system). Performance on larger and smaller outbreaks was compared with a classical approach, considering syndromes separately or combined. We found that the Bayesian approach performed better than the classical approach, especially for the smaller outbreaks. Furthermore, the Bayesian approach performed similarly well in the scenario where delayed reporting was accounted for and in the scenario where it was absent. We argue that the value of evidence framework may be suitable for surveillance systems with multiple syndromes and delayed reporting of data.
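The value-of-evidence computation can be illustrated with simple Poisson count models. The rates and counts below are hypothetical, and under an independence assumption multiple syndromic streams combine by multiplying their likelihood ratios:

```python
import math

def value_of_evidence(count, baseline_rate, outbreak_rate):
    """Likelihood ratio for one day's count: outbreak vs. baseline Poisson model."""
    def logpmf(k, lam):
        return k * math.log(lam) - lam - math.lgamma(k + 1)
    return math.exp(logpmf(count, outbreak_rate) - logpmf(count, baseline_rate))

# Hypothetical daily mortality counts for two syndromic streams; values above 1
# favor the outbreak hypothesis, values below 1 favor baseline.
v1 = value_of_evidence(count=14, baseline_rate=8.0, outbreak_rate=12.0)
v2 = value_of_evidence(count=5, baseline_rate=3.0, outbreak_rate=4.5)
combined = v1 * v2  # evidence from independent streams multiplies

assert v1 > 1.0 and v2 > 1.0 and combined > v1
```

Handling delayed reporting, as in the paper, would replace the simple Poisson likelihoods with models of the partially reported counts, but the likelihood-ratio structure is unchanged.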
Keith, S A; Baird, A H; Hughes, T P; Madin, J S; Connolly, S R
2013-07-22
Species richness gradients are ubiquitous in nature, but the mechanisms that generate and maintain these patterns at macroecological scales remain unresolved. We use a new approach that focuses on overlapping geographical ranges of species to reveal that Indo-Pacific corals are assembled within 11 distinct faunal provinces. Province limits are characterized by co-occurrence of multiple species range boundaries. Unexpectedly, these faunal breaks are poorly predicted by contemporary environmental conditions and the present-day distribution of habitat. Instead, faunal breaks show striking concordance with geological features (tectonic plates and mantle plume tracks). The depth range over which a species occurs, its larval development rate and genus age are important determinants of the likelihood that species will straddle faunal breaks. Our findings indicate that historical processes, habitat heterogeneity and species colonization ability account for more of the present-day biogeographical patterns of corals than explanations based on the contemporary distribution of reefs or environmental conditions.
... are and may have higher rates of certain medical conditions in offspring, such as psychiatric disorders or certain cancers. Tobacco use. Smoking tobacco or marijuana by either partner reduces the likelihood of pregnancy. ...
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in the high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data.
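The conventional log-gamma formulation whose instability the abstract describes can be sketched as follows. This is the standard textbook form, not the paper's improved formula; the function name and the omission of the multinomial coefficient are choices made here:

```python
import math

def dmn_loglik(x, alpha):
    """Conventional Dirichlet-multinomial log-likelihood via log-gamma sums.

    The multinomial coefficient (constant in alpha) is omitted. This
    formulation loses accuracy as the concentration grows (overdispersion
    psi -> 0), which is the instability the paper's new formula addresses.
    """
    n = sum(x)
    a = sum(alpha)
    ll = math.lgamma(a) - math.lgamma(a + n)
    for xi, ai in zip(x, alpha):
        ll += math.lgamma(ai + xi) - math.lgamma(ai)
    return ll
```

With alpha = (1, 1) the DMN is uniform over compositions, and for very large alpha it approaches the multinomial limit, which is exactly where the conventional form begins to lose precision.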
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, and is at least as efficient as those estimators for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite-sample performance of various methods, along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
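The voxel-by-voxel evidence measure can be illustrated with a toy likelihood ratio for Gaussian data with known variance. The effect size delta and the known-sigma assumption are simplifications for illustration, not the paper's implementation:

```python
def log_lr_gaussian(xbar, n, delta, sigma):
    """Log likelihood ratio for H1: mu = delta vs H0: mu = 0, for the
    mean xbar of n Gaussian observations with known sigma (one voxel's
    statistic).  Positive values favor H1, negative values favor H0.
    """
    return (n * xbar * delta - n * delta ** 2 / 2.0) / sigma ** 2
```

In the likelihood paradigm a voxel would be flagged as evidence for H1 when the likelihood ratio exceeds a benchmark k (commonly 8 or 32), without any adjustment for the number of voxels tested.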
Linking School Facility Conditions to Teacher Satisfaction and Success.
ERIC Educational Resources Information Center
Schneider, Mark.
School facilities directly affect teaching and learning. Poor conditions make it more difficult for teachers to deliver an adequate education to their students, adversely affect teachers' health, and increase the likelihood that teachers will leave their school. This study documented how teachers in Chicago and Washington, DC rated their working…
NASA Astrophysics Data System (ADS)
Huntingford, Chris; Mitchell, Dann; Osprey, Scott
2015-04-01
A recent paper by Petoukhov et al (2013) demonstrates that many of the recent major extreme events in the NH may have been caused by resonant conditions driving very high meridional winds around slowly moving centres-of-action. Besides high amplitudes of planetary wave numbers 6, 7 and 8, additional features are identified through four further conditions that trigger system resonance. These make the potential for high-amplitude waves more likely, as well as the possibility of more persistent events. A concern is that human-induced climate change could create conditions more conducive to tropospheric Rossby wave resonance, thereby forcing periods of extreme weather to become more commonplace and longer lasting. Whilst the CMIP5 ensemble provides much information on expected changes, fully addressing the changing probabilities of extreme event occurrence - which by definition are relatively rare - is best approached through a massive ensemble modeling framework. The climateprediction.net citizen-science massive-ensemble GCM modeling framework provides order 10^4 simulations for sea-surface temperature, sea-ice extent and atmospheric gas composition representative of both pre-industrial and contemporary conditions. Here we present what these families of simulations imply in terms of the changing likelihood of conditions for mid-latitude resonance, and the implications for amplitudes of Rossby waves.
MCMC multilocus lod scores: application of a new approach.
George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A
2005-01-01
On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).
Harbert, Robert S; Nixon, Kevin C
2015-08-01
• Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. • Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. • Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. • CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies.
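A toy version of the joint-likelihood idea can be sketched with Gaussian stand-ins for the species tolerance characterizations. The tolerance means and spreads below are hypothetical; CRACLE itself derives tolerances from specimen collection records, typically via density estimates rather than simple Gaussians:

```python
import numpy as np

def joint_climate_loglik(climate_grid, tolerances):
    """Each coexisting species contributes a (here Gaussian) log-likelihood
    over candidate climate values; the climate maximizing the joint
    log-likelihood is the estimate.  `tolerances` is a list of
    (mean, sd) pairs, one per species (hypothetical characterization)."""
    ll = np.zeros_like(climate_grid, dtype=float)
    for mu, sd in tolerances:
        ll += -0.5 * ((climate_grid - mu) / sd) ** 2 - np.log(sd)
    return ll

grid = np.linspace(-5, 35, 801)          # candidate mean annual temperature, deg C
tol = [(18, 4), (22, 3), (20, 5)]        # three coexisting species
estimate = grid[np.argmax(joint_climate_loglik(grid, tol))]
```

For Gaussian tolerances the joint maximum is the precision-weighted mean of the species optima, which is why coexistence of several species pins the climate down more tightly than any one species' range.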
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
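The parametric likelihood approximation described above can be sketched generically: simulate the model repeatedly at a candidate parameter, fit a Gaussian to the resulting summary statistics, and score the observed summaries. This is a plain synthetic-likelihood sketch under a multivariate-normal assumption, not the FORMIND-specific implementation; the `simulate` callback and the ridge term are choices made here:

```python
import numpy as np

def synthetic_loglik(simulate, theta, s_obs, n_sim=200, ridge=1e-6, rng=None):
    """Parametric (synthetic) likelihood: approximate the distribution of
    model summary statistics at `theta` by a multivariate Gaussian fitted
    to repeated stochastic simulations, then score the observed summaries.
    `simulate(theta, rng)` must return a 1-D array of summary statistics.
    """
    rng = np.random.default_rng(rng)
    S = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + ridge * np.eye(S.shape[1])
    diff = np.atleast_1d(s_obs) - mu
    sign, logdet = np.linalg.slogdet(cov)
    d = len(mu)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))
```

Placed inside a conventional MCMC sampler, this log-likelihood replaces the analytically intractable one; the cost is the `n_sim` forward simulations per proposed parameter value.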
Event attribution using data assimilation in an intermediate complexity atmospheric model
NASA Astrophysics Data System (ADS)
Metref, Sammy; Hannart, Alexis; Ruiz, Juan; Carrassi, Alberto; Bocquet, Marc; Ghil, Michael
2016-04-01
A new approach, coined DADA (Data Assimilation for Detection and Attribution), has recently been introduced by Hannart et al. (2015) and is potentially useful for near-real-time, systematic causal attribution of weather and climate-related events. The method is purposely designed to allow its operability at meteorological centers by synergizing causal attribution with data assimilation (DA) methods usually designed to deal with large nonlinear models. In Hannart et al. (2015), the DADA proposal is illustrated in the context of a low-order nonlinear model (forced three-variable Lorenz model) that is of course not realistic enough to represent the events considered. As a continuation of this stream of work, we therefore propose an implementation of the DADA approach in a realistic intermediate-complexity atmospheric model (ICTP AGCM, nicknamed SPEEDY). The SPEEDY model is based on a spectral dynamical core developed at the Geophysical Fluid Dynamics Laboratory (see Held and Suarez 1994). It is a hydrostatic, σ-coordinate, spectral-transform model in the vorticity-divergence form described by Bourke (1974). A synthetic dataset of observations of an extreme precipitation event over Southeastern South America is extracted from a long SPEEDY simulation under present climatic conditions (i.e. factual conditions). Then, following the DADA approach, observations of this event are assimilated twice in the SPEEDY model: first in the factual configuration of the model and second under its counterfactual, pre-industrial configuration. We show that attribution can be performed based on the likelihood ratio as in Hannart et al. (2015), but we further extend this result by showing that the likelihood can be split in space, time and variables in order to help identify the specific physical features of the event that bear the causal signature. References: Hannart A., A. Carrassi, M. Bocquet, M. Ghil, P. Naveau, M. Pulido, J. Ruiz, P.
Tandeo (2015): DADA: Data assimilation for the detection and attribution of weather and climate-related events. Climatic Change (in press). Held, I. M. and M. J. Suarez (1994): A proposal for the intercomparison of the dynamical cores of atmospheric general circulation models. Bull. Amer. Meteor. Soc., 75, 1825-1830. Bourke, W. (1974): A multi-level spectral model. I. Formulation and hemispheric integrations. Mon. Wea. Rev., 102, 687-701.
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation
Wang, Fei; Li, Hong; Lu, Mingquan
2017-01-01
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318
Mapping Quantitative Traits in Unselected Families: Algorithms and Examples
Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David
2009-01-01
Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016
Instructor perceptions of the accident likelihood faced by recently trained glider pilots.
Jarvis, Steve; Harris, Don
2011-12-01
U.K. glider pilots with less than 10 h of solo flying time have been shown to have the highest accident rate and to be most vulnerable to accidents during the 'final approach' phase. A total of 58 gliding instructors were asked to indicate what experience level they thought was associated with the highest accident rate, to provide the reason behind their estimate, and to rank six flight phases by the relative probability of accidents among inexperienced pilots. The mean estimate for the accident peak was 296.3 h as pilot-in-command (SD = 337.9), with no instructor giving a figure of less than 10 h. Common reasons for these estimates were 'over-confidence', 'risk-taking', or 'complacency'. Despite the approach phase having the highest objective accident probability, it was ranked only fifth by instructors, indicating an underestimate of the danger it presents to newly trained pilots. The results suggest that instructors do not appreciate the high accident likelihood of early solo pilots or the main dangers they face. This has implications for the decisions made when sending pilots solo.
An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses
ERIC Educational Resources Information Center
Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark
2015-01-01
Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
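The moment-based DerSimonian-Laird estimator mentioned above has a short closed form. A minimal sketch of that baseline estimator (not of the alternatives the comparison covers):

```python
def dersimonian_laird_tau2(effects, variances):
    """Moment-based (DerSimonian-Laird) estimate of the between-study
    heterogeneity variance tau^2, truncated at zero."""
    w = [1.0 / v for v in variances]            # inverse-variance weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)
```

The truncation at zero is one source of the bias the abstract refers to: whenever Q falls below its null expectation k - 1, the estimate collapses to exactly zero.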
Influence of Image Interactivity on Approach Responses towards an Online Retailer.
ERIC Educational Resources Information Center
Fiore, Ann Marie; Jin, Hyun-Jeong
2003-01-01
Measured the effect of exposure to an image interactivity function from an apparel retailer's Web site on approach responses towards the retailer. Dependent variables included attitude towards the online store, willingness to purchase, probability of spending more time than planned shopping, and likelihood of patronizing the online retailer's…
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
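The surrogate idea, replacing the expensive transport solver with an interpolant over a parameter grid, can be sketched with a regular grid (the paper uses adaptive sparse grids). The forward model below is a hypothetical stand-in for a contaminant transport solver:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Expensive forward model stand-in: predicted concentration at one well.
# theta = (source strength, decay rate); entirely hypothetical.
def forward(theta):
    s, k = theta
    return s * np.exp(-k)

# Build a surrogate on a parameter grid (dense regular grid here for
# simplicity; the paper uses adaptive sparse grids).
s_grid = np.linspace(0.1, 10.0, 50)
k_grid = np.linspace(0.01, 2.0, 50)
vals = np.array([[forward((s, k)) for k in k_grid] for s in s_grid])
surrogate = RegularGridInterpolator((s_grid, k_grid), vals)

def log_likelihood(theta, c_obs, sigma=0.1):
    """Gaussian measurement-error likelihood evaluated on the surrogate
    instead of the expensive transport solver."""
    pred = surrogate(np.atleast_2d(theta))[0]
    return (-0.5 * ((c_obs - pred) / sigma) ** 2
            - 0.5 * np.log(2 * np.pi * sigma ** 2))
```

Because each MCMC iteration now costs only an interpolation rather than a PDE solve, both the design stage (expected relative entropy) and the estimation stage can afford the many likelihood evaluations they require.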
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
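A generic Wald SPRT accumulates log likelihood ratios until one of two boundaries, set by the target error rates alpha and beta, is crossed. The paper's contribution is the especially simple form the likelihood ratio takes for conjunction data; this sketch shows only the generic test machinery:

```python
import math

def wald_sprt(log_lrs, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate log likelihood
    ratios and stop at the first boundary crossing.

    Returns ('accept H1' | 'accept H0' | 'continue', cumulative log LR).
    """
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    s = 0.0
    for llr in log_lrs:
        s += llr
        if s >= upper:
            return 'accept H1', s
        if s <= lower:
            return 'accept H0', s
    return 'continue', s
```

For conjunction assessment, each new tracking update would contribute one log likelihood ratio term, so the test can run sequentially as predictions are refined toward the time of closest approach.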
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state-space dynamical model, and the problem unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are very seldom available for free. Both the system joint likelihood function (of state variables and parameters) and the system marginal likelihood function (with the state variables eliminated) are addressed.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Extreme value modelling of Ghana stock exchange index.
Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe
2015-01-01
Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for the rare events underlying such crises have become quite essential in finance and risk management. This paper models the extreme values of the Ghana Stock Exchange all-shares index (2000-2010) by applying extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred, and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedasticity present in the returns series before the EVT method was applied. The peaks-over-threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) model to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The sizes of extreme daily Ghanaian stock market movements were then computed using the value-at-risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
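The peaks-over-threshold workflow described above, fitting a GPD to exceedances and reading off value-at-risk and expected shortfall at a high quantile, can be sketched as follows. The threshold choice and the ARMA-GARCH pre-filtering step are omitted, and the closed forms assume a fitted shape xi < 1 and xi != 0:

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_es(losses, u, q=0.99):
    """Peaks-over-threshold: fit a GPD to exceedances over threshold u,
    then compute Value-at-Risk and Expected Shortfall at level q.
    Assumes fitted shape xi != 0 and xi < 1."""
    losses = np.asarray(losses)
    exc = losses[losses > u] - u
    xi, loc, sigma = genpareto.fit(exc, floc=0.0)   # MLE, location fixed at 0
    n, nu = len(losses), len(exc)
    var = u + (sigma / xi) * ((n / nu * (1 - q)) ** (-xi) - 1)
    es = var / (1 - xi) + (sigma - xi * u) / (1 - xi)
    return var, es
```

On a pre-filtered (approximately i.i.d.) loss series, `var` estimates the q-quantile of the loss distribution and `es` the expected loss given that the VaR level is exceeded.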
COS Views of Local Galaxies Approaching Primeval Conditions
NASA Astrophysics Data System (ADS)
Wofford, Aida
2014-10-01
We will use COS G160M+G185M to observe the cosmologically important lines C IV 1548+1551 A, He II 1640 A, O III] 1661+1666 A, and C III] 1907+1909 A in the three closest, most metal-poor blue compact dwarf galaxies known. These galaxies approach primeval interstellar and stellar conditions. One of the galaxies has no existing spectroscopic coverage in the UV. Available spectroscopy of the most metal-poor galaxies in the local universe is scarce, inhomogeneous, mostly of low spectral resolution, and either noisy in the main UV lines or lacking their coverage. The proposed spectral resolution of about 20 km/s represents an order of magnitude improvement over existing HST data and allows us to disentangle stellar, nebular, and/or shock components of the lines. The high-quality constraints obtained in the framework of this proposal will make it possible to assess the relative likelihood of new spectral models of star-forming galaxies from different groups, in the best way achievable with current instrumentation. This will ensure that the best possible studies of early chemical enrichment of the universe can be achieved. The proposed observations are necessary to minimize the large existing systematic uncertainties in the determination of high-redshift galaxy properties that JWST was in large part designed to measure.
Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H
2003-11-01
The ant subfamily Formicinae is a large assemblage of 2458 species (J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk and in which the metapleural gland, the ancestrally defining ant character, has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10,000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihoods indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high-likelihood than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.
Tanasescu, Radu; Cottam, William J; Condon, Laura; Tench, Christopher R; Auer, Dorothee P
2016-09-01
Maladaptive mechanisms of pain processing in chronic pain conditions (CP) are poorly understood. We used coordinate-based meta-analysis of 266 fMRI pain studies to study functional brain reorganisation in CP and experimental models of hyperalgesia. The pattern of nociceptive brain activation was similar in CP, hyperalgesia, and normalgesia in controls. However, elevated likelihood of activation was detected in the left putamen, left frontal gyrus, and right insula in CP when comparing stimuli at the most painful site vs. other sites. Meta-analysis of contrast maps showed no difference between CP, controls, and mood conditions. In contrast, experimental hyperalgesia induced stronger activation in the bilateral insula, left cingulate, and right frontal gyrus. Activation likelihood maps support a shared neural pain signature of cutaneous nociception in CP and controls. We also present a double dissociation between neural correlates of transient and persistent pain sensitisation, with generally increased activation intensity but unchanged pattern in experimental hyperalgesia and, by contrast, focally increased activation likelihood, but unchanged intensity, in CP when stimulated at the most painful body part. Copyright © 2016. Published by Elsevier Ltd.
How to get statistically significant effects in any ERP experiment (and why you shouldn't).
Luck, Steven J; Gaspelin, Nicholas
2017-01-01
ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. © 2016 Society for Psychophysiological Research.
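The ">50% in many experiments" figure in the abstract above follows from familywise-error arithmetic: under the illustrative simplification of k independent tests at alpha = .05, the chance of at least one false positive is 1 - (1 - alpha)^k, which first exceeds 50% at k = 14. A minimal sketch:

```python
# Familywise error: probability of at least one significant-but-bogus effect
# across k independent tests at alpha = .05.  Independence is an illustrative
# simplification; real ERP measures are correlated.
def familywise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

# Find the smallest number of tests at which the bogus-effect chance passes 50%.
k = 1
while familywise_error(k) <= 0.5:
    k += 1
print(k, round(familywise_error(k), 3))   # crosses 50% at k = 14
```

Even a modest analysis with a few time windows, electrode sites, and factor combinations reaches this many implicit comparisons quickly, which is the paper's point.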
How to Get Statistically Significant Effects in Any ERP Experiment (and Why You Shouldn’t)
Luck, Steven J.; Gaspelin, Nicholas
2016-01-01
Event-related potential (ERP) experiments generate massive data sets, often containing thousands of values for each participant, even after averaging. The richness of these data sets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant-but-bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand average data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multi-factor statistical analyses. Re-analyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant-but-bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. PMID:28000253
Amirian, E Susan; Zhou, Renke; Wrensch, Margaret R; Olson, Sara H; Scheurer, Michael E; Il'yasova, Dora; Lachance, Daniel; Armstrong, Georgina N; McCoy, Lucie S; Lau, Ching C; Claus, Elizabeth B; Barnholtz-Sloan, Jill S; Schildkraut, Joellen; Ali-Osman, Francis; Sadetzki, Siegal; Johansen, Christoffer; Houlston, Richard S; Jenkins, Robert B; Bernstein, Jonine L; Merrell, Ryan T; Davis, Faith G; Lai, Rose; Shete, Sanjay; Amos, Christopher I; Melin, Beatrice S; Bondy, Melissa L
2016-02-01
Several previous studies have found inverse associations between glioma susceptibility and a history of allergies or other atopic conditions. Some evidence indicates that respiratory allergies are likely to be particularly relevant with regard to glioma risk. Using data from the Glioma International Case-Control Study (GICC), we examined the effects of respiratory allergies and other atopic conditions on glioma risk. The GICC contains detailed information on history of atopic conditions for 4,533 cases and 4,171 controls, recruited from 14 study sites across five countries. Using two-stage random-effects restricted maximum likelihood modeling to calculate meta-analysis ORs, we examined the associations between glioma and allergy status, respiratory allergy status, asthma, and eczema. Having a history of respiratory allergies was associated with an approximately 30% lower glioma risk, compared with not having respiratory allergies (mOR, 0.72; 95% confidence interval, 0.58-0.90). This association was similar when restricting to high-grade glioma cases. Asthma and eczema were also significantly protective against glioma. A substantial amount of data on the inverse association between atopic conditions and glioma has accumulated, and findings from the GICC study further strengthen the existing evidence that the relationship between atopy and glioma is unlikely to be coincidental. As the literature approaches a consensus on the impact of allergies in glioma risk, future research can begin to shift focus to what the underlying biologic mechanism behind this association may be, which could, in turn, yield new opportunities for immunotherapy or cancer prevention. ©2016 American Association for Cancer Research.
Amirian, E. Susan; Zhou, Renke; Wrensch, Margaret R.; Olson, Sara H.; Scheurer, Michael E.; Il’yasova, Dora; Lachance, Daniel; Armstrong, Georgina N.; McCoy, Lucie S.; Lau, Ching C.; Claus, Elizabeth B.; Barnholtz-Sloan, Jill S.; Schildkraut, Joellen; Ali-Osman, Francis; Sadetzki, Siegal; Johansen, Christoffer; Houlston, Richard S.; Jenkins, Robert B.; Bernstein, Jonine L.; Merrell, Ryan T.; Davis, Faith G.; Lai, Rose; Shete, Sanjay; Amos, Christopher I.; Melin, Beatrice S.; Bondy, Melissa L.
2015-01-01
Background Several previous studies have found inverse associations between glioma susceptibility and a history of allergies or other atopic conditions. Some evidence indicates that respiratory allergies are likely to be particularly relevant with regard to glioma risk. Using data from the Glioma International Case-Control Study (GICC), we examined the effects of respiratory allergies and other atopic conditions on glioma risk. Methods The GICC contains detailed information on history of atopic conditions for 4533 cases and 4171 controls, recruited from 14 study sites across five countries. Using two-stage random-effects restricted maximum likelihood modeling to calculate meta-analysis odds ratios, we examined the associations between glioma and allergy status, respiratory allergy status, asthma, and eczema. Results Having a history of respiratory allergies was associated with an approximately 30% lower glioma risk, compared to not having respiratory allergies (mOR: 0.72, 95% CI: 0.58–0.90). This association was similar when restricting to high-grade glioma cases. Asthma and eczema were also significantly protective against glioma. Conclusions A substantial amount of data on the inverse association between atopic conditions and glioma has accumulated, and findings from the GICC study further strengthen the existing evidence that the relationship between atopy and glioma is unlikely to be coincidental. Impact As the literature approaches a consensus on the impact of allergies in glioma risk, future research can begin to shift focus to what the underlying biological mechanism behind this association may be, which could, in turn, yield new opportunities for immunotherapy or cancer prevention. PMID:26908595
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
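The equivalence the abstract reports (conditional estimates identical to unconditional Poisson regression with explicit stratum terms) can be checked numerically. The sketch below uses an invented two-stratum cohort, profiles out the stratum intercepts on the unconditional side, and conditions on stratum totals (a binomial likelihood in the log rate ratio) on the conditional side; both maximizers coincide:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical cohort: two background strata, unexposed (x=0) / exposed (x=1),
# with event counts y and person-years t.  Rate model: lambda = exp(a_s + b*x).
y = np.array([[10, 18], [40, 66]], float)   # rows: strata; cols: x=0, x=1
t = np.array([[1000, 900], [4000, 3300]], float)

def unconditional(b):
    # Profile out the stratum intercepts a_s in closed form, then return the
    # Poisson log-likelihood (up to additive constants).
    mu = t * np.exp(b * np.array([0.0, 1.0]))
    a = np.log(y.sum(1) / mu.sum(1))        # a_s-hat given b
    lam = mu * np.exp(a)[:, None]
    return (y * np.log(lam) - lam).sum()

def conditional(b):
    # Condition on stratum totals: exposed counts are binomial with odds
    # (t1/t0)*exp(b); the stratum intercepts drop out as nuisance parameters.
    p = t[:, 1] * np.exp(b) / (t[:, 0] + t[:, 1] * np.exp(b))
    return (y[:, 1] * np.log(p) + y[:, 0] * np.log(1 - p)).sum()

b_unc = minimize_scalar(lambda b: -unconditional(b), bounds=(-3, 3), method="bounded").x
b_con = minimize_scalar(lambda b: -conditional(b), bounds=(-3, 3), method="bounded").x
print(round(b_unc, 4), round(b_con, 4))    # identical log rate-ratio estimates
```

The data were chosen so the rate ratio is exactly 2 in both strata, so both estimates land on log 2.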
ERIC Educational Resources Information Center
Hopkins, Liza; Green, Julie; Henry, John; Edwards, Brian; Wong, Shanti
2014-01-01
It is well known that having a chronic or traumatic health condition can seriously impact on a young person's educational trajectory, as well as placing the young person at higher risk of experiencing mental health conditions such as depression and anxiety, and increasing their likelihood of participation in risky behaviours. This paper…
NASA Astrophysics Data System (ADS)
Price, O. F.; Bradstock, R. A.
2013-12-01
In order to quantify the risks from fire at the wildland urban interface (WUI), it is important to understand where fires occur and their likelihood of spreading to the WUI. For each of the 999 fires in the Sydney region we calculated the distance between the ignition and the WUI, the fire's weather and wind direction and whether it spread to the WUI. The likelihood of burning the WUI was analysed using binomial regression. Weather and distance interacted such that under mild weather conditions, the model predicted only a 5% chance that a fire starting >2.5 km from the interface would reach it, whereas when the conditions are extreme the predicted chance remained above 30% even at distances >10 km. Fires were more likely to spread to the WUI if the wind was from the west and in the western side of the region. We examined whether the management responses to wildfires are commensurate with risk by comparing the distribution of distance to the WUI of wildfires with roads and prescribed fires. Prescribed fires and roads were concentrated nearer to the WUI than wildfires as a whole, but further away than wildfires that burnt the WUI under extreme weather conditions (high risk fires). Overall, 79% of these high risk fires started within 2 km of the WUI, so there is some argument for concentrating more management effort near the WUI. By substituting climate change scenario weather into the statistical model, we predicted a small increase in the risk of fires spreading to the WUI, but the increase will be greater under extreme weather. This approach has a variety of uses, including mapping fire risk and improving the ability to match fire management responses to the threat from each fire. They also provide a baseline from which a cost-benefit analysis of complementary fire management strategies can be conducted.
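The binomial regression with a distance-by-weather interaction described above can be sketched as follows. The fires here are simulated from an assumed model with invented coefficients, not the Sydney records; the fit is a plain Newton-Raphson logistic MLE:

```python
import numpy as np

# Simulated fires: ignition distance to the WUI (km) and a 0-1 weather
# severity index.  The assumed true model has a distance-weather interaction:
# under extreme weather, distance matters much less.
rng = np.random.default_rng(7)
n = 5000
dist = rng.uniform(0, 12, n)
weather = rng.uniform(0, 1, n)               # 0 = mild, 1 = extreme
logit = 1.0 - 1.2 * dist + 0.9 * dist * weather
reached = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

Xd = np.column_stack([np.ones(n), dist, weather, dist * weather])
beta = np.zeros(4)
for _ in range(25):                          # Newton-Raphson for the MLE
    mu = 1 / (1 + np.exp(-(Xd @ beta)))
    W = mu * (1 - mu)
    beta += np.linalg.solve(Xd.T @ (Xd * W[:, None]), Xd.T @ (reached - mu))

def p_reach(d, w):
    return 1 / (1 + np.exp(-(beta @ np.array([1.0, d, w, d * w]))))

# Distance suppresses spread under mild weather, far less so under extreme.
print(round(p_reach(5.0, 0.1), 3), round(p_reach(5.0, 1.0), 3))
```

The fitted interaction term reproduces the qualitative pattern in the abstract: a fire 5 km out has a small chance of reaching the WUI in mild weather but a substantial one in extreme weather.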
NASA Astrophysics Data System (ADS)
Price, O. F.; Bradstock, R. A.
2013-09-01
In order to quantify the risks from fire at the Wildland Urban Interface (WUI), it is important to understand where fires occur and their likelihood of spreading to the WUI. For each of 999 fires in the Sydney region we calculated the distance between the ignition and the WUI, the fire weather and wind direction and whether it spread to the WUI. The likelihood of burning the WUI was analysed using binomial regression. Weather and distance interacted such that under mild weather conditions, the model predicted only a 5% chance that a fire starting more than 2.5 km from the interface would reach it, whereas when the conditions are extreme the predicted chance remained above 30% even at distances further than 10 km. Fires were more likely to spread to the WUI if the wind was from the west and in the western side of the region. We examined whether the management responses to wildfires are commensurate with risk by comparing the distribution of distance to the WUI of wildfires with roads and prescribed fires. Prescribed fires and roads were concentrated nearer to the WUI than wildfires as a whole, but further away than wildfires that burnt the WUI under extreme weather conditions (high risk fires). 79% of these high risk fires started within 2 km of the WUI, so there is some argument for concentrating more management effort near the WUI. By substituting climate change scenario weather into the statistical model, we predicted a small increase in the risk of fires spreading to the WUI, but the increase will be greater under extreme weather. This approach has a variety of uses, including mapping fire risk and improving the ability to match fire management responses to the threat from each fire. They also provide a baseline from which a cost-benefit analysis of complementary fire management strategies can be conducted.
Silverman, Merav H.; Jedd, Kelly; Luciana, Monica
2015-01-01
Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587
Lee, E Henry; Wickham, Charlotte; Beedlow, Peter A; Waschmann, Ronald S; Tingey, David T
2017-10-01
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for climate and forest disturbances (i.e., pests, diseases, fire). The statistical method is illustrated with a tree-ring width time series for a mature closed-canopy Douglas-fir stand on the west slopes of the Cascade Mountains of Oregon, USA that is impacted by Swiss needle cast disease caused by the foliar fungus, Phaecryptopus gaeumannii (Rhode) Petrak. The likelihood-based TSIA method is proposed for the field of dendrochronology to understand the interaction of temperature, water, and forest disturbances that are important in forest ecology and climate change studies.
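A drastically simplified version of the intervention idea can be sketched with a step ("disturbance onset") term in a linear growth-trend regression fitted by least squares. This is not the structural time-series model with climate components that the paper estimates by maximum likelihood; the years, effect size, and noise level below are invented:

```python
import numpy as np

# Simulated ring-width series: linear growth trend plus a step decrease when
# a hypothetical disturbance (e.g. disease impact) begins in 1990.
rng = np.random.default_rng(3)
years = np.arange(1950, 2010)
step = (years >= 1990).astype(float)
width = 2.0 + 0.01 * (years - 1950) - 0.6 * step + rng.normal(0, 0.1, years.size)

# Regress ring width on intercept, trend, and the intervention step.
X = np.column_stack([np.ones(years.size), years - 1950, step])
coef, *_ = np.linalg.lstsq(X, width, rcond=None)
print(round(coef[2], 2))   # recovered disturbance effect, near the true -0.6
```

The step coefficient isolates the disturbance signal from the growth trend, which is the core of what a TSIA does in a far richer model.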
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
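A point-by-point Gaussian maximum-likelihood classifier of the kind described above is short to sketch: estimate a mean vector and covariance matrix per class from training pixels, then assign each pixel to the class with the highest Gaussian log-likelihood. The two-band "wheat"/"soil" data below are invented for illustration:

```python
import numpy as np

# Minimal Gaussian maximum-likelihood classifier (illustrative sketch; the
# class means and band values are invented, not Image-100 data).
def train(pixels, labels):
    stats = {}
    for c in set(labels):
        X = pixels[labels == c]
        stats[c] = (X.mean(0), np.cov(X, rowvar=False))
    return stats

def classify(pixels, stats):
    def loglik(X, mean, cov):
        d = X - mean
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d))
    classes = sorted(stats)
    scores = np.stack([loglik(pixels, *stats[c]) for c in classes])
    return np.array(classes)[scores.argmax(0)]

rng = np.random.default_rng(0)
wheat = rng.normal([40, 60], 3, size=(200, 2))   # two spectral bands
soil = rng.normal([55, 45], 3, size=(200, 2))
X = np.vstack([wheat, soil])
y = np.repeat([0, 1], 200)
acc = (classify(X, train(X, y)) == y).mean()
print(acc)   # near-perfect for these well-separated classes
```

The per-class covariance is what distinguishes this from K-means, which uses only Euclidean distance to cluster centroids and ignores class shape.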
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
A comparative assessment of statistical methods for extreme weather analysis
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, results question the general assumption that the threshold excess approach (employing partial duration series, PDS) is superior to the block maxima approach (employing annual maxima series, AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas the opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may far outweigh the possible gain of information from including additional extreme events. This effect was visible neither from the square-root criterion nor from the standardly used graphical diagnostics (mean residual life plot), but only from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing the AMS and PDS approaches simultaneously in order to select the best suited approach. This will make the analyses more robust, in cases where threshold selection and dependency introduce biases to the PDS approach, but also in cases where the AMS contains non-extreme events that may introduce similar biases.
For assessing the performance of extreme events we recommend conditional performance measures that focus on rare events only in addition to standardly used unconditional indicators. The findings of this study are of relevance for a broad range of environmental variables, including meteorological and hydrological quantities.
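The L-moment estimation favored above is straightforward for a Gumbel (block-maxima) model: the scale is the second L-moment divided by ln 2 and the location follows from the mean via Euler's constant. The sketch below fits simulated annual-maxima-style data; it is an illustration with invented parameters, not the Austrian station data (and the paper's models are more general than Gumbel):

```python
import numpy as np

EULER = 0.5772156649015329

def gumbel_lmom(x):
    # Sample L-moments via probability-weighted moments (Hosking's estimators).
    x = np.sort(x)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    l2 = 2 * b1 - b0                       # second L-moment
    scale = l2 / np.log(2)                 # Gumbel: lambda_2 = scale * ln 2
    loc = b0 - EULER * scale               # Gumbel: lambda_1 = loc + gamma*scale
    return loc, scale

# Simulated annual maxima from a known Gumbel(loc=30, scale=8) via inverse CDF.
rng = np.random.default_rng(42)
u = rng.uniform(size=5000)
x = 30 - 8 * np.log(-np.log(u))
loc, scale = gumbel_lmom(x)
print(round(loc, 1), round(scale, 1))      # estimates close to (30, 8)
```

L-moments are linear in the ordered data, which is the source of the robustness to outliers noted in the abstract.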
A risk-based approach to flood management decisions in a nonstationary world
NASA Astrophysics Data System (ADS)
Rosner, Ana; Vogel, Richard M.; Kirshen, Paul H.
2014-03-01
Traditional approaches to flood management in a nonstationary world begin with a null hypothesis test of "no trend" and its likelihood, with little or no attention given to the likelihood that we might ignore a trend if it really existed. Concluding a trend exists when it does not, or rejecting a trend when it exists are known as type I and type II errors, respectively. Decision-makers are poorly served by statistical and/or decision methods that do not carefully consider both over- and under-preparation errors, respectively. Similarly, little attention is given to how to integrate uncertainty in our ability to detect trends into a flood management decision context. We show how trend hypothesis test results can be combined with an adaptation's infrastructure costs and damages avoided to provide a rational decision approach in a nonstationary world. The criterion of expected regret is shown to be a useful metric that integrates the statistical, economic, and hydrological aspects of the flood management problem in a nonstationary world.
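The expected-regret criterion in the abstract reduces to simple arithmetic once costs are assigned to over- and under-preparation. The numbers below are invented purely to illustrate the mechanics:

```python
# Illustrative expected-regret calculation with invented costs: compare
# "adapt" (build protection) vs "don't adapt" when a flood trend may exist.
p_trend = 0.3                      # hypothetical probability the trend is real
cost_adapt = 10.0                  # infrastructure cost (arbitrary units)
damages_if_trend = 50.0            # damages incurred only if trend is real and unaddressed

# Regret = loss relative to the best action under each state of nature.
# No trend: adapting wastes the infrastructure cost (type I / over-preparation).
# Trend: skipping costs the damages minus the cheaper adaptation (type II).
regret_adapt = (1 - p_trend) * cost_adapt
regret_skip = p_trend * (damages_if_trend - cost_adapt)
best = "adapt" if regret_adapt < regret_skip else "don't adapt"
print(regret_adapt, regret_skip, best)   # 7.0 12.0 adapt
```

This makes the abstract's point concrete: the decision weighs both error types at once, rather than resting on a null-hypothesis trend test alone.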
Liu, Xiang; Peng, Yingwei; Tu, Dongsheng; Liang, Hua
2012-10-30
Survival data with a sizable cure fraction are commonly encountered in cancer research. The semiparametric proportional hazards cure model has been recently used to analyze such data. As seen in the analysis of data from a breast cancer study, a variable selection approach is needed to identify important factors in predicting the cure status and risk of breast cancer recurrence. However, no specific variable selection method for the cure model is available. In this paper, we present a variable selection approach with penalized likelihood for the cure model. The estimation can be implemented easily by combining the computational methods for penalized logistic regression and the penalized Cox proportional hazards models with the expectation-maximization algorithm. We illustrate the proposed approach on data from a breast cancer study. We conducted Monte Carlo simulations to evaluate the performance of the proposed method. We used and compared different penalty functions in the simulation studies. Copyright © 2012 John Wiley & Sons, Ltd.
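One piece of the proposal above, penalized logistic regression for the incidence (cure-status) part, can be sketched with an L1 penalty fitted by proximal gradient descent (ISTA). Everything here is invented for illustration; the paper's method also penalizes a Cox component and alternates the two fits inside an EM algorithm:

```python
import numpy as np

# L1-penalized logistic regression by proximal gradient descent: the penalty
# zeroes out uninformative coefficients, performing variable selection.
rng = np.random.default_rng(11)
n, p = 400, 6
X = rng.normal(size=(n, p))
true_b = np.array([2.0, -2.0, 0.0, 0.0, 0.0, 0.0])   # two informative covariates
y = rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_b))

lam, step = 0.1, 0.5                                  # penalty weight, step size
b = np.zeros(p)
for _ in range(2000):
    mu = 1 / (1 + np.exp(-X @ b))
    b = b - step * (X.T @ (mu - y) / n)               # gradient step
    b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)   # soft threshold
print(np.round(b, 2))   # noise coefficients shrink to (or near) zero
```

The informative coefficients survive (shrunk toward zero by the lasso bias) while the noise coefficients are eliminated, which is the selection behavior the variable selection approach relies on.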
Markov Chain Monte Carlo: an introduction for epidemiologists
Hamra, Ghassan; MacLehose, Richard; Richardson, David
2013-01-01
Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit. PMID:23569196
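The MCMC-matches-MLE point in the abstract above can be shown in a few lines with a toy example: a Metropolis sampler for a binomial proportion under a flat prior lands on the same answer as the MLE. The cohort counts are invented:

```python
import numpy as np

# Hypothetical cohort: 30 cases out of 100.  MLE of the proportion is 0.30.
events, n = 30, 100
mle = events / n

def log_post(p):
    # Binomial log-likelihood with a flat prior on (0, 1).
    if not 0 < p < 1:
        return -np.inf
    return events * np.log(p) + (n - events) * np.log(1 - p)

# Random-walk Metropolis sampler.
rng = np.random.default_rng(1)
p, samples = 0.5, []
for _ in range(20000):
    prop = p + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop
    samples.append(p)

posterior_mean = float(np.mean(samples[5000:]))   # discard burn-in
print(mle, round(posterior_mean, 2))
```

With a flat prior, the exact posterior is Beta(31, 71), whose mean (about 0.304) is essentially the MLE, so the sampled mean and the MLE agree to sampling error.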
NASA Astrophysics Data System (ADS)
Brouwer, Derk H.; van Duuren-Stuurman, Birgit; Berges, Markus; Bard, Delphine; Jankowska, Elzbieta; Moehlmann, Carsten; Pelzer, Johannes; Mark, Dave
2013-11-01
Manufactured nano-objects, agglomerates, and aggregates (NOAA) may have adverse effects on human health, but little is known about occupational risks since actual estimates of exposure are lacking. In a large-scale workplace air-monitoring campaign, 19 enterprises were visited and 120 potential exposure scenarios were measured. A multi-metric exposure assessment approach was followed and a decision logic was developed to afford analysis of all results in concert. The overall evaluation was classified by categories of likelihood of exposure. At the task level, about 53 % of scenarios showed increased particle number or surface area concentration compared to the "background" level, whereas 72 % of the TEM samples revealed an indication that NOAA were present in the workplace. For 54 out of the 120 task-based exposure scenarios, an overall evaluation could be made based on all parameters of the decision logic. For only 1 exposure scenario (approximately 2 %), the highest level of potential likelihood was assigned, whereas in total 56 % of the exposure scenarios received the lowest level of likelihood in the overall evaluation. However, for the remaining 42 %, exposure to NOAA could not be excluded.
Dragović, Ivana; Turajlić, Nina; Pilčević, Dejan; Petrović, Bratislav; Radojević, Dragan
2015-01-01
Fuzzy inference systems (FIS) enable automated assessment and reasoning in a logically consistent manner akin to the way in which humans reason. However, since no conventional fuzzy set theory is in the Boolean frame, it is proposed that Boolean consistent fuzzy logic should be used in the evaluation of rules. The main distinction of this approach is that it requires the execution of a set of structural transformations before the actual values can be introduced, which can, in certain cases, lead to different results. While a Boolean consistent FIS could be used for establishing the diagnostic criteria for any given disease, in this paper it is applied for determining the likelihood of peritonitis, as the leading complication of peritoneal dialysis (PD). Given that patients could be located far away from healthcare institutions (as peritoneal dialysis is a form of home dialysis) the proposed Boolean consistent FIS would enable patients to easily estimate the likelihood of them having peritonitis (where a high likelihood would suggest that prompt treatment is indicated), when medical experts are not close at hand. PMID:27069500
Lod score curves for phase-unknown matings.
Hulbert-Shearon, T; Boehnke, M; Lange, K
1996-01-01
For a phase-unknown nuclear family, we show that the likelihood and lod score are unimodal, and we describe conditions under which the maximum occurs at recombination fraction theta = 0, theta = 1/2, and 0 < theta < 1/2. These simply stated necessary and sufficient conditions seem to have escaped the notice of previous statistical geneticists.
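The lod score curve for a phase-unknown mating averages the likelihood over both possible phases, L(theta) = (theta^a (1-theta)^b + theta^b (1-theta)^a) / 2, where (a, b) are the recombinant/non-recombinant counts under one assumed phase, and the lod is log10 of the ratio to L(1/2). A sketch with hypothetical counts (the abstract's conditions determine where the maximum falls):

```python
import numpy as np

# Hypothetical phase-unknown counts: 2 recombinants, 8 non-recombinants
# under one assumed phase.
a, b = 2, 8

def lod(theta):
    # Likelihood averaged over the two phases, scaled by L(theta = 1/2).
    L = 0.5 * (theta**a * (1 - theta)**b + theta**b * (1 - theta)**a)
    return np.log10(L / 0.5 ** (a + b))

thetas = np.linspace(0.001, 0.5, 500)
scores = lod(thetas)
theta_hat = float(thetas[scores.argmax()])
print(round(theta_hat, 2), round(float(scores.max()), 2))
```

For these counts the curve rises to a single interior peak near theta = 0.2 and then falls toward zero at theta = 1/2, consistent with the unimodality result described in the abstract.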
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modelling approaches - implicit interpolation and kinematic modelling - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. Kinematic structural modelling, on the other hand, intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step, and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework.
In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
Campbell, David J T; King-Shier, Kathryn; Hemmelgarn, Brenda R; Sanmartin, Claudia; Ronksley, Paul E; Weaver, Robert G; Tonelli, Marcello; Hennessy, Deirdre; Manns, Braden J
2014-05-01
People with chronic conditions who do not achieve therapeutic targets have a higher risk of adverse health outcomes. Failure to meet these targets may be due to a variety of barriers. This article examines self-reported financial barriers to health care among people with cardiovascular-related chronic conditions. A population-based survey was administered to western Canadians with cardiovascular-related chronic conditions (n = 1,849). Associations between self-reported financial barriers and statin use, the likelihood of stopping use of prescribed medications, and emergency department visits or hospitalizations were assessed. More than 10% of respondents reported general financial barriers (12%) and lack of drug insurance (14%); 4% reported financial barriers to accessing medications. Emergency department visits or hospitalizations were 70% more likely among those reporting a general financial barrier. Those reporting a financial barrier to medications were 50% less likely to take statins and three times more likely to stop using prescribed medications. Individuals without drug insurance were nearly 30% less likely to take statins. In this population, self-reported financial barriers were associated with lower medication use and an increased likelihood of emergency department visits or hospitalization.
Mediators of the Development and Prevention of Violent Behavior
Morgan-Lopez, Antonio A.; Howard, Terry-Lee; Browne, Dorothy C.; Flay, Brian R.
2008-01-01
The purpose of this investigation was to determine if the Aban Aya Youth Project, a culturally grounded intervention, produced differences in changes over time in core intervening variables (i.e., communal value orientation, empathy, violence avoidance efficacy beliefs) and whether these variables mediated intervention effects on the development of youth violent behavior. Fifth grade cohorts at 12 schools were randomly assigned to one of two intervention conditions or an attention placebo control condition and followed longitudinally through eighth grade. A total of 668 students (49% male) participated in the study. Mediation analyses suggested that both program conditions (as compared to the control condition) led to steeper increases over time in empathy which, in turn, were related to reductions in the likelihood of violent behavior over time. No other significant program effects were detected, although changes over time in violence avoidance efficacy were associated with reduced likelihood of violent behavior. Findings are discussed in terms of theory development, program development, and points of refinement of the Aban Aya Youth Project and implications for future research. PMID:17558552
Walker, Elizabeth Reisinger; Druss, Benjamin G
2017-07-01
The health of individuals in the U.S.A. is increasingly being defined by complexity and multimorbidity. We examined the patterns of co-occurrence of mental illness, substance abuse/dependence, and chronic medical conditions and the cumulative burden of these conditions and living in poverty on self-rated health. We conducted a secondary data analysis using publicly available data from the National Survey on Drug Use and Health (NSDUH), which is an annual nationally-representative survey. Pooled data from the 2010-2012 NSDUH surveys included 115,921 adults 18 years of age or older. The majority of adults (52.2%) had at least one type of condition (mental illness, substance abuse/dependence, or chronic medical conditions), with substantial overlap across the conditions; 1.2%, or 2.2 million people, reported all three conditions. Generally, as the number of conditions increased, the odds of reporting worse health also increased. The likelihood of reporting fair/poor health was greatest for people who reported AMI, chronic medical conditions, and poverty (AOR = 9.41; 95% CI: 7.53-11.76), followed by all three conditions and poverty (AOR = 9.32; 95% CI: 6.67-13.02). For each combination of conditions, the addition of poverty increased the likelihood of reporting fair/poor health. Traditional conceptualizations of multimorbidity should be expanded to take into account the complexities of co-occurrence between mental illnesses, chronic medical conditions, and socioeconomic factors.
Bhadra, Dhiman; Daniels, Michael J.; Kim, Sungduk; Ghosh, Malay; Mukherjee, Bhramar
2014-01-01
In a typical case-control study, exposure information is collected at a single time-point for the cases and controls. However, case-control studies are often embedded in existing cohort studies containing a wealth of longitudinal exposure history on the participants. Recent medical studies have indicated that incorporating past exposure history, or a constructed summary measure of cumulative exposure derived from the past exposure history, when available, may lead to more precise and clinically meaningful estimates of the disease risk. In this paper, we propose a flexible Bayesian semiparametric approach to model the longitudinal exposure profiles of the cases and controls and then use measures of cumulative exposure based on a weighted integral of this trajectory in the final disease risk model. The estimation is done via a joint likelihood. In the construction of the cumulative exposure summary, we introduce an influence function, a smooth function of time to characterize the association pattern of the exposure profile on the disease status with different time windows potentially having differential influence/weights. This enables us to analyze how the present disease status of a subject is influenced by his/her past exposure history conditional on the current ones. The joint likelihood formulation allows us to properly account for uncertainties associated with both stages of the estimation process in an integrated manner. Analysis is carried out in a hierarchical Bayesian framework using Reversible jump Markov chain Monte Carlo (RJMCMC) algorithms. The proposed methodology is motivated by, and applied to a case-control study of prostate cancer where longitudinal biomarker information is available for the cases and controls. PMID:22313248
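The weighted-integral cumulative-exposure summary described in the abstract above can be sketched numerically. This is a minimal illustration only: the trapezoidal quadrature, the function names, and the choice of influence function are assumptions for exposition, not the authors' Bayesian joint-likelihood implementation.

```python
def cumulative_exposure(times, exposures, influence):
    """Weighted-integral summary of a longitudinal exposure profile:
    cumulative exposure = integral of w(t) * x(t) dt, approximated here
    by the trapezoidal rule over the observed time grid.

    `influence` is the smooth weight function w(t) that gives different
    time windows differential influence; its form is a placeholder.
    """
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        f0 = influence(times[i - 1]) * exposures[i - 1]
        f1 = influence(times[i]) * exposures[i]
        total += 0.5 * (f0 + f1) * dt  # trapezoid on [t_{i-1}, t_i]
    return total
```

With a constant influence function the summary reduces to the plain area under the exposure profile; a time-weighted influence function up- or down-weights recent history.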
Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model
NASA Astrophysics Data System (ADS)
Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.
2009-04-01
The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the performance of GAMM, applied as a break-detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
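The best-state-sequence step of the left-to-right HMM described above can be sketched as follows. The sketch assumes Gaussian emissions with known means and a common standard deviation, and equal stay/advance transition probabilities (which cancel in the comparison); the Baum-Welch/GA parameter estimation and the cross-validation over the number of states are omitted.

```python
import math

def viterbi_segment(obs, means, sigma=1.0):
    """Viterbi decoding for a left-to-right Gaussian HMM.

    States may only persist or advance by one (left-to-right topology),
    so the decoded state sequence is a segmentation of the series.
    Stay/advance transition probabilities are taken as equal, so they
    cancel in the max comparison; means and sigma are assumed known.
    """
    K = len(means)

    def log_emit(x, k):
        return (-0.5 * ((x - means[k]) / sigma) ** 2
                - math.log(sigma * math.sqrt(2 * math.pi)))

    # delta[k] = best log-probability of reaching state k at current time
    delta = [log_emit(obs[0], 0)] + [float("-inf")] * (K - 1)
    back = []
    for x in obs[1:]:
        prev = delta[:]
        ptr = [0] * K
        for k in range(K):
            stay = prev[k]
            move = prev[k - 1] if k > 0 else float("-inf")
            if move > stay:
                delta[k] = move + log_emit(x, k)
                ptr[k] = k - 1
            else:
                delta[k] = stay + log_emit(x, k)
                ptr[k] = k
        back.append(ptr)
    # backtrack from the final state: a segmentation must end in state K-1
    k = K - 1
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return path[::-1]
```

On a series with a single mean shift, the decoded path marks the change point where the state advances.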
Khodayari-Rostamabad, Ahmad; Reilly, James P; Hasey, Gary M; de Bruin, Hubert; Maccrimmon, Duncan J
2013-10-01
The problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD). A relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a "leave-n-out" randomized permutation cross-validation procedure. A list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%]. These results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG. The proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Effects of Habitual Anger on Employees’ Behavior during Organizational Change
Bönigk, Mareike; Steffgen, Georges
2013-01-01
Organizational change is a particularly emotional event for those being confronted with it. Anger is a frequently experienced emotion under these conditions. This study analyses the influence of employees’ habitual anger reactions on their reported behavior during organizational change. It was explored whether anger reactions conducive to recovering or increasing individual well-being will enhance the likelihood of functional change behavior. Dysfunctional regulation strategies in terms of individual well-being are expected to decrease the likelihood of functional change behavior—mediated by the commitment to change. Four hundred and twelve employees of different organizations in Luxembourg undergoing organizational change participated in the study. Findings indicate that the anger regulation strategies venting and humor increase the likelihood of deviant resistance to change. Downplaying the incident’s negative impact and feedback increase the likelihood of active support for change. The mediating effect of commitment to change was found for humor and submission. The empirical findings suggest that a differentiated conceptualization of resistance to change is required. Specific implications for practical change management and for future research are discussed. PMID:24287849
Probabilistic prediction models for aggregate quarry siting
Robinson, G.R.; Larkins, P.M.
2007-01-01
Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
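The weights-of-evidence calculation underlying such prospectivity models can be illustrated for a single binary evidence layer. The contingency counts and formulas below are the textbook WofE definitions; the variable names and toy raster layout are assumptions, not the authors' GIS implementation.

```python
import math

def weights_of_evidence(evidence, outcome):
    """Weights-of-evidence for one binary predictor layer.

    evidence[i] = 1 where the pattern (e.g. favourable geology) is
    present in cell i; outcome[i] = 1 where a quarry exists.
    Returns (W_plus, W_minus, contrast), where
      W_plus  = ln[ P(B|D)  / P(B|~D)  ]
      W_minus = ln[ P(~B|D) / P(~B|~D) ]
    and the contrast C = W_plus - W_minus measures spatial association.
    """
    n11 = sum(1 for e, d in zip(evidence, outcome) if e == 1 and d == 1)
    n10 = sum(1 for e, d in zip(evidence, outcome) if e == 1 and d == 0)
    n01 = sum(1 for e, d in zip(evidence, outcome) if e == 0 and d == 1)
    n00 = sum(1 for e, d in zip(evidence, outcome) if e == 0 and d == 0)
    w_plus = math.log((n11 / (n11 + n01)) / (n10 / (n10 + n00)))
    w_minus = math.log((n01 / (n11 + n01)) / (n00 / (n10 + n00)))
    return w_plus, w_minus, w_plus - w_minus
```

A positive contrast indicates the layer is predictive of quarry locations; summing weights over conditionally independent layers gives the posterior log-odds used in prospectivity mapping.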
Toomey, Traci L; Lenk, Kathleen M; Erickson, Darin J; Horvath, Keith J; Ecklund, Alexandra M; Nederhoff, Dawn M; Hunt, Shanda L; Nelson, Toben F
2017-03-01
Overservice of alcohol (i.e., selling alcohol to intoxicated patrons) continues to be a problem at bars and restaurants, contributing to serious consequences such as traffic crashes and violence. We developed a training program for managers of bars and restaurants, eARM™, focusing on preventing overservice of alcohol. The program included online and face-to-face components to help create and implement establishment-specific policies. We conducted a large, randomized controlled trial in bars and restaurants in one metropolitan area in the midwestern United States to evaluate effects of the eARM program on the likelihood of selling alcohol to obviously intoxicated patrons. Our outcome measure was pseudo-intoxicated purchase attempts (buyers acted out signs of intoxication while attempting to purchase alcohol) conducted at baseline and then at 1 month, 3 months, and 6 months after training. We conducted intention-to-treat analyses on changes in purchase attempts in intervention (n = 171) versus control (n = 163) bars/restaurants using a Time × Condition interaction, as well as planned contrasts between baseline and follow-up purchase attempts. The overall Time × Condition interaction was not statistically significant. At 1 month after training, we observed a 6% relative reduction in likelihood of selling to obviously intoxicated patrons in intervention versus control bars/restaurants. At 3 months after training, this difference widened to a 12% relative reduction; however, at 6 months this difference dissipated. None of these specific contrasts was statistically significant at the .05 level. The observed effects of this enhanced training program are consistent with prior research showing modest initial effects followed by a decay within 6 months of the core training. Unless better training methods are identified, training programs are inadequate as the sole approach to reduce overservice of alcohol.
Efficient Exploration of the Space of Reconciled Gene Trees
Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-01-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions, respectively, in the mean numbers of duplications, transfers, and losses per gene family.
The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario.
Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. 
Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
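The core of the maximum-likelihood target described in this abstract (each experimental conformation contributes the log of a Gaussian-weighted sum of Boltzmann probabilities over the simulated decoys) can be sketched in one dimension. The scalar "conformations", the energy function, and the kernel width below are illustrative assumptions; the actual UNRES calibration operates on high-dimensional conformational ensembles.

```python
import math

def max_likelihood_target(exp_points, decoys, energy, beta=1.0, width=1.0):
    """Maximum-likelihood target for force-field calibration (1-D sketch).

    Each experimental point contributes log of a probability obtained by
    summing the Boltzmann weights of all simulated decoys, each weighted
    by a Gaussian kernel in the distance to the experimental point.
    beta is inverse temperature; width is the kernel width (assumed).
    """
    # Boltzmann weights of the decoys under the current energy function
    boltz = [math.exp(-beta * energy(d)) for d in decoys]
    z = sum(boltz)  # normalization over the simulated ensemble
    ll = 0.0
    for x in exp_points:
        # Gaussian-weighted contribution of every decoy to this point
        p = sum(w * math.exp(-0.5 * ((x - d) / width) ** 2)
                for d, w in zip(decoys, boltz)) / z
        ll += math.log(p)
    return ll
```

Maximizing this target over the parameters of `energy` pulls the simulated ensemble toward the experimental conformations, which is the calibration loop the abstract iterates until convergence.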
Computational tools for exact conditional logistic regression.
Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P
Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally unfeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
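For intuition, the simplest tractable case of conditional logistic regression (1:1 matched pairs with a single binary exposure) has a closed-form conditional MLE, which the sketch below computes; the Monte Carlo and saddlepoint machinery reviewed above is needed precisely when no such closed form exists. The function name and input layout are assumptions for illustration.

```python
def matched_pairs_conditional_or(pairs):
    """Conditional ML odds ratio for 1:1 matched case-control data with
    a binary exposure. Each element of `pairs` is (case_exposed,
    control_exposed) in {0, 1}.

    Conditioning on each pair's total exposure eliminates the
    pair-specific intercepts, and the conditional likelihood depends
    only on discordant pairs, giving OR_hat = n10 / n01.
    """
    n10 = sum(1 for case, ctrl in pairs if case == 1 and ctrl == 0)
    n01 = sum(1 for case, ctrl in pairs if case == 0 and ctrl == 1)
    if n01 == 0:
        # mirrors the abstract's point: a maximum likelihood solution
        # need not exist (here, when one discordant cell is empty)
        raise ValueError("conditional MLE does not exist: n01 = 0")
    return n10 / n01
```

Concordant pairs drop out of the conditional likelihood entirely, which is why sparse or separable data can leave too little information for unconditional methods yet still admit exact conditional inference.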
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms.
In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
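A minimal sketch of the copula idea above: transform the errors to normal scores through their empirical CDF (the semiparametric marginal), then score lag-one dependence with the bivariate Gaussian copula density. The rank-based CDF estimate, the single lag-one correlation hyper-parameter, and the pairwise factorisation are simplifying assumptions, not the authors' exact formulation.

```python
import math
from statistics import NormalDist

def copula_loglik(errors, lag_corr):
    """Log-likelihood of an error series under a Gaussian copula with
    empirical marginals and lag-one dependence.

    Steps: ranks -> uniform scores u = rank/(n+1) -> normal scores
    z = Phi^{-1}(u), then sum the log bivariate Gaussian copula density
    over consecutive pairs. Ties are broken arbitrarily (sketch only).
    """
    n = len(errors)
    order = sorted(range(n), key=lambda i: errors[i])
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r + 1
    nd = NormalDist()
    z = [nd.inv_cdf(r / (n + 1.0)) for r in ranks]
    r = lag_corr
    ll = 0.0
    for t in range(1, n):
        a, b = z[t - 1], z[t]
        # log c(u,v) = -0.5*log(1-r^2) - [r^2(a^2+b^2) - 2rab] / (2(1-r^2))
        ll += (-0.5 * math.log(1.0 - r * r)
               - (r * r * (a * a + b * b) - 2.0 * r * a * b)
               / (2.0 * (1.0 - r * r)))
    return ll
```

At `lag_corr = 0` the copula terms vanish (independence), while for a smoothly varying, autocorrelated error series a positive lag-one correlation yields a higher log-likelihood than a negative one, which is how the copula captures autocorrelation without transforming the data.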
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sliwiak, Joanna; Jaskolski, Mariusz, E-mail: mariuszj@amu.edu.pl; A. Mickiewicz University, Grunwaldzka 6, 60-780 Poznan
With the implementation of a molecular-replacement likelihood target that accounts for translational noncrystallographic symmetry, it became possible to solve the crystal structure of a protein with seven tetrameric assemblies arrayed translationally along the c axis. The new algorithm found 56 protein molecules in reduced symmetry (P1), which was used to resolve space-group ambiguity caused by severe twinning. Translational noncrystallographic symmetry (tNCS) is a pathology of protein crystals in which multiple copies of a molecule or assembly are found in similar orientations. Structure solution is problematic because this breaks the assumptions used in current likelihood-based methods. To cope with such cases, new likelihood approaches have been developed and implemented in Phaser to account for the statistical effects of tNCS in molecular replacement. Using these new approaches, it was possible to solve the crystal structure of a protein exhibiting an extreme form of this pathology with seven tetrameric assemblies arrayed along the c axis. To resolve space-group ambiguities caused by tetartohedral twinning, the structure was initially solved by placing 56 copies of the monomer in space group P1 and using the symmetry of the solution to define the true space group, C2. The resulting structure of Hyp-1, a pathogenesis-related class 10 (PR-10) protein from the medicinal herb St John’s wort, reveals the binding modes of the fluorescent probe 8-anilino-1-naphthalene sulfonate (ANS), providing insight into the function of the protein in binding or storing hydrophobic ligands.
Farris, Samantha G; Zvolensky, Michael J; Schmidt, Norman B
2016-06-01
There is little knowledge about how emotion regulation difficulties interplay with psychopathology in terms of smoking cessation. Participants (n = 250; 53.2% female, M age = 39.5, SD = 13.85) were community-recruited daily smokers (≥8 cigarettes per day) who self-reported motivation to quit smoking; 38.8% of the sample met criteria for a current (past 12-month) psychological disorder. Emotion regulation deficits were assessed pre-quit using the Difficulties with Emotion Regulation Scale (DERS; Gratz and Roemer in J Psychopathol Behav Assess 26(1):41-54, 2004) and smoking behavior in the 28 days post-quit was assessed using the Timeline Follow-Back (TLFB; Sobell and Sobell in Measuring alcohol consumption: psychosocial and biochemical methods. Humana Press, Totowa, 1992). A Cox proportional-hazard regression analysis was used to model the effects of past-year psychopathology, DERS (total score), and their interaction, in terms of time to lapse post-quit day. After adjusting for the effects of gender, age, pre-quit level of nicotine dependence, and treatment condition, the model revealed a non-significant effect of past-year psychopathology (OR = 1.14, 95% CI = 0.82-1.61) and difficulties with emotion regulation (OR = 1.01, 95% CI = 1.00-1.01) on likelihood of lapse rate. However, the interactive effect of psychopathology status and difficulties with emotion regulation was significant (OR = 0.98, 95% CI = 0.97-0.99). Specifically, there was a significant conditional effect of psychopathology status on lapse rate likelihood at low, but not high, levels of emotion regulation difficulties. Plots of the cumulative survival functions indicated that for smokers without a past-year psychological disorder, those with lower DERS scores relative to elevated DERS scores had significantly lower likelihood of early smoking lapse, whereas for smokers with past-year psychopathology, DERS scores did not differentially impact lapse rate likelihood.
Smokers with emotion regulation difficulties may face particular challenges when quitting; the absence of such difficulties, especially among smokers without psychopathology, appears to decrease the likelihood of early lapse.
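The time-to-lapse model described above is a Cox proportional-hazards regression. A minimal sketch of the underlying machinery, fitting the Breslow partial likelihood by direct optimization on simulated data, is below; the single covariate, sample size, and true effect are illustrative assumptions, not the study's data or model:

```python
import numpy as np
from scipy.optimize import minimize

def cox_neg_loglik(beta, times, events, X):
    """Negative Breslow partial log-likelihood (no tie handling)."""
    eta = X @ beta
    order = np.argsort(times)
    eta, ev = eta[order], events[order]
    # log of sum of exp(eta) over the risk set {j : t_j >= t_i}
    log_risk = np.logaddexp.accumulate(eta[::-1])[::-1]
    return -np.sum((eta - log_risk)[ev == 1])

# simulated cohort: hazard proportional to exp(beta * x), no censoring
rng = np.random.default_rng(0)
n, beta_true = 300, 0.8
x = rng.normal(size=(n, 1))
times = rng.exponential(np.exp(-beta_true * x[:, 0]))   # mean = 1/hazard
events = np.ones(n)                                      # every subject lapses

fit = minimize(cox_neg_loglik, np.zeros(1), args=(times, events, x))
beta_hat = fit.x[0]    # should land near the true value of 0.8
```

In the study's model, the psychopathology indicator, DERS score, their product (the interaction), and the adjustment covariates would simply enter as additional columns of `X`.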
A Systemic Approach to Implementing a Protective Factors Framework
ERIC Educational Resources Information Center
Parsons, Beverly; Jessup, Patricia; Moore, Marah
2014-01-01
The leadership team of the national Quality Improvement Center on Early Childhood ventured into the frontiers of deep change in social systems by funding four research projects. The purpose of the research projects was to learn about implementing a protective factors approach with the goal of reducing the likelihood of child abuse and neglect. In…
ERIC Educational Resources Information Center
Bean, Roy A.; Titus, Gayatri
2009-01-01
A more accessible approach to using multicultural counseling competence is presented to bridge the researcher-practitioner gap and increase the likelihood of quality clinical services. The focus of the approach is on counselor awareness, knowledge, and skills as they relate to the most important contextualizing factors: ethnic culture and the…
ERIC Educational Resources Information Center
Lee, Soo; Suh, Youngsuk
2018-01-01
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
A Hamiltonian approach to the planar optimization of mid-course corrections
NASA Astrophysics Data System (ADS)
Iorfida, E.; Palmer, P. L.; Roberts, M.
2016-04-01
Lawden's primer vector theory gives a set of necessary conditions that characterize the optimality of a transfer orbit, defined according to the possibility of adding mid-course corrections. In this paper a novel approach is proposed in which, through a polar coordinate transformation, the primer vector components decouple. Furthermore, the case when the transfer, departure and arrival orbits are coplanar is analyzed using a Hamiltonian approach. This procedure leads to approximate analytic solutions for the in-plane components of the primer vector. Moreover, the solution for the circular transfer case is proven to be Hill's solution. The novel procedure reduces the mathematical and computational complexity of the original case study. It is shown that the primer vector is independent of the semi-major axis of the transfer orbit. The case with a fixed transfer trajectory and variable initial and final thrust impulses is studied. The resulting optimality maps are presented and analyzed; they express the likelihood of a set of trajectories being optimal. Furthermore, the requirements that a set of departure and arrival orbits must satisfy in order to share the same primer vector profile are presented.
Champault, G G; Rizk, N; Catheline, J M; Turner, R; Boutelier, P
1997-12-01
In a prospective randomized trial comparing the totally preperitoneal (TPP) laparoscopic approach and the Stoppa procedure (open), 100 patients with inguinal hernias (Nyhus IIIA, IIIB, IV) were followed over a 3-year period. Both groups were epidemiologically comparable. In the laparoscopic group, operating time was significantly longer (p = 0.01), but hospital stay (3.2 vs. 7.3 days) and delay in return to work (17 vs. 35 days) were significantly reduced (p = 0.01). Postoperative comfort (less pain) was better (p = 0.001) after laparoscopy. In this group, morbidity was also reduced (4 vs. 20%; p = 0.02). The mean follow-up was 605 days, and 93% of the patients were reviewed at 3 years. There were three (6%) recurrences after TPP, especially at the beginning of the surgeon's learning curve, versus one for the Stoppa procedure (NS). For bilateral hernias, the authors suggest the use of a large prosthesis rather than two small ones to minimize the likelihood of recurrence. In the conditions described, the laparoscopic (TPP) approach to inguinal hernia treatment appears to have the same long-term recurrence rate as the open (Stoppa) procedure but a real advantage in the early postoperative period.
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.
Wu, Dongjin; Xia, Linyuan; Geng, Jijun
2018-06-19
Pedestrian dead reckoning (PDR) using smart phone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent running. Heading estimation error is one of the main sources of location error, and therefore, to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused in a Kalman filter (KF) in which heading measurements derived from acceleration and magnetic field data correct the states integrated from angular rates. To identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamically adaptive location tracking, compared with methods based on conventional KF.
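The gyro/magnetometer fusion described above can be caricatured as a one-state Kalman filter. In the sketch below, a Huber-style test on the normalized innovation stands in for the paper's M-estimator reweighting, and all noise parameters, the bias, and the outlier rule are assumptions for illustration, not the authors' tuning:

```python
import numpy as np

def kf_heading(gyro_rate, mag_heading, dt, q=1e-4, r=0.01, c=2.0):
    """One-state Kalman filter for heading: the gyro rate propagates the
    state; magnetometer-derived headings correct it.  A Huber-style test
    on the normalized innovation inflates R for suspect measurements
    (a crude stand-in for the paper's M-estimator reweighting)."""
    theta, P = mag_heading[0], 1.0
    est = np.empty(len(gyro_rate))
    for i, (w, z) in enumerate(zip(gyro_rate, mag_heading)):
        theta, P = theta + w * dt, P + q                  # predict with gyro
        nu = (z - theta + np.pi) % (2 * np.pi) - np.pi    # wrapped innovation
        t = abs(nu) / np.sqrt(P + r)
        R = r if t <= c else r * (t / c) ** 2             # down-weight outliers
        K = P / (P + R)
        theta, P = theta + K * nu, (1.0 - K) * P          # update
        est[i] = theta
    return est

# illustrative run: biased gyro, noisy magnetometer with occasional spikes
rng = np.random.default_rng(1)
n, dt = 2000, 0.05
true = 0.1 * dt * np.arange(n)                     # steady 0.1 rad/s turn
gyro = 0.1 + 0.03 + rng.normal(0.0, 0.005, n)      # rate + bias + noise
mag = true + rng.normal(0.0, 0.05, n)
mag[rng.random(n) < 0.05] += 1.5                   # magnetic disturbances
est = kf_heading(gyro, mag, dt)
gyro_only = mag[0] + np.cumsum(gyro) * dt          # open-loop integration drifts
```

Open-loop gyro integration drifts without bound because of the rate bias; the filtered estimate stays bounded, and the inflated-R rule keeps the magnetic spikes from dragging it off.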
Castillo, Enrico G; Shaner, Roderick; Tang, Lingqi; Chung, Bowen; Jones, Felica; Whittington, Yolanda; Miranda, Jeanne; Wells, Kenneth B
2018-02-01
Community Partners in Care (CPIC) was a group-randomized study of two approaches to implementing expanded collaborative depression care: Community Engagement and Planning (CEP), a coalition approach, and Resources for Services (RS), a technical assistance approach. Collaborative care networks in both arms involved health care and other agencies in five service sectors. This study examined six- and 12-month outcomes for CPIC participants with serious mental illness. This secondary analysis focused on low-income CPIC participants from racial-ethnic minority groups with serious mental illness in underresourced Los Angeles communities (N=504). Serious mental illness was defined as self-reported severe depression (≥20 on the Patient Health Questionnaire-8) at baseline or a lifetime history of bipolar disorder or psychosis. Logistic and Poisson regression with multiple imputation and response weights, controlling for covariates, was used to model intervention effects. Among CPIC participants, 50% had serious mental illness. Among those with serious mental illness, CEP relative to RS reduced the likelihood of poor mental health-related quality of life (OR=.62, 95% CI=.41-.95) but not depression (primary outcomes); reduced the likelihood of having homelessness risk factors and behavioral health hospitalizations; increased the likelihood of mental wellness; reduced specialty mental health medication and counseling visits; and increased faith-based depression visits (each p<.05) at six months. There were no statistically significant 12-month effects. Findings suggest that a coalition approach to implementing expanded collaborative depression care, compared with technical assistance to individual programs, may reduce short-term behavioral health hospitalizations and improve mental health-related quality of life and some social outcomes for adults with serious mental illness, although no evidence was found for long-term effects in this subsample.
2009-07-28
further referred to as normative models of causation. A second type of model, based on Pavlovian classical conditioning, is associative... conditions of high cognitive load), the likelihood of the accuracy of the perception is compromised. If an inaccurate perception translates to an inaccurate...correlation and causation detection in specific military operations and under conditions of operational stress. Background Models of correlation
González, L A; Schwartzkopf-Genswein, K S; Bryan, M; Silasi, R; Brown, F
2012-10-01
The objective of the present study was to document the relationships between selected welfare outcomes and transport conditions during commercial long haul transport of cattle (≥400 km; 6,152 journeys; 290,866 animals). Surveys were delivered to transport carriers to collect information related to welfare outcomes including the number of dead, non-ambulatory (downer) and lame animals during each journey. Transport conditions surveyed included the length of time animals spent on truck, ambient temperature, animal density, shrinkage, loading time, cattle origin, season, experience of truck drivers, and vehicle characteristics. Overall, 0.012% of assessed animals became lame, 0.022% non-ambulatory, and 0.011% died onboard. Calves and cull cattle were more likely to die and become non-ambulatory during the journey, feeders were intermediate, and fat cattle appeared to be the most able to cope with the stress of transport (P ≤ 0.01). The likelihood of cattle becoming non-ambulatory, lame, or dead increased sharply after animals spent over 30 h on truck (P < 0.001). The likelihood of animal death increased sharply when the midpoint ambient temperature fell below -15°C (P = 0.01), while the likelihood of becoming non-ambulatory increased when temperatures rose above 30°C (P = 0.03). Animals that lost 10% of their BW during transport had a greater (P < 0.001) likelihood of dying and becoming non-ambulatory or lame. Animals were more likely to die at smaller space allowances (P < 0.05), particularly at allometric coefficients below 0.015 (P = 0.10), which occurred more frequently in the belly and deck compartments of the trailers, and also at high space allowances in the deck (allometric coefficients > 0.035). The proportion of total compromised animals decreased with more years of truck driving experience (P < 0.001). Mortality was greater in cattle loaded at auction markets compared with feed yards and ranches (P < 0.01).
Cull cattle, calves, and feeders appear to be more affected by transport, based on their likelihood of becoming non-ambulatory or dying within a journey. The most important welfare concerns during long-distance transport include total journey duration, space allowances that are too low or too high, ambient temperatures that are too high or too low, and the experience of the truck drivers.
MILLS-REINCKE PHENOMENON AND TYPHOID CONTROL BY VACCINE
McGee, Harold G.
1920-01-01
Assuming typhoid to be an index of conditions favoring other causes of death, this author calls attention to the likelihood that anti-typhoid vaccination, by attacking typhoid alone, really masks sanitary conditions and may permit unnecessary deaths. Complete eradication of typhoid through vaccination would not affect the three other deaths suggested by the Mills-Reincke hypothesis. PMID:18010339
Coast redwood seedling regeneration following fire in a southern coast redwood forest
Rachel Lazzeri-Aerts; Will Russell
2017-01-01
It has been hypothesized that individuals adapted to conditions near the species' range edge may increase the likelihood that the species will persevere under changing climatic conditions (Rehm et al. 2015). The southern coast redwood (Sequoia sempervirens (D. Don) Endl.) forests vary from more northern redwood forests in terms of stand...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
....923(a)(2), Rotor drive system and control mechanism tests. In addition to the applicable airworthiness... stems from the likelihood of encountering hazards such as inconsistent wave heights, floating debris...-state and wind conditions: (i) Sea-State: Wave height of 2.5 meters (8.2 feet), considering both short...
ERIC Educational Resources Information Center
Han, Hyemin
2017-01-01
The present study meta-analyzed 45 experiments with 959 subjects and 463 activation foci reported in 43 published articles that investigated the neural mechanism of moral functions by comparing neural activity between the moral task conditions and non-moral task conditions with the Activation Likelihood Estimation method. The present study…
Analysis of Multiple Contingency Tables by Exact Conditional Tests for Zero Partial Association.
ERIC Educational Resources Information Center
Kreiner, Svend
The tests for zero partial association in a multiple contingency table have gained new importance with the introduction of graphical models. It is shown how these may be performed as exact conditional tests, using as test criteria either the ordinary likelihood ratio, the standard chi-squared statistic, or any other appropriate statistic. A…
Microbial response to high severity wildfire in the southwest United States
Steven T. Overby; Stephen C. Hart; Gregory S. Newman; Dana Erickson
2006-01-01
Southwest United States ponderosa pine (Pinus ponderosa Dougl. ex Laws) ecosystems have received great attention due to fuel conditions that increase the likelihood of large-scale wildfires with severe fire behavior. The fire season of 2002 demonstrated these extreme fuel load conditions with the largest fires in southwest history. The Jemez District of the Santa Fe...
The impossibility of probabilities
NASA Astrophysics Data System (ADS)
Zimmerman, Peter D.
2017-11-01
This paper discusses the problem of assigning probabilities to the likelihood of nuclear terrorism events, in particular examining the limitations of using Bayesian priors for this purpose. It suggests an alternate approach to analyzing the threat of nuclear terrorism.
NASA Astrophysics Data System (ADS)
Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith
2018-01-01
This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is intractable or unavailable in closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
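The core ABC idea, replace likelihood evaluation with forward simulation and keep parameter draws whose simulated summaries land near the observed ones, fits in a few lines. The Gaussian toy model, uniform prior, and tolerance below are illustrative assumptions (the paper's forward models are nonlinear dynamical systems):

```python
import numpy as np

def abc_rejection(s_obs, prior_sample, simulate, summary, eps, n_draws, rng):
    """ABC rejection sampling: keep prior draws whose simulated summary
    statistic falls within eps of the observed summary."""
    kept = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(summary(simulate(theta, rng)) - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

# toy problem: infer the mean of a unit-variance Gaussian
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
posterior = abc_rejection(
    s_obs=data.mean(),
    prior_sample=lambda r: r.uniform(-5, 5),
    simulate=lambda th, r: r.normal(th, 1.0, size=100),
    summary=np.mean,
    eps=0.2, n_draws=5000, rng=rng,
)
```

The accepted draws approximate the posterior; shrinking `eps` sharpens the approximation at the price of fewer acceptances, which is exactly the metric/summary trade-off the abstract refers to.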
Burns, Rachel J.; Rothman, Alexander J.; Fu, Steven S.; Lindgren, Bruce; Vock, David M.; Joseph, Anne M.
2015-01-01
Background: The Tobacco Longitudinal Care study was a randomized controlled trial for smoking cessation. It demonstrated that longitudinal care for smoking cessation, in which telephone-based counseling and nicotine replacement therapy were offered for 12 months, was more effective than standard 8-week treatment. Purpose: To identify for whom and how longitudinal care increased the likelihood of abstinence. Methods: Mediated moderation analyses across three time points. Results: There was a trend towards smokers who did not respond to treatment (i.e., were still smoking) by 21 days being more likely to be abstinent at 6 months if they received longitudinal care rather than usual care. Similarly, those who did not respond to treatment by 3 months were more likely to be abstinent at 12 months if they received longitudinal care. At both time points, the likelihood of abstinence did not differ across treatment conditions among participants who responded to treatment (i.e., quit smoking). The effect on 6-month outcomes was mediated by satisfaction and readiness to quit. Cessation self-efficacy, satisfaction, and readiness to quit mediated the effect on 12-month outcomes. The effect of treatment condition on the likelihood of abstinence at 18 months was not moderated by response to treatment at 6 months. Conclusions: Smokers who did not respond to initial treatment benefited from longitudinal care. Differential effects of treatment condition were not observed among those who responded to early treatment. Conditional assignment to longitudinal care may be useful. Determining for whom and how interventions work over time will advance theory and practice. PMID:26373657
Burns, Rachel J; Rothman, Alexander J; Fu, Steven S; Lindgren, Bruce; Vock, David M; Joseph, Anne M
2016-02-01
The Tobacco Longitudinal Care study was a randomized controlled trial for smoking cessation. It demonstrated that longitudinal care for smoking cessation, in which telephone-based counseling and nicotine replacement therapy were offered for 12 months, was more effective than the standard 8-week treatment. This study aims to identify for whom and how longitudinal care increased the likelihood of abstinence. Mediated moderation analyses were utilized across three time points. There was a trend towards smokers who did not respond to treatment (i.e., were still smoking) by 21 days being more likely to be abstinent at 6 months if they received longitudinal care rather than usual care. Similarly, those who did not respond to treatment by 3 months were more likely to be abstinent at 12 months if they received longitudinal care. At both time points, the likelihood of abstinence did not differ across treatment conditions among participants who responded to treatment (i.e., quit smoking). The effect on 6-month outcomes was mediated by satisfaction and readiness to quit. Cessation self-efficacy, satisfaction, and readiness to quit mediated the effect on 12-month outcomes. The effect of treatment condition on the likelihood of abstinence at 18 months was not moderated by response to treatment at 6 months. Smokers who did not respond to initial treatment benefited from longitudinal care. Differential effects of treatment condition were not observed among those who responded to early treatment. Conditional assignment to longitudinal care may be useful. Determining for whom and how interventions work over time will advance theory and practice.
Ranking and combining multiple predictors without labeled data
Parisi, Fabio; Strino, Francesco; Nadler, Boaz; Kluger, Yuval
2014-01-01
In a broad range of classification and decision-making problems, one is given the advice or predictions of several classifiers, of unknown reliability, over multiple questions or queries. This scenario is different from the standard supervised setting, where each classifier’s accuracy can be assessed using available labeled data, and raises two questions: Given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (i) reliably rank them and (ii) construct a metaclassifier more accurate than most classifiers in the ensemble? Here we present a spectral approach to address these questions. First, assuming conditional independence between classifiers, we show that the off-diagonal entries of their covariance matrix correspond to a rank-one matrix. Moreover, the classifiers can be ranked using the leading eigenvector of this covariance matrix, because its entries are proportional to their balanced accuracies. Second, via a linear approximation to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML), an unsupervised ensemble classifier whose weights are equal to these eigenvector entries. On both simulated and real data, SML typically achieves a higher accuracy than most classifiers in the ensemble and can provide a better starting point than majority voting for estimating the maximum likelihood solution. Furthermore, SML is robust to the presence of small malicious groups of classifiers designed to veer the ensemble prediction away from the (unknown) ground truth. PMID:24474744
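Under the stated conditional-independence assumption, the ranking-and-combining recipe can be sketched directly: estimate the covariance of the ±1 predictions, zero its diagonal so the rank-one off-diagonal structure dominates, and use the leading eigenvector both to rank classifiers (its entries are approximately proportional to balanced accuracy) and as SML voting weights. The simulated accuracies below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, acc = 5000, np.array([0.60, 0.65, 0.70, 0.80, 0.90])  # balanced accuracies
y = rng.choice([-1, 1], size=n)                           # unknown ground truth
# conditionally independent classifiers: classifier i is correct w.p. acc[i]
F = np.stack([np.where(rng.random(n) < a, y, -y) for a in acc])

C = np.cov(F)
np.fill_diagonal(C, 0.0)            # keep only the rank-one off-diagonal part
w = np.linalg.eigh(C)[1][:, -1]     # leading eigenvector, entries ~ 2*acc - 1
w *= np.sign(w.sum())               # resolve the eigenvector sign ambiguity
sml = np.sign(w @ F)                # Spectral Meta-Learner: weighted vote
majority = np.sign(F.sum(axis=0))   # baseline: unweighted majority vote
```

Because the weights grow with estimated balanced accuracy, the weighted vote typically beats plain majority voting, matching the paper's claim, without ever seeing a label.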
Assessing Landscape Scale Wildfire Exposure for Highly Valued Resources in a Mediterranean Area
NASA Astrophysics Data System (ADS)
Alcasena, Fermín J.; Salis, Michele; Ager, Alan A.; Arca, Bachisio; Molina, Domingo; Spano, Donatella
2015-05-01
We used a fire simulation modeling approach to assess landscape scale wildfire exposure for highly valued resources and assets (HVR) in a fire-prone area of 680 km2 located in central Sardinia, Italy. The study area was affected by several wildfires in the last half century: some large and intense fire events threatened wildland-urban interfaces as well as other socioeconomic and cultural values. Historical wildfire and weather data were used to inform wildfire simulations, which were based on the minimum travel time algorithm as implemented in FlamMap. We simulated 90,000 fires that replicated recent large fire events in the area spreading under severe weather conditions to generate detailed maps of wildfire likelihood and intensity. Then, we linked fire modeling outputs to a geospatial risk assessment framework focusing on buffer areas around HVR. The results highlighted a large variation in burn probability and fire intensity in the vicinity of HVRs, and allowed us to identify the areas most exposed to wildfires and thus to higher potential damage. Fire intensity in the HVR buffers was mainly related to fuel types, while wind direction, topographic features, and historically based ignition patterns were the key factors affecting fire likelihood. The methodology presented in this work can have numerous applications, in the study area and elsewhere, particularly to address and inform fire risk management, landscape planning and the safety of people in the vicinity of HVRs.
Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification
NASA Technical Reports Server (NTRS)
Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.
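With Gaussian sensor noise, maximum-likelihood identification of a hard off-failure reduces to picking the fault hypothesis whose predicted angular acceleration minimizes the squared residual against the gyro-derived measurement. The thruster geometry, unit inertia, and single-failure restriction below are illustrative assumptions, not the X-38 configuration:

```python
import numpy as np

def identify_off_failure(torques, command, alpha_meas):
    """Return the index of the thruster hypothesized to have failed off
    (-1 = no failure).  Under Gaussian noise, the maximum-likelihood
    hypothesis minimizes the squared residual between the measured and
    predicted body angular acceleration (unit inertia assumed)."""
    hypotheses = [None] + list(range(len(command)))
    residuals = []
    for h in hypotheses:
        on = command.astype(float).copy()
        if h is not None:
            on[h] = 0.0                      # thruster h produces no torque
        predicted = torques.T @ on
        residuals.append(np.sum((alpha_meas - predicted) ** 2))
    return int(np.argmin(residuals)) - 1

# four thrusters with known body-torque directions (illustrative geometry)
torques = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
command = np.ones(4)                          # all four commanded on
truth = command.copy(); truth[2] = 0.0        # thruster 2 actually failed off
alpha = torques.T @ truth + 0.01 * np.random.default_rng(4).normal(size=3)
failed = identify_off_failure(torques, command, alpha)   # -> 2
```

Extending the hypothesis set to on-failures and multiple simultaneous failures is a matter of enumerating more candidate `on` vectors, at combinatorial cost.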
A unifying framework for marginalized random intercept models of correlated binary outcomes
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.
2013-01-01
We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871
A parimutuel gambling perspective to compare probabilistic seismicity forecasts
NASA Astrophysics Data System (ADS)
Zechar, J. Douglas; Zhuang, Jiancang
2014-10-01
Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.
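A toy head-to-head variant for binary forecasts on a grid of cells can be sketched as follows; the specific betting scheme (one unit staked per cell, the pot shared in proportion to stakes on the realized outcome) is a simplified reading of the parimutuel idea, not the paper's exact score:

```python
import numpy as np

def head_to_head_gain(a, b, outcomes):
    """Net parimutuel gain of forecast `a` betting against forecast `b`
    over binary cells.  Each gambler stakes one unit per cell, split
    between 'event' and 'no event' in proportion to its forecast
    probability; the two-unit pot on each cell goes to the stakes on the
    realized outcome, shared in proportion to stake size."""
    a, b, o = map(np.asarray, (a, b, outcomes))
    stake_a = np.where(o == 1, a, 1.0 - a)   # a's stake on what happened
    stake_b = np.where(o == 1, b, 1.0 - b)
    return float(np.sum(2.0 * stake_a / (stake_a + stake_b) - 1.0))

# the better-calibrated forecast should come out ahead on average
rng = np.random.default_rng(3)
outcomes = (rng.random(2000) < 0.3).astype(int)          # true rate 0.3
gain_a = head_to_head_gain(np.full(2000, 0.3), np.full(2000, 0.6), outcomes)
gain_b = head_to_head_gain(np.full(2000, 0.6), np.full(2000, 0.3), outcomes)
```

Note the score is exactly zero-sum (`gain_a + gain_b == 0`), which is one of the properties that makes the gambling framing attractive for pairwise model comparison.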
The Spawns of Creative Behavior in Team Sports: A Creativity Developmental Framework.
Santos, Sara D L; Memmert, Daniel; Sampaio, Jaime; Leite, Nuno
2016-01-01
Developing creativity in team sports players is becoming an increasing focus in sports sciences. The Creativity Developmental Framework is presented to provide an updated, science-based background. This Framework describes five incremental creative stages (beginner, explorer, illuminati, creator, and rise) and combines them into multidisciplinary approaches embodied in creative assumptions. In the first training stages, the emphasis is placed on enrollment in diversification, deliberate play, and physical literacy approaches grounded in nonlinear pedagogies. These approaches allow more freedom to discover different movement patterns, increasing the likelihood that novel, adaptive, and functional solutions will emerge. In the later stages, progressive specialization in sports and a commitment to differential learning are extremely important for pushing the limits of creative progress at higher levels of performance by increasing the range of skill configurations. Notwithstanding, during all developmental stages the teaching games for understanding model, a game-centered approach, linked with the constraints-led approach, plays an important role in boosting tactical creative behavior. Both perspectives might encourage players to explore all action possibilities (improving divergent thinking) and prevent standardization of their actions. Overall, considering the aforementioned practice conditions, the Creativity Developmental Framework scrutinizes the main directions that lead to long-term improvement of creative behavior in team sports. Nevertheless, this framework should be seen as a work in progress, to be later used as a paramount reference in creativity training.
Gruebner, Oliver; Lowe, Sarah R; Tracy, Melissa; Cerdá, Magdalena; Joshi, Spruha; Norris, Fran H; Galea, Sandro
2016-04-01
To demonstrate a spatial epidemiologic approach that could be used in the aftermath of disasters to (1) detect spatial clusters and (2) explore geographic heterogeneity in predictors for mental health and general wellness. We used a cohort study of Hurricane Ike survivors (n=508) to assess the spatial distribution of postdisaster mental health wellness (most likely resilience trajectory for posttraumatic stress symptoms [PTSS] and depression) and general wellness (most likely resilience trajectory for PTSS, depression, functional impairment, and days of poor health) in Galveston, Texas. We applied the spatial scan statistic (SaTScan) and geographically weighted regression. We found spatial clusters of high-likelihood wellness in areas north of Texas City and spatial concentrations of low-likelihood wellness in Galveston Island. Geographic variation was found in predictors of wellness, showing increasing associations with both forms of wellness the closer respondents were located to Galveston City in Galveston Island. Predictors for postdisaster wellness may manifest differently across geographic space, with concentrations of lower-likelihood wellness and increased associations with predictors in areas of higher exposure. Our approach could be used to inform geographically targeted interventions to promote mental health and general wellness in disaster-affected communities.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
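The likelihood ratio difference test itself is generic: twice the gap in maximized log-likelihoods between nested models, referred to a chi-squared distribution with degrees of freedom equal to the number of constrained parameters. A minimal illustration on toy (non-IRT) binomial data, with numbers of my own choosing:

```python
import numpy as np
from scipy.stats import chi2

def lr_difference_test(ll_full, ll_reduced, df):
    """Likelihood ratio difference test for nested models."""
    D = 2.0 * (ll_full - ll_reduced)
    return D, chi2.sf(D, df)          # sf = upper-tail probability

def binom_ll(k, n, p):
    """Binomial log-likelihood up to a constant."""
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

# toy question: do two groups share one success probability?
k1, n1, k2, n2 = 60, 100, 40, 100
ll_reduced = binom_ll(k1 + k2, n1 + n2, (k1 + k2) / (n1 + n2))   # pooled p
ll_full = binom_ll(k1, n1, k1 / n1) + binom_ll(k2, n2, k2 / n2)  # separate p's
D, p_value = lr_difference_test(ll_full, ll_reduced, df=1)       # D ≈ 8.1
```

In the dimensionality setting of the abstract, the reduced model is the unidimensional MRCMLM, the full model the multidimensional one, and `df` the number of extra parameters; the simulation findings concern how well the chi-squared reference actually holds there.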
Socially acquired predator avoidance: is it just classical conditioning?
Griffin, Andrea S
2008-06-15
Associative learning theories presume the existence of a general purpose learning process, the structure of which does not mirror the demands of any particular learning problem. In contrast, learning scientists working within an Evolutionary Biology tradition believe that learning processes have been shaped by ecological demands. One potential means of exploring how ecology may have modified properties of acquisition is to use associative learning theory as a framework within which to analyse a particular learning phenomenon. Recent work has used this approach to examine whether socially transmitted predator avoidance can be conceptualised as a classical conditioning process in which a novel predator stimulus acts as a conditioned stimulus (CS) and acquires control over an avoidance response after it has become associated with alarm signals of social companions, the unconditioned stimulus (US). I review here a series of studies examining the effect of CS/US presentation timing on the likelihood of acquisition. Results suggest that socially acquired predator avoidance may be less sensitive to forward relationships than traditional classical conditioning paradigms. I make the case that socially acquired predator avoidance is an exciting novel one-trial learning paradigm that could be studied alongside fear conditioning. Comparisons between social and non-social learning of danger at both the behavioural and neural levels may yield a better understanding of how ecology might shape the properties and mechanisms of learning.
Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi
2016-04-01
Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieza Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated by extending recent developments in the study of structural stability in ecological communities. We find that the observed species turnover causes the likelihood of community persistence to vary strongly between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures.
F-8C adaptive flight control extensions. [for maximum likelihood estimation]
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
The analysis of the pilot's cognitive and decision processes
NASA Technical Reports Server (NTRS)
Curry, R. E.
1975-01-01
Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2014-01-01
This research note contributes to the discussion of methods that can be used to identify useful auxiliary variables for analyses of incomplete data sets. A latent variable approach is discussed, which is helpful in finding auxiliary variables with the property that if included in subsequent maximum likelihood analyses they may enhance considerably…
Teesson, M; Newton, N C; Slade, T; Carragher, N; Barrett, E L; Champion, K E; Kelly, E V; Nair, N K; Stapinski, L A; Conrod, P J
2017-07-01
No existing models of alcohol prevention concurrently adopt universal and selective approaches. This study aims to evaluate the first combined universal and selective approach to alcohol prevention. A total of 26 Australian schools with 2190 students (mean age: 13.3 years) were randomized to receive: universal prevention (Climate Schools); selective prevention (Preventure); combined prevention (Climate Schools and Preventure; CAP); or health education as usual (control). Primary outcomes were alcohol use, binge drinking and alcohol-related harms at 6, 12 and 24 months. Climate, Preventure and CAP students demonstrated significantly lower growth in their likelihood to drink and binge drink, relative to controls over 24 months. Preventure students displayed significantly lower growth in their likelihood to experience alcohol harms, relative to controls. While adolescents in both the CAP and Climate groups demonstrated slower growth in drinking compared with adolescents in the control group over the 2-year study period, CAP adolescents demonstrated faster growth in drinking compared with Climate adolescents. Findings support universal, selective and combined approaches to alcohol prevention. Particularly novel are the findings of no advantage of the combined approach over universal or selective prevention alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F
In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. In particular, there is strong disagreement over whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understanding and predicting the flow of water through catchments.
Appreciative inquiry: a radically different approach to change.
2002-07-01
Appreciative Inquiry, or AI, seeks to identify what went right and duplicate the experience. Adjustment in thinking may be difficult for defensive-minded health care professionals. Likelihood of success appears greater when smaller groups are involved.
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimization in extraction, namely Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible range of the independent variables, Monte Carlo simulation, and threshold criteria on the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective and depend on the target of the investigation; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
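The GLUE recipe described above — Latin hypercube sampling over the feasible range, Monte Carlo evaluation of the model, and a threshold criterion that keeps only "behavioural" parameter sets — can be sketched in a few lines. The yield surface and the threshold below are hypothetical stand-ins, not the extraction models from the cited reports:

```python
import random

def latin_hypercube(n, bounds, rng):
    """Latin hypercube sample: one draw per stratum in each dimension,
    shuffled independently across dimensions."""
    cols = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)
        cols.append(pts)
    return list(zip(*cols))

def toy_yield(temp_c, time_min):
    """Hypothetical extraction-yield response surface (a stand-in model)."""
    return 80.0 - 0.02 * (temp_c - 55.0) ** 2 - 0.01 * (time_min - 30.0) ** 2

rng = random.Random(42)
# feasible ranges: temperature 30-80 C, extraction time 5-60 min (illustrative)
samples = latin_hypercube(2000, [(30.0, 80.0), (5.0, 60.0)], rng)

# GLUE step: keep only "behavioural" parameter sets whose simulated
# response meets the threshold criterion on the response variable
behavioural = [(t, m) for t, m in samples if toy_yield(t, m) >= 78.0]

# the optimum is reported as a range with a distribution, not a single point
temps = sorted(t for t, _ in behavioural)
temp_lo, temp_hi = temps[0], temps[-1]
```

Scatter plots of the behavioural sets against each independent variable reproduce the "dotty plots" the abstract mentions, and extra response variables simply add more threshold filters to the list comprehension.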
Evaluation of Differential DependencY (EDDY) is a statistical test for the differential dependency relationship of a set of genes between two given conditions. For each condition, possible dependency network structures are enumerated and their likelihoods are computed to represent a probability distribution of dependency networks. The difference between the probability distributions of dependency networks is computed between conditions, and its statistical significance is evaluated with random permutations of condition labels on the samples.
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available beyond fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial data set shows that the missingness of CD4 counts at around two years is non-ignorable and the sample mean based on observed data only is biased.
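Owen's empirical likelihood for a single mean, the building block this paper extends with missingness constraints, can be profiled numerically: maximize the multinomial likelihood subject to the mean constraint by solving for a Lagrange multiplier. A minimal sketch for the complete-data case only (not the authors' estimator for non-ignorable missingness):

```python
import math

def el_log_ratio(x, mu, tol=1e-12):
    """Owen's empirical likelihood: -2 log R(mu) for a hypothesised mean,
    profiling out the Lagrange multiplier by bisection."""
    d = [xi - mu for xi in x]
    if max(d) <= 0 or min(d) >= 0:
        return float("inf")        # mu outside the convex hull of the data
    # the multiplier must keep every weight 1/(n*(1 + lam*d_i)) positive
    lo = -1.0 / max(d) + 1e-10
    hi = -1.0 / min(d) - 1e-10

    def score(lam):                # first-order condition: sum d_i/(1+lam*d_i) = 0
        return sum(di / (1.0 + lam * di) for di in d)

    while hi - lo > tol:           # score is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

x = [1.0, 2.0, 3.0, 4.0, 5.0]      # toy complete-data sample
r0 = el_log_ratio(x, 3.0)          # ~0 at the sample mean
r1 = el_log_ratio(x, 4.2)          # compare against the chi-square(1) cutoff 3.84
```

By Wilks-type theory, -2 log R(mu) is asymptotically chi-square with 1 degree of freedom at the true mean, which is what makes the likelihood ratio test in the abstract possible.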
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
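A simple way to see why length-biased data need special handling: units are sampled with probability proportional to their value, so the naive sample mean is biased upward, while the classical harmonic-mean correction recovers the population mean. The toy simulation below (uniform population, no censoring — far simpler than the paper's EM algorithms) illustrates the effect:

```python
import random

def length_biased_sample(population, n, rng):
    """Sample with probability proportional to value, as when longer
    durations are more likely to be intercepted by a prevalent-cohort survey."""
    return rng.choices(population, weights=population, k=n)

def corrected_mean(sample):
    """Classical length-bias correction: estimate the population mean by
    the harmonic-type estimator n / sum(1/x_i)."""
    return len(sample) / sum(1.0 / x for x in sample)

rng = random.Random(0)
population = [rng.uniform(1.0, 9.0) for _ in range(100000)]
true_mean = sum(population) / len(population)     # about 5.0

sample = length_biased_sample(population, 50000, rng)
naive = sum(sample) / len(sample)                 # biased upward
corrected = corrected_mean(sample)                # close to true_mean
```

The correction works because under length-biased sampling the density of an observation is proportional to x f(x), so E[1/X] under the biased draw equals 1/mu for population mean mu.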
Characterising switching behaviour in perceptual multi-stability.
Denham, Susan; Bendixen, Alexandra; Mill, Robert; Tóth, Dénes; Wennekers, Thomas; Coath, Martin; Bőhm, Tamás; Szalardy, Orsolya; Winkler, István
2012-09-15
When people experience an unchanging sensory input for a long period of time, their perception tends to switch stochastically and unavoidably between alternative interpretations of the sensation; a phenomenon known as perceptual bi-stability or multi-stability. The huge variability in the experimental data obtained in such paradigms makes it difficult to distinguish typical patterns of behaviour, or to identify differences between switching patterns. Here we propose a new approach to characterising switching behaviour based upon the extraction of transition matrices from the data, which provide a compact representation that is well-understood mathematically. On the basis of this representation we can characterise patterns of perceptual switching, visualise and simulate typical switching patterns, and calculate the likelihood of observing a particular switching pattern. The proposed method can support comparisons between different observers, experimental conditions and even experiments. We demonstrate the insights offered by this approach using examples from our experiments investigating multi-stability in auditory streaming. However, the methodology is generic and thus widely applicable in studies of multi-stability in any domain. Copyright © 2012 Elsevier B.V. All rights reserved.
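Extracting a transition matrix from a sequence of perceptual labels, and scoring the likelihood of a particular switching pattern under it, takes only a few lines. A minimal sketch with a made-up two-percept sequence (the real experiments use richer state sets):

```python
import math
from collections import Counter

def transition_matrix(sequence, states):
    """Row-normalised matrix of empirical switching probabilities."""
    counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0)
                     for t in states}
    return matrix

def switching_log_likelihood(sequence, matrix):
    """Log-likelihood of observing a particular switching pattern."""
    return sum(math.log(matrix[a][b]) for a, b in zip(sequence, sequence[1:]))

percepts = list("ABABBABAABAB")      # made-up percept labels over time
P = transition_matrix(percepts, "AB")
ll = switching_log_likelihood(percepts, P)
```

Once each observer or condition is summarised this way, matrices can be compared directly, and new switching sequences can be simulated by sampling successive states from the rows.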
Diagnosing pulmonary embolisms: the clinician's point of view.
Carrillo Alcaraz, A; Martínez, A López; Solano, F J Sotos
Pulmonary thromboembolism is common and potentially severe. To ensure the correct approach to the diagnostic workup of pulmonary thromboembolism, it is essential to know the basic concepts governing the use of the different tests available. The diagnostic approach to pulmonary thromboembolism is an example of the application of the conditional probabilities of Bayes' theorem in daily practice. To interpret the available diagnostic tests correctly, it is necessary to analyze different concepts that are fundamental for decision making. Thus, it is necessary to know what the likelihood ratios, 95% confidence intervals, and decision thresholds mean. Whether to determine the D-dimer concentration or to do CT angiography or other imaging tests depends on their capacity to modify the pretest probability of having the disease to a posttest probability that is higher or lower than the thresholds for action. This review aims to clarify the diagnostic sequence of thromboembolic pulmonary disease, analyzing the main diagnostic tools (clinical examination, laboratory tests, and imaging tests), placing special emphasis on the principles that govern evidence-based medicine. Copyright © 2016 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
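The pretest-to-posttest update via likelihood ratios described above is Bayes' theorem in odds form: posttest odds = pretest odds × LR. A minimal sketch; the 20% pretest probability and the LR values are illustrative numbers only, not clinical recommendations:

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form: posttest odds = pretest odds * LR,
    then convert odds back to a probability."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# illustrative only: 20% pretest probability, then a negative test with LR- = 0.1
p_after_negative = posttest_probability(0.20, 0.1)
# and a strongly positive result with LR+ = 8
p_after_positive = posttest_probability(0.20, 8.0)
```

Whether either posttest probability crosses a decision threshold — ruling the disease in or out, or ordering CT angiography — is exactly the comparison the review describes.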
Freeman, Carl D.; Emlen, John M.
1995-01-01
Interspecific interactions influence both the productivity and composition of plant communities. Here, we propose new field procedures and analytical approaches for assessing interspecific interactions in nature and apply these procedures to the salt desert shrub grasslands of western Utah. Data were collected from two grazing treatments over a period of 2 years. The proposed equations were fairly consistent across both treatments and years. In addition to illustrating how to assess interspecific interactions within a community, we also develop a new approach for projecting the community composition as a result of some alteration, i.e. increase or decrease in the abundance of one or more species. Results demonstrate competition both within and between plant life-form groups. While introduced annuals were found to depress profoundly the likelihood of perennial plants replacing themselves, perennials had little influence on annuals. Thus, as native perennials die, they are more likely to be replaced by perennials than for the reverse to occur. Our results suggest that unless conditions change, these communities will become increasingly dominated by introduced annuals.
Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred
2011-11-01
Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using a Markov-random-field-based super-resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and it locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas and the method is applied in QuickBird images to produce a 0.6 m tree crown map in a city of The Netherlands. The SRM output shows an identification rate of 66% and commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines and SRM at nominal resolution (2.4 m) approaches.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. Widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. To address this, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of the parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul
2015-01-01
Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.
Murphy, Devin; Sawczyn, Kelly K; Quinn, Gwendolyn P
2012-04-01
Most pediatric education materials are designed for a parent audience. Social marketing techniques rely on the principles called the "4 P's": product, price, place, and promotion. The objective of this study was to test the design, readability, likelihood to read, and overall opinion of a pediatric fertility preservation brochure with patients, parents, and providers. The design was qualitative face-to-face interviews, conducted at The Children's Cancer Center in Tampa, FL, and All Children's Hospital in St. Petersburg, FL. Participants were male and female cancer patients and survivors aged 12-21 (N = 7), their parents (N = 11), and healthcare providers (N = 6). Patients, survivors, parents, and healthcare providers were given two versions of gender-concordant brochures on fertility preservation designed for both pediatric oncology patients and their parents. Outcome measures were design, readability, likelihood to read, and overall opinion from interviews, in order to identify facilitators of involving patients in fertility preservation discussions. Parents and teens differed on design, readability, and likelihood to read, with the highest discord being preferences for the medical terminology used in the brochures. While parents remarked that much of the language was 'too advanced,' the majority of teens explained that they understood the terminology and preferred that it remain in the brochure. Overall feedback from all three groups was utilized to revise the brochures into final versions to increase the likelihood of reading. Information about the development of the 4 P's of social marketing highlights needs from the intended audience. Barriers to patient education in pediatrics can be ameliorated when using the social marketing approach. Copyright © 2012 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
Stratigraphic Transfer Thresholds of Sediment Supply Signals in Channelized Systems
NASA Astrophysics Data System (ADS)
Toby, S. C.; De Angelis, S.; Duller, R.; Straub, K. M.
2016-12-01
The stratigraphic record is a unique physical archive of past climate and tectonic boundary conditions on Earth and other planetary bodies. These boundary and forcing conditions set the rate and volume of sediment delivered to sedimentary basins, which can, in theory, be linked back to the stratigraphic record. However, for sediment supply signals to make their way into stratigraphy they must pass through the active layer of the Earth's surface, which is scaled to channel depth. Over the long term, the likelihood of this taking place can be evaluated using the vertical time-scale of autogenics. The current study tests whether cyclic sediment supply to an experimental delta can influence morphodynamics and, if so, whether this influence can be recovered from the synthetic and physical stratigraphic datasets collected during the experiments. Preliminary results suggest that short-period sediment supply signals are less likely to be transferred to the stratigraphic record, as predicted by our theoretical framework for channelized systems. Once fully validated by the experiments, the theoretical approach will be applied to field stratigraphy and used to guide more reliable interpretation of ancient sediment supply signals.
Cardiovascular hospitalizations and associations with environmental quality
Cardiovascular disease has been identified as a condition that may be associated with environmental factors. Air pollution in particular has been demonstrated to be associated with cardiovascular disease and atherosclerosis, which can increase the likelihood of cardiovascular eve...
Yokoyama, Masako; Yokoyama, Tetsuji; Funazu, Kazuo; Yamashita, Takeshi; Kondo, Shuji; Hosoai, Hiroshi; Yokoyama, Akira; Nakamura, Haruo
2009-06-01
We conducted a cross-sectional survey of 12,988 subjects aged 20-79 years (5,908 men and 7,090 women) receiving health checkups at a Tokyo clinic. They filled out a self-administered structured questionnaire, and 5.4% of the men and 15.4% of the women reported having headaches. Younger subjects were more prone to having headaches. The likelihood of having headaches increased with stress level and decreased ability to relieve stress in both genders. There was an inverse dose-response relationship between having headaches and alcohol consumption, and less walking/exercise and sleep problems increased the likelihood of headaches in both genders. Headache sufferers of both genders were more likely to report multiple additional poor health conditions. A multivariate stepwise logistic analysis showed that age, self-estimated degree of stress, reported number of additional poor health conditions, and less alcohol consumption were independently correlated with having headaches. In conclusion, although women were more susceptible to headache, Japanese men and women in Tokyo shared factors associated with headache, including age, stress, having other poor health conditions, alcohol consumption, sleep, and exercise.
Comparing Types of Financial Incentives to Promote Walking: An Experimental Test.
Burns, Rachel J; Rothman, Alexander J
2018-04-19
Offering people financial incentives to increase their physical activity is an increasingly prevalent intervention strategy. However, little is known about the relative effectiveness of different types of incentives. This study tested whether incentives based on specified reinforcement types and schedules differentially affected the likelihood of meeting a walking goal and explored if observed behavioural changes may have been attributable to the perceived value of the incentive. A 2 (reinforcement type: cash reward, deposit contract) × 2 (schedule: fixed, variable) between-subjects experiment with a hanging control condition was conducted over 8 weeks (n = 153). Although walking was greater in the incentive conditions relative to the control condition, walking did not differ across incentive conditions. Exploratory analyses indicated that the perceived value of the incentive was associated with the likelihood of meeting the walking goal, but was not affected by reinforcement type or schedule. The reinforcement type and schedule manipulations tested in this study did not differentially affect walking. Given that walking behaviour was associated with perceived value, designing incentive strategies that optimise the perceived value of the incentive may be a promising avenue for future research. © 2018 The International Association of Applied Psychology.
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
Forecasting drought risks for a water supply storage system using bootstrap position analysis
Tasker, Gary; Dunne, Paul
1997-01-01
Forecasting the likelihood of drought conditions is an integral part of managing a water supply storage and delivery system. Position analysis uses a large number of possible flow sequences as inputs to a simulation of a water supply storage and delivery system. For a given set of operating rules and water use requirements, water managers can use such a model to forecast the likelihood of specified outcomes such as reservoir levels falling below a specified level or streamflows falling below statutory passing flows a few months ahead conditioned on the current reservoir levels and streamflows. The large number of possible flow sequences are generated using a stochastic streamflow model with a random resampling of innovations. The advantages of this resampling scheme, called bootstrap position analysis, are that it does not rely on the unverifiable assumption of normality and it allows incorporation of long-range weather forecasts into the analysis.
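Bootstrap position analysis as described — a fitted streamflow model driven by resampled historical residuals rather than normal innovations, fed through a storage-and-delivery simulation — can be sketched as follows. The AR(1) model, the reservoir rules, and every number below are hypothetical stand-ins for an actual calibrated system:

```python
import random

def simulate_traces(q0, phi, mean, residuals, horizon, n_traces, rng):
    """Flow sequences from a fitted AR(1) model whose innovations are
    resampled from the historical residuals (the bootstrap step)."""
    traces = []
    for _ in range(n_traces):
        q, trace = q0, []
        for _ in range(horizon):
            q = mean + phi * (q - mean) + rng.choice(residuals)
            trace.append(q)
        traces.append(trace)
    return traces

def risk_of_shortfall(traces, storage0, demand, floor):
    """Fraction of simulated futures in which storage ever drops below floor."""
    failures = 0
    for trace in traces:
        storage = storage0
        for inflow in trace:
            storage = max(0.0, storage + inflow - demand)
            if storage < floor:
                failures += 1
                break
    return failures / len(traces)

rng = random.Random(1)
# stand-in for the residuals of an AR(1) model fitted to 20 years of monthly flows
residuals = [rng.gauss(0.0, 10.0) for _ in range(240)]
traces = simulate_traces(q0=80.0, phi=0.6, mean=100.0, residuals=residuals,
                         horizon=6, n_traces=5000, rng=rng)
risk = risk_of_shortfall(traces, storage0=300.0, demand=120.0, floor=150.0)
```

The forecast is conditional on the current state (`q0` and `storage0`), mirroring how position analysis conditions on current reservoir levels and streamflows, and the resampling step avoids the unverifiable normality assumption the abstract mentions.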
MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES
Su, Yu-Ru; Wang, Jane-Ling
2018-01-01
There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case which involves baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right censored data, but this approach faces substantial additional challenges in the presence of left truncation. Here we thus propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122
Schwarting Miller, Lindsay; La Peyre, Jerome F.; LaPeyre, Megan K.
2017-01-01
Recognition of the global loss of subtidal oyster reefs has led to a rise in reef restoration efforts, including in the Gulf of Mexico. Created reef success depends entirely on selecting a location that supports long-term oyster growth and survival, including the recruitment and survival of on-reef oysters. Significant changes in estuarine salinity through management of freshwater inflows and through changed precipitation patterns may significantly impact the locations of optimal oyster restoration sites. These rapid shifts in conditions create a need to better understand both the impacts on on-reef oyster growth and population development, and the variation in oyster stock performance. Oyster growth, mortality, condition, and disease prevalence were examined in three different stocks of oysters located in protected cages, as well as oyster recruitment and mortality on experimental reef units, in three different locations representing a salinity gradient along the Louisiana Gulf coast in 2011 and 2012. Over a 2-y period, the high-salinity site had the highest oyster growth rate in protected cages but demonstrated the least likelihood of reef development based on on-reef oyster population failure, likely because of predation-related mortality (high recruitment and 100% mortality). In contrast, the midsalinity site, with moderate oyster growth and on-reef recruitment and low mortality, demonstrated a higher likelihood of reef development. The lowest salinity site exhibited extreme variability in all oyster responses between years because of extreme variation in environmental conditions during the study, indicating a low likelihood of long-term reef development. Although limited differences in stock performance between sites were found, the range of site environmental conditions tested was ultimately much narrower than expected.
In areas with limited or low recruitment, or with rapidly changing environmental conditions, seeding with stocks selected for best growth and survival under expected future environmental conditions could better ensure reef development by using oyster populations best suited to the predicted conditions. With rapidly changing estuarine conditions from anthropogenic activities and climate change, siting of oyster reef restoration incorporating both oyster population dynamics and in situ biotic and abiotic interactions is critical in better directing site selection for reef restoration efforts.
Assessing Success on the Uniform CPA Exam: A Logit Approach.
ERIC Educational Resources Information Center
Brahmasrene, Tantatape; Whitten, Donna
2001-01-01
A logit model was used to test the likelihood of success of 231 candidates on the Uniform Certified Public Accountants Examination. Significant determinants of success included undergraduate grade point average, age, private accounting experience, and gender. (SK)
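A minimal logit fit of pass/fail outcomes can be sketched in pure Python by gradient ascent on the log-likelihood (the candidates and predictors below are hypothetical, not the study's data, which used 231 candidates and more covariates):

```python
import math

def fit_logit(X, y, lr=0.1, iters=2000):
    """Fit P(pass) = sigmoid(b0 + b.x) by gradient ascent on the log-likelihood."""
    n, p = len(X), len(X[0])
    beta = [0.0] * (p + 1)
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            mu = 1.0 / (1.0 + math.exp(-z))    # predicted pass probability
            err = yi - mu                       # score contribution
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Hypothetical data: [undergraduate GPA, years of accounting experience] -> pass (1) / fail (0)
X = [[3.8, 2], [3.5, 0], [2.9, 1], [3.9, 3], [2.5, 0], [3.2, 2], [2.7, 0], [3.6, 1]]
y = [1, 1, 0, 1, 0, 1, 0, 1]
beta = fit_logit(X, y)
```

A positive fitted coefficient on GPA would mirror the abstract's finding that grade point average is a significant determinant of success.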
Taylor, Natalie; Long, Janet C; Debono, Deborah; Williams, Rachel; Salisbury, Elizabeth; O'Neill, Sharron; Eykman, Elizabeth; Braithwaite, Jeffrey; Chin, Melvin
2016-03-12
Lynch syndrome is an inherited disorder associated with a range of cancers, and found in 2-5 % of colorectal cancers. Lynch syndrome is diagnosed through a combination of significant family and clinical history and pathology. The definitive diagnostic germline test requires formal patient consent after genetic counselling. If diagnosed early, carriers of Lynch syndrome can undergo increased surveillance for cancers, which in turn can prevent late stage cancers, optimise treatment and decrease mortality for themselves and their relatives. However, over the past decade, international studies have reported that only a small proportion of individuals with suspected Lynch syndrome were referred for genetic consultation and possible genetic testing. The aim of this project is to use behaviour change theory and implementation science approaches to increase the number and speed of healthcare professional referrals of colorectal cancer patients with a high-likelihood risk of Lynch syndrome to appropriate genetic counselling services. The six-step Theoretical Domains Framework Implementation (TDFI) approach will be used at two large, metropolitan hospitals treating colorectal cancer patients. Steps are: 1) form local multidisciplinary teams to map current referral processes; 2) identify target behaviours that may lead to increased referrals using discussion supported by a retrospective audit; 3) identify barriers to those behaviours using the validated Influences on Patient Safety Behaviours Questionnaire and TDFI guided focus groups; 4) co-design interventions to address barriers using focus groups; 5) co-implement interventions; and 6) evaluate intervention impact. Chi square analysis will be used to test the difference in the proportion of high-likelihood risk Lynch syndrome patients being referred for genetic testing before and after intervention implementation. 
A paired t-test will be used to assess the mean time from the pathology test results to referral for high-likelihood Lynch syndrome patients pre-post intervention. Run charts will be used to continuously monitor change in referrals over time, based on scheduled monthly audits. This project is based on a tested and refined implementation strategy (TDFI approach). Enhancing the process of identifying and referring people at high-likelihood risk of Lynch syndrome for genetic counselling will improve outcomes for patients and their relatives, and potentially save public money.
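The planned chi-square comparison of referral proportions before and after the intervention might look like this sketch (the audit counts are hypothetical placeholders, since the project describes a protocol, not results):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical audit counts: (referred, not referred) among high-likelihood
# Lynch syndrome patients, before vs after the TDFI intervention
before = (12, 38)   # 12 of 50 referred pre-intervention
after = (29, 21)    # 29 of 50 referred post-intervention
stat = chi_square_2x2(before[0], before[1], after[0], after[1])
# Compare against the chi-square critical value (1 df) of 3.841 at alpha = 0.05
significant = stat > 3.841
```

With these illustrative counts the statistic is about 11.9, well above the 5% critical value, so the change in referral proportion would be declared significant.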
NASA Astrophysics Data System (ADS)
Mearns, L. O.; Sain, S. R.; McGinnis, S. A.; Steinschneider, S.; Brown, C. M.
2015-12-01
In this talk we present the development of a joint Bayesian Probabilistic Model for the climate change results of the North American Regional Climate Change Assessment Program (NARCCAP) that uses a unique prior in the model formulation. We use the climate change results (joint distribution of seasonal temperature and precipitation changes (future vs. current)) from the global climate models (GCMs) that provided boundary conditions for the six different regional climate models used in the program as informative priors for the bivariate Bayesian Model. The two variables involved are seasonal temperature and precipitation over sub-regions (i.e., Bukovsky Regions) of the full NARCCAP domain. The basic approach to the joint Bayesian hierarchical model follows the approach of Tebaldi and Sansó (2009). We compare model results using informative (i.e., GCM information) as well as uninformative priors. We apply these results to the Water Evaluation and Planning System (WEAP) model for the Colorado Springs Utility in Colorado. We investigate the layout of the joint pdfs in the context of the water model sensitivities to ranges of temperature and precipitation results to determine the likelihoods of future climate conditions that cannot be accommodated by possible adaptation options. Comparisons may also be made with joint pdfs formed from the CMIP5 collection of global climate models and empirically downscaled to the region of interest.
NASA Astrophysics Data System (ADS)
Gershunov, A.; Guirguis, K.; Shulgina, T.; Clemesha, R.; Ralph, M.
2017-12-01
Atmospheric Rivers (ARs) contribute the lion's share of water resources for California, but can also cause flooding and draw heavily on emergency resources of state and local governments. Comprehensive probabilistic tools relating landfalling ARs to pre-existing weather/climate conditions could be useful for subseasonal forecasting, emergency preparedness and water resource management. We examine ARs targeting the Northern California coast using long-term observations of synoptic-scale circulation, high-resolution precipitation, and a seven-decade-long catalog of AR landfalls to quantify distinct orientations of landfalling ARs. Using a probabilistic approach to relate these historic events to precursor weather patterns, we identify synoptic circulation patterns that precede AR landfalls at various lead times in the range of 0-30 days. Examination of the evolution of these precursor patterns reveals subtle but important differences in the atmospheric states that lead to AR landfalls versus those that don't. Synoptic precursors can also differentiate between orientations of ARs at landfall, which produce rather different precipitation patterns over the region's complex topography. Moreover, low-frequency climate forcing appears to modulate the likelihood of AR landfalls, as well as their preferred orientations. These results provide a link between seasonal and subseasonal timescales and suggest a new approach toward extended-range prediction of land-falling atmospheric rivers and their related precipitation.
A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems
Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing
2012-01-01
An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton's method) take less CPU time in direct computation for the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires that the numerical method be efficient and robust to the initial condition or initial guess, with a better likelihood of convergence to an inhomogeneous pattern. Here, a new approach that combines the advantages of temporal schemes in robustness and Newton's method in fast convergence in solving steady states of reaction-diffusion equations is proposed. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton's method) in finding the inhomogeneous pattern. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
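The idea behind AIIE, growing the implicit Euler time step so the iteration transitions from a robust temporal scheme toward a fast Newton solve, can be sketched on a scalar reaction term (a toy analogue, not the paper's PDE implementation; the bistable reaction and the step-doubling rule are illustrative assumptions):

```python
def aiie_steady_state(f, dfdu, u0, dt0=0.1, tol=1e-10, max_steps=200):
    """Find a steady state of u' = f(u) by implicit Euler with an inexact
    Newton inner solve, enlarging the time step as the iteration stabilizes."""
    u, dt = u0, dt0
    for _ in range(max_steps):
        # Inner solve of g(v) = v - u - dt*f(v) = 0 with a few Newton iterations
        v = u
        for _ in range(5):
            g = v - u - dt * f(v)
            gp = 1.0 - dt * dfdu(v)
            v -= g / gp
        if abs(v - u) < tol:
            return v
        u = v
        dt *= 2.0   # adaptive step growth: large dt approaches a pure Newton solve
    return u

# Bistable reaction f(u) = u(1-u)(u-0.3): an initial guess above the unstable
# threshold 0.3 should converge to the nontrivial steady state u = 1
f = lambda u: u * (1.0 - u) * (u - 0.3)
df = lambda u: (1.0 - 2.0 * u) * (u - 0.3) + u * (1.0 - u)
u_star = aiie_steady_state(f, df, u0=0.5)
```

Small early steps keep the iteration on the basin of the desired (nontrivial) state, which is the robustness a plain Newton solve from a poor guess lacks.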
Public acceptance of wildland fire and fuel management: panel responses in seven locations.
Toman, Eric; Shindler, Bruce; McCaffrey, Sarah; Bennett, James
2014-09-01
Wildland fire affects both public and private resources throughout the United States. A century of fire suppression has contributed to changing ecological conditions and accumulated fuel loads. Managers have used a variety of approaches to address these conditions and reduce the likelihood of wildland fires that may result in adverse ecological impacts and threaten communities. Public acceptance is a critical component of developing and implementing successful management programs. This study examines the factors that influence citizen support for agency fuel reduction treatments over time, particularly prescribed fire and mechanical vegetation removal. This paper presents findings from a longitudinal study examining resident beliefs and attitudes regarding fire management and fuels treatments in seven states: Arizona, Colorado, Oregon, Utah, Michigan, Minnesota, and Wisconsin. The study was implemented in two phases over a 6-year period using mail surveys to residents of communities adjacent to federal lands in each location. Questions replicated measures from the original project as well as some new items to allow a more in-depth analysis of key concepts. The study design enables comparisons over time as well as between locations. We also assess the factors that influence acceptance of both prescribed fire and mechanical vegetation removal. Findings demonstrate a relative stability of attitudes toward fuels management approaches over time and suggest that this acceptance is strongly influenced by confidence in resource managers and beliefs that the treatments would result in positive outcomes.
Nasal Airway Microbiota Profile and Severe Bronchiolitis in Infants: A Case-control Study.
Hasegawa, Kohei; Linnemann, Rachel W; Mansbach, Jonathan M; Ajami, Nadim J; Espinola, Janice A; Petrosino, Joseph F; Piedra, Pedro A; Stevenson, Michelle D; Sullivan, Ashley F; Thompson, Amy D; Camargo, Carlos A
2017-11-01
Little is known about the relationship of airway microbiota with bronchiolitis in infants. We aimed to identify nasal airway microbiota profiles and to determine their association with the likelihood of bronchiolitis in infants. A case-control study was conducted. As part of a multicenter prospective study, we collected nasal airway samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 110 age-matched healthy controls. By applying 16S ribosomal RNA gene sequencing and an unbiased clustering approach to these 150 nasal samples, we identified microbiota profiles and determined the association of microbiota profiles with the likelihood of bronchiolitis. Overall, the median age was 3 months and 56% were male. Unbiased clustering of airway microbiota identified 4 distinct profiles: Moraxella-dominant profile (37%), Corynebacterium/Dolosigranulum-dominant profile (27%), Staphylococcus-dominant profile (15%) and mixed profile (20%). The proportion of bronchiolitis was lowest in infants with the Moraxella-dominant profile (14%) and highest in those with the Staphylococcus-dominant profile (57%), corresponding to an odds ratio of 7.80 (95% confidence interval, 2.64-24.9; P < 0.001). In the multivariable model, the association between the Staphylococcus-dominant profile and greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Moraxella-dominant profile, 5.16; 95% confidence interval, 1.26-22.9; P = 0.03). By contrast, the Corynebacterium/Dolosigranulum-dominant profile group had a low proportion of infants with bronchiolitis (17%); the likelihood of bronchiolitis in this group did not significantly differ from that of the Moraxella-dominant profile group in both unadjusted and adjusted analyses. In this case-control study, we identified 4 distinct nasal airway microbiota profiles in infants.
Moraxella-dominant and Corynebacterium/Dolosigranulum-dominant profiles were associated with low likelihood of bronchiolitis, while Staphylococcus-dominant profile was associated with high likelihood of bronchiolitis.
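An odds ratio of this kind, with its Wald confidence interval on the log scale, can be computed as in this sketch (the 2x2 counts are hypothetical, chosen only to roughly reproduce the reported unadjusted OR of 7.80; the study's actual counts and adjusted CI may differ):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] and its Wald 95% CI,
    computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: bronchiolitis yes/no in the Staphylococcus-dominant
# group (13/10) vs the Moraxella-dominant group (8/48)
or_, lo, hi = odds_ratio_ci(13, 10, 8, 48)
```

With these counts the point estimate is 7.8, and the interval excludes 1, matching the direction of the reported association.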
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, that account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests, such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and 2-part Beta regression models provide flexible approaches to regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
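The mean/precision parameterization that underlies Beta regression can be sketched as follows (a hypothetical illustration with made-up HRQoL scores, a fixed precision, and a grid search for the mean, not the authors' estimators):

```python
import math

def beta_loglik(y, mu, phi):
    """Log-likelihood of data y in (0, 1) under a Beta distribution
    parameterized by mean mu and precision phi:
    alpha = mu*phi, beta = (1 - mu)*phi."""
    a, b = mu * phi, (1.0 - mu) * phi
    const = math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
    return sum(const + (a - 1) * math.log(v) + (b - 1) * math.log(1 - v) for v in y)

# Hypothetical HRQoL scores strictly inside (0, 1); in a regression, mu would
# be linked to covariates instead of being a single free parameter
y = [0.62, 0.71, 0.80, 0.55, 0.90, 0.66, 0.74, 0.85]
grid = [i / 100 for i in range(40, 96)]
mu_hat = max(grid, key=lambda m: beta_loglik(y, m, phi=20.0))
```

Spikes at exactly 0 or 1 are not covered by the Beta density itself, which is why the abstract's 2-part model handles the boundary mass separately.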
NASA Astrophysics Data System (ADS)
Choi, H.; Kim, S.
2012-12-01
Most hydrologic models are used to describe and represent the spatio-temporal variability of hydrological processes at the watershed scale. Although hydrological responses clearly vary over time, optimal values of model parameters are normally treated as time-invariant constants. The recent paper of Choi and Beven (2007) presents a multi-period and multi-criteria model conditioning approach. The approach is based on the equifinality thesis within the Generalised Likelihood Uncertainty Estimation (GLUE) framework. In their application, the behavioural TOPMODEL parameter sets are determined by several performance measures for global (annual) and short (30-day) periods and clustered, using a Fuzzy C-means algorithm, into 15 types representing different hydrological conditions. Their study shows good performance in calibrating a rainfall-runoff model in a forest catchment, and also gives strong indications that it is uncommon to find model realizations that are behavioural over all multi-periods and all performance measures, and that the multi-period model conditioning approach may become an effective new tool for predicting hydrological processes in ungauged catchments. This study is a follow-up to Choi and Beven's (2007) model conditioning approach, testing how effective the approach is for predicting rainfall-runoff responses in ungauged catchments. To achieve this purpose, 6 small forest catchments were selected from the hydrological experimental catchments operated by the Korea Forest Research Institute. In each catchment, long-term hydrological time series spanning 10 to 30 years were available. The areas of the selected catchments range from 13.6 to 37.8 ha, and all are covered by coniferous or broad-leaved forests. The selected catchments are located from the southern coastal area to the northern part of South Korea.
The bedrock is granite gneiss, granite, or limestone. The study proceeds as follows. Firstly, the hydrological time series of each catchment are sampled and clustered into multi-periods with distinctly different temporal characteristics; secondly, behavioural parameter distributions are determined in each multi-period based on the specification of multi-criteria model performance measures. Finally, the behavioural parameter sets of each multi-period of a single catchment are applied to the corresponding period of the other catchments, and cross-validations are conducted in this manner for all catchments. The multi-period model conditioning approach is clearly effective in reducing the width of prediction limits, giving better model performance against the temporal variability of hydrological characteristics, and has enough potential to be an effective prediction tool for ungauged catchments. However, more advanced and continuous studies are needed to expand the application of this approach to the prediction of hydrological responses in ungauged catchments.
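The GLUE acceptance step, sampling parameter sets from priors and keeping only the behavioural ones that exceed a performance threshold, can be sketched as follows (a toy linear-reservoir model, uniform prior, and threshold chosen for illustration, not the study's TOPMODEL setup):

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is no better than the mean."""
    mean = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean) ** 2 for o in obs)
    return 1.0 - num / den

def glue_behavioural(model, obs, prior_ranges, n_samples=5000, threshold=0.6, seed=1):
    """Monte Carlo sample parameters from uniform priors; keep the sets whose
    simulation beats the behavioural threshold (the GLUE acceptance step)."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        theta = {k: rng.uniform(lo, hi) for k, (lo, hi) in prior_ranges.items()}
        if nse(obs, model(theta)) >= threshold:
            kept.append(theta)
    return kept

# Toy linear-reservoir "model": outflow q_t = k * storage, driven by hypothetical rain
rain = [0, 5, 12, 3, 0, 0, 8, 2, 0, 0]
def model(theta):
    s, out = 0.0, []
    for r in rain:
        s += r
        q = theta["k"] * s
        s -= q
        out.append(q)
    return out

obs = model({"k": 0.35})  # synthetic "observations" from a known parameter
behavioural = glue_behavioural(model, obs, {"k": (0.01, 0.99)})
```

In the multi-period variant, this acceptance step is simply repeated per period and per performance measure, yielding period-specific behavioural distributions.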
Sprajcer, Madeline; Jay, Sarah M; Vincent, Grace E; Vakulin, Andrew; Lack, Leon; Ferguson, Sally A
2018-05-11
On-call working arrangements are employed in a number of industries to manage unpredictable events, and often involve tasks that are safety- or time-critical. This study investigated the effects of call likelihood during an overnight on-call shift on self-reported pre-bed anxiety, sleep and next-day cognitive performance. A four-night laboratory-based protocol was employed, with an adaptation, a control and two counterbalanced on-call nights. On one on-call night, participants were instructed that they would definitely be called during the night, while on the other on-call night they were told they may be called. The State-Trait Anxiety Inventory form x-1 was used to investigate pre-bed anxiety, and sleep was assessed using polysomnography and power spectral analysis of the sleep electroencephalogram. Cognitive performance was assessed four times daily using a 10-min psychomotor vigilance task. Participants felt more anxious before bed when they were definitely going to be called, compared with the control and maybe conditions. Conversely, participants experienced significantly less non-rapid eye movement and stage two sleep and poorer cognitive performance when told they may be called. Further, participants had significantly more rapid eye movement sleep in the maybe condition, which may be an adaptive response to the stress associated with this on-call condition. It appears that self-reported anxiety may not be linked with sleep outcomes while on-call. However, this research indicates that it is important to take call likelihood into consideration when constructing rosters and risk-management systems for on-call workers.
Sripada, Rebecca K; Bohnert, Amy S B; Teo, Alan R; Levine, Debra S; Pfeiffer, Paul N; Bowersox, Nicholas W; Mizruchi, Mark S; Chermack, Stephen T; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia
2015-09-01
Low social support and small social network size have been associated with a variety of negative mental health outcomes, while their impact on mental health services use is less clear. To date, few studies have examined these associations in National Guard service members, where frequency of mental health problems is high, social support may come from military as well as other sources, and services use may be suboptimal. Surveys were administered to 1448 recently returned National Guard members. Multivariable regression models assessed the associations between social support characteristics, probable mental health conditions, and service utilization. In bivariate analyses, large social network size, high social network diversity, high perceived social support, and high military unit support were each associated with lower likelihood of having a probable mental health condition (p < .001). In adjusted analyses, high perceived social support (OR .90, CI .88-.92) and high unit support (OR .96, CI .94-.97) continued to be significantly associated with lower likelihood of mental health conditions. Two social support measures were associated with lower likelihood of receiving mental health services in bivariate analyses, but were not significant in adjusted models. General social support and military-specific support were robustly associated with reduced mental health symptoms in National Guard members. Policy makers, military leaders, and clinicians should attend to service members' level of support from both the community and their units and continue efforts to bolster these supports. Other strategies, such as focused outreach, may be needed to bring National Guard members with need into mental health care.
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
Ensemble Learning Method for Hidden Markov Models
2014-12-01
Ensemble HMM landmine detector: Mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil ... We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum...
DOT National Transportation Integrated Search
2012-04-01
A poroelastic model is developed that can predict stress and strain distributions and, thus, ostensibly damage likelihood in concrete under freezing conditions caused by aggregates with undesirable combinations of geometry and constitutive proper...
Influence of weather, rank, and home advantage on football outcomes in the Gulf region.
Brocherie, Franck; Girard, Olivier; Farooq, Abdulaziz; Millet, Grégoire P
2015-02-01
The objective of this study was to investigate the effects of weather, rank, and home advantage on international football match results and scores in the Gulf Cooperation Council (GCC) region. Football matches (n = 2008) in six GCC countries were analyzed. To determine the influence of weather on the likelihood of a favorable outcome and on goal difference, a generalized linear model with a logit link function and multiple regression analysis were performed. In the GCC region, home teams tend to have a greater likelihood of a favorable outcome (P < 0.001) and a higher goal difference (P < 0.001). Temperature difference was identified as a significant explanatory variable when used independently (P < 0.001) or after adjustment for home advantage and team ranking (P < 0.001). The likelihood of a favorable outcome for GCC teams increases by 3% for every 1-unit increase in temperature difference. After inclusion of an interaction with opposition, this advantage remains significant only when playing against non-GCC opponents. While home advantage increased the odds of a favorable outcome (P < 0.001) and goal difference (P < 0.001) after inclusion of the interaction term, the likelihood of a favorable outcome for a GCC team decreased (P < 0.001) when playing against a stronger opponent. Finally, temperature and the wet bulb globe temperature approximation were found to be better indicators of the effect of environmental conditions on match outcomes than absolute and relative humidity or the heat index. In the GCC region, higher temperature increased the likelihood of a favorable outcome when playing against non-GCC teams. However, international ranking should be considered because an opponent with a higher rank reduced, but did not eliminate, the likelihood of a favorable outcome.
Koffarnus, Mikhail N; Johnson, Matthew W; Thompson-Lake, Daisy G Y; Wesley, Michael J; Lohrenz, Terry; Montague, P Read; Bickel, Warren K
2016-08-01
Cocaine users have a higher incidence of risky sexual behavior and HIV infection than nonusers. Our aim was to measure whether safer sex discount rates (a measure of the likelihood of having immediate unprotected sex versus waiting to have safer sex) differed between controls and cocaine users of varying severity. Of the 162 individuals included in the primary data analyses, 69 met the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR) criteria for cocaine dependence, 29 were recreational cocaine users who did not meet the dependence criteria, and 64 were controls. Participants completed the Sexual Discounting Task, which measures a person's likelihood of using a condom when one is immediately available and how that likelihood decreases as a function of delay to condom availability, with regard to 4 participant-chosen images of hypothetical sexual partners differing in perceived desirability and likelihood of having a sexually transmitted infection. When a condom was immediately available, the stated likelihood of condom use sometimes differed between cocaine users and controls, depending on the image condition. Even after controlling for rates of condom use when one is immediately available, the cocaine-dependent and recreational user groups were more sensitive to delay to condom availability than controls. Safer sex discount rates were also related to intelligence scores. The Sexual Discounting Task identifies delay as a key variable that impacts the likelihood of using a condom among these groups and suggests that HIV prevention efforts may be differentially effective based on an individual's safer sex discount rate. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
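Discounting tasks of this kind are commonly summarized by a hyperbolic discount rate; a grid-search fit can be sketched as follows (the response data and the hyperbolic form are illustrative assumptions, not the study's analysis):

```python
def hyperbolic(delay, k):
    """Hyperbolic discounting: subjective value falls as 1/(1 + k*delay),
    so larger k means steeper devaluation of the delayed (safer) option."""
    return 1.0 / (1.0 + k * delay)

def fit_k(delays, likelihoods, grid=None):
    """Least-squares fit of the discount rate k over a grid of candidates."""
    grid = grid or [i / 1000 for i in range(1, 2001)]
    def sse(k):
        return sum((p - hyperbolic(d, k)) ** 2 for d, p in zip(delays, likelihoods))
    return min(grid, key=sse)

# Hypothetical task responses: stated likelihood (0-1) of waiting for a condom,
# at delays in hours
delays = [0, 1, 3, 6, 24, 48]
likelihoods = [1.0, 0.83, 0.62, 0.45, 0.17, 0.09]
k_hat = fit_k(delays, likelihoods)
```

A group comparison would then contrast fitted k values, with a higher k corresponding to greater sensitivity to delay to condom availability.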
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying uncertainties in hydrological modeling, with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
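The KLD comparison of prior and posterior mass can be sketched on discretized distributions (the bin probabilities below are hypothetical):

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence D(p || q) for distributions discretized
    over the same bins; zero-probability bins of p contribute nothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Discretize a prior and a posterior for one parameter over 4 shared bins
prior = [0.25, 0.25, 0.25, 0.25]       # uniform prior
posterior = [0.05, 0.15, 0.60, 0.20]   # the data concentrated the mass
shift = kld(posterior, prior)          # large shift => parameter informed by the data
```

A near-zero divergence flags a parameter whose posterior barely moved from its prior, i.e. one that is insensitive to the available data.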
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan, recorded significant spatial differences in ground motion even at sub-kilometer scales, and the site responses in the damage zone far exceeded the levels in the hazard maps. One reason for this mismatch is that such maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and are thus not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties in mapping and to propose a reliable map. The methodology is based on hierarchical Bayesian modeling of normally distributed site responses in space, where the mean (μ), the site-specific variance (σ2) and the between-sites variance (s2) are treated as unknowns with prior distributions. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov Chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the tau (= 1/s2) parameter of the CAR prior controls the estimation of μ. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We take the model likelihood as the measure of reliability and propose the maximum-likelihood model as the most reliable one. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1).
This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
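The limiting behaviour described above (the estimate tracking the site mean when σ is small and converging to the global mean when σ is large) can be sketched with a simplified normal-normal shrinkage model under the same constraint s = 1/(k×σ). This is a hypothetical reduction of the paper's CAR model, not its actual implementation; all names and values are illustrative.

```python
import numpy as np

def shrunk_site_mean(y, sigma, mu_global, k):
    """Posterior mean of one site's response under a normal-normal model
    with the constraint s = 1/(k*sigma) on the between-sites sd.
    y: observations at the site; sigma: site-specific sd (assumed known)."""
    n = len(y)
    s2 = (1.0 / (k * sigma)) ** 2        # between-sites variance from the constraint
    prec_data = n / sigma**2             # precision of the site sample mean
    prec_prior = 1.0 / s2                # precision contributed by the spatial prior
    return (prec_data * np.mean(y) + prec_prior * mu_global) / (prec_data + prec_prior)

rng = np.random.default_rng(1)
mu_global, sigma_low, sigma_high = 0.0, 0.1, 5.0
y_low = rng.normal(2.0, sigma_low, 150)    # precise site: estimate tracks its own mean
y_high = rng.normal(2.0, sigma_high, 150)  # noisy site: estimate pulled to mu_global
k = 1.0
print(shrunk_site_mean(y_low, sigma_low, mu_global, k))   # close to 2.0
print(shrunk_site_mean(y_high, sigma_high, mu_global, k)) # pulled toward 0.0
```

Note the double shrinkage effect of the constraint: a larger σ both weakens the data precision and tightens the prior, which is what drives the convergence to the model mean at high uncertainty.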
Empirical Bayes Approaches to Multivariate Fuzzy Partitions.
ERIC Educational Resources Information Center
Woodbury, Max A.; Manton, Kenneth G.
1991-01-01
An empirical Bayes maximum-likelihood estimation procedure is presented for the application of fuzzy partition models in describing high-dimensional discrete response data. The model describes individuals in terms of partial membership in multiple latent categories that represent bounded discrete spaces. (SLD)
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios in which the alternative hypothesis is assumed not to encompass close relatives. However, this approach discards important factors present in real human populations and may be very unfavourable to the defendant. In this paper, we discuss some important aspects of the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original, as several authors have previously published results accounting for close familial relationships. However, we revisit the discussion to increase awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, data relevant to Danish society suggest that the threshold of likelihood ratios should be approximately between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect.
In general, more strictly kept registries imply larger thresholds on the likelihood ratio, as the monozygotic-twin explanation becomes less probable. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
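The truncation described above follows from simple arithmetic: if p is the probability that the true source is an unrecognised monozygotic co-twin of the suspect, the likelihood ratio is effectively capped near 1/p, no matter how many loci are typed. A hedged sketch, with illustrative input values chosen only to show the arithmetic (they are not the paper's Danish figures):

```python
def lr_twin_cap(p_mz, p_unrecognised):
    """Rough cap on the reportable likelihood ratio once an unrecognised
    monozygotic (MZ) twin is admitted as an alternative source.
    p_mz: probability that a random person has a living MZ co-twin;
    p_unrecognised: probability that such a twin escapes recognition.
    Both inputs below are hypothetical illustration values."""
    return 1.0 / (p_mz * p_unrecognised)

# e.g. a ~0.4% MZ-twin prevalence combined with a 1-in-600 chance that
# the co-twin goes unrecognised caps the LR on the order of 10^5.
cap = lr_twin_cap(0.004, 1 / 600)
print(round(cap))  # -> 150000
```

The abstract's point that stricter registries raise the threshold corresponds to a smaller p_unrecognised, and hence a larger cap.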
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODEs). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and their associated parameters are highly uncertain, or even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus of model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with minimal effort. In this work, we illustrate how to use profile likelihood samples to quantify the individual contribution of parameter uncertainty to prediction uncertainty. For this uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in the photosynthetic organism D. salina can be rendered identifiable by additional experiments with new readouts.
With data and profile likelihood samples at hand, the proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way to determine the individual contributions of parameter uncertainties to uncertainties in model predictions. Quantifying the uncertainty of specific model predictions allows identifying regions where predictions must be treated with care. Such uncertain regions can be used for rational experimental design to turn initially highly uncertain model predictions into reliable ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and for the parameter sensitivities of the specific prediction.
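The idea of propagating profile likelihood samples through the model to apportion prediction uncertainty can be sketched as below. This is a hypothetical simplification, not the paper's PLS definition: it uses the standard deviation of predictions along each parameter's profile as a stand-in sensitivity index and the Shannon entropy of the normalised contributions as the stand-in PLS entropy.

```python
import numpy as np

def pls_contributions(profiles, predict):
    """Apportion prediction uncertainty among parameters.
    profiles: dict mapping a parameter name to a list of full parameter
    vectors sampled along that parameter's likelihood profile.
    predict: model prediction function taking one parameter vector."""
    spread = {name: float(np.std([predict(theta) for theta in ps]))
              for name, ps in profiles.items()}
    total = sum(spread.values())
    frac = np.array([v / total for v in spread.values()])
    # Low entropy: one parameter dominates the prediction uncertainty,
    # so a new readout constraining that parameter is most informative.
    entropy = float(-np.sum(frac * np.log(frac + 1e-12)))
    return spread, entropy

# Toy model: the prediction depends strongly on k1, weakly on k2.
predict = lambda theta: 3.0 * theta[0] + 0.1 * theta[1]
profiles = {
    "k1": [np.array([v, 1.0]) for v in np.linspace(0.5, 1.5, 21)],
    "k2": [np.array([1.0, v]) for v in np.linspace(0.5, 1.5, 21)],
}
spread, entropy = pls_contributions(profiles, predict)
print(spread["k1"] > spread["k2"])  # k1 drives the prediction uncertainty
```

In this toy case the contributions are lopsided, so the entropy is far below its maximum (log of the number of parameters), mirroring the paper's use of the entropy to flag dominant uncertainty sources.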
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences, ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10-species subset. Up to five-fold parallel performance gains over serial execution were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution or for the development of new methods in the field. 
The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
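The phylogeny-based likelihood calculations such toolkits optimise reduce, at their core, to summing over unobserved ancestral states under a substitution model (Felsenstein's pruning algorithm). The sketch below is a generic, minimal illustration under the Jukes-Cantor model on a two-leaf tree; it is emphatically not the PyEvolve API, and all names in it are hypothetical.

```python
import numpy as np

def jc69(t):
    """Jukes-Cantor (JC69) transition probability matrix for a branch of
    length t expected substitutions per site."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def site_likelihood(obs_a, obs_b, t_a, t_b):
    """Likelihood of one alignment column on a two-leaf tree, summing
    over the unobserved root state (uniform 1/4 root prior under JC69)."""
    bases = "ACGT"
    pa, pb = jc69(t_a), jc69(t_b)
    ia, ib = bases.index(obs_a), bases.index(obs_b)
    return float(sum(0.25 * pa[r, ia] * pb[r, ib] for r in range(4)))

# On short branches, identical bases are more likely than a mismatch;
# a full tree likelihood multiplies such terms across alignment columns.
print(site_likelihood("A", "A", 0.1, 0.1) > site_likelihood("A", "C", 0.1, 0.1))
```

Parameter-rich models of the kind the abstract benchmarks (dinucleotide or codon models with extra rate parameters such as a CpG term) replace the 4x4 JC69 matrix with larger rate matrices, which is why their optimisation is so much more expensive.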