Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of the locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, while pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, because of the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometrically regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study a quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameters in spatial point process models. Then, using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
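The pairwise composite likelihood that the dissertation builds on can be illustrated with a short sketch. The Python snippet below fits the clustering parameters of a stationary Thomas process by maximizing a second-order pairwise composite likelihood over all point pairs closer than a tuning distance R; the simulated data, the unit-square window, the value of R, and the neglect of edge corrections are illustrative assumptions, and the dissertation's improved estimating-function approach (which additionally accounts for correlation among the pairwise contributions) is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def simulate_thomas(kappa, mu, sigma, rng):
    """Simulate a Thomas cluster process on the unit square."""
    parents = rng.uniform(0, 1, size=(rng.poisson(kappa), 2))
    pts = np.vstack([p + rng.normal(0, sigma, size=(rng.poisson(mu), 2))
                     for p in parents])
    return pts[np.all((pts >= 0) & (pts <= 1), axis=1)]

def pcf_thomas(r, kappa, sigma):
    """Pair correlation function of the stationary Thomas process."""
    return 1.0 + np.exp(-r**2 / (4 * sigma**2)) / (4 * np.pi * kappa * sigma**2)

def neg_composite_loglik(theta, d, lam, R):
    """Negative pairwise composite log-likelihood over pairs closer than R
    (edge corrections ignored for simplicity)."""
    kappa, sigma = np.exp(theta)
    # analytic value of the normalising integral of lam^2 * g(r) * 2*pi*r over (0, R)
    norm = lam**2 * (np.pi * R**2 + (1 - np.exp(-R**2 / (4 * sigma**2))) / kappa)
    return -(np.sum(np.log(lam**2 * pcf_thomas(d, kappa, sigma)))
             - d.size * np.log(norm))

pts = simulate_thomas(kappa=25, mu=8, sigma=0.03, rng=rng)
lam_hat = len(pts)                       # intensity estimate on the unit square
R = 0.1                                  # tuning distance (an assumption)
d_all = pdist(pts)
d = d_all[d_all < R]

fit = minimize(neg_composite_loglik, x0=np.log([10.0, 0.05]),
               args=(d, lam_hat, R), method="Nelder-Mead")
print("estimated (kappa, sigma):", np.exp(fit.x))
```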
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
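As a rough illustration of the kind of pairwise composite likelihood described above, the sketch below maximizes a pairwise log-likelihood for uncensored event times under an FGM copula, with the dependence parameter modeled as a decreasing function of geographic distance. The exponential margins, the link alpha(d) = exp(-gamma d), the absence of censoring, and the synthetic data are assumptions for illustration only, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
n = 80
coords = rng.uniform(0, 10, size=(n, 2))
times = rng.exponential(scale=2.0, size=n)      # synthetic event times
dist = squareform(pdist(coords))
iu, ju = np.triu_indices(n, k=1)                # all unordered pairs

def neg_pairwise_cl(params):
    rate, gamma = np.exp(params)
    u = 1.0 - np.exp(-rate * times)             # exponential CDF values
    log_f = np.log(rate) - rate * times         # exponential log-density
    alpha = np.exp(-gamma * dist[iu, ju])       # distance-decaying FGM dependence
    # FGM copula density: c(u, v) = 1 + alpha * (1 - 2u) * (1 - 2v)
    c = 1.0 + alpha * (1.0 - 2.0 * u[iu]) * (1.0 - 2.0 * u[ju])
    return -np.sum(log_f[iu] + log_f[ju] + np.log(c))

fit = minimize(neg_pairwise_cl, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
rate_hat, gamma_hat = np.exp(fit.x)
print("rate_hat =", rate_hat, "gamma_hat =", gamma_hat)
```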
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
Nagel, Thomas; Kelly, Daniel J
2013-04-01
The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state, indicating a possible predisposition for osteoarthritic changes. These negative alterations were alleviated by the implantation of tissue-engineered cartilage, where a mature implant was predicted to result in the formation of a more native-like collagen architecture than immature implants. The results of this study highlight the importance of cartilage graft functionality to maintain and/or re-establish joint function and suggest that engineering a tissue with a native depth-dependent composition may facilitate the establishment of a normal Benninghoff collagen architecture after implantation into load-bearing defects.
Basu, Rajit K; Wong, Hector R; Krawczeski, Catherine D; Wheeler, Derek S; Manning, Peter B; Chawla, Lakhmir S; Devarajan, Prasad; Goldstein, Stuart L
2014-12-30
Increases in serum creatinine (ΔSCr) from baseline signify acute kidney injury (AKI) but offer little granular information regarding its characteristics. The 10th Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) suggested that combining AKI biomarkers would provide better precision for AKI course prognostication. This study investigated the value of combining a functional damage biomarker (plasma cystatin C [pCysC]) with a tubular damage biomarker (urine neutrophil gelatinase-associated lipocalin [uNGAL]), forming a composite biomarker for prediction of discrete characteristics of AKI. Data from 345 children after cardiopulmonary bypass (CPB) were analyzed. Severe AKI was defined as Kidney Disease: Improving Global Outcomes (KDIGO) stages 2 to 3 (≥100% ΔSCr) within 7 days of CPB. Persistent AKI lasted >2 days. SCr in reversible AKI returned to baseline ≤48 h after CPB. The composite of uNGAL (>200 ng/mg urine Cr = positive [+]) and pCysC (>0.8 mg/l = positive [+]), uNGAL+/pCysC+, measured 2 h after CPB initiation, was compared with ΔSCr increases of ≥50% for correlation with AKI characteristics by using predictive probabilities, likelihood ratios (LR), and area under the receiver operating characteristic curve (AUC-ROC) values. Severe AKI occurred in 18% of patients. The composite uNGAL+/pCysC+ demonstrated a greater likelihood than ΔSCr for severe AKI (+LR: 34.2 [13.0:94.0] vs. 3.8 [1.9:7.2]) and persistent AKI (+LR: 15.6 [8.8:27.5] vs. 4.5 [2.3:8.8]). In AKI patients, the uNGAL-/pCysC+ composite was superior to ΔSCr for prediction of transient AKI. Biomarker composites carried greater probability for specific outcomes than ΔSCr strata. Composites of functional and tubular damage biomarkers are superior to ΔSCr for predicting discrete characteristics of AKI. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits
Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling
2007-01-01
Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated as a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and the test of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and to compare its advantage over functional mapping in separating multiple linked QTL. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty in the parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), the Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of the residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary for different likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. The posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, and L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on parameter sensitivity. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable parameter estimates due to violation of the residual error assumptions. Thus, likelihood function L7 provides a credible posterior distribution of the model parameters and can therefore be employed for further applications.
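As an illustration of the kind of formal likelihood function described above for L7, the sketch below evaluates an exact Gaussian log-likelihood for model residuals with first-order autoregressive (AR(1)) dependence. The function and the synthetic residuals are assumptions for illustration; the study's actual implementation inside DREAM(ZS) may differ in detail.

```python
import numpy as np

def ar1_log_likelihood(residuals, phi, sigma):
    """Exact Gaussian AR(1) log-likelihood of a residual series."""
    e = np.asarray(residuals, dtype=float)
    # stationary distribution of the first residual
    var1 = sigma**2 / (1.0 - phi**2)
    ll = -0.5 * (np.log(2 * np.pi * var1) + e[0]**2 / var1)
    innov = e[1:] - phi * e[:-1]                  # one-step-ahead innovations
    ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2))
    return ll

# usage with synthetic AR(1) residuals
rng = np.random.default_rng(42)
T, phi_true, sigma_true = 500, 0.6, 1.0
e = np.zeros(T)
for t in range(1, T):
    e[t] = phi_true * e[t - 1] + rng.normal(0, sigma_true)
print(ar1_log_likelihood(e, phi=0.6, sigma=1.0))
```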
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.
1995-01-01
A recently developed micromechanical theory for the thermoelastic response of functionally graded composites with nonuniform fiber spacing in the through-thickness direction is further extended to enable analysis of material architectures characterized by arbitrarily nonuniform fiber spacing in two directions. In contrast to currently employed micromechanical approaches applied to functionally graded materials, which decouple the local and global effects by assuming the existence of a representative volume element at every point within the composite, the new theory explicitly couples the local and global effects. The analytical development is based on volumetric averaging of the various field quantities, together with imposition of boundary and interfacial conditions in an average sense. Results are presented that illustrate the capability of the derived theory to capture local stress gradients at the free edge of a laminated composite plate due to the application of a uniform temperature change. It is further shown that it is possible to reduce the magnitude of these stress concentrations by a proper management of the microstructure of the composite plies near the free edge. Thus by an appropriate tailoring of the microstructure it is possible to reduce or prevent the likelihood of delamination at free edges of standard composite laminates.
Chemical communication, sexual selection, and introgression in wall lizards.
MacGregor, Hannah E A; Lewandowsky, Rachel A M; d'Ettorre, Patrizia; Leroy, Chloé; Davies, Noel W; While, Geoffrey M; Uller, Tobias
2017-10-01
Divergence in communication systems should influence the likelihood that individuals from different lineages interbreed, and consequently shape the direction and rate of hybridization. Here, we studied the role of chemical communication in hybridization, and its contribution to asymmetric and sexually selected introgression between two lineages of the common wall lizard (Podarcis muralis). Males of the two lineages differed in the chemical composition of their femoral secretions. Chemical profiles provided information regarding male secondary sexual characters, but the associations were variable and inconsistent between lineages. In experimental contact zones, chemical composition was weakly associated with male reproductive success, and did not predict the likelihood of hybridization. Consistent with these results, introgression of chemical profiles in a natural hybrid zone resembled that of neutral nuclear genetic markers overall, but one compound in particular (tocopherol methyl ether) matched closely the introgression of visual sexual characters. These results imply that associations among male chemical profiles, sexual characters, and reproductive success largely reflect transient and environmentally driven effects, and that genetic divergence in chemical composition is largely neutral. We therefore suggest that femoral secretions in wall lizards primarily provide information about residency and individual identity rather than function as sexual signals. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
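The numerical idea described above, cancelling the exponential elements so the marginal Fisher likelihood can be evaluated anywhere in the parameter space, can be sketched as follows. The density used is f(I) = [kappa/(2 sinh kappa)] cos(I) exp(kappa sin I sin I0) I0(kappa cos I cos I0), and the exponentially scaled Bessel function i0e keeps every term in a numerically safe range. The grid search and the synthetic inclinations below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.special import i0e

def incl_only_loglik(inc_deg, inc0_deg, kappa):
    """Marginal Fisher log-likelihood of inclination-only data (degrees).
    f(I) = [kappa / (2 sinh kappa)] * cos(I) * exp(kappa sin I sin I0)
           * I0(kappa cos I cos I0)."""
    I = np.radians(np.asarray(inc_deg, dtype=float))
    I0 = np.radians(inc0_deg)
    z = kappa * np.cos(I) * np.cos(I0)
    # log[kappa / (2 sinh kappa)] = log(kappa) - kappa - log(1 - exp(-2 kappa))
    log_norm = np.log(kappa) - kappa - np.log1p(-np.exp(-2.0 * kappa))
    # the -kappa above cancels against kappa*(sin I sin I0 + cos I cos I0) <= kappa,
    # and i0e is exponentially scaled, so nothing overflows even for large kappa
    return np.sum(log_norm + np.log(np.cos(I))
                  + kappa * np.sin(I) * np.sin(I0)
                  + z + np.log(i0e(z)))

# usage: brute-force profile over a grid of (mean inclination, precision parameter)
rng = np.random.default_rng(7)
data = np.clip(rng.normal(72.0, 6.0, size=30), -89.0, 89.0)   # synthetic steep data
grid_I0 = np.linspace(40.0, 89.0, 99)
grid_k = np.linspace(5.0, 200.0, 196)
ll = np.array([[incl_only_loglik(data, i0_, k) for k in grid_k] for i0_ in grid_I0])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print("I0_hat =", grid_I0[i], "kappa_hat =", grid_k[j])
```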
Krishnan, Neeraja M; Seligmann, Hervé; Stewart, Caro-Beth; De Koning, A P Jason; Pollock, David D
2004-10-01
Reconstruction of ancestral DNA and amino acid sequences is an important means of inferring information about past evolutionary events. Such reconstructions suggest changes in molecular function and evolutionary processes over the course of evolution and are used to infer adaptation and convergence. Maximum likelihood (ML) is generally thought to provide relatively accurate reconstructed sequences compared to parsimony, but both methods lead to the inference of multiple directional changes in nucleotide frequencies in primate mitochondrial DNA (mtDNA). To better understand this surprising result, as well as to better understand how parsimony and ML differ, we constructed a series of computationally simple "conditional pathway" methods that differed in the number of substitutions allowed per site along each branch, and we also evaluated the entire Bayesian posterior frequency distribution of reconstructed ancestral states. We analyzed primate mitochondrial cytochrome b (Cyt-b) and cytochrome oxidase subunit I (COI) genes and found that ML reconstructs ancestral frequencies that are often more different from tip sequences than are parsimony reconstructions. In contrast, frequency reconstructions based on the posterior ensemble more closely resemble extant nucleotide frequencies. Simulations indicate that these differences in ancestral sequence inference are probably due to deterministic bias caused by high uncertainty in the optimization-based ancestral reconstruction methods (parsimony, ML, Bayesian maximum a posteriori). In contrast, ancestral nucleotide frequencies based on an average of the Bayesian set of credible ancestral sequences are much less biased. The methods involving simpler conditional pathway calculations have slightly reduced likelihood values compared to full likelihood calculations, but they can provide fairly unbiased nucleotide reconstructions and may be useful in more complex phylogenetic analyses than considered here due to their speed and flexibility. To determine whether biased reconstructions using optimization methods might affect inferences of functional properties, ancestral primate mitochondrial tRNA sequences were inferred and helix-forming propensities for conserved pairs were evaluated in silico. For ambiguously reconstructed nucleotides at sites with high base composition variability, ancestral tRNA sequences from Bayesian analyses were more compatible with canonical base pairing than were those inferred by other methods. Thus, nucleotide bias in reconstructed sequences apparently can lead to serious bias and inaccuracies in functional predictions.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
Fowler, Patrick J; Henry, David B; Marcal, Katherine E
2015-09-01
This study investigated the longitudinal effects of family structure changes and housing instability in adolescence on functioning in the transition to adulthood. A model examined the influence of household composition changes and mobility in context of ethnic differences and sociodemographic risks. Data from the National Longitudinal Study of Adolescent Health measured household and residential changes over a 12-month period among a nationally representative sample of adolescents. Assessments in young adulthood measured rates of depression, criminal activity, and smoking. Findings suggested housing mobility in adolescence predicted poorer functioning across outcomes in young adulthood, and youth living in multigenerational homes exhibited greater likelihood to be arrested than adolescents in single-generation homes. However, neither family structure changes nor its interaction with residential instability or ethnicity related to young adult outcomes. Findings emphasized the unique influence of housing mobility in the context of dynamic household compositions. Copyright © 2015 Elsevier Inc. All rights reserved.
Doyle, Caoilainn; Smeaton, Alan F.; Roche, Richard A. P.; Boran, Lorraine
2018-01-01
To elucidate the core executive function profile (strengths and weaknesses in inhibition, updating, and switching) associated with dyslexia, this study explored executive function in 27 children with dyslexia and 29 age matched controls using sensitive z-mean measures of each ability and controlled for individual differences in processing speed. This study found that developmental dyslexia is associated with inhibition and updating, but not switching impairments, at the error z-mean composite level, whilst controlling for processing speed. Inhibition and updating (but not switching) error composites predicted both dyslexia likelihood and reading ability across the full range of variation from typical to atypical. The predictive relationships were such that those with poorer performance on inhibition and updating measures were significantly more likely to have a diagnosis of developmental dyslexia and also demonstrate poorer reading ability. These findings suggest that inhibition and updating abilities are associated with developmental dyslexia and predict reading ability. Future studies should explore executive function training as an intervention for children with dyslexia as core executive functions appear to be modifiable with training and may transfer to improved reading ability. PMID:29892245
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
Duke, Sean R; Martin, Steve E; Gaul, Catherine A
2017-10-01
The purpose of this study was to determine the relationship between Functional Movement Screen (FMS) score and the risk of time-loss injury in experienced male rugby union athletes. A secondary purpose was to determine the relationship between FMS-determined asymmetries and the risk of time-loss injury in these athletes. Functional Movement Screen scores were collected from male rugby union athletes (n = 73) during preseason and half-way through one 8-month season. Time-loss injury data were collected throughout the full season. A receiver operating characteristic curve was created for each half of the season to identify FMS composite and asymmetry cut-off scores associated with increased likelihood of injury, and odds ratios, sensitivity, and specificity were determined to evaluate the FMS as a predictor of injury risk. Odds ratio analyses revealed that, compared with those scoring >14, athletes with an FMS ≤14 were 10.42 times more likely (95% confidence interval [CI]: 1.28-84.75, p = 0.007) to have sustained injury in the first half of the season and 4.97 times (95% CI: 1.02-24.19, p = 0.029) more likely in the second half of the season. The presence of asymmetries was not associated with an increased likelihood of injury. Experienced male rugby union athletes with FMS composite scores ≤14 are significantly more likely to sustain a time-loss injury in a competitive season than those scoring >14. The quality of fundamental movement, as assessed by the FMS, is predictive of time-loss injury risk in experienced rugby union athletes and should be considered an important preseason assessment tool for strength and conditioning and medical professionals in this sport with inherently high injury rates.
Bow, E J
2013-03-01
The success of modern anticancer treatment is a composite function of enhanced efficacy of surgical, radiation and systemic treatment strategies and of our collective clinical abilities in supporting patients through the perils of their cancer journeys. Despite the widespread availability of antibacterial therapies, the threat of community- or healthcare facility-acquired bacterial infection remains a constant risk to patients during this journey. The rising prevalence of colonization by multidrug-resistant (MDR) bacteria in the population, acquired through exposure from endemic environments, antimicrobial stewardship and infection prevention and control strategies notwithstanding, increases the likelihood that such organisms may be the cause of cancer treatment-related infection and the likelihood of antibacterial treatment failure. The high mortality associated with invasive MDR bacterial infection increases the likelihood that many patients may not survive long enough to reap the benefits of enhanced anticancer treatments, thus threatening the societal investment in the cancer journey. Since cancer care providers arguably no longer have, and are unlikely to have in the foreseeable future, the antibacterial tools to reliably rescue patients from harm's way, the difficult ethical debate over the risks and benefits of anticancer treatments must now be reopened.
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and the mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
Annexin A9 (ANXA9) biomarker and therapeutic target in epithelial cancer
Hu, Zhi [El Cerrito, CA; Kuo, Wen-Lin [San Ramon, CA; Neve, Richard M [San Mateo, CA; Gray, Joe W [San Francisco, CA
2012-06-12
Amplification of the ANXA9 gene in human chromosomal region 1q21 in epithelial cancers indicates a likelihood of both in vivo drug resistance and metastasis, and serves as a biomarker indicating these aspects of the disease. ANXA9 can also serve as a therapeutic target. Interfering RNAs (iRNAs) (such as siRNA and miRNA) and shRNA adapted to inhibit ANXA9 expression, when formulated in a therapeutic composition, and delivered to cells of the tumor, function to treat the epithelial cancer.
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
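A minimal sketch of the procedure described above, for a straight-line fit with one common additive Type B correction factor treated as a nuisance parameter and eliminated by the profile likelihood, might look as follows. The known values of the statistical standard deviation and the Type B standard uncertainty, and the synthetic data, are assumptions for illustration, not the paper's case study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 12)
sigma, u_B = 0.3, 0.5                         # statistical and Type B std. deviations
delta_true = rng.normal(0, u_B)               # one common (systematic) offset
y = 1.0 + 0.8 * x + delta_true + rng.normal(0, sigma, size=x.size)

def profile_neg_loglik(beta):
    """Extended negative log-likelihood with the common offset delta profiled out."""
    a, b = beta
    r = y - (a + b * x)
    # maximising the extended likelihood over delta has a closed form
    delta_hat = u_B**2 * r.sum() / (x.size * u_B**2 + sigma**2)
    return (np.sum((r - delta_hat)**2) / (2 * sigma**2)
            + delta_hat**2 / (2 * u_B**2))

fit = minimize(profile_neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print("intercept, slope =", fit.x)
```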
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently in use employ Gaussian processes as a likelihood function, because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
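The idea of learning a semiparametric error distribution from past errors and then using a Gaussian copula as the likelihood can be sketched roughly as below. The kernel-density marginal, the lag-1 (Markov) copula construction, and the synthetic error series are assumptions for illustration and are simpler than the full method.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm, rankdata

def build_copula_loglik(past_errors):
    """Return a log-likelihood function for new error series, learned from past errors."""
    past = np.asarray(past_errors, dtype=float)
    past_sorted = np.sort(past)
    kde = gaussian_kde(past)                      # semiparametric marginal density
    # lag-1 correlation of the past errors' normal scores (time order preserved)
    zp = norm.ppf(rankdata(past) / (past.size + 1.0))
    rho = np.corrcoef(zp[:-1], zp[1:])[0, 1]

    def loglik(errors):
        e = np.asarray(errors, dtype=float)
        # normal-score transform through the empirical CDF of the past errors
        u = np.clip(np.searchsorted(past_sorted, e) / (past.size + 1.0),
                    1e-4, 1 - 1e-4)
        z = norm.ppf(u)
        # marginal part + Gaussian copula (Markov, lag-1) part
        ll = np.sum(np.log(kde(e)))
        pair = -0.5 * (np.log(1 - rho**2)
                       + (rho**2 * (z[1:]**2 + z[:-1]**2)
                          - 2 * rho * z[1:] * z[:-1]) / (1 - rho**2))
        return ll + np.sum(pair)

    return loglik

# usage with synthetic heteroscedastic, autocorrelated errors
rng = np.random.default_rng(5)
past = np.cumsum(rng.normal(0, 0.2, 300)) * 0.1 + rng.normal(0, 0.5, 300)
loglik = build_copula_loglik(past)
print(loglik(past[:50]))
```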
Identification of complex stiffness tensor from waveform reconstruction
NASA Astrophysics Data System (ADS)
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-03-01
An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high level of noise in waveforms is improved and the method can be applied to a very thin specimen. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different anisotropy and damping degrees. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description considering viscosity. Experimental characterizations show then the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
Tailly, Thomas; Larish, Yaniv; Nadeau, Brandon; Violette, Philippe; Glickman, Leonard; Olvera-Posada, Daniel; Alenezi, Husain; Amann, Justin; Denstedt, John; Razvi, Hassan
2016-04-01
The mineral composition of a urinary stone may influence its surgical and medical treatment. Previous attempts at identifying stone composition based on mean Hounsfield units (HUm) have had varied success. We aimed to evaluate the additional use of the standard deviation of HU (HUsd) to more accurately predict stone composition. We identified patients from two centers who had undergone urinary stone treatment between 2006 and 2013 and had mineral stone analysis and a computed tomography (CT) scan available. HUm and HUsd of the stones were compared with ANOVA. Receiver operating characteristic analysis with area under the curve (AUC), Youden index, and likelihood ratio calculations were performed. Data were available for 466 patients. The major components were calcium oxalate monohydrate (COM), uric acid, hydroxyapatite, struvite, brushite, cystine, and calcium oxalate dihydrate (COD) in 41.4%, 19.3%, 12.4%, 7.5%, 5.8%, 5.4%, and 4.7% of patients, respectively. The HUm of uric acid and brushite stones was significantly lower and higher, respectively, than the HUm of any other stone type. HUm and HUsd were most accurate in predicting uric acid, with an AUC of 0.969 and 0.851, respectively. The combined use of HUm and HUsd resulted in an increased positive predictive value and higher likelihood ratios for identifying a stone's mineral composition for all stone types but COM. To the best of our knowledge, this is the first report of CT data aiding in the prediction of brushite stone composition. Both HUm and HUsd can help predict stone composition, and their combined use results in higher likelihood ratios influencing probability.
Development of the Average Likelihood Function for Code Division Multiple Access (CDMA) Using BPSK and QPSK Symbols
2015-01-01
This research aims to establish a foundation for new classification and estimation of DS/CDMA signals using BPSK and QPSK symbols, covering the reporting period October 2013 to October 2014. Keywords: DS/CDMA signals, BPSK, QPSK.
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by their bias and MSE values computed in R. The results are then displayed in tables to facilitate the comparisons.
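For reference, the two estimators compared here can be written down directly for the Rayleigh scale parameter: the MLE is sqrt(sum(x_i^2)/(2n)), and under Jeffreys' prior the posterior of 1/sigma^2 is Gamma(n, sum(x_i^2)/2), which gives a closed-form posterior mean for sigma. The sketch below compares the bias and MSE of the MLE and of the Bayes estimator under squared-error loss by simulation; the precautionary, entropy, and L1 loss functions named above are not reproduced, and the simulation settings are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

def mle_sigma(x):
    # Rayleigh MLE: sigma_hat = sqrt( sum(x^2) / (2n) )
    return np.sqrt(np.sum(x**2) / (2 * x.size))

def bayes_sigma_jeffreys(x):
    # With prior pi(sigma) ~ 1/sigma, 1/sigma^2 | x ~ Gamma(n, rate = sum(x^2)/2),
    # so the posterior mean of sigma is sqrt(T/2) * Gamma(n - 1/2) / Gamma(n).
    n, T = x.size, np.sum(x**2)
    return np.sqrt(T / 2) * np.exp(gammaln(n - 0.5) - gammaln(n))

rng = np.random.default_rng(11)
sigma_true, n, reps = 2.0, 20, 5000
est = np.array([(mle_sigma(x), bayes_sigma_jeffreys(x))
                for x in (rng.rayleigh(sigma_true, n) for _ in range(reps))])
bias = est.mean(axis=0) - sigma_true
mse = ((est - sigma_true) ** 2).mean(axis=0)
print("bias (MLE, Bayes):", bias)
print("MSE  (MLE, Bayes):", mse)
```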
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty
A likelihood method for measuring the ultrahigh energy cosmic ray composition
NASA Astrophysics Data System (ADS)
High Resolution Fly'S Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Belov, K.; Blake, S. A.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2006-08-01
Air fluorescence detectors traditionally determine the dominant chemical composition of the ultrahigh energy cosmic ray flux by comparing the averaged slant depth of the shower maximum, Xmax, as a function of energy to the slant depths expected for various hypothesized primaries. In this paper, we present a method to make a direct measurement of the expected mean number of protons and iron by comparing the shapes of the expected Xmax distributions to the distribution for data. The advantages of this method include the use of information from the full distribution and its ability to calculate a flux for various cosmic ray compositions. The same method can be expanded to marginalize uncertainties due to choice of spectra, hadronic models and atmospheric parameters. We demonstrate the technique with independent simulated data samples from a parent sample of protons and iron. We accurately predict the number of protons and iron in the parent sample and show that the uncertainties are meaningful.
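A stripped-down version of this likelihood comparison of Xmax shapes can be sketched as follows: the observed Xmax values are modeled as a two-component mixture of proton and iron template distributions, and the proton fraction is fitted by maximum likelihood. The Gaussian templates and their parameters below are crude stand-ins for distributions that would really come from air-shower simulation, so everything here is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(8)

# illustrative template parameters (g/cm^2) at a fixed energy
MU_P, SD_P = 750.0, 65.0       # protons: deeper, broader Xmax
MU_FE, SD_FE = 650.0, 35.0     # iron: shallower, narrower Xmax

# fake "data": a 60/40 proton/iron mixture
n_events = 400
is_proton = rng.random(n_events) < 0.6
xmax = np.where(is_proton, rng.normal(MU_P, SD_P, n_events),
                           rng.normal(MU_FE, SD_FE, n_events))

def neg_loglik(p):
    """Negative log-likelihood of the observed Xmax under a proton fraction p."""
    mix = p * norm.pdf(xmax, MU_P, SD_P) + (1 - p) * norm.pdf(xmax, MU_FE, SD_FE)
    return -np.sum(np.log(mix))

fit = minimize_scalar(neg_loglik, bounds=(0.0, 1.0), method="bounded")
p_hat = fit.x
print(f"estimated proton fraction: {p_hat:.3f}"
      f"  -> expected protons: {p_hat * n_events:.1f} of {n_events}")
```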
Closed-loop carrier phase synchronization techniques motivated by likelihood functions
NASA Technical Reports Server (NTRS)
Tsou, H.; Hinedi, S.; Simon, M.
1994-01-01
This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
Ryu, Jihye; Lee, Chaeyoung
2014-12-01
Positive selection not only increases beneficial allele frequency but also causes augmentation of allele frequencies of sequence variants in close proximity. Signals for positive selection were detected by the statistical differences in subsequent allele frequencies. To identify selection signatures in Korean cattle, we applied a composite log-likelihood (CLL)-based method, which calculates a composite likelihood of the allelic frequencies observed across sliding windows of five adjacent loci and compares the value with the critical statistic estimated by 50,000 permutations. Data for a total of 11,799 nucleotide polymorphisms were used with 71 Korean cattle and 209 foreign beef cattle. As a result, 147 signals were identified for Korean cattle based on CLL estimates (P < 0.01). The signals might be candidate genetic factors for meat quality by which the Korean cattle have been selected. Further genetic association analysis with 41 intragenic variants in the selection signatures with the greatest CLL for each chromosome revealed that marbling score was associated with five variants. Intensive association studies with all the selection signatures identified in this study are required to exclude signals associated with other phenotypes or signals falsely detected and thus to identify genetic markers for meat quality. © 2014 Stichting International Foundation for Animal Genetics.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Linking Executive Function and Peer Problems from Early Childhood Through Middle Adolescence.
Holmes, Christopher J; Kim-Spoon, Jungmeen; Deater-Deckard, Kirby
2016-01-01
Peer interactions and executive function play central roles in the development of healthy children, as peer problems have been indicative of lower cognitive competencies such as self-regulatory behavior, and poor executive function has been indicative of problem behaviors and social dysfunction. However, few studies have focused on the relation between peer interactions and executive function and the underlying mechanisms that may create this link. Using a national sample (n = 1164, 48.6% female) from the Study of Early Child Care and Youth Development (SECCYD), we analyzed executive function and peer problems (including victimization and rejection) across three waves within each domain (executive function or peer problems), beginning in early childhood and ending in middle adolescence. Executive function was measured as a multi-method, multi-informant composite including reports from parents on the Children's Behavior Questionnaire and Child Behavior Checklist and child's performance on behavioral tasks including the Continuous Performance Task, Woodcock-Johnson, Tower of Hanoi, Operation Span Task, Stroop, and Tower of London. Peer problems were measured as a multi-informant composite including self, teacher, and afterschool caregiver reports on multiple peer-relationship scales. Using a cross-lagged design, our Structural Equation Modeling findings suggested that experiencing peer problems contributed to lower executive function later in childhood and better executive function reduced the likelihood of experiencing peer problems later in childhood and middle adolescence, although these relations weakened as children moved into adolescence. The results highlight that peer relationships are involved in the development of strengths and deficits in executive function and vice versa.
Linking Executive Function and Peer Problems from Early Childhood through Middle Adolescence
Holmes, Christopher J.; Kim-Spoon, Jungmeen; Deater-Deckard, Kirby
2015-01-01
Peer interactions and executive function play central roles in the development of healthy children, as peer problems have been indicative of lower cognitive competencies such as self-regulatory behavior, and poor executive function has been indicative of problem behaviors and social dysfunction. However, few studies have focused on the relation between peer interactions and executive function and the underlying mechanisms that may create this link. Using a national sample (n = 1,164, 48.6% female) from the Study of Early Child Care and Youth Development (SECCYD), we analyzed executive function and peer problems (including victimization and rejection) across three waves within each domain (executive function or peer problems), beginning in early childhood and ending in middle adolescence. Executive function was measured as a multi-method, multi-informant composite including reports from parents on the Children’s Behavior Questionnaire and Child Behavior Checklist and child’s performance on behavioral tasks including the Continuous Performance Task, Woodcock-Johnson, Tower of Hanoi, Operation Span Task, Stroop, and Tower of London. Peer problems were measured as a multi-informant composite including self, teacher, and after school caregiver reports on multiple peer-relationship scales. Using a cross-lagged design, our Structural Equation Modeling findings suggested that experiencing peer problems contributed to lower executive function later in childhood and better executive function reduced the likelihood of experiencing peer problems later in childhood and middle adolescence, although these relations weakened as children moved into adolescence. The results highlight that peer relationships are involved in the development of strengths and deficits in executive function and vice versa. PMID:26096194
A Probabilistic Approach to Mitigate Composition Attacks on Privacy in Non-Coordinated Environments
Sarowar Sattar, A.H.M.; Li, Jiuyong; Liu, Jixue; Heatherly, Raymond; Malin, Bradley
2014-01-01
Organizations share data about individuals to drive business and comply with law and regulation. However, an adversary may expose confidential information by tracking an individual across disparate data publications using quasi-identifying attributes (e.g., age, geocode and sex) associated with the records. Various studies have shown that well-established privacy protection models (e.g., k-anonymity and its extensions) fail to protect an individual’s privacy against this “composition attack”. This type of attack can be thwarted when organizations coordinate prior to data publication, but such a practice is not always feasible. In this paper, we introduce a probabilistic model called (d, α)-linkable, which mitigates composition attack without coordination. The model ensures that d confidential values are associated with a quasi-identifying group with a likelihood of α. We realize this model through an efficient extension to k-anonymization and use extensive experiments to show our strategy significantly reduces the likelihood of a successful composition attack and can preserve more utility than alternative privacy models, such as differential privacy. PMID:25598581
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
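As a point of reference for the likelihood criterion described above, the sketch below performs brute-force maximum-likelihood soft-decision decoding of a small (7,4) Hamming code in Python. The generator matrix, BPSK mapping, noise level, and the exhaustive search used in place of the article's A* search are illustrative assumptions, not the implementation the article describes.

```python
# Brute-force ML soft-decision decoding of a (7,4) Hamming code as a reference.
# The A* algorithm reaches the same answer while expanding far fewer candidates;
# this sketch only illustrates the likelihood (correlation) criterion both optimize.
import itertools
import numpy as np

# Generator matrix of a systematic (7,4) Hamming code (one common choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def codewords():
    """Enumerate all 2^4 codewords as +/-1 (BPSK) vectors."""
    for bits in itertools.product([0, 1], repeat=4):
        cw = np.mod(np.array(bits) @ G, 2)
        yield 1 - 2 * cw  # map bit 0 -> +1, bit 1 -> -1

def ml_decode(received):
    """On an AWGN channel the ML codeword maximizes correlation with the soft values."""
    return max(codewords(), key=lambda c: float(np.dot(received, c)))

# Usage: transmit the all-zero codeword (+1,...,+1) through noise and decode.
rng = np.random.default_rng(0)
received = np.ones(7) + 0.8 * rng.standard_normal(7)
print(ml_decode(received))
```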
How much to trust the senses: Likelihood learning
Sato, Yoshiyuki; Kording, Konrad P.
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
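For readers unfamiliar with prior-likelihood integration, the following sketch shows the standard Gaussian case in Python. The numerical values are illustrative; none of the study's stimuli or learned likelihood widths are reproduced.

```python
# Gaussian prior-likelihood integration: the optimal estimate is a
# precision-weighted average of the prior mean and the sensory cue.
def posterior_mean(prior_mean, prior_var, cue, likelihood_var):
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / likelihood_var)
    return w_prior * prior_mean + (1.0 - w_prior) * cue

# A narrower (more reliable) likelihood pulls the estimate toward the cue.
print(posterior_mean(0.0, 1.0, cue=2.0, likelihood_var=4.0))   # relies on the prior
print(posterior_mean(0.0, 1.0, cue=2.0, likelihood_var=0.25))  # relies on the cue
```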
Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves
NASA Astrophysics Data System (ADS)
Wang, Qiang; Ma, Shuxian; Yue, Dong
2018-04-01
Composite materials have comprehensively better properties than traditional materials, and therefore have been more and more widely used, especially because of their higher strength-to-weight ratio. However, the damage of composite structures is usually varied and complicated. In order to ensure the safety of these structures, it is necessary to monitor and distinguish the structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved to be effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies the damage identification approach for composite structures using the Lamb wave and the Gaussian mixture model (GMM). The algorithm and principle of the GMM, and the parameter estimation, are introduced. Multi-statistical characteristic parameters of the excited Lamb waves are extracted, and a reduced-dimension parameter space is obtained by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.
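A rough sketch of the processing chain described above (statistical features, PCA dimension reduction, per-damage-type Gaussian mixture models compared by likelihood) might look as follows in Python with scikit-learn. The synthetic feature vectors, dimensions, and component counts are assumptions for illustration, not the paper's settings.

```python
# Feature extraction -> PCA -> per-class GMM, classified by log-likelihood.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Pretend each row is a vector of statistical features of a received Lamb wave,
# recorded for two known damage types during training.
features_type_a = rng.normal(loc=0.0, scale=1.0, size=(200, 10))
features_type_b = rng.normal(loc=1.5, scale=1.0, size=(200, 10))

pca = PCA(n_components=3).fit(np.vstack([features_type_a, features_type_b]))
gmm_a = GaussianMixture(n_components=2, random_state=0).fit(pca.transform(features_type_a))
gmm_b = GaussianMixture(n_components=2, random_state=0).fit(pca.transform(features_type_b))

# Classify a new measurement by the larger GMM log-likelihood.
new_feature = rng.normal(loc=1.4, scale=1.0, size=(1, 10))
z = pca.transform(new_feature)
scores = {"type A": gmm_a.score_samples(z)[0], "type B": gmm_b.score_samples(z)[0]}
print(max(scores, key=scores.get))
```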
Wagener, Sandra; Dommershausen, Nils; Jungnickel, Harald; Laux, Peter; Mitrano, Denise; Nowack, Bernd; Schneider, Gregor; Luch, Andreas
2016-06-07
This study addresses the release of total silver (Ag) and silver nanoparticles (Ag-NPs) from textiles into artificial sweat, particularly considering the functionalization technology used in textile finishing. Migration experiments were conducted for four commercially available textiles and for six laboratory-prepared textiles. Two among these lab-prepared textiles represent materials in which Ag-NPs were embedded within the textile fibers (composites), whereas the other lab-prepared textiles contain Ag particles on the respective fiber surfaces (coatings). The results indicate a smaller release of total Ag from composites in comparison to surface-coated textiles. The particulate fraction determined within the artificial sweat was negligible for most textiles, meaning that the majority of the released Ag is present as dissolved Ag. It is also relevant to note that nanotextiles do not release more particulate Ag than conventional Ag textiles. The results rather indicate that the functionalization type is the most important parameter affecting the migration. Furthermore, after measuring different Ag-NP types in their pristine form with inductively coupled plasma mass spectrometry in the single particle mode, there is evidence that particle modifications, like surface coating, may also influence the dissolution behavior of the Ag-NPs in the sweat solutions. These factors are important when discussing the likelihood of consumer exposure.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
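As one concrete instance of such an iterative procedure, the sketch below runs EM for the maximum-likelihood parameters of a two-component mixture of normal distributions in Python. The data, starting values, and the choice of EM rather than Newton's method or scoring are illustrative and are not taken from the report.

```python
# EM iteration for a two-component Gaussian mixture (synthetic data).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

# Initial guesses for mixing weights, means, and standard deviations.
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * norm.pdf(x[:, None], mu, sigma)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi.round(2), mu.round(2), sigma.round(2))
```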
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
Multiple robustness in factorized likelihood models.
Molina, J; Rotnitzky, A; Sued, M; Robins, J M
2017-09-01
We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
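The basic rejection-ABC idea that such samplers build on can be sketched in a few lines of Python. The toy simulator, prior, summary statistic, and tolerance below are illustrative, and the Population Monte Carlo layer of cosmoabc is not reproduced here.

```python
# Rejection ABC: draw from the prior, simulate mock data, keep draws whose
# summary statistic lies within a tolerance of the observed catalog.
import numpy as np

rng = np.random.default_rng(3)
observed = rng.normal(loc=1.0, scale=0.5, size=500)   # pretend observed catalog

def simulator(theta, size=500):
    return rng.normal(loc=theta, scale=0.5, size=size)

def distance(a, b):
    return abs(a.mean() - b.mean())  # summary-statistic distance

accepted = []
while len(accepted) < 1000:
    theta = rng.uniform(-3, 3)                          # draw from the prior
    if distance(simulator(theta), observed) < 0.05:     # keep if mock ~ data
        accepted.append(theta)

print(np.mean(accepted), np.std(accepted))  # approximate posterior summary
```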
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
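A minimal sketch of the likelihood ratio test for uniform DIF in logistic regression, on simulated responses with statsmodels, is shown below. The sample size, ability distribution, and DIF effect size are illustrative assumptions rather than values from the article.

```python
# Likelihood ratio test for a uniform DIF (group) term in logistic regression.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(4)
n = 1000
ability = rng.normal(size=n)
group = rng.integers(0, 2, size=n)              # reference vs. focal group
logit = -0.5 + 1.2 * ability + 0.6 * group      # 0.6 = simulated uniform DIF effect
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X0 = sm.add_constant(np.column_stack([ability]))           # null model: no DIF term
X1 = sm.add_constant(np.column_stack([ability, group]))    # alternative: with DIF term
ll0 = sm.Logit(y, X0).fit(disp=0).llf
ll1 = sm.Logit(y, X1).fit(disp=0).llf

lr = 2 * (ll1 - ll0)                 # likelihood ratio statistic, 1 df
print(lr, chi2.sf(lr, df=1))         # p-value
```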
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...
2014-10-16
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D.T.
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
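The ability likelihood in question can be inspected directly by evaluating it on a grid, as in the Python sketch below for a three-parameter logistic (3PL) model. The item parameters and response pattern are illustrative, and with some patterns the resulting curve shows the flat or multiple modes the abstract describes.

```python
# Grid evaluation of the 3PL log-likelihood of ability for one response pattern.
import numpy as np

a = np.array([1.8, 1.5, 2.0, 1.2])      # discrimination
b = np.array([-1.0, 0.0, 1.0, 2.0])     # difficulty
c = np.array([0.25, 0.25, 0.25, 0.25])  # guessing (lower asymptote)
u = np.array([0, 1, 0, 1])              # observed item responses

def log_likelihood(theta):
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))   # 3PL response probability
    return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

grid = np.linspace(-4, 4, 161)
ll = np.array([log_likelihood(t) for t in grid])
print(grid[np.argmax(ll)])   # grid-based ML ability estimate
# Inspecting ll across the grid reveals whether the curve has flat or multiple modes.
```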
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
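The algorithm cited above (Foreman-Mackey et al. 2013) is distributed as the emcee Python package. A minimal sketch of its use on a toy two-parameter chi-square surface, standing in for a real global oscillation fit, follows; the parameter values and widths are illustrative.

```python
# Affine-invariant ensemble MCMC (emcee) on a toy chi-square surface.
import numpy as np
import emcee

def log_prob(params):
    """Toy example: -0.5 * chi^2 for two 'mixing parameters' with known truth."""
    chi2 = ((params[0] - 0.3) / 0.02) ** 2 + ((params[1] - 2.5e-3) / 1e-4) ** 2
    return -0.5 * chi2

ndim, nwalkers = 2, 32
p0 = np.column_stack([np.random.normal(0.3, 0.05, nwalkers),
                      np.random.normal(2.5e-3, 5e-4, nwalkers)])

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)   # marginalized posterior samples
print(chain.mean(axis=0), chain.std(axis=0))        # allowed-region summary
```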
Fastening hardware to honeycomb panels
NASA Technical Reports Server (NTRS)
Kenger, A.
1979-01-01
Bonding hardware to the honeycomb skin with adhesive reduces the likelihood of skin failure due to excessive forces or torques. Concept is useful in other applications of composites such as aircraft, automobiles, and home appliances.
Different types of crude oil and refined product, each with a different chemical composition, have distinct physical properties. These properties affect the way oil spreads and breaks down, its hazard to marine and human life, and the likelihood of threat.
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
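The key statistical point, that noisy two-point function estimates follow a skewed distribution whose typical value sits below its mean, can be illustrated numerically with a few lines of Python. The number of modes and realizations below are arbitrary choices, not properties of any survey.

```python
# A sample variance built from only a few Gaussian modes follows a scaled
# chi-square distribution, so its median (typical value) sits below its mean.
import numpy as np

rng = np.random.default_rng(5)
n_modes, n_realizations, true_power = 5, 100000, 1.0

samples = rng.normal(scale=np.sqrt(true_power), size=(n_realizations, n_modes))
power_estimates = (samples ** 2).mean(axis=1)   # naive two-point estimator

print("mean  :", power_estimates.mean())        # unbiased on average, ~1.0
print("median:", np.median(power_estimates))    # below the mean: skewed low
```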
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound.
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-04-01
We derive the essentials of the skewed weak lensing likelihood via a simple Hierarchical Forward Model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of ΛCDM. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from CMB analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30% of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
Correlates of physical activity participation in community-dwelling older adults.
Haley, Christy; Andel, Ross
2010-10-01
The authors examined factors related to participation in walking, gardening or yard work, and sports or exercise in 686 community-dwelling adults 60-95 years of age from Wave IV of the population-based Americans' Changing Lives Study. Logistic regression revealed that male gender, being married, and better functional health were associated with greater likelihood of participating in gardening or yard work (p < .05). Male gender, better functional health, and lower body-mass index were independently associated with greater likelihood of walking (p < .05). Increasing age, male gender, higher education, and better functional health were associated with greater likelihood of participating in sports or exercise (p < .05). Subsequent analyses yielded an interaction of functional health by gender in sport or exercise participation (p = .06), suggesting a greater association between functional health and participation in men. Gender and functional health appear to be particularly important for physical activity participation, which may be useful in guiding future research. Attention to different subgroups may be needed to promote participation in specific activities.
VizieR Online Data Catalog: Star formation histories of LG dwarf galaxies (Weisz+, 2014)
NASA Astrophysics Data System (ADS)
Weisz, D. R.; Dolphin, A. E.; Skillman, E. D.; Holtzman, J.; Gilbert, K. M.; Dalcanton, J. J.; Williams, B. F.
2017-03-01
For this paper, we have selected only dwarf galaxies that are located within the zero surface velocity of the LG (~1 Mpc; e.g., van den Bergh 2000, The Galaxies of the Local Group (Cambridge: Cambridge Univ. Press) ; McConnachie 2012, J/AJ/144/4). This definition excludes some dwarfs that have been historically associated with the LG, such as GR8 and IC 5152, but which are located well beyond 1 Mpc. We have chosen to include two galaxies with WFPC2 imaging that are located on the periphery of the LG (Sex A and Sex B), because of their ambiguous association with the LG, the NGC 3109 sub-group, or perhaps neither (although see Bellazzini et al. 2013A&A...559L..11B for discussion of the possible association of these systems). We measured the SFH of each field using the maximum likelihood CMD fitting routine, MATCH (Dolphin 2002MNRAS.332...91D). Briefly, MATCH works as follows: it accepts a range of input parameters (e.g., initial mass function (IMF) slope, binary fraction, age and metallicity bin widths, etc.), uses these parameters to construct synthetic CMDs of simple stellar populations (SSPs), and then linearly combines them with a model foreground CMD to form a composite model CMD with a complex SFH. The composite model CMD is then convolved with the noise model from the artificial star tests (i.e., completeness, photometric uncertainties, and color/magnitude biases). The resulting model CMD is then compared to the observed CMD using a Poisson likelihood statistic. (3 data files).
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
NASA Astrophysics Data System (ADS)
Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu
2017-05-01
In this paper, a recent novel approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE). Among all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be the strongest candidate, as it best fits the loss data. The final part considers the backtesting method to verify the validity of the VaR and CTE risk measures.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions (NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED)) are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated results of the high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors are improbable while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, which is confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
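The first step above, the correspondence between NSE and the Gaussian iid error log-likelihood, and the role of the Box-Cox transformation can be sketched in Python as follows. The synthetic discharge series is illustrative, and the paper's minimum-variance estimation of the BC parameter is not reproduced (scipy's own maximum-likelihood lambda estimate is used instead).

```python
# NSE and the Gaussian iid error log-likelihood are both monotone functions of
# the residual sum of squares, hence of each other; Box-Cox transforms residual
# heteroscedasticity before the likelihood is evaluated.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(6)
obs = rng.gamma(shape=2.0, scale=5.0, size=365)        # synthetic daily discharge
sim = obs * (1 + 0.2 * rng.standard_normal(365))       # imperfect simulation

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gaussian_loglik(sim, obs):
    """iid Gaussian log-likelihood with the ML variance estimate."""
    resid = obs - sim
    sigma2 = np.mean(resid ** 2)
    n = len(resid)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

print(nse(sim, obs), gaussian_loglik(sim, obs))

# Box-Cox transform of the observations (scipy estimates lambda by ML here).
obs_bc, lam = boxcox(obs)
print("Box-Cox lambda:", lam)
```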
Silverman, Merav H.; Jedd, Kelly; Luciana, Monica
2015-01-01
Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587
Unified Theory of Inference for Text Understanding
1986-11-25
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-18
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.
2017-01-01
In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore an example is provided to illustrate the application of the proposed method, together with a comparison analysis.
Expressed Likelihood as Motivator: Creating Value through Engaging What’s Real
Higgins, E. Tory; Franks, Becca; Pavarini, Dana; Sehnert, Steen; Manley, Katie
2012-01-01
Our research tested two predictions regarding how likelihood can have motivational effects as a function of how a probability is expressed. We predicted that describing the probability of a future event that could be either A or B using the language of high likelihood (“80% A”) rather than low likelihood (“20% B”), i.e., high rather than low expressed likelihood, would make a present activity more real and engaging, as long as the future event had properties relevant to the present activity. We also predicted that strengthening engagement from the high (vs. low) expressed likelihood of a future event would intensify the value of present positive and negative objects (in opposite directions). Both predictions were supported. There was also evidence that this intensification effect from expressed likelihood was independent of the actual probability or valence of the future event. What mattered was whether high versus low likelihood language was used to describe the future event. PMID:23940411
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Jet fuel-induced immunotoxicity.
Harris, D T; Sakiestewa, D; Titone, D; Robledo, R F; Young, R S; Witten, M
2000-09-01
Chronic exposure to jet fuel has been shown to cause human liver dysfunction, emotional dysfunction, abnormal electroencephalograms, shortened attention spans, and to decrease sensorimotor speed (3-5). Exposure to potential environmental toxicants such as jet fuel may have significant effects on host systems beyond those readily visible (e.g., physiology, cardiology, respiratory, etc.), e.g., the immune system. Significant changes in immune function, even if short-lived, may have serious consequences for the exposed host that may affect susceptibility to infectious agents. Major alterations in immune function that are long lasting may result in an increased likelihood of development and/or progression of cancer, as well as autoimmune diseases. In the current study mice were exposed 1 h/day for 7 days to a 1000-mg/m3 concentration of aerosolized jet fuel obtained from various sources (JP-8, JP-8+100 and Jet A1) and of differing compositions to simulate occupational exposures. Twenty-four hours after the last exposure the mice were analyzed for effects on the immune system. It was observed that exposure to all jet fuel sources examined had detrimental effects on the immune system. Decreases in viable immune cell numbers and immune organ weights were found. Jet fuel exposure resulted in differential losses of immune cell populations in the thymus. Further, jet fuel exposure resulted in significantly decreased immune function, as analyzed by mitogenesis assays. Suppressed immune function could not be overcome by the addition of exogenous growth factors known to stimulate immune function. Thus, short-term, low-concentration exposure of mice to aerosolized jet fuel, regardless of source or composition, caused significant deleterious effects on the immune system.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
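The censored-data Weibull likelihood that such an EM scheme is built from can be sketched as a single-population fit by direct numerical maximization. The simulated failure times, censoring time, and optimizer below are illustrative, and the mixture/EM machinery of the article is not reproduced.

```python
# Maximum-likelihood fit of a single Weibull to right-censored data: failed
# units contribute the log-density, censored units the log-survival function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
true_shape, true_scale = 1.8, 100.0
t = true_scale * rng.weibull(true_shape, size=300)
censor_time = 120.0
observed = np.minimum(t, censor_time)
failed = t <= censor_time              # False = right-censored at 120

def neg_loglik(params):
    shape, scale = np.exp(params)      # optimize on the log scale to keep params > 0
    z = observed / scale
    log_pdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
    log_surv = -z ** shape             # log survival function of the Weibull
    return -(np.sum(log_pdf[failed]) + np.sum(log_surv[~failed]))

fit = minimize(neg_loglik, x0=np.log([1.0, np.median(observed)]))
print(np.exp(fit.x))                   # estimated (shape, scale)
```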
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Xin; Garikapati, Venu M.; You, Daehyun
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgans, D. L.; Lindberg, S. L.
The purpose of this technical approach document (TAD) is to document the assumptions, equations, and methods used to perform the groundwater pathway radiological dose calculations for the revised Hanford Site Composite Analysis (CA). DOE M 435.1-1 states, “The composite analysis results shall be used for planning, radiation protection activities, and future use commitments to minimize the likelihood that current low-level waste disposal activities will result in the need for future corrective or remedial actions to adequately protect the public and the environment.”
A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins
Knudsen, Bjarne; Miyamoto, Michael M.
2001-01-01
Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
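The generic likelihood ratio machinery underlying such a test can be sketched with a deliberately simplified stand-in for the phylogenetic likelihood: Poisson substitution counts compared under a shared-rate null and a clade-specific-rate alternative. The counts below are invented for illustration and do not come from the Myc analysis.

```python
# Likelihood ratio test: one shared substitution rate (null) vs. clade-specific
# rates (alternative); the ML rate of a Poisson sample is its mean.
import numpy as np
from scipy.stats import poisson, chi2

counts1 = np.array([4, 3, 5, 2])       # substitutions at a site, clade 1 branches
counts2 = np.array([9, 11, 8, 10])     # same site, clade 2 branches

def loglik(rate, data):
    return poisson.logpmf(data, rate).sum()

pooled = np.concatenate([counts1, counts2])
ll_null = loglik(pooled.mean(), pooled)
ll_alt = loglik(counts1.mean(), counts1) + loglik(counts2.mean(), counts2)

lr = 2 * (ll_alt - ll_null)            # one extra free rate under the alternative
print(lr, chi2.sf(lr, df=1))           # significance of the rate shift
```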
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
Deviation from expected cognitive ability across psychotic disorders.
Hochberger, W C; Combs, T; Reilly, J L; Bishop, J R; Keefe, R S E; Clementz, B A; Keshavan, M S; Pearlson, G D; Tamminga, C A; Hill, S K; Sweeney, J A
2018-02-01
Patients with schizophrenia show a deficit in cognitive ability compared to estimated premorbid and familial intellectual abilities. However, the degree to which this pattern holds across psychotic disorders and is familial is unclear. The present study examined deviation from expected cognitive level in schizophrenia, schizoaffective disorder, and psychotic bipolar disorder probands and their first-degree relatives. Using a norm-based regression approach, parental education and WRAT-IV Reading scores (both significant predictors of cognitive level in the healthy control group) were used to predict global neuropsychological function as measured by the composite score from the Brief Assessment of Cognition in Schizophrenia (BACS) test in probands and relatives. When compared to healthy control group, psychotic probands showed a significant gap between observed and predicted BACS composite scores and a greater likelihood of robust cognitive decline. This effect was not seen in unaffected relatives. While BACS and WRAT-IV Reading scores were themselves highly familial, the decline in cognitive function from expectation had lower estimates of familiality. Thus, illness-related factors such as epigenetic, treatment, or pathophysiological factors may be important causes of illness related decline in cognitive abilities across psychotic disorders. This is consistent with the markedly greater level of cognitive impairment seen in affected individuals compared to their unaffected family members. Copyright © 2017 Elsevier B.V. All rights reserved.
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
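The key computational step described above, evaluating the high-dimensional integral through the cumulative distribution function of a multivariate normal, is available directly in scipy. A minimal sketch with an illustrative exchangeable correlation structure follows; the dimension, correlation, and integration limits are assumptions, not values from the psoriatic arthritis analysis.

```python
# The building block that replaces brute-force simulation of the marginal
# likelihood: a multivariate normal CDF evaluated at fixed upper limits.
import numpy as np
from scipy.stats import multivariate_normal

dim = 8                                   # number of repeated measurements
rho = 0.4                                 # illustrative within-patient correlation
cov = rho * np.ones((dim, dim)) + (1 - rho) * np.eye(dim)
upper = np.full(dim, 0.5)                 # illustrative integration limits

# P(Z_1 <= 0.5, ..., Z_8 <= 0.5) for Z ~ N(0, cov).
print(multivariate_normal(mean=np.zeros(dim), cov=cov).cdf(upper))
```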
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
Radial Mixing and Ru-Mo Isotope Systematics Under Different Accretion Scenarios
NASA Astrophysics Data System (ADS)
Fischer, R. A.; Nimmo, F.; O'Brien, D. P.
2017-12-01
The Ru-Mo isotopic compositions of inner Solar System bodies may reflect the provenance of accreted material and how it evolved with time, both of which are controlled by the accretion scenario these bodies experienced. Here we use a total of 116 N-body simulations of terrestrial planet accretion, run in the Eccentric Jupiter and Saturn (EJS), Circular Jupiter and Saturn (CJS), and Grand Tack scenarios, to model the Ru-Mo anomalies of Earth, Mars, and Theia analogues. This model starts by applying an initial step function in Ru-Mo isotopic composition, with compositions reflecting those in meteorites, and traces compositional evolution as planets accrete. The mass-weighted provenance of the resulting planets reveals more radial mixing in Grand Tack simulations than in EJS/CJS simulations, and more efficient mixing among late-accreted material than during the main phase of accretion in EJS/CJS simulations. We find that an extensive homogeneous inner disk region is required to reproduce Earth's observed Ru-Mo composition. EJS/CJS simulations require a homogeneous reservoir in the inner disk extending to ≥3-4 AU (≥74-98% of initial mass) to reproduce Earth's composition, while Grand Tack simulations require a homogeneous reservoir extending to ≥3-10 AU (≥97-99% of initial mass), and likely to ≥7-10 AU. In the Grand Tack model, Jupiter's initial location (the most likely location for a discontinuity in isotopic composition) is 3.5 AU; however, this step location has only a 33% likelihood of producing an Earth with the correct Ru-Mo isotopic signature for the most plausible model conditions. Our results give the testable predictions that Mars has zero Ru anomaly and small or zero Mo anomaly, and the Moon has zero Mo anomaly. These predictions are insensitive to wide variations in parameter choices.
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
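The mechanics can be sketched in a few lines for a toy one-dimensional problem (not the authors' implementation): with basis polynomials orthonormal with respect to a standard normal reference density, the zeroth expansion coefficient of the likelihood equals the model evidence, and the posterior mean follows from the ratio of the first two coefficients. The problem setup and sample sizes below are hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(0)

# Toy inverse problem: Gaussian likelihood for one datum y_obs,
# standard-normal prior used as the reference density.
y_obs, sigma = 0.5, 1.0
def likelihood(x):
    return np.exp(-0.5 * ((y_obs - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Least-squares fit of the likelihood in Hermite polynomials that are
# orthonormal with respect to the standard normal reference density.
degree = 10
x_train = rng.standard_normal(5000)                          # reference samples
Psi = hermevander(x_train, degree)                           # He_0 .. He_degree
Psi /= np.sqrt([factorial(k) for k in range(degree + 1)])    # orthonormalize
coef, *_ = np.linalg.lstsq(Psi, likelihood(x_train), rcond=None)

evidence = coef[0]                # Z = E_prior[L] = zeroth coefficient
post_mean = coef[1] / coef[0]     # E_post[x] = c_1 / c_0 for this basis
print(evidence, post_mean)
```

Higher posterior moments follow analogously from combinations of higher-order coefficients, which is what makes the approach semi-analytical once the expansion is available.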
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example for haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
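A minimal sketch of such a likelihood (log-normally distributed multiplicative noise applied pixel-wise; the predicted image would come from the reaction-diffusion forward model, here it is just an array of positive intensities):

```python
import numpy as np

def neg_log_likelihood(predicted, observed, sigma):
    """Negative log-likelihood for observed = predicted * exp(eps),
    eps ~ N(0, sigma^2), applied pixel-wise to an image.

    This is the objective that a PDE-constrained optimizer would minimise
    with respect to the kinetic parameters entering `predicted`.
    """
    r = np.log(observed) - np.log(predicted)
    return np.sum(0.5 * (r / sigma) ** 2
                  + np.log(observed) + np.log(sigma)
                  + 0.5 * np.log(2 * np.pi))

# Hypothetical 64x64 intensity images
rng = np.random.default_rng(0)
model_image = np.exp(rng.normal(0.0, 0.3, size=(64, 64)))
data_image = model_image * np.exp(0.1 * rng.standard_normal((64, 64)))
print(neg_log_likelihood(model_image, data_image, sigma=0.1))
```

Profiling a parameter then amounts to re-minimising this objective with that parameter fixed on a grid of values.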
Statistical tests to compare motif count exceptionalities
Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent
2007-01-01
Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
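The conditional-binomial idea behind the exact test can be sketched as follows (hypothetical counts and expectations; the paper additionally corrects the expected counts for overlapping occurrences and sequence composition). If the two counts are Poisson with expected values e1 and e2 and the motif is equally exceptional in both sequences, then conditionally on the total count, the first count is binomial.

```python
from scipy.stats import binomtest

def compare_exceptionality(n1, n2, e1, e2):
    """Exact binomial test that a motif is equally exceptional in two sequences.

    n1, n2: observed motif counts; e1, e2: expected counts under each
    sequence's composition model. Under H0 (equal observed/expected ratio),
    conditionally on n1 + n2, n1 is Binomial(n1 + n2, e1 / (e1 + e2)).
    """
    return binomtest(n1, n1 + n2, e1 / (e1 + e2), alternative='two-sided').pvalue

# Hypothetical counts for one octamer in backbone vs. loop sequences
print(compare_exceptionality(n1=43, n2=12, e1=30.2, e2=15.8))
```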
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
Top-quark mass measurement from dilepton events at CDF II.
Abulencia, A; Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Ben-Haim, E; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Bourov, S; Boveia, A; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carron, S; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chapman, J; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, P H; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Connolly, A; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Cruz, A; Cuevas, J; Culbertson, R; Cyr, D; DaRonco, S; D'Auria, S; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'Orso, M; Demers, S; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J; Dituro, P; Dörr, C; Dominguez, A; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Ebina, K; Efron, J; Ehlers, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallinaro, M; Galyardt, J; Garcia, J E; Garcia Sciverez, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerchtein, E; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Haber, C; Hahn, S R; Hahn, K; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heijboer, A; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Kang, J; Karagoz-Unel, M; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, Y K; Kirby, M; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kobayashi, H; Kondo, K; Kong, D J; Konigsberg, J; Kordas, K; Korytov, A; Kotwal, A V; Kovalev, A; Kraus, J; Kravchenko, I; Kreps, M; Kreymer, A; 
Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Maki, T; Maksimovic, P; Manca, G; Margaroli, F; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P; McNamara, P; McNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; von der Mey, M; Miao, T; Miladinovic, N; Miles, J; Miller, R; Miller, J S; Mills, C; Milnik, M; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Mulhearn, M; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nahn, S; Nakano, I; Napier, A; Naumov, D; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Ogawa, T; Oh, S H; Oh, Y D; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Papikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K; Plager, C; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rakitin, A; Rappoccio, S; Ratnikov, F; Reisert, B; Rekovic, V; van Remortel, N; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Rott, C; Ruiz, A; Russ, J; Rusu, V; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Salto, O; Saltzberg, D; Sanchez, C; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sill, A; Sinervo, P; Sisakyan, A; Sjolin, J; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Vallecorsa, S; Varganov, A; Vataga, E; Velev, G; Veramendi, G; Veszpremi, V; Vickey, T; Vidal, R; Vila, I; Vilar, R; Vollrath, 
I; Volobouev, I; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Worm, S; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, Y; Yang, C; Yang, U K; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhang, X; Zhou, J; Zucchelli, S
2006-04-21
We report a measurement of the top-quark mass using events collected by the CDF II detector from pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron. We calculate a likelihood function for the top-quark mass in events that are consistent with tt̄ → bℓ⁺νℓ b̄ℓ′⁻ν̄ℓ′ decays. The likelihood is formed as the convolution of the leading-order matrix element and detector resolution functions. The joint likelihood is the product of likelihoods for each of 33 events collected in 340 pb⁻¹ of integrated luminosity, yielding a top-quark mass Mt = 165.2 ± 6.1 (stat) ± 3.4 (syst) GeV/c². This first application of a matrix-element technique to tt̄ → bℓ⁺νℓ b̄ℓ′⁻ν̄ℓ′ decays gives the most precise single measurement of Mt in dilepton events. Combined with other CDF Run II measurements using dilepton events, we measure Mt = 167.9 ± 5.2 (stat) ± 3.7 (syst) GeV/c².
Maximum-likelihood estimation of parameterized wavefronts from multifocal data
Sakamoto, Julia A.; Barrett, Harrison H.
2012-01-01
A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
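As a rough illustration of the optimization strategy only (not the optical forward model of the paper), an annealing-type global search can be used to minimise a multimodal negative log-likelihood over a small number of phase-like coefficients; the data, coefficients, and noise level below are hypothetical.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(1)

# Toy stand-in for a wavefront problem: oscillatory data generated from two
# phase coefficients plus Gaussian noise. Because cos is even, the surface
# has symmetric global optima, i.e. it is genuinely multimodal.
x = np.linspace(-1.0, 1.0, 200)
true = np.array([3.0, -2.0])                     # hypothetical coefficients
y = np.cos(true[0] * x + true[1] * x**2) + 0.05 * rng.standard_normal(x.size)

def neg_log_likelihood(theta):
    resid = y - np.cos(theta[0] * x + theta[1] * x**2)
    return 0.5 * np.sum(resid**2) / 0.05**2      # Gaussian noise, known sigma

# Simulated-annealing-type global search over the multimodal surface
result = dual_annealing(neg_log_likelihood, bounds=[(-10, 10), (-10, 10)])
print(result.x)   # may land on (3, -2) or the symmetric solution (-3, 2)
```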
Perception of risk from automobile safety defects.
Slovic, P; MacGregor, D; Kraus, N N
1987-10-01
Descriptions of safety engineering defects of the kind that compel automobile manufacturers to initiate a recall campaign were evaluated by individuals on a set of risk characteristic scales that included overall vehicle riskiness, manufacturer's ability to anticipate the defect, importance for vehicle operation, severity of consequences and likelihood of compliance with a recall notice. A factor analysis of the risk characteristics indicated that judgments could be summarized in terms of two composite scales, one representing the uncontrollability of the damage the safety defect might cause and the other representing the foreseeability of the defect by the manufacturer. Motor vehicle defects were found to be highly diverse in terms of the perceived qualities of their risks. Location of individual defects within the factor space was closely associated with perceived riskiness, perceived likelihood of purchasing another car from the same manufacturer, perceived likelihood of compliance with a recall notice, and actual compliance rates.
Work-related injury factors and safety climate perception in truck drivers.
Anderson, Naomi J; Smith, Caroline K; Byrd, Jesse L
2017-08-01
The trucking industry has a high burden of work-related injuries. This study examined factors, such as safety climate perceptions, that may impact injury risk. A random sample of 9800 commercial driver's license (CDL) holders was sent surveys; only 4360 were eligible truck drivers. Descriptive statistics and logistic regression models were developed to describe the population and identify variables associated with work-related injury. In total, 2189 drivers completed the pertinent interview questions. Driving less-than-truckload, daytime sleepiness, pressure to work faster, and having a poor composite score for safety perceptions were all associated with increased likelihood of work-related injury. A positive safety perception score was protective against work-related injury and was associated with increased claim filing when injured. Positive psychological safety climate is associated with decreased likelihood of work-related injury and increased likelihood that a driver injured on the job files a workers' compensation claim. © 2017 Wiley Periodicals, Inc.
Christensen, Mark H; Kohlmeier, Kristi A
2016-03-01
The earlier an individual initiates cigarette smoking, the higher the likelihood of development of dependency to nicotine, the addictive ingredient in cigarettes. One possible mechanism underlying this higher addiction liability is an ontogenetically differential cellular response induced by nicotine in neurons mediating the reinforcing or euphoric effects of this drug, which could arise from age-related differences in the composition of nicotinic acetylcholine receptor (nAChR) subunits. In the current study, we examined whether the subunit composition of nAChRs differed between neurons within the laterodorsal tegmentum (LDT), a nucleus importantly involved in drug addiction associated behaviours, across two periods of ontogeny in which nicotine-mediated excitatory responses were shown to depend on age. To this end, whole-cell patch-clamp recordings in mouse brain slices from identified LDT neurons, in combination with nAChR subunit-specific receptor antagonists, were conducted. Comparison of the contribution of different nAChR subunits to acetylcholine (ACh)-induced inward currents indicated that the contributions of the β2 and/or β4 and α7 nAChR subunits alter across age. Taken together, we conclude that across a limited ontogenetic period, there is plasticity in the subunit composition of nAChRs in LDT neurons. In addition, our data indicate, for the first time, functional presence of α6 nAChR subunits in LDT neurons within the age ranges studied. Changes in subunit composition of nAChRs across ontogeny could contribute to the age-related differential excitability induced by nicotine. Differences in the subunit composition of nAChRs within the LDT would be expected to contribute to ontogenetic-dependent outflow from the LDT to target regions, which include reward-related circuitry. © 2014 Society for the Study of Addiction.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
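The empirical (non-parametric) likelihood ratio for a single predictor grid can be sketched as follows (hypothetical slope-angle data and labels; under the conditional independence assumption, the per-predictor ratios would simply be multiplied cell by cell to obtain a relative hazard map).

```python
import numpy as np

def empirical_likelihood_ratio(values, labels, bins=10):
    """Per-bin likelihood ratio f(value | landslide) / f(value | stable),
    estimated from empirical (non-parametric) frequency distributions."""
    edges = np.histogram_bin_edges(values, bins=bins)
    p_slide, _ = np.histogram(values[labels == 1], bins=edges, density=True)
    p_stable, _ = np.histogram(values[labels == 0], bins=edges, density=True)
    eps = 1e-12                                   # guard against empty bins
    return edges, (p_slide + eps) / (p_stable + eps)

# Hypothetical data: slope angles (degrees) and 0/1 landslide labels per cell
rng = np.random.default_rng(0)
slope = np.concatenate([rng.normal(12, 4, 500), rng.normal(22, 5, 100)])
label = np.concatenate([np.zeros(500, int), np.ones(100, int)])
edges, lr = empirical_likelihood_ratio(slope, label)
# Ratios from elevation, aspect, and lithology grids would be combined by
# multiplication under the conditional independence assumption.
print(lr)
```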
Taghva, Alexander; Karst, Edward; Underwood, Paul
2017-08-01
Concordant paresthesia coverage is an independent predictor of pain relief following spinal cord stimulation (SCS). Using aggregate data, our objective is to produce a map of paresthesia coverage as a function of electrode location in SCS. This retrospective analysis used x-rays, SCS programming data, and paresthesia coverage maps from the EMPOWER registry of SCS implants for chronic neuropathic pain. Spinal level of dorsal column stimulation was determined by x-ray adjudication and active cathodes in patient programs. Likelihood of paresthesia coverage was determined as a function of stimulating electrode location. Segments of paresthesia coverage were grouped anatomically. Fisher's exact test was used to identify significant differences in likelihood of paresthesia coverage as a function of spinal stimulation level. In the 178 patients analyzed, the most prevalent areas of paresthesia coverage were buttocks, anterior and posterior thigh (each 98%), and low back (94%). Unwanted paresthesia at the ribs occurred in 8% of patients. There were significant differences in the likelihood of achieving paresthesia, with higher thoracic levels (T5, T6, and T7) more likely to achieve low back coverage but also more likely to introduce paresthesia felt at the ribs. Higher levels in the thoracic spine were associated with greater coverage of the buttocks, back, and thigh, and with lesser coverage of the leg and foot. This paresthesia atlas uses real-world, aggregate data to determine likelihood of paresthesia coverage as a function of stimulating electrode location. It represents an application of "big data" techniques, and a step toward achieving personalized SCS therapy tailored to the individual's chronic pain. © 2017 International Neuromodulation Society.
Anomaly detection of microstructural defects in continuous fiber reinforced composites
NASA Astrophysics Data System (ADS)
Bricker, Stephen; Simmons, J. P.; Przybyla, Craig; Hardie, Russell
2015-03-01
Ceramic matrix composites (CMC) with continuous fiber reinforcements have the potential to enable the next generation of high speed hypersonic vehicles and/or significant improvements in gas turbine engine performance due to their exhibited toughness when subjected to high mechanical loads at extreme temperatures (2200F+). Reinforced fiber composites (RFC) provide increased fracture toughness, crack growth resistance, and strength, though little is known about how stochastic variation and imperfections in the material affect material properties. In this work, tools are developed for quantifying anomalies within the microstructure at several scales. The detection and characterization of anomalous microstructure is a critical step in linking production techniques to properties, as well as in accurate material simulation and property prediction for the integrated computational materials engineering (ICME) of RFC based components. It is desired to find statistical outliers for any number of material characteristics such as fibers, fiber coatings, and pores. Here, fiber orientation, or `velocity', and `velocity' gradient are developed and examined for anomalous behavior. Categorizing anomalous behavior in the CMC is approached by multivariate Gaussian mixture modeling. A Gaussian mixture is employed to estimate the probability density function (PDF) of the features in question, and anomalies are classified by their likelihood of belonging to the statistical normal behavior for that feature.
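The density-based anomaly step can be sketched as follows (hypothetical features and threshold; scikit-learn's mixture model as a stand-in for the authors' implementation): fit a Gaussian mixture to the feature vectors and flag samples whose log-likelihood under the mixture is unusually low.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-fiber features, e.g. local orientation ("velocity") and
# its gradient, extracted from segmented micrographs.
features = rng.normal(size=(5000, 2))

# Fit a Gaussian mixture as the density model of "normal" microstructure.
gmm = GaussianMixture(n_components=3, covariance_type='full',
                      random_state=0).fit(features)

# Flag as anomalous the samples whose log-likelihood under the mixture falls
# below, say, the 1st percentile of the training scores.
scores = gmm.score_samples(features)
threshold = np.percentile(scores, 1.0)
anomalous = scores < threshold
print(anomalous.sum(), "candidate anomalies")
```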
On the Power Functions of Test Statistics in Order Restricted Inference.
1984-10-01
Wright, F. T.
We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection ... samples from normal populations. Bartholomew (1959 a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 − H0, assuming in one case that …
2008-12-20
Equation 6 for the sample likelihood function gives a "concentrated likelihood function," which depends on the correlation parameters θ_h and p_h. This ... step one and estimates correlation parameters using the new data set including all previous sample points and the new data point x. The algorithm …
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
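A minimal rejection-ABC sketch (a toy Poisson example, not the HOD forward model or the population Monte Carlo sampler used in the paper) illustrates the basic loop: draw from the prior, simulate, and accept when the distance between summary statistics falls below a tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ABC: infer the mean of a Poisson "number density" without ever
# writing down a likelihood. The observed data are simulated here.
observed = rng.poisson(4.0, size=200)

def distance(sim, obs):
    return abs(sim.mean() - obs.mean())          # summary-statistic distance

def abc_rejection(n_draws=20000, epsilon=0.1):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 10.0)                 # draw from the prior
        sim = rng.poisson(theta, size=observed.size)   # forward model
        if distance(sim, observed) < epsilon:
            accepted.append(theta)
    return np.array(accepted)

posterior_samples = abc_rejection()
print(posterior_samples.mean(), posterior_samples.std())
```

Population Monte Carlo ABC, as used in the paper, replaces the fixed tolerance with a decreasing schedule and reweights the accepted particles between iterations.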
The composite sequential clustering technique for analysis of multispectral scanner data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
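A compact sketch of the two-stage idea (a very simplified sequential pass and scikit-learn's K-means as the stage-two refinement; the threshold and data below are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

def sequential_initial_clusters(X, threshold):
    """Simplified stand-in for the sequential (variance-based) pass: a sample
    joins the nearest existing cluster unless it is farther than `threshold`,
    in which case it seeds a new cluster."""
    centers, counts = [X[0].astype(float)], [1]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] < threshold:
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]   # running mean update
        else:
            centers.append(x.astype(float))
            counts.append(1)
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(200, 4)) for m in (0.0, 3.0, 6.0)])

init = sequential_initial_clusters(X, threshold=2.0)
# Stage 2: K-means refinement started from the stage-1 cluster centers.
labels = KMeans(n_clusters=len(init), init=init, n_init=1).fit_predict(X)
print(len(init), "clusters after refinement")
```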
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only three control parameters, and a much-discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
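For reference, the maximum likelihood benchmark mentioned above can be sketched in a few lines for GARCH(1, 1) (simulated data and starting values are hypothetical; the estimator proposed in the paper instead matches moments, including the standardized 6th moment, rather than maximising this likelihood).

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, eps):
    """Gaussian negative log-likelihood of a GARCH(1,1) series eps."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()
    for t in range(1, eps.size):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + eps**2 / sigma2)

# Simulate a hypothetical return series with known parameters
rng = np.random.default_rng(0)
omega, alpha, beta, n = 0.1, 0.1, 0.8, 5000
eps = np.empty(n)
s2 = omega / (1 - alpha - beta)                  # unconditional variance
for t in range(n):
    eps[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * eps[t] ** 2 + beta * s2

fit = minimize(garch11_nll, x0=[0.05, 0.05, 0.9], args=(eps,),
               bounds=[(1e-6, None), (1e-6, 0.999), (1e-6, 0.999)])
print(fit.x)   # estimates of (omega, alpha, beta)
```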
An analysis of crash likelihood : age versus driving experience
DOT National Transportation Integrated Search
1995-05-01
The study was designed to determine the crash likelihood of drivers in Michigan as a function of two independent variables: driver age and driving experience. The age variable had eight levels (18, 19, 20, 21, 22, 23, 24, and 25 years old) and the ex...
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
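The closed-form likelihood contribution of one margin can be sketched directly (hypothetical counts; the full composite likelihood of the paper combines two such margins together with a correlation parameter).

```python
import numpy as np
from scipy.special import betaln, gammaln

def beta_binomial_loglik(y, n, mu, phi):
    """Log-likelihood of counts y out of n under a beta-binomial with
    marginal mean mu and precision phi, i.e. a = mu*phi, b = (1-mu)*phi."""
    a, b = mu * phi, (1.0 - mu) * phi
    log_choose = gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
    return np.sum(log_choose + betaln(y + a, n - y + b) - betaln(a, b))

# Hypothetical sensitivity data: true positives y out of diseased n per study
y = np.array([18, 25, 40, 12])
n = np.array([20, 30, 45, 15])
print(beta_binomial_loglik(y, n, mu=0.85, phi=10.0))
```

No link function or transformation is needed because mu is itself the probability being modeled, which is the point emphasised in the abstract.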
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind-tunnel test data, is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
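The behavioral prediction can be illustrated with a textbook conjugate example (hypothetical numbers, unrelated to the fMRI task itself): as the likelihood sample grows, the posterior mean moves from the prior toward the data.

```python
# Beta prior on reward probability combined with binomial evidence of
# increasing sample size.
from scipy.stats import beta

prior_a, prior_b = 6, 4            # prior belief: reward probability near 0.6
for n in (3, 12, 48):              # growing likelihood sample size
    k = round(0.25 * n)            # observed successes suggest roughly 0.25
    post = beta(prior_a + k, prior_b + n - k)
    print(n, round(post.mean(), 3))   # posterior mean drifts toward 0.25
```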
Likelihood-Ratio DIF Testing: Effects of Nonnormality
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…
2010-01-01
Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistics and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compare different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance, regarding power, sensitivity and positive predictive value. The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
Fischer, Sebastian; Wiemer, Anita; Diedrich, Laura; Moock, Jörn; Rössler, Wulf
2014-01-01
We suggest that interactions with strangers at work influence the likelihood of depressive disorders, as they serve as an environmental stressor, which are a necessary condition for the onset of depression according to diathesis-stress models of depression. We examined a large dataset (N = 76,563 in K = 196 occupations) from the German pension insurance program and the Occupational Information Network dataset on occupational characteristics. We used a multilevel framework with individuals and occupations as levels of analysis. We found that occupational environments influence employees’ risks of depression. In line with the quotation that ‘hell is other people’, frequent conflictual contacts were related to greater likelihoods of depression in both males and females (OR = 1.14, p<.05). However, interactions with the public were related to greater likelihoods of depression for males but lower likelihoods of depression for females (ORinteraction = 1.21, p<.01). We theorize that some occupations may involve interpersonal experiences with negative emotional tones that make functional coping difficult and increase the risk of depression. In other occupations, these experiences have neutral tones and allow for functional coping strategies. Functional strategies are more often found in women than in men. PMID:25075855
Schwappach, David L. B.; Gehring, Katrin
2014-01-01
Purpose To investigate the likelihood of speaking up about patient safety in oncology and to clarify the effect of clinical and situational context factors on the likelihood of voicing concerns. Patients and Methods 1013 nurses and doctors in oncology rated four clinical vignettes describing coworkers’ errors and rule violations in a self-administered factorial survey (65% response rate). Multiple regression analysis was used to model the likelihood of speaking up as outcome of vignette attributes, responder’s evaluations of the situation and personal characteristics. Results Respondents reported a high likelihood of speaking up about patient safety but the variation between and within types of errors and rule violations was substantial. Staff without managerial function reported significantly higher levels of decision difficulty and discomfort with speaking up. Based on the information presented in the vignettes, 74%−96% would speak up towards a supervisor failing to check a prescription, 45%−81% would point a coworker to a missed hand disinfection, 82%−94% would speak up towards nurses who violate a safety rule in medication preparation, and 59%−92% would question a doctor violating a safety rule in lumbar puncture. Several vignette attributes predicted the likelihood of speaking up. Perceived potential harm, anticipated discomfort, and decision difficulty were significant predictors of the likelihood of speaking up. Conclusions Clinicians’ willingness to speak up about patient safety is considerably affected by contextual factors. Physicians and nurses without managerial function report substantial discomfort with speaking up. Oncology departments should provide staff with clear guidance and trainings on when and how to voice safety concerns. PMID:25116338
Insufficient DNA methylation affects healthy aging and promotes age-related health problems.
Liu, Liang; van Groen, Thomas; Kadish, Inga; Li, Yuanyuan; Wang, Deli; James, Smitha R; Karpf, Adam R; Tollefsbol, Trygve O
2011-08-01
DNA methylation plays an integral role in development and aging through epigenetic regulation of genome function. DNA methyltransferase 1 (Dnmt1) is the most prevalent DNA methyltransferase that maintains genomic methylation stability. To further elucidate the function of Dnmt1 in aging and age-related diseases, we exploited the Dnmt1+/- mouse model to investigate how Dnmt1 haploinsufficiency impacts the aging process by assessing the changes of several major aging phenotypes. We confirmed that Dnmt1 haploinsufficiency indeed decreases DNA methylation as a result of reduced Dnmt1 expression. To assess the effect of Dnmt1 haploinsufficiency on general body composition, we performed dual-energy X-ray absorptiometry analysis and showed that reduced Dnmt1 activity decreased bone mineral density and body weight, but with no significant impact on mortality or body fat content. Using behavioral tests, we demonstrated that Dnmt1 haploinsufficiency impairs learning and memory functions in an age-dependent manner. Taken together, our findings point to the interesting likelihood that reduced genomic methylation activity adversely affects the healthy aging process without altering survival and mortality. Our studies demonstrated that cognitive functions of the central nervous system are modulated by Dnmt1 activity and genomic methylation, highlighting the significance of the original epigenetic hypothesis underlying memory coding and function.
COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration
2015-11-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we couple COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
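A toy sketch of the path-sampling computation (a conjugate Gaussian example in which each power posterior can be sampled exactly; in a real groundwater model, MCMC would be run at each power coefficient instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: theta ~ N(0, 1) prior, y_i ~ N(theta, 1) likelihood.
y = rng.normal(1.5, 1.0, size=20)
n, s = y.size, y.sum()

def log_lik(theta):
    # Log-likelihood of the data for an array of theta values
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1))

# Power posteriors p_beta(theta) are proportional to prior(theta) * L(theta)**beta.
betas = np.linspace(0.0, 1.0, 21)
e_loglik = []
for b in betas:
    prec = 1.0 + b * n                               # Gaussian power posterior
    draws = rng.normal(b * s / prec, np.sqrt(1.0 / prec), size=20000)
    e_loglik.append(log_lik(draws).mean())

# Thermodynamic integration: log Z = integral over beta of E_beta[log L],
# here evaluated with the trapezoidal rule over the beta ladder.
e_loglik = np.array(e_loglik)
log_evidence = np.sum(np.diff(betas) * (e_loglik[:-1] + e_loglik[1:]) / 2.0)
print(log_evidence)
```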
Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi
2016-04-01
Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieza Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover strongly varies the likelihood of community persistence between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interactions changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures.
Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array
NASA Astrophysics Data System (ADS)
Mizuno, T.; LeCalvez, J.; Raymer, D.
2017-12-01
Application of distributed acoustic sensing (DAS) has been studied in several areas in seismology. One of the areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small quantity of large SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for a majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated to time and space to determine hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to small aperture, a minimum aperture threshold is employed. The algorithm refines location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. The method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic synthetic data. The likelihood function formed by both DAS and geophone behaves as expected with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays to monitor microseismic activity. A study using a recently acquired dataset is planned.
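A stripped-down sketch of the coalescence idea (hypothetical geometry, velocity model, and SNR-derived traces; not the revised algorithm itself): per-receiver arrival-likelihood traces are stacked at the travel times predicted for each candidate location and origin time, and the maximum of the stack gives the event location and origin time.

```python
import numpy as np

rng = np.random.default_rng(0)
v = 3.0                                      # km/s, homogeneous model
receivers = np.array([[0.0, z] for z in np.linspace(0.0, 2.0, 12)])
dt, n_samp = 0.004, 2000
arrival_like = rng.random((len(receivers), n_samp)) * 0.1   # background noise

# Inject a synthetic event at x_true with origin sample 300
x_true = np.array([1.0, 1.2])
for r, pos in enumerate(receivers):
    tt = np.linalg.norm(x_true - pos) / v
    arrival_like[r, 300 + int(round(tt / dt))] += 1.0

def migrate(grid_x, grid_z, origin_samples):
    """Grid search: stack arrival likelihoods at predicted travel times."""
    best, arg = -np.inf, None
    for x in grid_x:
        for z in grid_z:
            tt_samp = (np.linalg.norm(np.array([x, z]) - receivers, axis=1)
                       / v / dt).round().astype(int)
            for t0 in origin_samples:
                score = arrival_like[np.arange(len(receivers)), t0 + tt_samp].sum()
                if score > best:
                    best, arg = score, (x, z, t0)
    return arg, best

print(migrate(np.linspace(0.5, 1.5, 11), np.linspace(0.5, 2.0, 16),
              range(250, 350)))
```

Dynamic aperture selection would correspond to restricting the sum to receivers whose traces exceed an SNR criterion for the candidate event.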
Unified halo-independent formalism from convex hulls for direct dark matter searches
NASA Astrophysics Data System (ADS)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2017-12-01
Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements and the maximum number of delta functions is (N − 1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃⁰_BF(v_min) (which is an integral of the speed distribution) with at most (N − 1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃_BF(v_min, t) function (and a time-averaged η̃⁰_BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.
The Creative Side of the Dark Triad
ERIC Educational Resources Information Center
Kapoor, Hansika
2015-01-01
This study associates the subclinical dark triad (DT) of personality--narcissism, psychopathy, and Machiavellianism, and their composite--with negative creativity. An instrument developed by the author assessed the likelihood of engaging in creativity, where negative creativity was defined as an act that is original and useful to the individual.…
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.
ERIC Educational Resources Information Center
Heesacker, Martin
The importance of high levels of involvement in counseling has been related to theories of interpersonal influence. To examine differing effects of counselor credibility as a function of how personally involved counselors are, the Elaboration Likelihood Model (ELM) of attitude change was applied to counseling pretreatment. Students (N=256) were…
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression has become one of the well-known methods for detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under maximum likelihood estimation, do not seem to be consistently distinguished in the DIF literature. This paper provides a clarifying…
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Effects of Habitual Anger on Employees’ Behavior during Organizational Change
Bönigk, Mareike; Steffgen, Georges
2013-01-01
Organizational change is a particularly emotional event for those being confronted with it. Anger is a frequently experienced emotion under these conditions. This study analyses the influence of employees’ habitual anger reactions on their reported behavior during organizational change. It was explored whether anger reactions conducive to recovering or increasing individual well-being will enhance the likelihood of functional change behavior. Dysfunctional regulation strategies in terms of individual well-being are expected to decrease the likelihood of functional change behavior—mediated by the commitment to change. Four hundred and twelve employees of different organizations in Luxembourg undergoing organizational change participated in the study. Findings indicate that the anger regulation strategy venting, and humor increase the likelihood of deviant resistance to change. Downplaying the incident’s negative impact and feedback increase the likelihood of active support for change. The mediating effect of commitment to change has been found for humor and submission. The empirical findings suggest that a differentiated conceptualization of resistance to change is required. Specific implications for practical change management and for future research are discussed. PMID:24287849
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
Poisson point process modeling for polyphonic music transcription.
Peeling, Paul; Li, Chung-fai; Godsill, Simon
2007-04-01
Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
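A minimal sketch of the likelihood structure described above, under assumed forms: each note contributes a nonhomogeneous Poisson intensity over frequency (here Gaussian bumps at integer harmonics of its fundamental), intensities of simultaneous notes add to give another Poisson intensity, and the log-likelihood of the detected peaks is Σ_i log λ(f_i) − ∫ λ(f) df. The harmonic shape, parameter values, and function names are illustrative rather than the authors'.

```python
import numpy as np

def note_intensity(freqs, f0, n_harmonics=8, width=2.0, strength=5.0):
    """Assumed Poisson intensity over frequency for a single note: Gaussian
    bumps centred on integer multiples of the fundamental f0 (Hz)."""
    lam = np.zeros_like(freqs, dtype=float)
    for k in range(1, n_harmonics + 1):
        lam += strength * np.exp(-0.5 * ((freqs - k * f0) / width) ** 2)
    return lam

def chord_log_likelihood(peak_freqs, fundamentals, fmax=4000.0, n_grid=8000):
    """log L = sum_i log(lambda(f_i)) - integral(lambda), where lambda is the
    sum of the per-note intensities (itself a Poisson intensity)."""
    grid = np.linspace(0.0, fmax, n_grid)
    lam_grid = sum(note_intensity(grid, f0) for f0 in fundamentals)
    lam_peaks = sum(note_intensity(np.asarray(peak_freqs, float), f0)
                    for f0 in fundamentals)
    integral = lam_grid.sum() * (grid[1] - grid[0])      # simple Riemann sum
    return np.sum(np.log(lam_peaks + 1e-12)) - integral

# toy maximum-likelihood search for a single fundamental given detected peaks
peaks = [220.5, 440.2, 661.0, 879.8]
candidates = np.arange(150.0, 500.0, 1.0)
scores = [chord_log_likelihood(peaks, [f0]) for f0 in candidates]
print(candidates[int(np.argmax(scores))])                # close to 220 Hz
```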
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
Johnson, Timothy R; Kuhn, Kristine M
2015-12-01
This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state space dynamical model, and the problem unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are very seldom freely available. Both the system joint likelihood function (of the state variables and parameters) and the system marginal likelihood function (with the state variables eliminated) are addressed.
Less-Complex Method of Classifying MPSK
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2006-01-01
An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis (M or M') is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, L, of equally spaced values of carrier phase. Used in this way, L is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as L approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method.
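A hedged sketch of the approximation described above, not the authors' code: the per-sample MPSK likelihood is averaged over the M equally likely symbols, the unknown carrier phase is handled by averaging the resulting likelihood over L equally spaced phase values instead of integrating, and the hypothesis with the larger value wins. The complex Gaussian noise model, parameter values, and names are assumptions.

```python
import numpy as np

def mpsk_log_likelihood(r, M, amp=1.0, noise_var=0.5, n_phase=16):
    """Approximate log-likelihood of complex baseband samples r under an
    M-ary PSK hypothesis: the integral over the unknown carrier phase is
    replaced by an average over n_phase equally spaced phase values
    (the role played by the parameter L in the text above)."""
    r = np.asarray(r)
    phases = 2.0 * np.pi * np.arange(n_phase) / n_phase
    log_like_per_phase = np.empty(n_phase)
    for i, theta in enumerate(phases):
        symbols = amp * np.exp(1j * (2.0 * np.pi * np.arange(M) / M + theta))
        # per-sample likelihood, averaged over the M equally likely symbols
        d2 = np.abs(r[:, None] - symbols[None, :]) ** 2       # (n_samples, M)
        per_sample = np.mean(np.exp(-d2 / noise_var), axis=1) / (np.pi * noise_var)
        log_like_per_phase[i] = np.sum(np.log(per_sample + 1e-300))
    # average over phase in the likelihood domain (log-sum-exp for stability)
    m = log_like_per_phase.max()
    return m + np.log(np.mean(np.exp(log_like_per_phase - m)))

def classify(r, hypotheses=(2, 4), **kw):
    """Pick the PSK order with the larger approximate log-likelihood."""
    return max(hypotheses, key=lambda M: mpsk_log_likelihood(r, M, **kw))

# toy usage: noisy QPSK samples with an arbitrary carrier phase offset
rng = np.random.default_rng(1)
bits = rng.integers(0, 4, 200)
r = np.exp(1j * (2 * np.pi * bits / 4 + 0.7)) \
    + 0.3 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
print(classify(r, hypotheses=(2, 4, 8)))   # should typically report 4 (QPSK)
```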
Optimal Methods for Classification of Digitally Modulated Signals
2013-03-01
Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. Two methodologies, the likelihood ratio test and the KL information divergence, were used to develop blind-demodulation classification algorithms for a wider set of signal types, including BPSK and BPSK spread spectrum (CDMA).
A spatially explicit capture-recapture estimator for single-catch traps.
Distiller, Greg; Borchers, David L
2015-11-01
Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.
NASA Astrophysics Data System (ADS)
Zhou, X.; Albertson, J. D.
2016-12-01
Natural gas is considered a bridge fuel towards clean energy due to its potentially lower greenhouse gas emissions compared with other fossil fuels. Despite numerous efforts, an efficient and cost-effective approach to monitoring fugitive methane emissions along the natural gas production-supply chain has not yet been developed. Recently, mobile methane measurement has been introduced, which applies a Bayesian approach to probabilistically infer methane emission rates and update the estimates recursively as new measurements become available. However, the likelihood function, especially the error term which determines the shape of the estimate uncertainty, has not been rigorously defined and evaluated with field data. To address this issue, we performed a series of near-source (< 30 m) controlled methane release experiments using a specialized vehicle mounted with fast-response methane analyzers and a GPS unit. Methane concentrations were measured at two different heights along mobile traversals downwind of the sources, and concurrent wind and temperature data were recorded by nearby 3-D sonic anemometers. With known methane release rates, the measurements were used to determine the functional form and the parameterization of the likelihood function in the Bayesian inference scheme under different meteorological conditions.
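As a hedged sketch of the recursive Bayesian scheme described above (with an assumed, simplified likelihood rather than the one being calibrated in the study), each traversal's measured concentration enhancement is modeled as a dispersion coupling factor times the emission rate plus Gaussian error, and the posterior over a grid of candidate rates is updated after each measurement. All names and values are illustrative.

```python
import numpy as np

def bayes_update(prior, rate_grid, coupling, observed, sigma):
    """One recursive Bayesian update of the emission-rate posterior.
    Assumed model: concentration enhancement = coupling * rate + Gaussian error.
    prior     : (n,) probabilities over rate_grid
    coupling  : dispersion factor linking rate to concentration at the sensor
    observed  : measured concentration enhancement
    sigma     : standard deviation of the Gaussian error term
    """
    likelihood = np.exp(-0.5 * ((observed - coupling * rate_grid) / sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# toy usage: true rate 2.0 (arbitrary units), three successive traversals
rate_grid = np.linspace(0.0, 5.0, 501)
posterior = np.full_like(rate_grid, 1.0 / rate_grid.size)   # flat prior
rng = np.random.default_rng(2)
for coupling in (0.8, 1.1, 0.9):
    obs = coupling * 2.0 + rng.normal(0.0, 0.2)
    posterior = bayes_update(posterior, rate_grid, coupling, obs, sigma=0.2)
print(rate_grid[np.argmax(posterior)])   # should concentrate near 2.0
```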
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
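A hedged numerical sketch of the single-step combination described above: each evidence source is represented by a likelihood for climate sensitivity on a grid, the two likelihoods are multiplied, a prior is applied, and the product is normalized to give the combined posterior. The likelihood shapes below are simple stand-ins, and a flat placeholder prior replaces the noninformative prior derived in the paper.

```python
import numpy as np

S = np.linspace(0.5, 10.0, 2000)                  # climate sensitivity grid (K)
dS = S[1] - S[0]

# illustrative stand-in likelihoods (not the paper's fitted three-parameter forms)
lik_inst = np.exp(-0.5 * ((np.log(S) - np.log(1.8)) / 0.35) ** 2)   # skewed, instrumental
lik_paleo = np.exp(-0.5 * ((S - 2.5) / 1.2) ** 2)                   # broader, paleoclimate

combined = lik_inst * lik_paleo                   # multiplicative combination
prior = np.ones_like(S)                           # flat placeholder prior
post = combined * prior
post /= post.sum() * dS                           # normalize to a density

cdf = np.cumsum(post) * dS
median = S[np.searchsorted(cdf, 0.5)]
lo, hi = S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)]
print(round(float(median), 2), (round(float(lo), 2), round(float(hi), 2)))
```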
Mason, Tyler B; Lewis, Robin J
2017-12-01
Binge eating is a significant concern among college-age women, both Caucasian and African-American. Research has shown that social support, coping, and optimism are associated with engaging in fewer negative health behaviors including binge eating among college students. However, the impact of sources of social support (i.e., support from family, friends, and a special person), rumination, and optimism on binge eating as a function of race/ethnicity has received less attention. The purpose of this study was to examine the association between social support, rumination, and optimism and binge eating among Caucasian and African-American women, separately. Caucasian (n = 100) and African-American (n = 84) women from a university in the Mid-Atlantic US completed an online survey about eating behaviors and psychosocial health. Social support from friends was associated with less likelihood of binge eating among Caucasian women. Social support from family was associated with less likelihood of binge eating among African-American women, but greater likelihood of binge eating among Caucasian women. Rumination was associated with greater likelihood of binge eating among Caucasian and African-American women. Optimism was associated with less likelihood of binge eating among African-American women. These results demonstrate similarities and differences in correlates of binge eating as a function of race/ethnicity.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2016-01-15
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
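As a hedged illustration of the marginal/composite-likelihood structure (not the authors' estimation code), the sketch below writes the log-likelihood of bivariate binary outcomes as the sum of two closed-form beta-binomial marginal contributions per study, so no joint distribution of the study-specific probabilities is ever specified. The parameterization and toy data are assumptions.

```python
import numpy as np
from scipy.stats import betabinom   # requires SciPy >= 1.4

def marginal_betabinom_loglik(params, data):
    """Composite (marginal) log-likelihood: each of the two binary outcomes is
    modeled marginally with a beta-binomial, and the per-study contributions
    are summed; no joint distribution of the study-specific probabilities is
    needed.  params = (a1, b1, a2, b2); each data row is (n1, y1, n2, y2)."""
    a1, b1, a2, b2 = params
    ll = 0.0
    for n1, y1, n2, y2 in data:
        ll += betabinom.logpmf(y1, n1, a1, b1) + betabinom.logpmf(y2, n2, a2, b2)
    return ll

# toy diagnostic-accuracy data: (diseased n, true positives, healthy n, false positives)
data = [(50, 42, 60, 6), (80, 70, 90, 10), (40, 30, 55, 4)]
print(marginal_betabinom_loglik((8.0, 2.0, 1.0, 9.0), data))
```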
Effectiveness of percutaneous closure of patent foramen ovale for hypoxemia.
Fenster, Brett E; Nguyen, Bryant H; Buckner, J Kern; Freeman, Andrew M; Carroll, John D
2013-10-15
The aim of this study was to evaluate the ability of percutaneous patent foramen ovale (PFO) closure to improve systemic hypoxemia. Although PFO-mediated right-to-left shunt (RTLS) is associated with hypoxemia, the ability of percutaneous closure to ameliorate hypoxemia is unknown. Between 2004 and 2009, 97 patients who underwent PFO closure for systemic hypoxemia and dyspnea that was disproportionate to underlying lung disease were included for evaluation. All patients exhibited PFO-mediated RTLS as determined by agitated saline echocardiography. Procedural success was defined as implantation of a device without major complications and mild or no residual shunt at 6 months. Clinical success was defined as a composite of an improvement in New York Heart Association (NYHA) functional class, reduction of dyspnea symptoms, or decreased oxygen requirement. Procedural success was achieved in 96 of 97 (99%), and clinical success was achieved in 68 of 97 (70%). The presence of any moderate or severe interatrial shunt by agitated saline study (odds ratio [OR] = 4.7; p <0.024), NYHA class at referral (OR = 2.9; p <0.0087), and 10-year increase in age (OR = 1.8; p <0.0017) increased likelihood of clinical success. In contrast, a pulmonary comorbidity (OR = 0.18; p <0.005) and male gender (OR = 0.30; p <0.017) decreased the likelihood of success. In conclusion, based on the largest single-center experience of patients referred for PFO closure for systemic hypoxemia, PFO closure was a mechanically effective procedure with an associated improvement in echocardiographic evidence of RTLS, NYHA functional class, and oxygen requirement. Copyright © 2013 Elsevier Inc. All rights reserved.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
Smith, Paul D; Hanlon, Michael P
2017-12-01
Smith, PD, and Hanlon, D. Assessing the effectiveness of the functional movement screen in predicting noncontact injury rates in soccer players. J Strength Cond Res 31(12): 3327-3332, 2017-This study assessed if the Functional Movement Screen (FMS) can accurately predict noncontact injury in adult soccer players when normalizing noncontact injury occurrence against match exposure levels. Senior male players (n = 89) from 5 League of Ireland semiprofessional clubs participated in the study (mean age = 23.2 ± 4.4 years; mean height = 179.5 ± 6.6 cm; mean body mass = 77.5 ± 7.8 kg). Participants performed the FMS during preseason, and their injury occurrence rates and match minutes were tracked throughout 1 season. In total, 66 noncontact injuries were recorded. No significant difference was found in FMS composite scores between players receiving noncontact injuries and players not suffering a noncontact injury (p = 0.96). There was no significant difference in exposure-normalized noncontact injury incidence between those scoring 14 or below and those scoring above 14 on the FMS (0.36 vs. 0.29 non-contact injuries per player per 1,000 match minutes). Players scoring 14 or below on the FMS had an odds ratio of 0.63 (p = 0.45; 95% CI = 0.19-2.07) of receiving a noncontact injury. Despite previous research showing links between low FMS composite scores and subsequent injury, these results suggest that the FMS cannot accurately predict a male soccer player's likelihood of receiving a noncontact injury and that a lower FMS composite score does not significantly increase their noncontact injury incidence rate per 1,000 match minutes. Caution should therefore be used when using the FMS as a predictor of noncontact injury, and pain prevalence during the FMS, previous injuries, and training/match exposure levels should also be taken into account.
The Nonmetro Labor Force in the Seventies.
ERIC Educational Resources Information Center
Schaub, James D.
The report identifies structural changes and trends in the composition of the nonmetro labor force between 1973 and 1979; evaluates the labor force performance by race, sex, and age; and suggests underlying causes of the major changes and the likelihood of particular trends continuing into the eighties. Tabular data indicate that: (1) metro and…
An unsupervised classification technique for multispectral remote sensing data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Cummings, R. E.
1973-01-01
Description of a two-part clustering technique consisting of (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum-likelihood classification techniques.
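A hedged sketch of the two-stage composite technique described above: a simple sequential pass (standing in for the sequential variance analysis) starts a new cluster whenever a pixel is farther than a threshold from every running cluster mean, and the resulting means seed a generalized K-means refinement. The threshold, toy data, and names are assumptions.

```python
import numpy as np

def sequential_clusters(X, threshold):
    """First stage (simplified stand-in for the sequential variance analysis):
    assign each sample to the nearest existing cluster mean, or start a new
    cluster when the distance exceeds the threshold."""
    means, counts = [X[0].astype(float)], [1]
    for x in X[1:]:
        d = [np.linalg.norm(x - m) for m in means]
        j = int(np.argmin(d))
        if d[j] > threshold:
            means.append(x.astype(float)); counts.append(1)
        else:
            counts[j] += 1
            means[j] += (x - means[j]) / counts[j]   # running mean update
    return np.array(means)

def kmeans(X, means, n_iter=20):
    """Second stage: generalized K-means refinement of the initial clusters."""
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - means[None, :, :]) ** 2).sum(-1), axis=1)
        means = np.array([X[labels == k].mean(axis=0) for k in range(len(means))])
    return labels, means

# toy multispectral pixels (4 bands) drawn from two spectral classes
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.2, 0.05, (100, 4)), rng.normal(0.8, 0.05, (100, 4))])
labels, means = kmeans(X, sequential_clusters(X, threshold=0.5))
print(np.bincount(labels))
```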
NASA Astrophysics Data System (ADS)
Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith
2018-01-01
This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be approached in a closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
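A minimal, hedged ABC rejection sketch illustrating the likelihood-free idea, not the paper's implementation: candidate parameters drawn from the prior are accepted whenever the summary statistic of the simulated data lies within a tolerance of the observed summary, so no likelihood is ever evaluated. The oscillator example, metric, and tolerance are assumptions; the paper's cubic, Bouc-Wen, and Duffing examples would change only the forward model and summary statistics.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sampler, summary, tol, n_draws=20000):
    """Likelihood-free ABC rejection: keep prior draws whose simulated
    summary statistic falls within `tol` of the observed summary."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if abs(summary(simulate(theta)) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

# toy example: recover the stiffness k of a linear oscillator x'' + k x = 0
# from its (noisy) oscillation frequency
rng = np.random.default_rng(4)
true_freq = np.sqrt(9.0) / (2 * np.pi)                    # k_true = 9.0
observed = true_freq + rng.normal(0, 0.01)

simulate = lambda k: np.sqrt(k) / (2 * np.pi) + rng.normal(0, 0.01)
posterior = abc_rejection(observed, simulate,
                          prior_sampler=lambda: rng.uniform(1, 20),
                          summary=lambda s: s, tol=0.02)
print(posterior.mean(), posterior.std())                  # should be close to 9.0
```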
Free energy reconstruction from steered dynamics without post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
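The abstract notes that parameter uncertainties follow from the curvature of the likelihood at its maximum. The sketch below illustrates that generic idea on a much simpler stand-in model (exponentially distributed dwell times rather than the full two-state photon likelihood): the numerical second derivative of the negative log-likelihood at the maximum-likelihood estimate gives the observed information, whose inverse square root estimates the standard deviation. All names and the toy model are assumptions.

```python
import numpy as np

def neg_log_like(rate, dwell_times):
    """Negative log-likelihood of exponentially distributed dwell times
    (a stand-in for the full two-state photon likelihood)."""
    return -np.sum(np.log(rate) - rate * dwell_times)

def curvature_std(f, x_hat, h=1e-4):
    """1-D observed information: second derivative of -log L at the maximum;
    its inverse square root estimates the standard deviation of the MLE."""
    second = (f(x_hat + h) - 2 * f(x_hat) + f(x_hat - h)) / h ** 2
    return 1.0 / np.sqrt(second)

rng = np.random.default_rng(5)
dwells = rng.exponential(scale=1.0 / 3.0, size=2000)      # true rate = 3
k_hat = 1.0 / dwells.mean()                               # analytic MLE for this toy model
print(k_hat, curvature_std(lambda k: neg_log_like(k, dwells), k_hat))
# the standard deviation should be close to k_hat / sqrt(n)
```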
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Uncertainty analysis of signal deconvolution using a measured instrument response function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.
2016-10-05
A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model’s parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series
NASA Astrophysics Data System (ADS)
Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.
2018-03-01
Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
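A hedged sketch of the zero-inflation idea, not the exact ZI-GL formulation of the note: the error-model likelihood becomes a mixture in which an observed zero contributes a point-mass probability and a non-zero observation contributes a continuous (here Gaussian, linearly heteroscedastic) residual density. The mixture weight, error model, and toy series are assumptions.

```python
import numpy as np
from scipy.stats import norm

def zero_inflated_log_like(obs, sim, pi0=0.3, sigma0=0.05, slope=0.1):
    """Hedged sketch of a zero-inflated error likelihood for a data series
    with many zero observations (NOT the exact ZI-GL formulation):
      - an observed zero contributes the log of a point-mass probability pi0,
      - a non-zero observation contributes a Gaussian density for the residual,
        with a simple linear heteroscedastic standard deviation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    sigma = sigma0 + slope * sim                   # heteroscedastic error model
    loglik = np.where(obs == 0.0,
                      np.log(pi0),                 # point mass for zero observations
                      np.log1p(-pi0) + norm.logpdf(resid, scale=sigma))
    return loglik.sum()

# toy interception-like series with about half zero observations
obs = np.array([0.0, 0.0, 0.4, 1.2, 0.0, 0.8, 0.0, 0.3])
sim = np.array([0.0, 0.1, 0.5, 1.0, 0.0, 0.7, 0.1, 0.4])
print(zero_inflated_log_like(obs, sim))
```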
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect method of identifying the potential cluster, and the test statistic is the extreme value of the likelihood function. Similar to Kulldorff's methods, we adopt a Monte Carlo test for the test of significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Through a simulation with independent benchmark data, it is indicated that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
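A hedged sketch of the null-hypothesis likelihood described above, not the authors' code: for each candidate window, the observed case count inside is compared with the Hypergeometric distribution implied by random allocation of all cases over the population, and the window with the smallest null likelihood is flagged as the potential cluster; significance would then be assessed by Monte Carlo re-labelling. The toy counts and window definitions are assumptions.

```python
import numpy as np
from scipy.stats import hypergeom

def null_likelihoods(pop, cases, windows):
    """Likelihood of the observed case count in each candidate window under the
    null (Hypergeometric) model: cases are distributed over locations purely in
    proportion to population, with no clustering."""
    N, K = pop.sum(), cases.sum()
    out = []
    for idx in windows:
        n, k = pop[idx].sum(), cases[idx].sum()
        out.append(hypergeom.pmf(k, N, K, n))
    return np.array(out)

# toy data: 6 locations, candidate windows are sets of neighboring locations
pop = np.array([100, 120, 90, 110, 95, 105])
cases = np.array([2, 3, 12, 11, 2, 3])            # locations 2-3 look clustered
windows = [np.array(w) for w in ([0, 1], [1, 2], [2, 3], [3, 4], [4, 5])]

like = null_likelihoods(pop, cases, windows)
print(like)                                        # smallest value flags window [2, 3]
# significance would then be assessed by Monte Carlo re-labelling of the cases
```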
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K
2016-05-01
The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify some factors such as age and gender that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right-censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are also given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.
Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L
2005-12-01
Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
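A hedged sketch of adaptive smoothed seismicity, not the study's calibrated model: each catalog event is spread with a Gaussian kernel whose width is the distance to its n-th nearest neighboring event, floored at a minimum distance in the same spirit as the stabilization mentioned above, and the kernels are summed to give a relative rate on a grid. All parameter values and names are illustrative.

```python
import numpy as np

def adaptive_smoothed_rate(quakes, grid, n_neighbor=3, d_min=5.0):
    """Adaptive smoothed seismicity sketch: Gaussian kernel per event, with a
    smoothing distance equal to the distance to its n-th nearest neighboring
    event (floored at d_min km). Returns unnormalized relative rates."""
    rate = np.zeros(len(grid))
    for q in quakes:
        d = np.linalg.norm(quakes - q, axis=1)
        h = max(np.sort(d)[n_neighbor], d_min)     # adaptive smoothing distance
        r2 = ((grid - q) ** 2).sum(axis=1)
        rate += np.exp(-0.5 * r2 / h ** 2) / (2 * np.pi * h ** 2)
    return rate

# toy catalog: a dense cluster plus scattered background events (coordinates in km)
rng = np.random.default_rng(6)
quakes = np.vstack([rng.normal(0, 2, (40, 2)), rng.uniform(-100, 100, (20, 2))])
grid = np.stack(np.meshgrid(np.linspace(-100, 100, 41),
                            np.linspace(-100, 100, 41)), -1).reshape(-1, 2)
rates = adaptive_smoothed_rate(quakes, grid)
print(grid[np.argmax(rates)])        # should fall near the cluster at the origin
```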
Radial mixing and Ru-Mo isotope systematics under different accretion scenarios
NASA Astrophysics Data System (ADS)
Fischer, Rebecca A.; Nimmo, Francis; O'Brien, David P.
2018-01-01
The Ru-Mo isotopic compositions of inner Solar System bodies may reflect the provenance of accreted material and how it evolved with time, both of which are controlled by the accretion scenario these bodies experienced. Here we use a total of 116 N-body simulations of terrestrial planet accretion, run in the Eccentric Jupiter and Saturn (EJS), Circular Jupiter and Saturn (CJS), and Grand Tack scenarios, to model the Ru-Mo anomalies of Earth, Mars, and Theia analogues. This model starts by applying an initial step function in Ru-Mo isotopic composition, with compositions reflecting those in meteorites, and traces compositional evolution as planets accrete. The mass-weighted provenance of the resulting planets reveals more radial mixing in Grand Tack simulations than in EJS/CJS simulations, and more efficient mixing among late-accreted material than during the main phase of accretion in EJS/CJS simulations. We find that an extensive homogeneous inner disk region is required to reproduce Earth's observed Ru-Mo composition. EJS/CJS simulations require a homogeneous reservoir in the inner disk extending to ≥3-4 AU (≥74-98% of initial mass) to reproduce Earth's composition, while Grand Tack simulations require a homogeneous reservoir extending to ≥3-10 AU (≥97-99% of initial mass), and likely to ≥6-10 AU. In the Grand Tack model, Jupiter's initial location (the most likely location for a discontinuity in isotopic composition) is ∼3.5 AU; however, this step location has only a 33% likelihood of producing an Earth with the correct Ru-Mo isotopic signature for the most plausible model conditions. Our results give the testable predictions that Mars has zero Ru anomaly and small or zero Mo anomaly, and the Moon has zero Mo anomaly. These predictions are insensitive to wide variations in parameter choices.
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. This potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former converts the four bases into purines and pyrimidines to normalize base frequencies across a tree, while the latter explicitly incorporates the heterogeneity in base frequency. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined in simulation studies. Here, we assessed the performance of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on data simulated with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses performed better than homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be affected by the substitution process assumed in the sequence simulation to a significantly greater degree than that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
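RY-coding itself is a simple recoding of purines to R and pyrimidines to Y; a minimal sketch is shown below (any non-nucleotide symbol is left unchanged).

```python
# Purines (A, G) -> R; pyrimidines (C, T/U) -> Y; other symbols kept as-is.
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence into purine/pyrimidine (RY) symbols."""
    return "".join(RY_MAP.get(base, base) for base in seq.upper())

print(ry_code("ACGTACGT-N"))  # -> "RYRYRYRY-N"
```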
What mediates tree mortality during drought in the southern Sierra Nevada?
Paz-Kagan, Tarin; Brodrick, Philip; Vaughn, Nicholas R.; Das, Adrian J.; Stephenson, Nathan L.; Nydick, Koren R.; Asner, Gregory P.
2017-01-01
Severe drought has the potential to cause selective mortality within a forest, thereby inducing shifts in forest species composition. The southern Sierra Nevada foothills and mountains of California have experienced extensive forest dieback due to drought stress and insect outbreak. We used high-fidelity imaging spectroscopy (HiFIS) and light detection and ranging (LiDAR) from the Carnegie Airborne Observatory (CAO) to estimate the effect of forest dieback on species composition in response to drought stress in Sequoia National Park. Our aims were: (1) to quantify site-specific conditions that mediate tree mortality along an elevation gradient in the southern Sierra Nevada Mountains; (2) to assess where mortality events have a greater probability of occurring; and (3) to estimate which tree species have a greater likelihood of mortality along the elevation gradient. A series of statistical models were generated to classify species composition and identify tree mortality, and the influences of different environmental factors were spatially quantified and analyzed to assess where mortality events have a greater likelihood of occurring. A higher probability of mortality was observed in the lower portion of the elevation gradient, on southwest and west-facing slopes, in areas with shallow soils, on shallower slopes, and at greater distances from water. All of these factors are related to site water balance throughout the landscape. Our results also suggest that mortality is species-specific along the elevation gradient, mainly affecting Pinus ponderosa and Pinus lambertiana at lower elevations. Selective mortality within the forest may drive long-term shifts in community composition along the elevation gradient.
Parejo, M; Wragg, D; Henriques, D; Vignal, A; Neuditschko, M
2017-12-01
Human-mediated selection has left signatures in the genomes of many domesticated animals, including the European dark honeybee, Apis mellifera mellifera, which has been selected by apiculturists for centuries. Using whole-genome sequence information, we investigated selection signatures in spatially separated honeybee subpopulations (Switzerland, n = 39 and France, n = 17). Three different test statistics were calculated in windows of 2 kb (fixation index, cross-population extended haplotype homozygosity and cross-population composite likelihood ratio) and combined into a recently developed composite selection score. Applying a stringent false discovery rate of 0.01, we identified six significant selective sweeps distributed across five chromosomes covering eight genes. These genes are associated with multiple molecular and biological functions, including regulation of transcription, receptor binding and signal transduction. Of particular interest is a selection signature on chromosome 1, which corresponds to the WNT4 gene, the family of which is conserved across the animal kingdom with a variety of functions. In Drosophila melanogaster, WNT4 alleles have been associated with differential wing, cross vein and abdominal phenotypes. Defining phenotypic characteristics of different Apis mellifera ssp., which are typically used as selection criteria, include colour and wing venation pattern. This signal is therefore likely to be a good candidate for human-mediated selection arising from different applied breeding practices in the two managed populations. © 2017 The Authors. Animal Genetics published by John Wiley & Sons Ltd on behalf of Stichting International Foundation for Animal Genetics.
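Composite selection scores of this kind are commonly built by rank-transforming each per-window statistic, mapping the fractional ranks to z-scores, and averaging across statistics. The sketch below follows that general recipe and is not the authors' code; the input matrix and the number of statistics are assumptions for illustration.

```python
import numpy as np
from scipy.stats import rankdata, norm

def composite_selection_score(stat_matrix):
    """Combine per-window selection statistics (rows = windows, columns =
    statistics such as Fst, XP-EHH, XP-CLR) into a single composite score.
    Each statistic is converted to fractional ranks, mapped to z-scores,
    and the z-scores are averaged across statistics."""
    stats = np.asarray(stat_matrix, dtype=float)
    n = stats.shape[0]
    frac_ranks = (rankdata(stats, axis=0) - 0.5) / n   # in (0, 1)
    z = norm.ppf(frac_ranks)
    return z.mean(axis=1)

# toy example: 1000 windows, 3 statistics
scores = composite_selection_score(np.random.rand(1000, 3))
print(scores[:5])
```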
Functional mapping of quantitative trait loci associated with rice tillering.
Liu, G F; Li, M; Wen, J; Du, Y; Zhang, Y-M
2010-10-01
Several biologically significant parameters that are related to rice tillering are closely associated with rice grain yield. Although identification of the genes that control rice tillering and therefore influence crop yield would be valuable for rice production management and genetic improvement, these genes remain largely unidentified. In this study, we carried out functional mapping of quantitative trait loci (QTLs) for rice tillering in 129 doubled haploid lines, which were derived from a cross between IR64 and Azucena. We measured the average number of tillers in each plot at seven developmental stages and fit the growth trajectory of rice tillering with the Wang-Lan-Ding mathematical model. Four biologically meaningful parameters of this model, namely the potential maximum tiller number (K), the optimum tillering time (t0), and the rate of increase (r) or of decrease (c) at times deviating from t0, served as the variables for multi-marker joint analysis under the framework of penalized maximum likelihood, as well as for composite interval mapping. We detected a total of 27 QTLs that accounted for 2.49-8.54% of the total phenotypic variance. Nine common QTLs across multi-marker joint analysis and composite interval mapping showed high stability, while one QTL was environment-specific and three were epistatic. We also identified several genomic segments that are associated with multiple traits. Our results describe the genetic basis of rice tiller development, enable further marker-assisted selection in rice cultivar development, and provide useful information for rice production management.
Reconciling differences in stratospheric ozone composites
NASA Astrophysics Data System (ADS)
Ball, William T.; Alsing, Justin; Mortlock, Daniel J.; Rozanov, Eugene V.; Tummon, Fiona; Haigh, Joanna D.
2017-10-01
Observations of stratospheric ozone from multiple instruments now span three decades; combining these into composite datasets allows long-term ozone trends to be estimated. Recently, several ozone composites have been published, but trends disagree by latitude and altitude, even between composites built upon the same instrument data. We confirm that the main causes of differences in decadal trend estimates lie in (i) steps in the composite time series when the instrument source data changes and (ii) artificial sub-decadal trends in the underlying instrument data. These artefacts introduce features that can alias with regressors in multiple linear regression (MLR) analysis; both can lead to inaccurate trend estimates. Here, we aim to remove these artefacts using Bayesian methods to infer the underlying ozone time series from a set of composites by building a joint-likelihood function using a Gaussian-mixture density to model outliers introduced by data artefacts, together with a data-driven prior on ozone variability that incorporates knowledge of problems during instrument operation. We apply this Bayesian self-calibration approach to stratospheric ozone in 10° bands from 60° S to 60° N and from 46 to 1 hPa (∼21-48 km) for 1985-2012. There are two main outcomes: (i) we independently identify and confirm many of the data problems previously identified, but which remain unaccounted for in existing composites; (ii) we construct an ozone composite, with uncertainties, that is free from most of these problems; we call this the BAyeSian Integrated and Consolidated (BASIC) composite. To analyse the new BASIC composite, we use dynamical linear modelling (DLM), which provides a more robust estimate of long-term changes through Bayesian inference than MLR. BASIC and DLM, together, provide a step forward in improving estimates of decadal trends. Our results indicate a significant recovery of ozone since 1998 in the upper stratosphere, of both northern and southern midlatitudes, in all four composites analysed, and particularly in the BASIC composite. The BASIC results also show no hemispheric difference in the recovery at midlatitudes, in contrast to an apparent feature that is present, but not consistent, in the four composites. Our overall conclusion is that it is possible to effectively combine different ozone composites and account for artefacts and drifts, and that this leads to a clear and significant result that upper stratospheric ozone levels have increased since 1998, following an earlier decline.
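A Gaussian-mixture outlier term of the kind described can be illustrated with a two-component version: a narrow "inlier" component plus a broad component that absorbs artefact-driven residuals. The sketch below is schematic only; the widths, outlier fraction, and the paper's data-driven prior on ozone variability are not modelled.

```python
import numpy as np

def mixture_loglike(residuals, sigma_in, sigma_out, f_out):
    """Log-likelihood of residuals under a two-component Gaussian mixture:
    a narrow inlier component and a broad outlier component that absorbs
    artefact-driven jumps. A simplified stand-in for the paper's
    Gaussian-mixture outlier model."""
    r = np.asarray(residuals, dtype=float)

    def lognorm(x, s):
        return -0.5 * (x / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

    log_in = np.log1p(-f_out) + lognorm(r, sigma_in)
    log_out = np.log(f_out) + lognorm(r, sigma_out)
    return np.sum(np.logaddexp(log_in, log_out))

print(mixture_loglike([0.1, -0.2, 5.0], sigma_in=0.3, sigma_out=3.0, f_out=0.05))
```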
Lineup Composition, Suspect Position, and the Sequential Lineup Advantage
ERIC Educational Resources Information Center
Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.
2008-01-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…
ERIC Educational Resources Information Center
Piontak, Joy Rayanne; Schulman, Michael D.
2016-01-01
Background: Schools are important sites for interventions to prevent childhood obesity. This study examines how variables measuring the socioeconomic and racial composition of schools and counties affect the likelihood of obesity among third to fifth grade children. Methods: Body mass index data were collected from third to fifth grade public…
Motivation and Connection: Teaching Reading (and Writing) in the Composition Classroom
ERIC Educational Resources Information Center
Bunn, Michael
2013-01-01
Teaching reading in terms of its connections to writing can motivate students to read and increase the likelihood that they find success in both activities. It can lead students to value reading as an integral aspect of learning to write. It can help students develop their understanding of writerly strategies and techniques. Drawing on qualitative…
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ∼10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
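Compression to one number per parameter can be illustrated with score compression for a Gaussian likelihood with parameter-independent covariance, t_i = (dmu/dtheta_i)^T C^{-1} (d - mu). The sketch below is a schematic stand-in for the massive optimal compression used in the paper; all array shapes and values are illustrative.

```python
import numpy as np

def compress(data, mu, dmu_dtheta, cov):
    """Compress a data vector to one summary per parameter using the score
    of a Gaussian likelihood with fixed covariance:
        t_i = (dmu/dtheta_i)^T C^{-1} (d - mu).
    `dmu_dtheta` has shape (n_params, n_data)."""
    cinv = np.linalg.inv(cov)
    resid = np.asarray(data, float) - np.asarray(mu, float)
    return np.asarray(dmu_dtheta, float) @ cinv @ resid

# toy example: 5 data points compressed to 2 summaries (one per parameter)
rng = np.random.default_rng(0)
d = rng.normal(size=5)
t = compress(d, mu=np.zeros(5), dmu_dtheta=rng.normal(size=(2, 5)), cov=np.eye(5))
print(t)
```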
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
NASA Astrophysics Data System (ADS)
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper presents a Bayesian analysis of censored survival data from cancer patients after treatment, using a Linex loss function for a survival model in which survival times are assumed to follow an exponential distribution. Combining a gamma prior with the exponential likelihood produces a gamma posterior distribution. The posterior distribution is used to derive the estimator $\hat{\lambda}_{BL}$ via a Linex approximation. From $\hat{\lambda}_{BL}$, the estimators of the hazard function, $\hat{h}_{BL}$, and of the survival function, $\hat{S}_{BL}$, can then be obtained. Finally, we compare maximum likelihood estimation (MLE) with the Bayesian Linex approach by comparing their mean squared errors (MSE) to find the better method for these data. The results show that the MSEs of the hazard and survival estimates are 2.91728E-07 and 0.000309004 under MLE, and 2.8727E-07 and 0.000304131 under the Bayesian Linex approach, respectively. We conclude that the Bayesian Linex estimator outperforms MLE.
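Under the stated exponential likelihood and gamma prior, the Linex-optimal Bayes estimator has a closed form, $\hat{\lambda}_{BL} = -(1/a)\,\ln E[e^{-a\lambda}] = (\alpha_n/a)\,\ln(1 + a/\beta_n)$, where $\alpha_n$ and $\beta_n$ are the posterior shape and rate. The sketch below computes it; the hyperparameters, the Linex constant a, and the toy data are illustrative only and are not the paper's values (the paper reports using a Linex approximation rather than this exact expression).

```python
import numpy as np

def linex_bayes_exponential(data, alpha0, beta0, a):
    """Bayes estimator of the exponential rate under Linex loss.
    Exponential likelihood + Gamma(alpha0, beta0) prior (rate
    parameterisation) gives a Gamma(alpha0 + n, beta0 + sum x) posterior,
    and the Linex-optimal estimate is (alpha_post / a) * log(1 + a / beta_post).
    Hyperparameters and data here are illustrative placeholders."""
    x = np.asarray(data, dtype=float)
    alpha_post = alpha0 + x.size
    beta_post = beta0 + x.sum()
    lam_linex = (alpha_post / a) * np.log1p(a / beta_post)
    lam_mle = x.size / x.sum()
    return lam_linex, lam_mle

print(linex_bayes_exponential([2.1, 0.7, 3.5, 1.2], alpha0=1.0, beta0=1.0, a=0.5))
```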
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Morgan, Wayne J; Wagener, Jeffrey S; Pasta, David J; Millar, Stefanie J; VanDevanter, Donald R; Konstan, Michael W
2017-06-01
Children with cystic fibrosis often experience acute declines in lung function. We previously showed that such declines are not always treated with antibiotics, but we did not assess whether treatment improves the likelihood of recovery. Our objective was to determine whether new antibiotic treatment was associated with recovery from acute FEV1 decline. We studied episodes of FEV1 decline (≥10% from baseline) in the Epidemiologic Study of Cystic Fibrosis. Treatments were hospitalization, home intravenous antibiotic, new inhaled antibiotic, new oral quinolone, or other oral antibiotic. We used logistic regression to evaluate whether treatment was associated with recovery to baseline or near baseline. Logistic regression of data from 9,875 patients showed that new antibiotic treatment was associated with an increased likelihood of recovery to 90% of baseline (P < 0.001), especially for hospitalization compared with no new antibiotic (odds ratio [OR], 2.79; 95% confidence interval, 2.41-3.23). All four outpatient treatments were associated with a greater likelihood of recovery compared with no treatment (OR, 1.27-1.64). Inpatient treatment was better than outpatient treatment (OR, 1.94; 95% confidence interval, 1.68-2.23). Treatment-type ORs were similar across recovery criteria and levels of baseline lung function. New antibiotic therapy, and especially inpatient treatment, is associated with a greater likelihood of recovery after acute FEV1 decline. Benefits extend across all disease stages and are especially important in patients with high lung function, who are at greatest risk for FEV1 decline.
Use of Bayes theorem to correct size-specific sampling bias in growth data.
Troynikov, V S
1999-03-01
The Bayesian decomposition of the posterior distribution was used to develop a likelihood function that corrects bias in estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used to parametrize the growth data. Numerical illustrations are provided, and alternative applications of the likelihood to estimate selectivity parameters are discussed.
Cox Regression Models with Functional Covariates for Survival Data.
Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M
2015-06-01
We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
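A plain maximum-likelihood fit of a two-parameter cumulative-Gaussian psychometric function can be written with SciPy as below. This is the kind of fit whose spread parameter is biased on adaptively sampled data; the bias-reduction step described in the abstract is not included, and the stimulus levels and counts are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(stimulus, n_correct, n_trials):
    """Maximum-likelihood fit of p(x) = Phi((x - mu) / sigma) to binomial
    response counts (no lapse rate, no bias reduction)."""
    x = np.asarray(stimulus, float)
    k = np.asarray(n_correct, float)
    n = np.asarray(n_trials, float)

    def nll(params):
        mu, log_sigma = params
        p = norm.cdf((x - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)          # guard against log(0)
        return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

    res = minimize(nll, x0=[np.median(x), 0.0], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])            # (threshold, spread)

print(fit_psychometric([-2, -1, 0, 1, 2], [2, 5, 10, 16, 19], [20] * 5))
```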
Hearing loss and disability exit: Measurement issues and coping strategies.
Christensen, Vibeke Tornhøj; Datta Gupta, Nabanita
2017-02-01
Hearing loss is one of the most common conditions related to aging, and previous descriptive evidence links it to early exit from the labor market. These studies are usually based on self-reported hearing difficulties, which are potentially endogenous to labor supply. We use unique representative data collected in the spring of 2005 through in-home interviews. The data contains self-reported functional and clinically-measured hearing ability for a representative sample of the Danish population aged 50-64. We estimate the causal effect of hearing loss on early retirement via disability benefits, taking into account the endogeneity of functional hearing. Our identification strategy involves the simultaneous estimation of labor supply, functional hearing, and coping strategies (i.e. accessing assistive devices at work or informing one's employer about the problem). We use hearing aids as an instrument for functional hearing. Our main empirical findings are that endogeneity bias is more severe for men than women and that functional hearing problems significantly increase the likelihood of receiving disability benefits for both men and women. However, relative to the baseline the effect is larger for men (47% vs. 20%, respectively). Availability of assistive devices in the workplace decreases the likelihood of receiving disability benefits, whereas informing an employer about hearing problems increases this likelihood. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participants’ personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
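The core idea can be sketched with a toy normal model: each simulated "device" exposes only a function that returns its local negative log-likelihood for a proposed parameter vector, and the central optimizer minimizes the sum of those values, never seeing the raw data. This is a schematic illustration, not the MIDDLE software itself.

```python
import numpy as np
from scipy.optimize import minimize

def make_device(local_data):
    """Return a closure that computes the local negative log-likelihood of a
    normal model; the raw data stay inside the closure ('on the device')."""
    data = np.asarray(local_data, dtype=float)

    def local_negloglike(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        z = (data - mu) / sigma
        return np.sum(0.5 * z**2 + np.log(sigma) + 0.5 * np.log(2 * np.pi))

    return local_negloglike

rng = np.random.default_rng(1)
devices = [make_device(rng.normal(loc=3.0, scale=2.0, size=50)) for _ in range(10)]

def total_nll(params):
    # the central optimizer only ever receives summed likelihood values
    return sum(device(params) for device in devices)

fit = minimize(total_nll, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))   # pooled estimates of mu and sigma
```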
Image classification at low light levels
NASA Astrophysics Data System (ADS)
Wernick, Miles N.; Morris, G. Michael
1986-12-01
An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
Lee, Joseph G. L.; Landrine, Hope; Torres, Essie; Gregory, Kyle R.
2016-01-01
Objective Tobacco retailers are an important source of tobacco products for minors. Previous research shows racial discrimination in sales to minors, but no national study has examined neighborhood correlates of retailer underage sales. Methods We accessed publicly available results of 2015 FDA inspections of tobacco retailers (n=108,614). In this cross-sectional study, we used multilevel logistic regression to predict the likelihood of retailer sale to a minor based on tract characteristics. We assessed the proportion of residents identifying as American Indian, Asian, Black, Latino, and White; isolation index scores for each racial/ethnic group; the proportion of people less than age 65 living in poverty; and, the proportion of residents age 10–17 in relation to retailer inspection results. Results The proportion of American Indian residents, Black residents, Latino residents, and residents less than age 65 under the poverty line in a neighborhood are independently, positively associated with the likelihood that a retailer in that neighborhood will fail an underage buy inspection. The proportion of White residents and residents age 10–17 are independently, negatively associated with the likelihood of sale of tobacco products to a minor. Isolation index scores show a similar pattern. In multivariable models holding neighborhood characteristics constant, higher proportions of Black (+), Latino (+), and age 10–17 (−) residents remained significant predictors of the likelihood of underage sale. Discussion Regulatory agencies should consider oversampling retailers in areas with higher likelihood of sales to minors for inspection. Interventions with tobacco retailers to reduce inequities in youth access should be implemented. PMID:27609780
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a recently proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from regions of low likelihood to regions of high likelihood, and this evolution proceeds iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling step. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse-grid stochastic collocation method is used to build surrogates for the original groundwater model.
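A deliberately minimal nested-sampling sketch is shown below. It draws replacement live points by brute-force rejection from the prior rather than by the M-H or DREAMzs local sampling discussed in the abstract, so it only suits cheap, low-dimensional toy problems; the live-point count, iteration count, and toy model are arbitrary choices.

```python
import numpy as np
from scipy.special import logsumexp

def nested_sampling(loglike, prior_sample, n_live=100, n_iter=800, seed=0):
    """Minimal nested sampling estimate of the log marginal likelihood."""
    rng = np.random.default_rng(seed)
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    live_ll = np.array([loglike(p) for p in live])
    logz, log_x = -np.inf, 0.0
    for i in range(n_iter):
        worst = int(np.argmin(live_ll))
        l_star = live_ll[worst]
        log_x_new = -(i + 1) / n_live
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new))  # shrinking prior shell
        logz = np.logaddexp(logz, log_w + l_star)
        log_x = log_x_new
        while True:  # replace the worst point with a prior draw above the threshold
            cand = prior_sample(rng)
            cand_ll = loglike(cand)
            if cand_ll > l_star:
                break
        live[worst], live_ll[worst] = cand, cand_ll
    # contribution of the remaining live points
    logz = np.logaddexp(logz, log_x - np.log(n_live) + logsumexp(live_ll))
    return logz

# toy problem: standard bivariate Gaussian likelihood, uniform prior on [-5, 5]^2
loglike = lambda p: -0.5 * float(np.sum(p ** 2)) - np.log(2.0 * np.pi)
prior = lambda rng: rng.uniform(-5.0, 5.0, size=2)
print(nested_sampling(loglike, prior))   # analytic answer is log(1/100), about -4.6
```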
INFERRING THE ECCENTRICITY DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, David W.; Bovy, Jo; Myers, Adam D.
2010-12-20
Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces and costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques efficiently to explore a likelihood function, posterior distribution or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
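The Poisson-likelihood update at the heart of such restorations is the Richardson-Lucy iteration. The sketch below shows only that single-frame, known-PSF update, not the regularized, multi-frame blind deconvolution of the paper; the toy PSF and image are invented for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Single-frame Poisson maximum-likelihood (Richardson-Lucy) deconvolution."""
    image = np.asarray(image, dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.clip(blurred, 1e-12, None)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# toy demo: blur a random "truth" with a small Gaussian PSF, then restore
rng = np.random.default_rng(0)
truth = rng.poisson(5.0, size=(64, 64)).astype(float)
yy, xx = np.mgrid[-3:4, -3:4]
psf = np.exp(-(xx**2 + yy**2) / 4.0)
blurred = np.clip(fftconvolve(truth, psf / psf.sum(), mode="same"), 0, None)
observed = rng.poisson(blurred).astype(float)
restored = richardson_lucy(observed, psf, n_iter=30)
```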
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
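The patent abstract does not specify the scoring function; one common choice for ranking such violations is a one-sample z statistic comparing the observed observance fraction with an assumed baseline rate. The sketch below uses that choice, with the baseline p0 and the example counts being assumptions for illustration only.

```python
import math

def validity_score(n_observed, n_violated, p0=0.9):
    """Score how likely a correctness rule is to be genuinely required for a
    given code instance: compare the observed observance fraction with a
    baseline rate p0 using a one-sample z statistic. Higher scores mean the
    rule is observed more consistently than chance, so its violations are
    more likely to be real bugs."""
    n = n_observed + n_violated
    if n == 0:
        return 0.0
    p_hat = n_observed / n
    return (p_hat - p0) / math.sqrt(p0 * (1.0 - p0) / n)

# a rule observed 40 times and violated twice ranks above one that is
# violated almost as often as it is observed
print(validity_score(40, 2), validity_score(5, 4))
```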
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
Beyond valence in the perception of likelihood: the role of emotion specificity.
DeSteno, D; Petty, R E; Wegener, D T; Rucker, D D
2000-03-01
Positive and negative moods have been shown to increase likelihood estimates of future events matching these states in valence (e.g., E. J. Johnson & A. Tversky, 1983). In the present article, 4 studies provide evidence that this congruency bias (a) is not limited to valence but functions in an emotion-specific manner, (b) derives from the informational value of emotions, and (c) is not the inevitable outcome of likelihood assessment under heightened emotion. Specifically, Study 1 demonstrates that sadness and anger, 2 distinct, negative emotions, differentially bias likelihood estimates of sad and angering events. Studies 2 and 3 replicate this finding in addition to supporting an emotion-as-information (cf. N. Schwarz & G. L. Clore, 1983), as opposed to a memory-based, mediating process for the bias. Finally, Study 4 shows that when the source of the emotion is salient, a reversal of the bias can occur given greater cognitive effort aimed at accuracy.
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth T S; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer.
Impact of organizational leadership on physician burnout and satisfaction.
Shanafelt, Tait D; Gorringe, Grace; Menaker, Ronald; Storz, Kristin A; Reeves, David; Buskirk, Steven J; Sloan, Jeff A; Swensen, Stephen J
2015-04-01
To evaluate the impact of organizational leadership on the professional satisfaction and burnout of individual physicians working for a large health care organization. We surveyed physicians and scientists working for a large health care organization in October 2013. Validated tools were used to assess burnout. Physicians also rated the leadership qualities of their immediate supervisor in 12 specific dimensions on a 5-point Likert scale. All supervisors were themselves physicians/scientists. A composite leadership score was calculated by summing scores for the 12 individual items (range, 12-60; higher scores indicate more effective leadership). Of the 3896 physicians surveyed, 2813 (72.2%) responded. Supervisor scores in each of the 12 leadership dimensions and composite leadership score strongly correlated with the burnout and satisfaction scores of individual physicians (all P<.001). On multivariate analysis adjusting for age, sex, duration of employment at Mayo Clinic, and specialty, each 1-point increase in composite leadership score was associated with a 3.3% decrease in the likelihood of burnout (P<.001) and a 9.0% increase in the likelihood of satisfaction (P<.001) of the physicians supervised. The mean composite leadership rating of each division/department chair (n=128) also correlated with the prevalence of burnout (correlation=-0.330; r²=0.11; P<.001) and satisfaction (correlation=0.684; r²=0.47; P<.001) at the division/department level. The leadership qualities of physician supervisors appear to impact the well-being and satisfaction of individual physicians working in health care organizations. These findings have important implications for the selection and training of physician leaders and provide new insights into organizational factors that affect physician well-being. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.
Gao, Xiang; Lin, Huaiying; Dong, Qunfeng
2017-01-01
Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
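A minimal version of such a classifier can be written directly from the Dirichlet-multinomial likelihood. The sketch below assumes the per-class concentration parameters have already been estimated from training data and that the class priors are given; the numbers are placeholders, not DMBC's fitted values, and the feature-selection step of DMBC is omitted.

```python
import numpy as np
from scipy.special import gammaln

def dm_logpmf(counts, alpha):
    """Log probability of a count vector under a Dirichlet-multinomial
    distribution with concentration parameters `alpha`."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = counts.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + gammaln(a0) - gammaln(n + a0)
            + gammaln(counts + alpha).sum() - gammaln(alpha).sum())

def classify(sample_counts, alpha_by_class, prior_by_class):
    """Posterior class probabilities via Bayes' theorem, given DM parameters
    estimated separately for each class (e.g., diseased vs. healthy)."""
    logp = np.array([dm_logpmf(sample_counts, a) + np.log(p)
                     for a, p in zip(alpha_by_class, prior_by_class)])
    logp -= logp.max()
    post = np.exp(logp)
    return post / post.sum()

alphas = [np.array([5.0, 1.0, 1.0]), np.array([1.0, 3.0, 3.0])]
print(classify([30, 5, 5], alphas, prior_by_class=[0.3, 0.7]))
```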
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K -sample distributions. Recognizing that recent statistical software packages do not sufficiently address K -sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p -values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p -value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p -value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio, the within-source distribution is assumed to be normally distributed and constant among different sources and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As it will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
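Only the between-source density component is sketched below, using scikit-learn's GaussianMixture alongside a kernel density estimate for comparison; the within-source normal model and the final likelihood-ratio computation from the paper are omitted, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

# synthetic between-source feature vectors (e.g., per-source mean measurements)
rng = np.random.default_rng(0)
between_source = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
                            rng.normal(4.0, 0.5, size=(100, 2))])

# fit the between-source distribution with a GMM and, for comparison, a KDE
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(between_source)
kde = KernelDensity(bandwidth=0.5).fit(between_source)

query = np.array([[0.2, -0.1], [4.1, 3.9]])
print(gmm.score_samples(query))   # log-density under the mixture model
print(kde.score_samples(query))   # log-density under the kernel estimate
```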
Personal network structure and substance use in women by 12 months post treatment intake
Tracy, Elizabeth M.; Min, Meeyoung O.; Park, Hyunyong; Jun, MinKyoung; Brown, Suzanne; Francis, Meredith W.
2015-01-01
Introduction Women with substance use disorders enter treatment with limited personal network resources and reduced recovery support. This study examined the impact of personal networks on substance use by 12 months post treatment intake. Methods Data were collected from 284 women who received substance abuse treatment. At six month follow up, composition, support availability and structure of personal networks were examined. Substance use was measured by women’s report of any use of alcohol or drugs. Hierarchical multivariate logistic regression was conducted to examine the contribution of personal network characteristics on substance use by 12 months post treatment intake. Results Higher numbers of substance using alters (network members) and more densely connected networks at six month follow-up were associated with an increased likelihood of substance use by 12 months post treatment intake. A greater number of isolates in women’s networks was associated with decreased odds of substance use. Women who did not use substances by 12 months post treatment intake had more non-users among their isolates at six months compared to those who used substances. No association was found between support availability and likelihood of substance use. Conclusions Both network composition and structure could be relevant foci for network interventions e.g. helping women change network composition by reducing substance users as well as increasing network connections. Isolates who are not substance users may be a particular strength to help women cultivate within their network to promote sustained sobriety post treatment. PMID:26712040
Fitness, motor competence, and body composition are weakly associated with adolescent back pain.
Perry, Mark; Straker, Leon; O'Sullivan, Peter; Smith, Anne; Hands, Beth
2009-06-01
Cross-sectional survey. To assess the associations between adolescent back pain and fitness, motor competence, and body composition. Although deficits in physical fitness and motor control have been shown to relate to adult back pain, the evidence in adolescents is less clear. In this cross-sectional study, 1608 "Raine" cohort adolescents (mean age, 14 years) answered questions on lifetime, month, and chronic prevalence of back pain, and participated in a range of physical tests assessing aerobic capacity, muscle performance, flexibility, motor competence, and body composition. A history of any diagnosed back pain in the adolescent was obtained from the primary caregiver. After multivariate logistic regression analysis, increased likelihood of back pain in boys was associated with greater aerobic capacity, greater waist girth, and both reduced and greater flexibility. Back pain in girls was associated with greater abdominal endurance, reduced kinesthetic integration, and both reduced and greater back endurance. Lower likelihood of back pain was associated with greater bimanual dexterity in boys and greater lower extremity power in girls. Physical characteristics are commonly cited as important risk factors in back pain development. Although some factors were associated with adolescent back pain, and these differed between boys and girls, they made only a small contribution to logistic regression models for back pain. The results suggest future work should explore the interaction of multiple domains of risk factors (physical, lifestyle, and psychosocial) and subgroups of adolescent back pain, for whom different risk factors may be important.
Saavedra, Serguei; Cenci, Simone; Del-Val, Ek; Boege, Karina; Rohr, Rudolf P
2017-09-01
Ecological interaction networks constantly reorganize as interspecific interactions change across successional stages and environmental gradients. This reorganization can also be associated with the extent to which species change their preference for types of niches available in their local sites. Despite the pervasiveness of these interaction changes, previous studies have revealed that network reorganizations have a minimal or insignificant effect on global descriptors of network architecture, such as connectance, modularity and nestedness. However, little is known about whether these reorganizations may have an effect on community dynamics and composition. To answer the question above, we study the multi-year dynamics and reorganization of plant-herbivore interaction networks across secondary successional stages of a tropical dry forest. We develop new quantitative tools based on a structural stability approach to estimate the potential impact of network reorganization on species persistence. Then, we investigate whether this impact can explain the likelihood of persistence of herbivore species in the observed communities. We find that resident (early-arriving) herbivore species increase their likelihood of persistence across time and successional stages. Importantly, we demonstrate that, in late successional stages, the reorganization of interactions among resident species has a strong inhibitory effect on the likelihood of persistence of colonizing (late-arriving) herbivores. These findings support earlier predictions suggesting that, in mature communities, changes of species interactions can act as community-control mechanisms (also known as priority effects). Furthermore, our results illustrate that the dynamics and composition of ecological communities cannot be fully understood without attention to their reorganization processes, despite the invariability of global network properties. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
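The composite quantile objective referenced above can be written in a few lines. Below is a minimal sketch, assuming a linear model and equal weights across quantile levels; the paper's data-driven weighting scheme, its nonlinear and ARCH-type applications, and the S&P 500 data are not reproduced, and the toy data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(r, tau):
    # Quantile (check) loss: rho_tau(r) = r * (tau - 1{r < 0})
    return r * (tau - (r < 0))

def cqr_fit(x, y, K=9, weights=None):
    """Composite quantile regression for y = x @ beta + error.

    Minimizes sum_k w_k * sum_i rho_{tau_k}(y_i - b_k - x_i @ beta) over K
    quantile-specific intercepts b_k and a common slope vector beta.
    Equal weights give ordinary CQR; a data-driven weight vector (as in
    weighted CQR) could be passed via `weights`.
    """
    x = np.asarray(x)            # n-by-p design matrix
    taus = np.arange(1, K + 1) / (K + 1)
    w = np.ones(K) if weights is None else np.asarray(weights)
    n, p = x.shape

    def objective(theta):
        b, beta = theta[:K], theta[K:]
        resid = y[:, None] - b[None, :] - (x @ beta)[:, None]   # n-by-K residuals
        return sum(w[k] * check_loss(resid[:, k], taus[k]).sum() for k in range(K))

    theta0 = np.concatenate([np.quantile(y, taus), np.zeros(p)])
    fit = minimize(objective, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-8})
    return fit.x[:K], fit.x[K:]

# Toy example with heavy-tailed errors, where CQR is more robust than least squares
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = 2.0 * x[:, 0] + rng.standard_t(df=2, size=200)
intercepts, slope = cqr_fit(x, y)
print("estimated slope:", slope)
```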
Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.
Adams, Dean C; Collyer, Michael L
2018-01-01
Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons. © The Author(s) 2017. Published by Oxford University Press on behalf of the Systematic Biology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Zee, Jarcy; Xie, Sharon X.
2015-01-01
Summary When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset for estimating the risk of developing Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. PMID:25916510
The likelihood ratio as a random variable for linked markers in kinship analysis.
Egeland, Thore; Slooten, Klaas
2016-11-01
The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function of the recombination rate between 0 and 0.5 when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as those obtained here can be used for software validation, as they allow the correctness to be verified up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.
Dissociating response conflict and error likelihood in anterior cingulate cortex.
Yeung, Nick; Nieuwenhuis, Sander
2009-11-18
Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression, this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al, and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer generated data.
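As a concrete illustration of the two-stage procedure, the sketch below fits a Clayton copula with exponential margins to synthetic, uncensored data: the margins are estimated by maximum likelihood first, then the dependence parameter is estimated with the margins held fixed. The copula family, margins, and complete-data assumption are illustrative choices, not those of the dissertation, which also treats censored failure times and a Kaplan-Meier-based variant.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

def clayton_loglik(theta, u, v):
    # Log-density of the Clayton copula (valid for theta > 0)
    return np.sum(np.log(1 + theta) - (theta + 1) * (np.log(u) + np.log(v))
                  - (2 + 1 / theta) * np.log(u**(-theta) + v**(-theta) - 1))

# Simulate Clayton-dependent exponential pairs by conditional inversion
rng = np.random.default_rng(1)
theta_true, n = 2.0, 500
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w**(-theta_true / (1 + theta_true)) - 1) * u**(-theta_true) + 1)**(-1 / theta_true)
t1 = expon(scale=1.5).ppf(u)   # margin 1
t2 = expon(scale=0.8).ppf(v)   # margin 2

# Stage 1: fit each margin by maximum likelihood (exponential MLE = sample mean)
scale1_hat, scale2_hat = t1.mean(), t2.mean()

# Stage 2: plug the estimated margins into the copula likelihood, maximize over theta
u_hat = expon(scale=scale1_hat).cdf(t1)
v_hat = expon(scale=scale2_hat).cdf(t2)
res = minimize_scalar(lambda th: -clayton_loglik(th, u_hat, v_hat),
                      bounds=(1e-3, 20), method="bounded")
print("two-stage estimate of theta:", res.x)
```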
Pyne, Matthew I; Poff, N LeRoy
2017-01-01
Shifts in biodiversity and ecological processes in stream ecosystems in response to rapid climate change will depend on how numerically and functionally dominant aquatic insect species respond to changes in stream temperature and hydrology. Across 253 minimally perturbed streams in eight ecoregions in the western USA, we modeled the distribution of 88 individual insect taxa in relation to existing combinations of maximum summer temperature, mean annual streamflow, and their interaction. We used a heat map approach along with downscaled general circulation model (GCM) projections of warming and streamflow change to estimate site-specific extirpation likelihood for each taxon, allowing estimation of whole-community change in streams across these ecoregions. Conservative climate change projections indicate a 30-40% loss of taxa in warmer, drier ecoregions and 10-20% loss in cooler, wetter ecoregions where taxa are relatively buffered from projected warming and hydrologic change. Differential vulnerability of taxa with key functional foraging roles in processing basal resources suggests that climate change has the potential to modify stream trophic structure and function (e.g., alter rates of detrital decomposition and algal consumption), particularly in warmer and drier ecoregions. We show that streamflow change is equally as important as warming in projected risk to stream community composition and that the relative threat posed by these two fundamental drivers varies across ecoregions according to projected gradients of temperature and hydrologic change. Results also suggest that direct human modification of streams through actions such as water abstraction is likely to further exacerbate loss of taxa and ecosystem alteration, especially in drying climates. Management actions to mitigate climate change impacts on stream ecosystems or to proactively adapt to them will require regional calibration, due to geographic variation in insect sensitivity and in exposure to projected thermal warming and hydrologic change. © 2016 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Lackner, Jeffrey M.; And Others
1996-01-01
Tested the predictive power of self-efficacy expectations of physical capabilities, expectations of pain, and expectations of reinjury on physical function in chronic back pain patients. Before assessment of function, patients rated their abilities to perform essential job tasks--functional self-efficacy (FSE)--and the likelihood working would…
A. W Schoettle; J. G. Klutsch; R. A. Sniezko
2012-01-01
Global trade increases the likelihood of introduction of non-native, invasive species which can threaten native species and their associated ecosystems. This has led to significant impacts to forested landscapes, including extensive tree mortality, shifts in ecosystem composition, and vulnerabilities to other stresses. With the increased appreciation of the importance...
ANA: Astrophysical Neutrino Anisotropy
NASA Astrophysics Data System (ADS)
Denton, Peter
2017-08-01
ANA calculates the likelihood function for a model comprised of two components to the astrophysical neutrino flux detected by IceCube. The first component is extragalactic. Since point sources have not been found and there is increasing evidence that one source catalog cannot describe the entire data set, ANA models the extragalactic flux as isotropic. The second component is galactic. A variety of catalogs of interest are also provided. ANA takes the galactic contribution to be proportional to the matter density of the universe. The likelihood function has one free parameter fgal that is the fraction of the astrophysical flux that is galactic. ANA finds the best fit value of fgal and scans over 0
Generalized likelihood ratios for quantitative diagnostic test scores.
Tandberg, D; Deely, J J; O'Malley, A J
1997-11-01
The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
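The post-test probability computation behind such a plot is a one-line application of Bayes' theorem in odds form. A minimal sketch follows, assuming a likelihood ratio has already been estimated for the observed quantitative score (the value 4.2 is made up); the curve-estimation step and the confidence intervals described above are not shown.

```python
import numpy as np

def posttest_probability(pretest_p, likelihood_ratio):
    """Post-test probability via Bayes' theorem in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = pretest_p / (1 - pretest_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Single patient with a 20% pretest probability and an estimated LR of 4.2
print(posttest_probability(0.20, 4.2))

# Curve of post-test probability over all pretest probabilities (the x-axis of a POD plot)
pretest = np.linspace(0.01, 0.99, 99)
pod_curve = posttest_probability(pretest, 4.2)
```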
Peterman, Jerusha Nelson; Silka, Linda; Bermudez, Odilia I; Wilde, Parke E; Rogers, Beatrice Lorge
2011-09-01
Refugees in the United States have higher rates of some chronic diseases than US-born residents or other first-generation immigrants. This may be partially a result of dietary practices in the United States. There is limited information about which factors are related to dietary practices in refugee populations, particularly those who have been in the United States for 10 to 20 years. Research with Cambodian communities may be useful for examining the relationship between refugee characteristics and dietary practices. Two focus groups (n=11) and a survey (n=150) of Cambodian refugee women were conducted in Lowell, MA, from 2007 to 2008. χ(2) analyses, t tests, and analysis of variance tests were used to describe differences in dietary practices (24-hour recall and a targeted qualitative food assessment) by group characteristics. Higher acculturation was related to higher likelihood of eating brown rice/whole grains, and to lower likelihood of eating high-sodium Asian sauces. Higher education was related to higher likelihood of eating vegetables and fruits and to eating white rice fewer times. Nutrition education and receiving dietary advice from a health care provider were related to higher likelihood of eating whole grains/brown rice. Having a child at home was related to a higher likelihood of eating fast food. Among Cambodian refugees who have been in the United States for 10 to 20 years, dietary practices appear to have a relationship with acculturation (positive association), the interrupted education common to refugees (negative association), nutrition education from either programs or health care providers (positive association), and having a child at home (negative association). Copyright © 2011 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Lee, Joseph G L; Landrine, Hope; Torres, Essie; Gregory, Kyle R
2016-12-01
Tobacco retailers are an important source of tobacco products for minors. Previous research shows racial discrimination in sales to minors, but no national study has examined neighbourhood correlates of retailer under-age sales. We accessed publicly available results of 2015 US Food and Drug Administration (FDA) inspections of tobacco retailers (n=108 614). In this cross-sectional study, we used multilevel logistic regression to predict the likelihood of retailer sale to a minor based on tract characteristics. We assessed the proportion of residents identifying as American Indian, Asian, Black, Latino and White; Isolation Index scores for each racial/ethnic group; the proportion of people less than age 65 living in poverty; and the proportion of residents age 10-17 in relation to retailer inspection results. The proportion of American Indian residents, Black residents, Latino residents and residents less than age 65 under the poverty line in a neighbourhood are independently, positively associated with the likelihood that a retailer in that neighbourhood will fail an under-age buy inspection. The proportion of White residents and residents age 10-17 are independently, negatively associated with the likelihood of sale of tobacco products to a minor. Isolation Index scores show a similar pattern. In multivariable models holding neighbourhood characteristics constant, higher proportions of Black (+), Latino (+) and age 10-17 (-) residents remained significant predictors of the likelihood of under-age sale. Regulatory agencies should consider oversampling retailers in areas with higher likelihood of sales to minors for inspection. Interventions with tobacco retailers to reduce inequities in youth access should be implemented. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Antimicrobial surfaces for craniofacial implants: state of the art.
Actis, Lisa; Gaviria, Laura; Guda, Teja; Ong, Joo L
2013-04-01
In an attempt to regain function and aesthetics in the craniofacial region, different biomaterials, including titanium, hydroxyapatite, biodegradable polymers and composites, have been widely used as a result of the loss of craniofacial bone. Although these materials presented favorable success rates, osseointegration and antibacterial properties are often hard to achieve. Although bone-implant interactions are highly dependent on the implant's surface characteristics, infections following traumatic craniofacial injuries are common. As such, poor osseointegration and infections are two of the many causes of implant failure. Further, as increasingly complex dental repairs are attempted, the likelihood of infection in these implants has also been on the rise. For these reasons, the treatment of craniofacial bone defects and dental repairs for long-term success remains a challenge. Various approaches to reduce the rate of infection and improve osseointegration have been investigated. Furthermore, recent and planned tissue engineering developments are aimed at improving the implants' physical and biological properties by improving their surfaces in order to develop craniofacial bone substitutes that will restore, maintain and improve tissue function. In this review, the commonly used biomaterials for craniofacial bone restoration and dental repair, as well as surface modification techniques, antibacterial surfaces and coatings are discussed.
Invasive aquarium fish transform ecosystem nutrient dynamics
Capps, Krista A.; Flecker, Alexander S.
2013-01-01
Trade of ornamental aquatic species is a multi-billion dollar industry responsible for the introduction of myriad fishes into novel ecosystems. Although aquarium invaders have the potential to alter ecosystem function, regulation of the trade is minimal and little is known about the ecosystem-level consequences of invasion for all but a small number of aquarium species. Here, we demonstrate how ecological stoichiometry can be used as a framework to identify aquarium invaders with the potential to modify ecosystem processes. We show that explosive growth of an introduced population of stoichiometrically unique, phosphorus (P)-rich catfish in a river in southern Mexico significantly transformed stream nutrient dynamics by altering nutrient storage and remineralization rates. Notably, changes varied between elements; the P-rich fish acted as net sinks of P and net remineralizers of nitrogen. Results from this study suggest species-specific stoichiometry may be insightful for understanding how invasive species modify nutrient dynamics when their population densities and elemental composition differ substantially from native organisms. Risk analysis for potential aquarium imports should consider species traits such as body stoichiometry, which may increase the likelihood that an invasion will alter the structure and function of ecosystems. PMID:23966642
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas such as the mirror reflection kernel, beta kernel, transformation method and local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
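As a minimal illustration of one surveyed estimator, the sketch below evaluates a beta-kernel copula density (one common boundary-corrected kernel variant) at a point, using pseudo-observations from synthetic bivariate data rather than the Asian stock indexes analyzed in the paper; the bandwidth is an arbitrary choice.

```python
import numpy as np
from scipy.stats import beta, rankdata

def beta_kernel_copula_density(u, v, U, V, b=0.05):
    """Beta-kernel estimate of a bivariate copula density at (u, v).

    U, V are pseudo-observations (ranks / (n + 1)); b is the bandwidth.
    Each coordinate uses a beta kernel, which avoids boundary bias on [0, 1].
    """
    ku = beta.pdf(U, u / b + 1, (1 - u) / b + 1)
    kv = beta.pdf(V, v / b + 1, (1 - v) / b + 1)
    return np.mean(ku * kv)

# Synthetic dependent data standing in for two return series
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=400)
U = rankdata(z[:, 0]) / (len(z) + 1)
V = rankdata(z[:, 1]) / (len(z) + 1)

print(beta_kernel_copula_density(0.9, 0.9, U, V))  # density in the upper-tail region
```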
7 CFR 1467.4 - Program requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... that promote the restoration, protection, enhancement, maintenance, and management of wetland functions... successful restoration of wetland functions and values when considering the cost of acquiring the easement...) The likelihood of the successful restoration of such land and the resultant wetland values merit...
1990-11-01
(Q + aa')^{-1} = Q^{-1} - Q^{-1}aa'Q^{-1} / (1 + a'Q^{-1}a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model. 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
Yang, Haishui; Zang, Yanyan; Yuan, Yongge; Tang, Jianjun; Chen, Xin
2012-04-12
Arbuscular mycorrhizal fungi (AMF) can form obligate symbioses with the vast majority of land plants, and AMF distribution patterns have received increasing attention from researchers. At the local scale, the distribution of AMF is well documented. Studies at large scales, however, are limited because intensive sampling is difficult. Here, we used ITS rDNA sequence metadata obtained from public databases to study the distribution of AMF at continental and global scales. We also used these sequence metadata to investigate whether host plant is the main factor that affects the distribution of AMF at large scales. We defined 305 ITS virtual taxa (ITS-VTs) among all sequences of the Glomeromycota by using a comprehensive maximum likelihood phylogenetic analysis. Each host taxonomic order averaged about 53% specific ITS-VTs, and approximately 60% of the ITS-VTs were host specific. Those ITS-VTs with wide host range showed wide geographic distribution. Most ITS-VTs occurred in only one type of host functional group. The distributions of most ITS-VTs were limited across ecosystem, across continent, across biogeographical realm, and across climatic zone. Non-metric multidimensional scaling analysis (NMDS) showed that AMF community composition differed among functional groups of hosts, and among ecosystem, continent, biogeographical realm, and climatic zone. The Mantel test showed that AMF community composition was significantly correlated with plant community composition among ecosystem, among continent, among biogeographical realm, and among climatic zone. Structural equation modeling (SEM) showed that the effects of ecosystem, continent, biogeographical realm, and climatic zone on AMF distribution were mainly indirect, whereas host plants had strong direct effects on AMF. The distribution of AMF as indicated by ITS rDNA sequences showed a pattern of high endemism at large scales. This pattern indicates high specificity of AMF for host at different scales (plant taxonomic order and functional group) and high selectivity from host plants for AMF. The effects of ecosystemic, biogeographical, continental and climatic factors on AMF distribution might be mediated by host plants.
Whiley, Phillip J.; Parsons, Michael T.; Leary, Jennifer; Tucker, Kathy; Warwick, Linda; Dopita, Belinda; Thorne, Heather; Lakhani, Sunil R.; Goldgar, David E.; Brown, Melissa A.; Spurdle, Amanda B.
2014-01-01
Rare exonic, non-truncating variants in known cancer susceptibility genes such as BRCA1 and BRCA2 are problematic for genetic counseling and clinical management of relevant families. This study used multifactorial likelihood analysis and/or bioinformatically-directed mRNA assays to assess pathogenicity of 19 BRCA1 or BRCA2 variants identified following patient referral to clinical genetic services. Two variants were considered to be pathogenic (Class 5). BRCA1:c.4484G>C (p.Arg1495Thr) was shown to result in aberrant mRNA transcripts predicted to encode truncated proteins. The BRCA1:c.122A>G (p.His41Arg) RING-domain variant was found from multifactorial likelihood analysis to have a posterior probability of pathogenicity of 0.995, a result consistent with existing protein functional assay data indicating lost BARD1 binding and ubiquitin ligase activity. Of the remaining variants, seven were determined to be not clinically significant (Class 1), nine were likely not pathogenic (Class 2), and one was uncertain (Class 3). These results have implications for genetic counseling and medical management of families carrying these specific variants. They also provide additional multifactorial likelihood variant classifications as reference to evaluate the sensitivity and specificity of bioinformatic prediction tools and/or functional assay data in future studies. PMID:24489791
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first and second order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
ERIC Educational Resources Information Center
Borgmeier, Chris; Horner, Robert H.
2006-01-01
Faced with limited resources, schools require tools that increase the accuracy and efficiency of functional behavioral assessment. Yarbrough and Carr (2000) provided evidence that informant confidence ratings of the likelihood of problem behavior in specific situations offered a promising tool for predicting the accuracy of function-based…
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
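A minimal sketch of the idea, not the authors' code: fit a model to low-count data by minimizing the Poisson negative log-likelihood instead of χ². The Gaussian-line-plus-background model, parameter names and simulated counts below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def model(x, amp, center, width, bkg):
    # Arbitrary fit function: a Gaussian line on a flat background
    return bkg + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def poisson_nll(params, x, counts):
    mu = model(x, *params)
    if np.any(mu <= 0):
        return np.inf
    # Negative Poisson log-likelihood up to a constant: sum(mu - n * log(mu))
    return np.sum(mu - counts * np.log(mu))

rng = np.random.default_rng(3)
x = np.linspace(-5, 5, 60)
true_params = (8.0, 0.0, 1.0, 2.0)        # low signal level, under ~20 counts per bin
counts = rng.poisson(model(x, *true_params))

fit = minimize(poisson_nll, x0=(5.0, 0.5, 1.5, 1.0), args=(x, counts),
               method="Nelder-Mead")
print("ML estimates (amp, center, width, bkg):", fit.x)
```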
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
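The fitting step can be sketched with standard tools: a log-normal is fitted to the storm-maximum intensities by maximum likelihood, and the fitted tail probability beyond a Carrington-level threshold is converted into an occurrence rate. The −Dst values and the per-century storm count below are synthetic placeholders, not the 1957-2012 record, and no bootstrap confidence limits are computed.

```python
import numpy as np
from scipy.stats import lognorm

# Synthetic stand-in for a set of -Dst storm-time maxima (nT); not real data
neg_dst_maxima = lognorm.rvs(s=0.75, scale=130.0, size=56, random_state=4)

# Maximum-likelihood fit with the location fixed at zero, as is usual for intensities
shape, loc, scale = lognorm.fit(neg_dst_maxima, floc=0)

# Probability a storm exceeds -Dst = 850 nT, times an assumed number of storms per
# century entering the statistics, gives an occurrence-rate estimate
p_exceed = lognorm.sf(850.0, shape, loc, scale)
storms_per_century = 100.0   # assumed count, for illustration only
print("estimated Carrington-level events per century:", storms_per_century * p_exceed)
```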
Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter
NASA Astrophysics Data System (ADS)
Murphy, T.; Holzinger, M.
2016-09-01
Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper will apply the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary innovation in this paper is a detailed analysis of the existing state-of-the-art likelihood functions and a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope, and a twenty degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential break up is examined.
Competition, Speculative Risks, and IT Security Outsourcing
NASA Astrophysics Data System (ADS)
Cezar, Asunur; Cavusoglu, Huseyin; Raghunathan, Srinivasan
Information security management is becoming a more critical and, simultaneously, a challenging function for many firms. Even though many security managers are skeptical about outsourcing of IT security, others have cited the same reasons used to justify outsourcing of traditional IT functions as grounds for expecting security outsourcing to increase. Our research offers a novel explanation, based on competitive externalities associated with IT security, for firms' decisions to outsource IT security. We show that if competitive externalities are ignored, then a firm will outsource security if and only if the MSSP offers a quality (or a cost) advantage over in-house operations, which is consistent with the traditional explanation for security outsourcing. However, a higher quality is neither a prerequisite nor a guarantee for a firm to outsource security. The competitive risk environment and the nature of the security function outsourced, in addition to quality, determine firms' outsourcing decisions. If the reward from the competitor's breach is higher than the loss from its own breach, then even if the likelihood of a breach is higher under the MSSP, the expected benefit from the competitive demand externality may offset the loss from the higher likelihood of breaches, resulting in one or both firms outsourcing security. The incentive to outsource security monitoring is higher than that of infrastructure management because the MSSP can reduce the likelihood of breach on both firms and thus enhance the demand externality effect. The incentive to outsource security monitoring (infrastructure management) is higher (lower) if either the likelihood of breach on both firms is lower (higher) when security is outsourced or the benefit (relative to loss) from the externality is higher (lower). The benefit from the demand externality arising out of a security breach is higher when more of the customers that leave the breached firm switch to the non-breached firm.
Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul
2015-01-01
Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.
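To make the SSF side of the comparison concrete, the sketch below fits a step selection function by conditional logistic regression: each "used" step is compared with a stratum of available steps through a within-stratum softmax likelihood. The covariates, stratum structure and coefficients are synthetic stand-ins, not the bison trail data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(7)
n_strata, J = 300, 10                      # 300 observed steps, 10 available steps each
beta_true = np.array([1.2, -0.8])

X = rng.normal(size=(n_strata, J + 1, 2))  # covariates of the J+1 candidate steps
probs = np.exp(X @ beta_true)
probs /= probs.sum(axis=1, keepdims=True)
chosen = np.array([rng.choice(J + 1, p=p) for p in probs])  # index of the step "taken"

def neg_loglik(beta):
    eta = X @ beta                         # n_strata x (J+1) linear predictors
    # Conditional logit: log P(chosen step) = eta_chosen - logsumexp(eta) per stratum
    return -(eta[np.arange(n_strata), chosen] - logsumexp(eta, axis=1)).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("SSF coefficient estimates:", fit.x)
```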
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data in which subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with a homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a Wisconsin plantation.
Planck intermediate results. XVI. Profile likelihoods for cosmological parameters
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bonaldi, A.; Bond, J. R.; Bouchet, F. R.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Liddle, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski∗, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rouillé d'Orfeuil, B.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Savelainen, M.; Savini, G.; Spencer, L. D.; Spinelli, M.; Starck, J.-L.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-06-01
We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agreement with the cosmological results from the Bayesian framework is excellent, demonstrating the robustness of the Planck results to the statistical methodology. We investigate the inclusion of neutrino masses, where more significant differences may appear due to the non-Gaussian nature of the posterior mass distribution. By applying the Feldman-Cousins prescription, we again obtain results very similar to those of the Bayesian methodology. However, the profile-likelihood analysis of the cosmic microwave background (CMB) combination (Planck+WP+highL) reveals a minimum well within the unphysical negative-mass region. We show that inclusion of the Planck CMB-lensing information regularizes this issue, and provide a robust frequentist upper limit ∑ mν ≤ 0.26 eV (95% confidence) from the CMB+lensing+BAO data combination.
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy Ek to a steeper spectral index α2 > α1 above Ek. The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
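A simplified version of the estimation problem can be written directly as a likelihood: given event energies drawn from a broken power-law spectrum over a fixed range, maximize the log-likelihood over (α1, α2, Ek). The sketch below assumes perfect energy measurement and an arbitrary energy range, so the detector response, resolution and calorimeter effects that are central to the ACCESS trade studies are deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize

E_MIN, E_MAX = 1e11, 1e15   # assumed energy range in eV, for illustration

def segment_integral(a, b, alpha):
    # Integral of E^-alpha from a to b (assumes alpha != 1)
    return (b**(1 - alpha) - a**(1 - alpha)) / (1 - alpha)

def neg_loglik(params, E):
    a1, a2, log_ek = params
    ek = 10.0**log_ek
    if not (E_MIN < ek < E_MAX):
        return np.inf
    norm = segment_integral(E_MIN, ek, a1) + ek**(a2 - a1) * segment_integral(ek, E_MAX, a2)
    logf = np.where(E < ek, -a1 * np.log(E), (a2 - a1) * np.log(ek) - a2 * np.log(E))
    return -(np.sum(logf) - len(E) * np.log(norm))

def sample_broken_power_law(n, a1, a2, ek, rng):
    # Pick a segment with its probability mass, then invert the truncated power-law CDF
    i1 = segment_integral(E_MIN, ek, a1)
    i2 = ek**(a2 - a1) * segment_integral(ek, E_MAX, a2)
    below = rng.uniform(size=n) < i1 / (i1 + i2)
    u = rng.uniform(size=n)
    lo = np.where(below, E_MIN, ek)
    hi = np.where(below, ek, E_MAX)
    alpha = np.where(below, a1, a2)
    return (lo**(1 - alpha) + u * (hi**(1 - alpha) - lo**(1 - alpha)))**(1 / (1 - alpha))

rng = np.random.default_rng(5)
E = sample_broken_power_law(20000, 2.7, 3.1, 3e12, rng)
fit = minimize(neg_loglik, x0=(2.5, 3.5, 12.0), args=(E,), method="Nelder-Mead")
print("alpha1, alpha2, Ek:", fit.x[0], fit.x[1], 10**fit.x[2])
```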
Spatial competition dynamics between reef corals under ocean acidification.
Horwitz, Rael; Hoogenboom, Mia O; Fine, Maoz
2017-01-09
Climate change, including ocean acidification (OA), represents a major threat to coral-reef ecosystems. Although previous experiments have shown that OA can negatively affect the fitness of reef corals, these have not included the long-term effects of competition for space on coral growth rates. Our multispecies year-long study subjected reef-building corals from the Gulf of Aqaba (Red Sea) to competitive interactions under present-day ocean pH (pH 8.1) and predicted end-of-century ocean pH (pH 7.6). Results showed coral growth is significantly impeded by OA under intraspecific competition for five out of six study species. Reduced growth from OA, however, is negligible when growth is already suppressed in the presence of interspecific competition. Using a spatial competition model, our analysis indicates shifts in the competitive hierarchy and a decrease in overall coral cover under lowered pH. Collectively, our case study demonstrates how modified competitive performance under increasing OA will in all likelihood change the composition, structure and functionality of reef coral communities.
Baharuddin, Mohd Yusof; Salleh, Sh-Hussain; Hamedi, Mahyar; Zulkifly, Ahmad Hafiz; Lee, Muhammad Hisyam; Mohd Noor, Alias; Harris, Arief Ruhullah A; Abdul Majid, Norazman
2014-01-01
Stress shielding and micromotion are two major issues which determine the success of newly designed cementless femoral stems. The correlation of experimental validation with finite element analysis (FEA) is commonly used to evaluate the stress distribution and fixation stability of the stem within the femoral canal. This paper focused on the applications of feature extraction and pattern recognition using support vector machine (SVM) to determine the primary stability of the implant. We measured strain with triaxial rosette at the metaphyseal region and micromotion with linear variable direct transducer proximally and distally using composite femora. The root mean squares technique is used to feed the classifier which provides maximum likelihood estimation of amplitude, and radial basis function is used as the kernel parameter which mapped the datasets into separable hyperplanes. The results showed 100% pattern recognition accuracy using SVM for both strain and micromotion. This indicates that DSP could be applied in determining the femoral stem primary stability with high pattern recognition accuracy in biomechanical testing.
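As a rough sketch of the classification step described above, the snippet below trains an RBF-kernel support vector machine on placeholder root-mean-square features with scikit-learn; the feature values, class labels, and hyperparameters are illustrative assumptions, not the study's data.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    # placeholder RMS-amplitude features from strain and micromotion signals, two implant conditions
    X = np.vstack([rng.normal(0.0, 1.0, (40, 6)), rng.normal(1.5, 1.0, (40, 6))])
    y = np.repeat([0, 1], 40)

    clf = SVC(kernel="rbf", gamma="scale", C=1.0)  # RBF kernel maps the features toward separable hyperplanes
    print(cross_val_score(clf, X, y, cv=5).mean())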
NASA Astrophysics Data System (ADS)
Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe
2017-06-01
Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists of interpolating a scalar function that indicates to which side of a geological boundary a given point belongs, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
An evaluation of portion size estimation aids: precision, ease of use and likelihood of future use.
Faulkner, Gemma P; Livingstone, M Barbara E; Pourshahidi, L Kirsty; Spence, Michelle; Dean, Moira; O'Brien, Sinead; Gibney, Eileen R; Wallace, Julie Mw; McCaffrey, Tracy A; Kerr, Maeve A
2016-09-01
The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA). A range of PSEA were used to estimate the serving sizes of a range of commonly eaten foods and rated for ease of use and likelihood of future usage. For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA. Participants were males and females aged 18-64 years (n = 120). The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. 200 ml disposable cup) - deemed easy to use (median rating of 5), likely to use again in future (all scored either 4 or 5 on a scale from 1='not very likely' to 5='very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA when preparing a meal (94 %), particularly dinner (86 %) in the home (89 %; all P<0·001) for amorphous grain foods. Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth TS; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; kConFab Investigators; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Introduction Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. Methods We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Results Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. Conclusion These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer. PMID:18036263
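The multifactorial likelihood calculation described above rests on combining likelihood ratios on the odds scale; the sketch below shows that arithmetic with invented numbers (the prior and the component likelihood ratios are placeholders, not the values used for these BRCA1 variants).

    def posterior_from_lrs(prior, likelihood_ratios):
        # combine a prior probability of pathogenicity with independent likelihood ratios
        odds = prior / (1.0 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1.0 + odds)

    # illustrative only: a conservation-based prior and LRs from segregation and tumour IHC
    print(posterior_from_lrs(0.5, [8.0, 3.5]))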
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. 
The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
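One way to write the target function sketched above, assuming a Gaussian kernel of width σ on a conformational distance d and a Boltzmann weight from the current parameterized energy U_w, is

    \log L(\mathbf{w}) = \sum_{i \in \mathrm{exp}} \log \sum_{j \in \mathrm{sim}}
        \frac{e^{-U_{\mathbf{w}}(x_j)/k_B T}}{\sum_{k \in \mathrm{sim}} e^{-U_{\mathbf{w}}(x_k)/k_B T}}
        \exp\!\left(-\frac{d(x_i, x_j)^2}{2\sigma^2}\right),

where the outer sum runs over experimental conformations and the inner sums over simulated decoys; the exact normalization and kernel used in the paper may differ.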
Model-Based Clustering and Data Transformations for Gene Expression Data
2001-04-30
[Indexed excerpts from the report] The Box-Cox transformation parameter in Equation (5) is estimated by maximum likelihood using the observations (Andrews, Gnanadesikan, and Warner 1973), and Aitchison's (1986) tests of three aspects of compositional data are applied. Works cited in the excerpts include Aitchison (1986), Compositional Data, Chapman and Hall, and Andrews, D. F., R. Gnanadesikan, and J. L. Warner (1973), Methods for assessing multivariate normality.
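The Box-Cox step mentioned in the excerpts has a standard maximum-likelihood implementation; the snippet below is a generic illustration with synthetic data rather than the gene-expression data of the report.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.lognormal(mean=1.0, sigma=0.6, size=200)  # positive, right-skewed data

    transformed, lam = stats.boxcox(x)  # lambda chosen by maximizing the Box-Cox log-likelihood
    print(lam)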
Likelihood of atom-atom contacts in crystal structures of halogenated organic compounds.
Jelsch, Christian; Soudani, Sarra; Ben Nasr, Cherif
2015-05-01
The likelihood of occurrence of intermolecular contacts in crystals of halogenated organic compounds has been analysed statistically using tools based on the Hirshfeld surface. Several families of small halogenated molecules (containing organic F, Cl, Br or I atoms) were analysed, based on chemical composition and aromatic or aliphatic character. The behaviour of crystal contacts was also probed for molecules containing O or N. So-called halogen bonding (a halogen making short interactions with O or N, or a π interaction with C) is generally disfavoured, except when H is scarce on the molecular surface. Similarly, halogen⋯halogen contacts are more rare than expected, except for molecules that are poor in H. In general, the H atom is found to be the preferred partner of organic halogen atoms in crystal structures. On the other hand, C⋯C interactions in parallel π-stacking have a high propensity to occur in halogenated aromatic molecules. The behaviour of the four different halogen species (F, Cl, Br, I) is compared in several chemical composition contexts. The analysis tool can be refined by distinguishing several types for a given chemical species, such as H atoms bound to O or C. Such distinction shows, for instance, that C-H⋯Cl and O-H⋯O are the preferred interactions in compounds containing both O and Cl.
Moghaddar, N; van der Werf, J H J
2017-12-01
The objectives of this study were to estimate the additive and dominance variance component of several weight and ultrasound scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes and then to investigate the effect of fitting additive and dominance effects on accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitting model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound scanned body composition traits particularly in cross-bred population; however, improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
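A piecewise-constant rate template makes the Poisson-process log-likelihood used in such tests easy to write down; the sketch below assumes a fixed set of knots and illustrative event times, not the heart-beat data or the dynamic-programming search of the paper.

    import numpy as np

    def nhpp_loglik(event_times, knots, rates):
        # log-likelihood of a piecewise-constant rate: sum(log lambda(t_i)) minus the integral of lambda
        event_times = np.asarray(event_times)
        rates = np.asarray(rates)
        idx = np.searchsorted(knots, event_times, side="right") - 1
        return np.sum(np.log(rates[idx])) - np.sum(rates * np.diff(knots))

    knots = np.array([0.0, 10.0, 20.0, 30.0])          # three segments on [0, 30)
    print(nhpp_loglik([1.2, 11.5, 12.3, 25.0], knots, [0.5, 2.0, 0.8]))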
NASA Astrophysics Data System (ADS)
Brown, Eric; Petersen, Kenni; Lesher, Charles
2017-04-01
Basalts are formed by adiabatic decompression melting of the asthenosphere, and thus provide records of the thermal, chemical and dynamical state of the upper mantle. However, uniquely constraining the importance of these factors through the lens of melting is challenging given the inevitability that primary basalts are the product of variable mixing of melts derived from distinct lithologies having different melting behaviors (e.g. peridotite vs. pyroxenite). Forward mantle melting models, such as REEBOX PRO [1], are useful tools in this regard, because they can account for differences in melting behavior and melt pooling processes, and provide estimates of bulk crust composition and volume that can be compared with geochemical and geophysical constraints, respectively. Nevertheless, these models require critical assumptions regarding mantle temperature, and lithologic abundance(s)/composition(s), all of which are poorly constrained. To provide better constraints on these parameters and their uncertainties, we have coupled a Markov Chain Monte Carlo (MCMC) sampling technique with the REEBOX PRO melting model. The MCMC method systematically samples distributions of key REEBOX PRO input parameters (mantle potential temperature, and initial abundances and compositions of the source lithologies) based on a likelihood function that describes the 'fit' of the model outputs (bulk crust composition and volume and end-member peridotite and pyroxenite melts) relative to geochemical and geophysical constraints and their associated uncertainties. As a case study, we have tested and applied the model to magmatism along the Reykjanes Peninsula in Iceland, where pyroxenite has been inferred to be present in the mantle source. This locale is ideal because there exist sufficient geochemical and geophysical data to estimate bulk crust compositions and volumes, as well as the range of near-parental melts derived from the mantle. We find that for the case of passive upwelling, the models that best fit the geochemical and geophysical observables require elevated mantle potential temperatures (~120 °C above ambient mantle) and ~5% pyroxenite. The modeled peridotite source has a trace element composition similar to depleted MORB mantle, whereas the trace element composition of the pyroxenite is similar to enriched mid-ocean ridge basalt. These results highlight the promise of this method for efficiently exploring the range of mantle temperatures, lithologic abundances, and mantle source compositions that are most consistent with available observational constraints in individual volcanic systems. [1] Brown and Lesher (2016), G-cubed, 17, 3929-3968.
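The coupling of a forward model to a likelihood inside an MCMC loop can be sketched as below; forward_model here is a stand-in for a call to a melting model such as REEBOX PRO, and the parameter names, observables, and proposal scales are assumptions for illustration only.

    import numpy as np

    def log_likelihood(pred, obs, sigma):
        return -0.5 * np.sum(((pred - obs) / sigma) ** 2)

    def forward_model(theta):
        # stand-in for the melting model: maps (potential temperature, pyroxenite fraction)
        # to predicted observables (e.g. crustal thickness and a trace-element ratio)
        Tp, f_px = theta
        return np.array([0.02 * (Tp - 1300.0) + 5.0 * f_px, 1.0 + 10.0 * f_px])

    obs = np.array([4.0, 1.5])        # illustrative "observed" values
    sigma = np.array([0.5, 0.2])      # and their uncertainties

    rng = np.random.default_rng(3)
    theta = np.array([1350.0, 0.05])
    logp = log_likelihood(forward_model(theta), obs, sigma)
    chain = []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, [10.0, 0.01])      # random-walk proposal
        logp_prop = log_likelihood(forward_model(prop), obs, sigma)
        if np.log(rng.uniform()) < logp_prop - logp:      # Metropolis acceptance with flat priors
            theta, logp = prop, logp_prop
        chain.append(theta)
    print(np.mean(chain, axis=0))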
Eisen, Susan V; Bottonari, Kathryn A; Glickman, Mark E; Spiro, Avron; Schultz, Mark R; Herz, Lawrence; Rosenheck, Robert; Rofman, Ethan S
2011-04-01
Research on patient-centered care supports use of patient/consumer self-report measures in monitoring health outcomes. This study examined the incremental value of self-report mental health measures relative to a clinician-rated measure in predicting functional outcomes among mental health service recipients. Participants (n = 446) completed the Behavior and Symptom Identification Scale, the Brief Symptom Inventory, and the Veterans/Rand Short Form-36 at enrollment in the study (T1) and 3 months later (T2). Global Assessment of Functioning (GAF) ratings, mental health service utilization, and psychiatric diagnoses were obtained from administrative data files. Controlling for demographic and clinical variables, results indicated that improvement based on the self-report measures significantly predicted one or more functional outcomes (i.e., decreased likelihood of post-enrollment psychiatric hospitalization and increased likelihood of paid employment), above and beyond the predictive value of the GAF. Inclusion of self-report measures may be a useful addition to performance measurement efforts.
The Role of Cognitive Factors in Predicting Balance and Fall Risk in a Neuro-Rehabilitation Setting
Saverino, A.; Waller, D.; Rantell, K.; Parry, R.; Moriarty, A.; Playford, E. D.
2016-01-01
Introduction There is a consistent body of evidence supporting the role of cognitive functions, particularly executive function, in the elderly and in neurological conditions which become more frequent with ageing. The aim of our study was to assess the role of different domains of cognitive functions to predict balance and fall risk in a sample of adults with various neurological conditions in a rehabilitation setting. Methods This was a prospective, cohort study conducted in a single centre in the UK. 114 participants consecutively admitted to a Neuro-Rehabilitation Unit were prospectively assessed for fall accidents. Baseline assessment included a measure of balance (Berg Balance Scale) and a battery of standard cognitive tests measuring executive function, speed of information processing, verbal and visual memory, visual perception and intellectual function. The outcomes of interest were the risk of becoming a faller, balance and fall rate. Results Two tests of executive function were significantly associated with fall risk, the Stroop Colour Word Test (IRR 1.01, 95% CI 1.00–1.03) and the number of errors on part B of the Trail Making Test (IRR 1.23, 95% CI 1.03–1.49). Composite scores of executive function, speed of information processing and visual memory domains resulted in 2 to 3 times increased likelihood of having better balance (OR 2.74 95% CI 1.08 to 6.94, OR 2.72 95% CI 1.16 to 6.36 and OR 2.44 95% CI 1.11 to 5.35 respectively). Conclusions Our results show that specific subcomponents of executive functions are able to predict fall risk, while a more global cognitive dysfunction is associated with poorer balance. PMID:27115880
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
NING, JING; QIN, JING; SHEN, YU
2014-01-01
SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Stayman, J Webster; Tilley, Steven; Siewerdsen, Jeffrey H
2014-01-01
Previous investigations [1-3] have demonstrated that integrating specific knowledge of the structure and composition of components like surgical implants, devices, and tools into a model-based reconstruction framework can improve image quality and allow for potential exposure reductions in CT. Using device knowledge in practice is complicated by uncertainties in the exact shape of components and their particular material composition. Such unknowns in the morphology and attenuation properties lead to errors in the forward model that limit the utility of component integration. In this work, a methodology is presented to accommodate both uncertainties in shape as well as unknown energy-dependent attenuation properties of the surgical devices. This work leverages the so-called known-component reconstruction (KCR) framework [1] with a generalized deformable registration operator and modifications to accommodate a spectral transfer function in the component model. Moreover, since this framework decomposes the object into separate background anatomy and "known" component factors, a mixed fidelity forward model can be adopted so that measurements associated with projections through the surgical devices can be modeled with much greater accuracy. A deformable KCR (dKCR) approach using the mixed fidelity model is introduced and applied to a flexible wire component with unknown structure and composition. Image quality advantages of dKCR over traditional reconstruction methods are illustrated in cone-beam CT (CBCT) data acquired on a testbench emulating a 3D-guided needle biopsy procedure - i.e., a deformable component (needle) with strong energy-dependent attenuation characteristics (steel) within a complex soft-tissue background.
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
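A bare-bones ABC rejection step of the kind the design criterion is built on can be sketched as follows; the toy Poisson simulator, summary statistics, prior, and tolerance are assumptions for illustration and are much simpler than the epidemic and macroparasite models in the paper.

    import numpy as np

    def simulate(theta, rng, n=50):
        # toy stochastic simulator standing in for the intractable-likelihood model
        return rng.poisson(theta, size=n)

    def summary(x):
        return np.array([x.mean(), x.var()])

    rng = np.random.default_rng(4)
    s_obs = summary(simulate(3.0, rng))      # summaries of the "observed" data

    accepted = []
    for _ in range(20000):
        theta = rng.uniform(0.1, 10.0)                                    # draw from the prior
        if np.linalg.norm(summary(simulate(theta, rng)) - s_obs) < 0.5:   # keep if summaries are close
            accepted.append(theta)
    print(len(accepted), np.mean(accepted))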
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
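A minimal Robbins-Monro iteration of the kind described, assuming a simple linear contractive regression function observed with noise (the function, step sizes, and dimensions are illustrative):

    import numpy as np

    def noisy_g(x, rng):
        # noisy observation of a contractive map g(x) = 0.5 x + [1, -2]; its fixed point is [2, -4]
        return 0.5 * x + np.array([1.0, -2.0]) + rng.normal(0.0, 0.1, size=2)

    rng = np.random.default_rng(5)
    x = np.zeros(2)
    for n in range(1, 20001):
        a_n = 1.0 / n                         # step sizes with sum a_n = inf and sum a_n**2 < inf
        x = x + a_n * (noisy_g(x, rng) - x)   # Robbins-Monro update toward the fixed point
    print(x)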
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method both improves the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count data situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
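For reference, the conventional closed form of the DMN log-likelihood can be evaluated with log-gamma functions as below; this is the standard expression in the Dirichlet-parameter form, not the paper's new formulation in terms of proportions and ψ.

    import numpy as np
    from scipy.special import gammaln

    def dmn_loglik(x, alpha):
        # Dirichlet-multinomial log-likelihood for counts x with Dirichlet parameters alpha
        x = np.asarray(x, dtype=float)
        alpha = np.asarray(alpha, dtype=float)
        n, A = x.sum(), alpha.sum()
        return (gammaln(n + 1) - gammaln(x + 1).sum()
                + gammaln(A) - gammaln(n + A)
                + (gammaln(x + alpha) - gammaln(alpha)).sum())

    print(dmn_loglik([10, 3, 7], [2.0, 1.0, 1.5]))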
NASA Astrophysics Data System (ADS)
Chen, Siyue; Leung, Henry; Dondo, Maxwell
2014-05-01
As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
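For comparison, the EM clustering baseline without feature selection can be run with an off-the-shelf Gaussian mixture model; the alert features below are random placeholders, not DARPA 2000 data.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    # placeholder numeric alert features (e.g. encoded addresses, ports, alert type)
    X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(4.0, 1.0, (200, 4))])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(X)   # EM alternates responsibilities (E-step) and parameter updates (M-step)
    print(np.bincount(labels))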
Institutionalization in Taiwan. The role of caregiver gender.
Kao, Hsueh-Fen Sabrina
2003-10-01
The role of caregiver gender in the likelihood of institutionalization of Taiwanese older adults was explored in this study. A sample of 78 male and 69 female primary caregivers of elderly patients who had experienced a stroke at least 6 months prior to the study were interviewed. Logistic regression analyses were applied to examine direct and interaction effects of the elderly adult's functioning, the caregiver's available resources, the degree of caregiver burden, perceived public opinion toward institutionalization, and precipitating events on the likelihood of institutionalization among Taiwanese male and female caregivers. Women were more likely to institutionalize the older adult for whom they cared. The proposed model correctly predicted the likelihood of institutionalization of an elderly adult based on male versus female caregivers at the 92% level. Perceived public opinion toward institutionalization was the most significant predictor of institutionalization for both genders. Perceived public opinion toward institutionalization has a strong influence on whether or not caregivers institutionalize an elderly relative. This is consistent with Chinese culture, in which public opinion has a much stronger effect on individual behavior than in the United States. American concepts of "minding one's own business" do not exist in Taiwan. It is logical that the older adults' level of functioning would predict the likelihood of institutionalization regardless of caregiver gender. In terms of caregiver characteristics, working hours were more predictive of institutionalization for male caregivers, whereas the quality of the relationship with the older adult was more predictive for female caregivers.
Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia
2009-06-30
Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to explore them easily with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
Recreating a functional ancestral archosaur visual pigment.
Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P
2002-09-01
The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with λmax values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density functions: Gaussian or Rayleigh or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors which include total variation based prior, anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between data fidelity term and prior. The finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods is presented for brain web dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with CD based prior is performing better in comparison to other priors in consideration.
Yang, Ji; Gu, Hongya; Yang, Ziheng
2004-01-01
Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (ω = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of them detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication.
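The likelihood ratio test used to compare such nested codon models reduces to a simple chi-square comparison; the log-likelihood values and degrees of freedom below are invented for illustration.

    from scipy.stats import chi2

    lnL_null, lnL_alt, extra_params = -10234.6, -10227.1, 2   # illustrative fits of nested models

    lrt = 2.0 * (lnL_alt - lnL_null)            # likelihood ratio statistic
    p_value = chi2.sf(lrt, df=extra_params)     # referred to a chi-square with df = extra parameters
    print(lrt, p_value)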
Validation of software for calculating the likelihood ratio for parentage and kinship.
Drábek, J
2009-03-01
Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical as it directly weighs the forensic evidence, allowing judges to decide on guilt or innocence or to identify a person or kin (e.g. in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios I assessed available vendors, chose two programs (Paternity Index and Familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the field of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or a silent allele.
Ramsay-Curve Differential Item Functioning
ERIC Educational Resources Information Center
Woods, Carol M.
2011-01-01
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Bacio, Guadalupe A.; Estrada, Yannine; Huang, Shi; Martínez, Marcos; Sardinas, Krystal; Prado, Guillermo
2015-01-01
The purpose of this cross-sectional study was to test the transactional relationships of risk and protective factors that influence initiation of alcohol, tobacco, and drug use among Hispanic youth. Ecodevelopmental theory was used to identify factors at multiple ecological levels with a focus on four school-level characteristics (i.e. school socioeconomic status, school climate, school acculturation, and school ethnic composition). A sample of 741 Hispanic adolescents (M age =13.9, SD =.67) and their caregivers were recruited from 18 participating middle schools in Miami-Dade County, FL. Structural equation modeling was used to test the hypothesized ecodevelopmental model of early substance use, accounting for school clustering effects. Results provided strong support for the model (CFI = .95; RMSEA =.03). School SES was indirectly related to the likelihood of starting to use substances through perceived peer use norms (β =.03, p <.02). Similarly, school climate had an indirect effect on substance use initiation through family functioning and perceptions of peer use norms (β = −.03, p < .01). Neither school ethnic composition nor school acculturation had indirect effects on initiation of substance use. Results highlight the importance of the interplay of risk and protective factors at multiple ecological levels that impact early substance use initiation. Further, findings underscore the key role of school level characteristics on initiation of substance use and present opportunities for intervention. PMID:26054814
Nutrients affecting brain composition and behavior
NASA Technical Reports Server (NTRS)
Wurtman, R. J.
1987-01-01
This review examines the changes in brain composition and in various brain functions, including behavior, that can follow the ingestion of particular foods or nutrients. It details those that are best understood: the increases in serotonin, catecholamine, or acetylcholine synthesis that can occur subsequent to food-induced increases in brain levels of tryptophan, tyrosine, or choline; it also discusses the various processes that must intervene between the mouth and the synapse, so to speak, in order for a nutrient to affect neurotransmission, and it speculates as to additional brain chemicals that may ultimately be found to be affected by changes in the availability of their nutrient precursors. Because the brain chemicals best known to be nutrient dependent overlap with those thought to underlie the actions of most of the drugs used to treat psychiatric diseases, knowledge of this dependence may help the psychiatrist to understand some of the pathologic processes occurring in his/her patients, particularly those with appetitive symptoms. At the very least, such knowledge should provide the psychiatrist with objective criteria for judging when to take seriously assertions that particular foods or nutrients do indeed affect behavior (e.g., in hyperactive children). If the food can be shown to alter neurotransmitter release, it may be behaviorally-active; however, if it lacks a discernible neurochemical effect, the likelihood that it really alters behavior is small.
A general framework for updating belief distributions.
Bissiri, P G; Holmes, C C; Walker, S G
2016-11-01
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
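A compact way to write the loss-based update described above (the unit weighting of the loss is an assumption here; a learning-rate multiplier is sometimes included) is

    \pi(\theta \mid x) \propto \exp\{-\ell(\theta, x)\}\, \pi(\theta),

which reduces to standard Bayesian updating when the loss is the negative log-likelihood, \ell(\theta, x) = -\log f(x \mid \theta).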
NASA Astrophysics Data System (ADS)
Yu, Haitao; Liu, Jing; Cai, Lihui; Wang, Jiang; Cao, Yibin; Hao, Chongqing
2017-02-01
Electroencephalogram (EEG) signal evoked by acupuncture stimulation at "Zusanli" acupoint is analyzed to investigate the modulatory effect of manual acupuncture on the functional brain activity. Power spectral density of EEG signal is first calculated based on the autoregressive Burg method. It is shown that the EEG power is significantly increased during and after acupuncture in delta and theta bands, but decreased in alpha band. Furthermore, synchronization likelihood is used to estimate the nonlinear correlation between each pairwise EEG signals. By applying a threshold to resulting synchronization matrices, functional networks for each band are reconstructed and further quantitatively analyzed to study the impact of acupuncture on network structure. Graph theoretical analysis demonstrates that the functional connectivity of the brain undergoes obvious change under different conditions: pre-acupuncture, acupuncture, and post-acupuncture. The minimum path length is largely decreased and the clustering coefficient keeps increasing during and after acupuncture in delta and theta bands. It is indicated that acupuncture can significantly modulate the functional activity of the brain, and facilitate the information transmission within different brain areas. The obtained results may facilitate our understanding of the long-lasting effect of acupuncture on the brain function.
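The graph construction described above (thresholding a synchronization matrix and computing clustering coefficient and path length) can be sketched with networkx; the matrix, channel count, and threshold below are placeholders rather than the EEG results of the study.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(7)
    n_channels = 16
    S = rng.uniform(0.0, 1.0, (n_channels, n_channels))   # placeholder synchronization-likelihood matrix
    S = (S + S.T) / 2.0
    np.fill_diagonal(S, 0.0)

    A = (S > 0.6).astype(int)          # threshold to obtain a binary adjacency matrix
    G = nx.from_numpy_array(A)

    print(nx.average_clustering(G))
    if nx.is_connected(G):             # average path length is defined only for connected graphs
        print(nx.average_shortest_path_length(G))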
Kim, Jinhee; Lee, Yunhwan; Kye, Seunghee; Chung, Yoon-Sok; Lee, Okhee
2017-04-01
Serum vitamin D levels have been reported to be associated with individual components of body composition. However, the relationship between serum vitamin D and combined indices of adverse body composition is largely unknown. This cross-sectional study examined the association between serum vitamin D and osteosarcopenic obesity in a nationally representative sample of middle-aged and older adults. We analysed the Korea National Health and Nutrition Examination Surveys (IV and V) conducted in 2008-2010, consisting of 5908 (2485 men, 3423 women) aged ≥ 50 years. Serum vitamin D levels were determined by radioimmunoassay, and body composition was evaluated by dual-energy x-ray absorptiometry. The association between serum vitamin D levels and the number of abnormalities in body composition, including osteosarcopenic obesity, a low bone and muscle mass with concurrent high fat mass, was analysed by multinomial logistic regression adjusting for covariates. In men, after controlling for covariates, higher vitamin D levels were associated with a significantly reduced likelihood of the number of phenotypes of adverse body composition (P for trend < 0.05). Those in the highest tertile group of serum vitamin D levels, compared with those in the lowest tertile, were less likely to have adverse body composition, numbering one (odds ratio [OR] = 0.67, 95% confidence interval [CI]: 0.49, 0.92), two (OR = 0.49, 95% CI: 0.33, 0.73), and three (osteosarcopenic obesity; OR = 0.42, 95% CI: 0.26, 0.67). In women, those in the highest tertile group of serum vitamin D levels, compared with those in the lowest tertile, were less likely to have osteosarcopenic obesity (OR = 0.55, 95% CI: 0.33, 0.93). Vitamin D deficiency (<20 ng/mL) in men was significantly associated with an increased likelihood of a higher number of adverse body composition, especially for osteosarcopenic obesity (OR = 2.08, 95% CI: 1.42, 3.03). Vitamin D deficient women, compared with those having normal levels of serum vitamin D, were also more likely to demonstrate osteosarcopenic obesity (OR = 1.99, 95% CI: 1.30, 3.05). A high serum vitamin D level in mid- and late-life was associated with reduced odds of multiple adverse body composition, especially osteosarcopenic obesity, suggesting potential health benefits of maintaining adequate levels of vitamin D. © 2016 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
Dang, Silvain S; Gorzalka, Boris B
2015-01-01
Introduction Past studies have shown an association between low sexual functioning and engaging in sexually coercive behaviors among men. The mechanism of this relationship is not well understood. Moreover, most studies in this area have been done in incarcerated sex offenders. Aims The aim of the current study was to investigate the role of potential distal predictors of sexual coercion, including insecure attachment style and dysfunctional sexual beliefs, in mediating the relationship between sexual functioning and sexual coercion. The study also seeks to extend past findings to a novel non-forensic population. Methods Male university students (N = 367) anonymously completed online questionnaires. Main Outcome Measures Participants completed the Sexual Experiences Survey, Improved Illinois Rape Myth Acceptance Scale, Hostility Towards Women Scale, Likelihood of Rape Item, Experiences in Close Relationships Scale, Dysfunctional Sexual Beliefs Scale, and Brief Sexual Functioning Questionnaire. Results Sexual functioning was not significantly associated with sexually coercive behaviors in our sample (r = 0.08, P = 0.247), though a significant correlation between sexual functioning and rape myth acceptance was found (r = 0.18, P = 0.007). Path analysis of all variables showed that the likelihood of rape item was the strongest correlate of sexually coercive behaviors (β = 0.34, P < 0.001), while dysfunctional sexual beliefs appeared to mediate the association between anxious attachment and likelihood of rape item score. Anxious (r = −0.27, P = 0.001) and avoidant (r = −0.19, P = 0.004) attachment also correlated significantly with lower sexual functioning. Conclusions These findings suggest the relationship between sexual functioning and sexual coercion may be less robust than previously reported, and may be due to a shared association with other factors. The results elaborate on the interrelation between attachment style and dysfunctional sexual beliefs as predictors of sexual coercion proclivity, suggesting avenues for further research. PMID:26185675
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
Survivorship analysis when cure is a possibility: a Monte Carlo study.
Goldman, A I
1984-01-01
Parametric survivorship analyses of clinical trials commonly involve the assumption of a hazard function that is constant over time. When the empirical curve obviously levels off, one can modify the hazard function model by use of a Gompertz or Weibull distribution with hazard decreasing over time. Some cancer treatments are thought to cure some patients within a short time of initiation. Then, instead of all patients having the same hazard, decreasing over time, a biologically more appropriate model assumes that an unknown proportion (1 - pi) have constant high risk whereas the remaining proportion (pi) have essentially no risk. This paper discusses the maximum likelihood estimation of pi and the power curves of the likelihood ratio test. Monte Carlo studies provide results for a variety of simulated trials; empirical data illustrate the methods.
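A minimal version of the cure-mixture likelihood described above, assuming an exponential hazard for the uncured fraction and right censoring (the simulated data and starting values are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, t, event):
        pi, lam = params                                  # pi = cured fraction, lam = hazard of the uncured
        if not (0.0 < pi < 1.0) or lam <= 0.0:
            return np.inf
        surv = pi + (1.0 - pi) * np.exp(-lam * t)         # population survival function
        dens = (1.0 - pi) * lam * np.exp(-lam * t)        # density contribution for observed failures
        return -np.sum(event * np.log(dens) + (1.0 - event) * np.log(surv))

    rng = np.random.default_rng(8)
    n, true_pi, true_lam = 300, 0.4, 0.2
    cured = rng.uniform(size=n) < true_pi
    t_fail = np.where(cured, np.inf, rng.exponential(1.0 / true_lam, size=n))
    t_cens = rng.uniform(0.0, 15.0, size=n)               # administrative censoring
    t = np.minimum(t_fail, t_cens)
    event = (t_fail <= t_cens).astype(float)

    fit = minimize(neg_loglik, x0=[0.3, 0.5], args=(t, event), method="Nelder-Mead")
    print(fit.x)                                          # estimates of (pi, lambda)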
Pedophiles: mental retardation, maternal age, and sexual orientation.
Blanchard, R; Watson, M S; Choy, A; Dickey, R; Klassen, P; Kuban, M; Ferren, D J
1999-04-01
Intellectual functioning, parental age, and sexual orientation in 991 male sexual offenders were investigated. Sources of data included semistructured interviews, clinical charts, phallometric tests, and self-administered questionnaires. The results suggest two main conclusions: (i) Among pedophiles in general, erotic preference moves away from adult women along two dimensions: age and sex. The extent of this movement is greater, along both dimensions, for pedophiles with lower levels of intellectual functioning. (ii) High maternal age (or some factor it represents) increases the likelihood of exclusive sexual interest in boys. Intellectual deficiency (or some factor it represents) decreases the likelihood of exclusive sexual interest in girls. These two factors summate, so that a pedophile with both factors is more likely to be sexually interested in boys than a pedophile with only one.
Tanasescu, Radu; Cottam, William J; Condon, Laura; Tench, Christopher R; Auer, Dorothee P
2016-09-01
Maladaptive mechanisms of pain processing in chronic pain conditions (CP) are poorly understood. We used coordinate-based meta-analysis of 266 fMRI pain studies to study functional brain reorganisation in CP and experimental models of hyperalgesia. The pattern of nociceptive brain activation was similar in CP, hyperalgesia and normalgesia in controls. However, elevated likelihood of activation was detected in the left putamen, left frontal gyrus and right insula in CP when comparing stimulation of the most painful vs. other sites. Meta-analysis of contrast maps showed no difference between CP, controls, and mood conditions. In contrast, experimental hyperalgesia induced stronger activation in the bilateral insula, left cingulate and right frontal gyrus. Activation likelihood maps support a shared neural pain signature of cutaneous nociception in CP and controls. We also present a double dissociation between neural correlates of transient and persistent pain sensitisation, with generally increased activation intensity but an unchanged pattern in experimental hyperalgesia and, by contrast, focally increased activation likelihood, but unchanged intensity, in CP when stimulated at the most painful body part. Copyright © 2016. Published by Elsevier Ltd.
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
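The detection statistic above follows the generalized likelihood ratio test principle. The sketch below is only a generic GLRT toy (detecting an unknown constant offset in Gaussian noise), not the paper's estimation-cancellation statistic built on the MLE positioning cost function; all values are illustrative.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Toy GLRT: H0 = zero-mean Gaussian noise, H1 = unknown constant offset present.
n, sigma = 64, 1.0
x = 0.4 + sigma * rng.standard_normal(n)       # data with a small offset

mu_hat = x.mean()                              # MLE of the offset under H1
ll0 = -0.5 * np.sum(x**2) / sigma**2           # log-likelihood under H0 (up to constants)
ll1 = -0.5 * np.sum((x - mu_hat)**2) / sigma**2
T = 2.0 * (ll1 - ll0)                          # GLRT statistic, ~chi2(1) under H0
threshold = chi2.ppf(0.999, df=1)              # false-alarm probability 1e-3
print(f"T = {T:.2f}, threshold = {threshold:.2f}, detect = {T > threshold}")
```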
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for the nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] is explained in a different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimation) for fitting a nonlinear regression function in 2006. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
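As background for the NLSE discussed above, here is a minimal Gauss-Newton sketch for a simple exponential-decay regression; the model, data, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative nonlinear regression model: y = a * exp(-b * x) + noise.
x = np.linspace(0, 4, 50)
a_true, b_true = 2.0, 0.7
y = a_true * np.exp(-b_true * x) + 0.05 * rng.standard_normal(x.size)

def residuals(theta):
    a, b = theta
    return y - a * np.exp(-b * x)

def jacobian(theta):
    a, b = theta
    # Jacobian of the residuals with respect to (a, b)
    return np.column_stack([-np.exp(-b * x), a * x * np.exp(-b * x)])

theta = np.array([1.0, 1.0])                   # starting values
for _ in range(20):                            # Gauss-Newton iterations
    r, J = residuals(theta), jacobian(theta)
    step = np.linalg.solve(J.T @ J, -J.T @ r)
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break
print("NLSE estimates (a, b):", theta)
```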
Changing food preference as a function of mood.
Christensen, Larry; Brooks, Alisa
2006-07-01
The authors investigated the effect of mood on food selection. Participants (N = 98) indicated the likelihood of general eating and the likelihood of eating specific foods after reading and projecting themselves onto the events and emotions described in a sad and a happy vignette. Both men and women believed they were more likely to consume food following a happy versus a sad event, and men believed they were significantly more likely to eat than did women. However, the type of food men and women believed they would consume interacted with the type of event experienced. Vegetarian snack foods were more likely to be consumed following a happy versus a sad event, with men more likely to eat snack foods. Men did not significantly change in likelihood of consuming sweet foods as their mood changed. However, women believed they were more likely to consume sweet foods following a sad event. The authors discuss the results in terms of a self-medication hypothesis and the effect of carbohydrates on central serotonin and endogenous opioids. Overall, results demonstrated that mood influences belief in the likelihood of food selection.
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
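For readers unfamiliar with empirical likelihood, the following is a minimal sketch of an empirical likelihood confidence interval for an uncensored mean, the building block behind the influence-function and jackknife extensions described in the abstract; the data and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for the mean of x evaluated at mu."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:           # mu outside the convex hull of the data
        return np.inf
    n = len(x)
    # the Lagrange multiplier must keep every weight in (0, 1)
    lo = (1.0 / n - 1.0) / d.max() + 1e-10
    hi = (1.0 / n - 1.0) / d.min() - 1e-10
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(3)
costs = rng.lognormal(mean=8.0, sigma=1.0, size=200)   # skewed, cost-like data

cutoff = chi2.ppf(0.95, df=1)
grid = np.linspace(costs.mean() * 0.7, costs.mean() * 1.5, 2000)
inside = [mu for mu in grid if neg2_log_elr(costs, mu) <= cutoff]
print(f"95% EL interval for the mean: ({min(inside):.0f}, {max(inside):.0f})")
```

The interval is the set of candidate means whose -2 log likelihood ratio falls below the chi-squared cutoff, which is why EL intervals adapt to skewness without a variance formula.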
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
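The maximum-likelihood fit with a varying lower cutoff mentioned above is easy to illustrate. Below is a minimal sketch using the standard continuous power-law exponent estimator; the superposed "damped" component and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative "crackling noise" energies: a pure power law with exponent 1.6
# plus an exponentially damped component, mimicking the superposition scenario.
alpha_true, xmin_true = 1.6, 1.0
u = rng.random(20000)
power_law = xmin_true * (1 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling
damped = rng.exponential(5.0, size=5000)
energies = np.concatenate([power_law, damped])

def powerlaw_mle(x, xmin):
    """Continuous power-law exponent MLE above a lower cutoff xmin."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin)), tail.size

for xmin in [1, 3, 10, 30]:
    alpha_hat, n_tail = powerlaw_mle(energies, xmin)
    print(f"xmin = {xmin:>3}: alpha_hat = {alpha_hat:.3f}  (n = {n_tail})")
```

Sweeping the cutoff as in the loop shows the nonuniversal, cutoff-dependent exponents the abstract describes when two mechanisms are superposed.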
Uszko-Lencer, Nicole H M K; Mesquita, Rafael; Janssen, Eefje; Werter, Christ; Brunner-La Rocca, Hans-Peter; Pitta, Fabio; Wouters, Emiel F M; Spruit, Martijn A
2017-08-01
In-depth analyses of the measurement properties of the 6-minute walk test (6MWT) in patients with chronic heart failure (CHF) are lacking. We investigated the reliability, construct validity, and determinants of the distance covered in the 6MWT (6MWD) in CHF patients. 337 patients were studied (median age 65years, 70% male, ejection fraction 35%). Participants performed two 6MWTs on subsequent days. Demographics, anthropometrics, clinical data, ejection fraction, maximal exercise capacity, body composition, lung function, and symptoms of anxiety and depression were also assessed. Construct validity was assessed in terms of convergent, discriminant and known-groups validity. Stepwise linear regression was used. 6MWT was reliable (ICC=0.90, P<0.0001). The learning effect was 31m (95%CI 27, 35m). Older age (≥65years), lower lung diffusing capacity (<80% predicted) and higher NYHA class (NYHA III) were associated with a lower likelihood of a meaningful increase in the second test (OR 0.45-0.56, P<0.05 for all). The best 6MWD had moderate-to-good correlations with peak exercise capacity (r s =0.54-0.69) and no-to-fair correlations with body composition, lung function, ejection fraction, and symptoms of anxiety and depression (r s =0.04-0.49). Patients with higher NYHA classes had lower 6MWD. 6MWD was independently associated with maximal power output during maximal exercise, estimated glomerular filtration rate and age (51.7% of the variability). 6MWT was found to be reliable and valid in patients with mild-to-moderate CHF. Maximal exercise capacity, renal function and age were significant determinants of the best 6MWD. These findings strengthen the clinical utility of the 6MWT in CHF. Copyright © 2017 Elsevier B.V. All rights reserved.
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.
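For reference, this is the standard ML baseline the fuzzy approach is compared against: a minimal baseline-category (softmax) multinomial logistic regression fitted by direct likelihood maximization on simulated data. Everything here is an illustrative sketch, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Small illustrative dataset: 3 classes, intercept plus 2 covariates.
n, p, k = 60, 3, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
B_true = rng.normal(scale=1.5, size=(p, k - 1))          # last class is the baseline
eta = np.column_stack([X @ B_true, np.zeros(n)])
probs = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y = np.array([rng.choice(k, p=pr) for pr in probs])

def neg_loglik(b_flat):
    B = b_flat.reshape(p, k - 1)
    eta = np.column_stack([X @ B, np.zeros(n)])
    eta -= eta.max(axis=1, keepdims=True)                 # numerical stability
    log_probs = eta - np.log(np.exp(eta).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), y].sum()

fit = minimize(neg_loglik, np.zeros(p * (k - 1)), method="BFGS")
print("ML estimates (baseline-category parameterization):")
print(fit.x.reshape(p, k - 1).round(2))
```

With n this small, the ML estimates can be unstable or drift toward separation, which is exactly the failure mode motivating the fuzzy alternative.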
The social value of candidate HIV cures: actualism versus possibilism
Brown, Regina; Evans, Nicholas Greig
2017-01-01
A sterilising or functional cure for HIV is a serious scientific challenge but presents a viable pathway to the eradication of HIV. Such an event would be extremely valuable in terms of relieving the burden of a terrible disease; however, a coordinated commitment to implement healthcare interventions, particularly in regions that bear the brunt of the HIV epidemic, is lacking. In this paper, we examine two strategies for evaluating candidate HIV cures, based on our beliefs about the likelihood of global implementation. We reject possibilist interpretations of social value that do not account for the likelihood that a plan to cure HIV will be followed through. We argue, instead, for an actualist ranking of options for action, which accounts for the likelihood that a cure will be low cost, scalable and easy to administer worldwide. PMID:27402887
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
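The prior-sampling-plus-rejection idea can be sketched compactly. The code below is a highly simplified illustration in the same spirit: densely sample the nonlinear parameters from their prior, score each draw with a likelihood in which the linear parameters are fitted by least squares (a crude stand-in for The Joker's analytic marginalization), then rejection-sample. The circular-orbit model, priors, and epochs are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse, noisy "radial velocity" data from a circular orbit (a sinusoid).
t = np.array([0.0, 11.3, 27.9, 41.2, 55.6])              # five epochs
P_true, K_true, v0_true, phi_true, sigma = 17.0, 4.0, 1.0, 0.8, 0.5
v = (v0_true + K_true * np.sin(2 * np.pi * t / P_true + phi_true)
     + sigma * rng.standard_normal(t.size))

# Step 1: densely sample the nonlinear parameters (period, phase) from the prior.
m = 50_000
P = np.exp(rng.uniform(np.log(2.0), np.log(200.0), m))   # log-uniform period prior
phi = rng.uniform(0, 2 * np.pi, m)

# Step 2: for each draw, fit the linear parameters (amplitude, offset) by least
# squares and record the resulting log-likelihood.
loglik = np.empty(m)
for i in range(m):
    A = np.column_stack([np.sin(2 * np.pi * t / P[i] + phi[i]), np.ones(t.size)])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    r = v - A @ coef
    loglik[i] = -0.5 * np.sum(r**2) / sigma**2

# Step 3: rejection sampling proportional to the likelihood.
accept = rng.random(m) < np.exp(loglik - loglik.max())
print(f"accepted {accept.sum()} samples; period percentiles (5/50/95): "
      f"{np.round(np.percentile(P[accept], [5, 50, 95]), 1)}")
```

With only five epochs, the accepted period samples typically remain multimodal, which is the behaviour the abstract emphasizes.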
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables users to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules, to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-)Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like the MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
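To make the first case study concrete, here is a generic Monte Carlo parameter search on the Rosenbrock response surface. This deliberately does not use the SPOT/spotpy API (which is not documented in this abstract); it only illustrates the kind of sampling-and-scoring loop such a toolbox automates.

```python
import numpy as np

rng = np.random.default_rng(7)

def rosenbrock(x, y):
    """Rosenbrock response surface used as a toy 'model' to optimize."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Plain Monte Carlo sampling from uniform priors, scoring each draw with a
# simple likelihood-style objective (here, the negative function value).
n_samples = 100_000
x = rng.uniform(-2, 2, n_samples)
y = rng.uniform(-1, 3, n_samples)
score = -rosenbrock(x, y)

best = np.argmax(score)
print(f"best draw: x = {x[best]:.3f}, y = {y[best]:.3f}, "
      f"objective = {-score[best]:.4f} (optimum is 0 at x = y = 1)")
```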
Al-Atiyat, R M; Aljumaah, R S
2014-08-27
This study aimed to estimate evolutionary distances and to reconstruct phylogenetic trees among different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best-fitting general time reversible distance model. Three methods of distance estimation were then used. The maximum composite likelihood method was used for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was found between the South sheep and the North sheep. On the other hand, the KSA sheep, as an outgroup, showed a shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of the Jordan sheep populations from the Saudi population. The overall results support the geographical information and ecological types of the sheep populations studied. In summary, the resulting phylogenetic trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.
Cut points of muscle strength associated with metabolic syndrome in men.
Sénéchal, Martin; McGavock, Jonathan M; Church, Timothy S; Lee, Duck-Chul; Earnest, Conrad P; Sui, Xuemei; Blair, Steven N
2014-08-01
The loss of muscle strength with age increases the likelihood of chronic conditions, including metabolic syndrome (MetS). However, the minimal threshold of muscle strength at which the risk for MetS increases has never been established. This study aimed to identify a threshold of muscle strength associated with MetS in men. We created receiver operating characteristic curves for muscle strength and the risk of MetS from a cross-sectional sample of 5685 men age <50 yr and 1541 men age ≥50 yr enrolled in the Aerobics Center Longitudinal Study. The primary outcome measure, MetS, was defined according to the National Cholesterol Education Program Adult Treatment Panel III criteria. Upper and lower body muscle strength was treated as a composite measure of one-repetition maximum tests on bench and leg press and scaled to body weight. Low muscle strength was defined as the lowest age-specific 20th percentile, whereas high muscle strength was defined as composite muscle strength above the 20th percentile. In men aged <50 yr, the odds of MetS were 2.20-fold (95% confidence interval = 1.89-2.54) higher in those with low muscle strength, independent of age, smoking, and alcohol intake. The strength of this association was similar for men age ≥50 yr (odds ratio = 2.11, 95% confidence interval = 1.62-2.74). In men age <50 yr, the composite strength threshold associated with MetS was 2.57 kg per kg of body weight, whereas in men age ≥50 yr the threshold was 2.35 kg per kg of body weight. This study is the first to identify a threshold of muscle strength associated with an increased likelihood of MetS in men. Measures of muscle strength may help identify men at risk of chronic disease.
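A common way to extract a cut point from an ROC curve is Youden's index. The sketch below shows that idea on simulated strength data; the abstract does not state the exact criterion used, and the numbers here are illustrative assumptions only.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)

# Simulated composite strength (1-RM scaled to body weight) for men with and
# without metabolic syndrome; values chosen only for illustration.
n = 2000
mets = rng.random(n) < 0.25
strength = np.where(mets, rng.normal(2.3, 0.4, n), rng.normal(2.8, 0.4, n))

# ROC analysis treating *low* strength as the "positive" marker for MetS,
# hence the sign flip on the score.
fpr, tpr, thresholds = roc_curve(mets.astype(int), -strength)
youden = tpr - fpr
cut = -thresholds[np.argmax(youden)]          # undo the sign flip
print(f"strength cut point associated with MetS: {cut:.2f} kg per kg body weight")
```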
Young, Mark T; Bell, Mark A; Brusatte, Stephen L
2011-12-23
Metriorhynchid crocodylomorphs were the only group of archosaurs to fully adapt to a pelagic lifestyle. During the Jurassic and Early Cretaceous, this group diversified into a variety of ecological and morphological types, from large super-predators with a broad short snout and serrated teeth to specialized piscivores/teuthophages with an elongate tubular snout and uncarinated teeth. Here, we use an integrated repertoire of geometric morphometric (form), biomechanical finite-element analysis (FEA; function) and phylogenetic data to examine the nature of craniofacial evolution in this clade. FEA stress values significantly correlate with morphometric values representing skull length and breadth, indicating that form and function are associated. Maximum-likelihood methods, which assess which of several models of evolution best explain the distribution of form and function data on a phylogenetic tree, show that the two major metriorhynchid subclades underwent different evolutionary modes. In geosaurines, both form and function are best explained as evolving under 'random' Brownian motion, whereas in metriorhynchines, the form metrics are best explained as evolving under stasis and the function metric as undergoing a directional change (towards most efficient low-stress piscivory). This suggests that the two subclades were under different selection pressures, and that metriorhynchines with similar skull shape were driven to become functionally divergent.
Clear: Composition of Likelihoods for Evolve and Resequence Experiments.
Iranmehr, Arya; Akbari, Ali; Schlötterer, Christian; Bafna, Vineet
2017-06-01
The advent of next generation sequencing technologies has made whole-genome and whole-population sampling possible, even for eukaryotes with large genomes. With this development, experimental evolution studies can be designed to observe molecular evolution "in action" via evolve-and-resequence (E&R) experiments. Among other applications, E&R studies can be used to locate the genes and variants responsible for genetic adaptation. Most existing literature on time-series data analysis often assumes large population size, accurate allele frequency estimates, or wide time spans. These assumptions do not hold in many E&R studies. In this article, we propose a method-composition of likelihoods for evolve-and-resequence experiments (Clear)-to identify signatures of selection in small population E&R experiments. Clear takes whole-genome sequences of pools of individuals as input, and properly addresses heterogeneous ascertainment bias resulting from uneven coverage. Clear also provides unbiased estimates of model parameters, including population size, selection strength, and dominance, while being computationally efficient. Extensive simulations show that Clear achieves higher power in detecting and localizing selection over a wide range of parameters, and is robust to variation of coverage. We applied the Clear statistic to multiple E&R experiments, including data from a study of adaptation of Drosophila melanogaster to alternating temperatures and a study of outcrossing yeast populations, and identified multiple regions under selection with genome-wide significance. Copyright © 2017 by the Genetics Society of America.
Smoking restrictions in bars and bartender smoking in the US, 1992-2007.
Bitler, Marianne P; Carpenter, Christopher; Zavodny, Madeline
2011-05-01
The present work is an analysis of whether adoption of state clean indoor air laws (SCIALs) covering bars reduces the proportion of bartenders who smoke primarily by reducing smoking among people already employed as bartenders when restrictions are adopted or by changing the composition of the bartender workforce with respect to smoking behaviours. Logistic regressions were estimated for a variety of smoking outcomes, controlling for individual demographic characteristics, state economic characteristics, and state, year, and month fixed effects, using data on 1380 bartenders from the 1992-2007 Tobacco Use Supplement to the Current Population Survey combined with data on SCIALs from ImpacTeen. State restrictions on smoking in bars are negatively associated with whether a bartender smokes, with a 1-point increase in restrictiveness (on a scale of 0-3) associated with a 5.3% reduction in the odds of smoking. Bar SCIALs are positively associated with the likelihood a bartender reports never having smoked cigarettes but not with the likelihood a bartender reports having been a former smoker. State clean indoor air laws covering bars appear to reduce smoking among bartenders primarily by changing the composition of the bartender workforce with respect to smoking rather than by reducing smoking among people already employed as bartenders when restrictions are adopted. Such laws may nonetheless be an important public health tool for reducing secondhand smoke.
Crystal Structure Prediction via Deep Learning.
Ryan, Kevin; Lengyel, Jeff; Shatruk, Michael
2018-06-06
We demonstrate the application of deep neural networks as a machine-learning tool for the analysis of a large collection of crystallographic data contained in the crystal structure repositories. Using input data in the form of multi-perspective atomic fingerprints, which describe the coordination topology around unique crystallographic sites, we show that the neural-network model can be trained to effectively distinguish chemical elements based on the topology of their crystallographic environment. The model also identifies structurally similar atomic sites in the entire dataset of ~50,000 crystal structures, essentially uncovering trends that reflect the periodic table of elements. The trained model was used to analyze templates derived from the known binary and ternary crystal structures in order to predict the likelihood of forming new compounds that could be generated by placing elements into these structural templates in a combinatorial fashion. Statistical analysis of the predictive performance of the neural-network model, applied to a test set of structures never seen by the model during training, indicates its ability to predict known elemental compositions with a high likelihood of success. In ~30% of cases, the known compositions were found among the top-10 most likely candidates proposed by the model. These results suggest that the approach developed in this work can be used to effectively guide synthetic efforts in the discovery of new materials, especially in the case of systems composed of three or more chemical elements.
The Role of Parametric Assumptions in Adaptive Bayesian Estimation
ERIC Educational Resources Information Center
Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.
2004-01-01
Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…
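The adaptive Bayesian procedure being simulated can be sketched compactly: maintain a grid posterior over the threshold of an assumed psychometric function and place each trial at the current posterior mean (a QUEST-like rule). The function form, slope, lapse rates, and placement rule below are illustrative assumptions, not the specific variants studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Fixed-slope logistic psychometric function with guess (gamma) and lapse (lam) rates.
def psychometric(x, threshold, slope=2.0, gamma=0.05, lam=0.02):
    return gamma + (1 - gamma - lam) / (1 + np.exp(-slope * (x - threshold)))

true_threshold = 1.3
grid = np.linspace(-2, 4, 601)
posterior = np.ones_like(grid) / grid.size            # uniform prior over thresholds

for trial in range(60):
    x = np.sum(grid * posterior)                      # next stimulus = posterior mean
    p = psychometric(x, true_threshold)
    response = rng.random() < p                       # simulated observer
    like = psychometric(x, grid) if response else 1 - psychometric(x, grid)
    posterior *= like
    posterior /= posterior.sum()

print(f"posterior mean threshold after 60 trials: {np.sum(grid * posterior):.3f}")
```

Repeating such runs many times and recording the final estimates is how bias and standard error of the procedure are assessed by simulation.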
Emitter Number Estimation by the General Information Theoretic Criterion from Pulse Trains
2002-12-01
negative log likelihood function plus a penalty function. The general information criteria by Yin and Krishnaiah [11] are different from the regular…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright
Finding and identifying cryptography is a growing concern in the malware analysis community. In this paper, a heuristic method for determining the likelihood that a given function contains a cryptographic algorithm is discussed, and the results of applying this method in various environments are shown. The algorithm is based on frequency analysis of the opcodes that make up each function within a binary.
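A toy version of such an opcode-frequency heuristic is sketched below: cryptographic routines tend to be unusually dense in bit-manipulation instructions. The opcode set, threshold, and example functions are assumptions for illustration, not the paper's parameters.

```python
from collections import Counter

# Opcodes commonly associated with bit manipulation (an illustrative set).
BIT_OPS = {"xor", "rol", "ror", "shl", "shr", "and", "or", "not"}

def crypto_likelihood(opcodes):
    """Fraction of a function's opcodes that are bit-manipulation instructions."""
    counts = Counter(op.lower() for op in opcodes)
    total = sum(counts.values())
    return sum(counts[op] for op in BIT_OPS) / total if total else 0.0

suspect = ["mov", "xor", "rol", "xor", "add", "ror", "xor", "shl", "mov", "xor"]
benign = ["push", "mov", "call", "cmp", "jne", "mov", "add", "pop", "ret"]

for name, ops in [("suspect", suspect), ("benign", benign)]:
    score = crypto_likelihood(ops)
    flag = "likely crypto" if score > 0.4 else "unlikely"
    print(f"{name}: score = {score:.2f} -> {flag}")
```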
Semiparametric Item Response Functions in the Context of Guessing
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2016-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of a certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.
Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto
2010-05-01
A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar separable LG-CH functions span the same space as the 2-D Cartesian separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two expansion coefficient sets, maximum likelihood (ML) estimates of orientation and cross section parameters of 1-D patterns are obtained by projecting them onto a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross section parameters. The accuracy of the conditional ML estimator is compared to that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and the 2-D HG expansions is also provided.
Spatial design and strength of spatial signal: Effects on covariance estimation
Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.
2007-01-01
In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
A review of contemporary methods for the presentation of scientific uncertainty.
Makinson, K A; Hamby, D M; Edwards, J A
2012-12-01
Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
NASA Astrophysics Data System (ADS)
Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan
2017-11-01
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
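The role of parameter transforms in the approximation can be seen in a small example: a second-order Taylor expansion of a log-likelihood around its peak, on the raw scale and after a log transform. The exponential-rate likelihood below is only a stand-in for the phylogenetic likelihood; all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

x = rng.exponential(scale=2.0, size=30)
n, s = x.size, x.sum()

def loglik(rate):
    return n * np.log(rate) - rate * s

rate_hat = n / s                          # MLE of the rate
t_hat = np.log(rate_hat)

def taylor_raw(rate):                     # quadratic expansion in the rate itself
    return loglik(rate_hat) - 0.5 * (n / rate_hat**2) * (rate - rate_hat) ** 2

def taylor_log(rate):                     # quadratic expansion in log(rate)
    return loglik(rate_hat) - 0.5 * n * (np.log(rate) - t_hat) ** 2

for rate in [0.5 * rate_hat, rate_hat, 2.0 * rate_hat]:
    print(f"rate = {rate:.3f}: exact = {loglik(rate):8.3f}, "
          f"raw Taylor = {taylor_raw(rate):8.3f}, log Taylor = {taylor_log(rate):8.3f}")
```

Away from the peak the log-scale expansion tracks the exact log-likelihood more closely than the raw-scale one, which is the intuition behind the transform-based approximations studied in the paper.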
Prediction of primary vs secondary hypertension in children.
Baracco, Rossana; Kapur, Gaurav; Mattoo, Tej; Jain, Amrish; Valentini, Rudolph; Ahmed, Maheen; Thomas, Ronald
2012-05-01
Despite current guidelines, variability exists in the workup of hypertensive children due to physician preferences. The study evaluates primary vs secondary hypertension diagnosis from investigations routinely performed in hypertensive children. This retrospective study included children 5 to 19 years with primary and secondary hypertension. The proportions of abnormal laboratory and imaging tests were compared between primary and secondary hypertension groups. Risk factors for primary vs secondary hypertension were evaluated by logistic regression and likelihood function analysis. Patients with secondary hypertension were younger (5-12 years) and had a higher proportion of abnormal creatinine, renal ultrasound, and echocardiogram findings. There was no significant difference in abnormal results of thyroid function, urine catecholamines, plasma renin, and aldosterone. Abnormal renal ultrasound findings and age were predictors of secondary hypertension by regression and likelihood function analysis. Children aged 5 to 12 years with abnormal renal ultrasound findings and high diastolic blood pressures are at higher risk for secondary hypertension that requires detailed evaluation. © 2012 Wiley Periodicals, Inc.
A baseline-free procedure for transformation models under interval censorship.
Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin
2005-12-01
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computational algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
Extending the BEAGLE library to a multi-FPGA platform.
Jin, Zheming; Bakos, Jason D
2013-01-19
Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
Liu, Ping-Li; Du, Liang; Huang, Yuan; Gao, Shu-Min; Yu, Meng
2017-02-07
Leucine-rich repeat receptor-like protein kinases (LRR-RLKs) are the largest group of receptor-like kinases in plants and play crucial roles in development and stress responses. The evolutionary relationships among LRR-RLK genes have been investigated in flowering plants; however, no comprehensive studies have been performed for these genes in more ancestral groups. The subfamily classification of LRR-RLK genes in plants, the evolutionary history and driving force for the evolution of each LRR-RLK subfamily remain to be understood. We identified 119 LRR-RLK genes in the Physcomitrella patens moss genome, 67 LRR-RLK genes in the Selaginella moellendorffii lycophyte genome, and no LRR-RLK genes in five green algae genomes. Furthermore, these LRR-RLK sequences, along with previously reported LRR-RLK sequences from Arabidopsis thaliana and Oryza sativa, were subjected to evolutionary analyses. Phylogenetic analyses revealed that plant LRR-RLKs belong to 19 subfamilies, eighteen of which were established in early land plants, and one of which evolved in flowering plants. More importantly, we found that the basic structures of LRR-RLK genes for most subfamilies are established in early land plants and conserved within subfamilies and across different plant lineages, but divergent among subfamilies. In addition, most members of the same subfamily had common protein motif compositions, whereas members of different subfamilies showed variations in protein motif compositions. The unique gene structure and protein motif compositions of each subfamily differentiate the subfamily classifications and, more importantly, provide evidence for functional divergence among LRR-RLK subfamilies. Maximum likelihood analyses showed that some sites within four subfamilies were under positive selection. Much of the diversity of plant LRR-RLK genes was established in early land plants. Positive selection contributed to the evolution of a few LRR-RLK subfamilies.
Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi
2015-07-01
Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
NASA Technical Reports Server (NTRS)
1979-01-01
A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
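The estimation loop described above (minimize a fit criterion over model coefficients with a Levenberg-Marquardt-type routine) can be sketched on a toy system. This is not the NLSCIDNT code; it uses SciPy's Levenberg-Marquardt solver on an illustrative first-order response model, and all names and values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)

# Illustrative system: first-order response y' = a*y + b*u, observed with noise.
# The coefficients here are toy stand-ins, not the 21-state rotorcraft model.
dt, n = 0.05, 200
a_true, b_true = -1.5, 2.0
u = np.sin(0.5 * np.arange(n) * dt)
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = y[k] + dt * (a_true * y[k] + b_true * u[k])
y_meas = y + 0.01 * rng.standard_normal(n)

def residuals(theta):
    a, b = theta
    y_sim = np.zeros(n)
    for k in range(n - 1):
        y_sim[k + 1] = y_sim[k] + dt * (a * y_sim[k] + b * u[k])
    return y_sim - y_meas

fit = least_squares(residuals, x0=[-0.5, 1.0], method="lm")
print("identified coefficients (a, b):", fit.x.round(3))
```

With Gaussian measurement noise of known variance, minimizing these residuals is equivalent to minimizing the negative log likelihood, which is the criterion the program uses.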
Nonparametric spirometry reference values for Hispanic Americans.
Glenn, Nancy L; Brown, Vanessa M
2011-02-01
Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies have established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage is that they are model free, yet share the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to normal theory intervals. Power and efficiency studies agree with previously published theoretical results.
The Role of CMR in Cardiomyopathies
Kramer, Christopher M.
2015-01-01
Cardiac magnetic resonance imaging (CMR) has made major inroads in the new millennium in the diagnosis and assessment of prognosis for patients with cardiomyopathies. Imaging of left and right ventricular structure and function and tissue characterization with late gadolinium enhancement (LGE), as well as T1 and T2 mapping, enable accurate diagnosis of the underlying etiology. In the setting of coronary artery disease, either transmurality of LGE or contractile reserve in response to dobutamine can assess the likelihood of recovery of function after revascularization. The presence of scar reduces the likelihood of response to medical therapy and to cardiac resynchronization therapy in heart failure. The presence and extent of LGE relate to overall cardiovascular outcome in cardiomyopathies. An emerging major role for CMR in cardiomyopathies is to identify myocardial scar for diagnostic and prognostic purposes. PMID:26033902
Foreground effect on the J-factor estimation of classical dwarf spheroidal galaxies
NASA Astrophysics Data System (ADS)
Ichikawa, Koji; Ishigaki, Miho N.; Matsumoto, Shigeki; Ibe, Masahiro; Sugai, Hajime; Hayashi, Kohei; Horigome, Shun-ichi
2017-07-01
The gamma-ray observation of dwarf spheroidal galaxies (dSphs) is a promising approach to search for the dark matter annihilation (or decay) signal. The dSphs are nearby satellite galaxies with a clean environment and a dense dark matter halo, so they give stringent constraints on O(1) TeV dark matter. However, recent studies have revealed that current estimates of the astrophysical factors relevant for dark matter searches are not conservative, because various non-negligible systematic uncertainties are not taken into account. Among them, the effect of foreground stars on the astrophysical factors has received little attention, although it becomes more important for deeper and wider stellar surveys in the future. In this article, we assess the effects of the foreground contamination by generating mock samples of stars and using a model of future spectrographs. We investigate various data cuts to optimize the quality of the data and find that cuts on the velocity and surface gravity can efficiently eliminate the contamination. We also propose a new likelihood function that includes the foreground distribution function. We apply this likelihood function to fits of three types of mock data (Ursa Minor, Draco with a large dark matter halo, and Draco with a small halo) and three cases of observation. The likelihood successfully reproduces the input J-factor value, while a fit that ignores the foreground distribution deviates from the input value by a factor of 3.
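The idea of a likelihood that includes the foreground distribution can be illustrated with a one-dimensional toy: line-of-sight velocities modeled as a mixture of dSph members (narrow Gaussian) and a fixed, known foreground component (broad Gaussian). All numbers and the fixed-foreground assumption are illustrative, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(12)

# Toy data: 70% member stars around -250 km/s, 30% broad foreground.
n = 500
member = rng.random(n) < 0.7
v = np.where(member, rng.normal(-250.0, 9.0, n), rng.normal(-230.0, 80.0, n))

def neg_loglik(params):
    f_mem, mu, sigma = params
    # mixture likelihood: membership fraction times member term plus foreground term
    like = f_mem * norm.pdf(v, mu, sigma) + (1 - f_mem) * norm.pdf(v, -230.0, 80.0)
    return -np.sum(np.log(like))

fit = minimize(neg_loglik, x0=[0.5, -240.0, 20.0],
               bounds=[(0.01, 0.99), (-300, -200), (1, 50)], method="L-BFGS-B")
f_hat, mu_hat, sigma_hat = fit.x
print(f"member fraction = {f_hat:.2f}, mean velocity = {mu_hat:.1f} km/s, "
      f"dispersion = {sigma_hat:.1f} km/s")
```

Fitting only a single Gaussian to the same data would inflate the inferred dispersion, which is the kind of bias in the derived J-factor that the foreground-aware likelihood is designed to avoid.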
NASA Astrophysics Data System (ADS)
Mohammed, Amal A.; Abraheem, Sudad K.; Fezaa Al-Obedy, Nadia J.
2018-05-01
This paper considers the Burr type XII distribution. Maximum likelihood and Bayes methods of estimation are used for estimating the unknown scale parameter (α). Al-Bayyati's loss function and a suggested loss function are used to find the reliability estimate with the least loss, and the reliability function is expanded in terms of a set of power functions. Matlab (ver. 9) is used for the computations, and some examples are given.
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Massa, Christopher B; Groves, Angela M; Jaggernauth, Smita U; Laskin, Debra L; Gow, Andrew J
2017-08-01
Both aging and chronic inflammation produce complex structural and biochemical alterations to the lung known to impact work of breathing. Mice deficient in surfactant protein D (Sftpd) develop progressive age-related lung pathology characterized by tissue destruction/remodeling, accumulation of foamy macrophages and alteration in surfactant composition. This study proposes to relate changes in tissue structure seen in normal aging and in chronic inflammation to altered lung mechanics using a computational model. Alterations in lung function in aging and Sftpd -/- mice have been inferred from fitting simple mechanical models to respiratory impedance data (Zrs), however interpretation has been confounded by the simultaneous presence of multiple coexisting pathophysiologic processes. In contrast to the inverse modeling approach, this study uses simulation from experimental measurements to recapitulate how aging and inflammation alter Zrs. Histologic and mechanical measurements were made in C57BL6/J mice and congenic Sftpd-/- mice at 8, 27 and 80 weeks of age (n = 8/group). An anatomic computational model based on published airway morphometry was developed and Zrs was simulated between 0.5 and 20 Hz. End expiratory pressure dependent changes in airway caliber and recruitment were estimated from mechanical measurements. Tissue elements were simulated using the constant phase model of viscoelasticity. Baseline elastance distribution was estimated in 8-week-old wild type mice, and stochastically varied for each condition based on experimentally measured alteration in elastic fiber composition, alveolar geometry and surfactant composition. Weighing reduction in model error against increasing model complexity allowed for identification of essential features underlying mechanical pathology and their contribution to Zrs. Using a maximum likelihood approach, alteration in lung recruitment and diminished elastic fiber density were shown predictive of mechanical alteration at airway opening, to a greater extent than overt acinar wall destruction. Model-predicted deficits in PEEP-dependent lung recruitment correlate with altered lung lining fluid composition independent of age or genotype.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems which are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
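As a minimal illustration of the estimation step, the sketch below fits the smallest extreme-value (Gumbel-for-minima) distribution to log-lifetimes by maximum likelihood using scipy's gumbel_l; it ignores censoring, competing failure modes, and stress covariates, all of which the paper handles.

```python
# Hedged sketch: maximum likelihood fit of the smallest extreme-value distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_life = stats.gumbel_l.rvs(loc=6.0, scale=0.5, size=400, random_state=rng)
loc_hat, scale_hat = stats.gumbel_l.fit(log_life)   # MLEs of location and scale
print(loc_hat, scale_hat)  # location close to 6; scale typically slightly biased
```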
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-08
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.
A 3D approximate maximum likelihood localization solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-09-23
A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and vocalizing marine mammals to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives and support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
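To make the idea concrete, here is a hedged sketch of approximate maximum likelihood localization from time-difference-of-arrival measurements: with Gaussian timing errors, the ML estimate minimizes the squared TDOA residuals. The hydrophone layout, sound speed, and optimizer choice are illustrative assumptions, not details of the cited solver.

```python
# Illustrative TDOA localization sketch (not the cited solver).
import numpy as np
from scipy.optimize import minimize

C = 1500.0  # assumed sound speed in water, m/s

def tdoa_residuals(xyz, hydrophones, tdoa_obs, ref=0):
    t = np.linalg.norm(hydrophones - xyz, axis=1) / C   # predicted travel times
    return (t - t[ref]) - tdoa_obs                       # predicted minus observed differences

def locate(hydrophones, tdoa_obs, x0):
    cost = lambda xyz: np.sum(tdoa_residuals(xyz, hydrophones, tdoa_obs) ** 2)
    return minimize(cost, x0, method="Nelder-Mead").x

hydros = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 50]], float)
true_pos = np.array([40.0, 30.0, 10.0])
t_true = np.linalg.norm(hydros - true_pos, axis=1) / C
obs = (t_true - t_true[0]) + np.random.default_rng(3).normal(0, 1e-5, size=4)
print(locate(hydros, obs, x0=np.array([10.0, 10.0, 5.0])))  # should be near (40, 30, 10)
```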
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Genome-wide evidence for divergent selection between populations of a major agricultural pathogen.
Hartmann, Fanny E; McDonald, Bruce A; Croll, Daniel
2018-06-01
The genetic and environmental homogeneity in agricultural ecosystems is thought to impose strong and uniform selection pressures. However, the impact of this selection on plant pathogen genomes remains largely unknown. We aimed to identify the proportion of the genome and the specific gene functions under positive selection in populations of the fungal wheat pathogen Zymoseptoria tritici. First, we performed genome scans in four field populations that were sampled from different continents and on distinct wheat cultivars to test which genomic regions are under recent selection. Based on extended haplotype homozygosity and composite likelihood ratio tests, we identified 384 and 81 selective sweeps affecting 4% and 0.5% of the 35 Mb core genome, respectively. We found differences both in the number and the position of selective sweeps across the genome between populations. Using a XtX-based outlier detection approach, we identified 51 extremely divergent genomic regions between the allopatric populations, suggesting that divergent selection led to locally adapted pathogen populations. We performed an outlier detection analysis between two sympatric populations infecting two different wheat cultivars to identify evidence for host-driven selection. Selective sweep regions harboured genes that are likely to play a role in successfully establishing host infections. We also identified secondary metabolite gene clusters and an enrichment in genes encoding transporter and protein localization functions. The latter gene functions mediate responses to environmental stress, including interactions with the host. The distinct gene functions under selection indicate that both local host genotypes and abiotic factors contributed to local adaptation. © 2018 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
Spatial competition dynamics between reef corals under ocean acidification
Horwitz, Rael; Hoogenboom, Mia O.; Fine, Maoz
2017-01-01
Climate change, including ocean acidification (OA), represents a major threat to coral-reef ecosystems. Although previous experiments have shown that OA can negatively affect the fitness of reef corals, these have not included the long-term effects of competition for space on coral growth rates. Our multispecies year-long study subjected reef-building corals from the Gulf of Aqaba (Red Sea) to competitive interactions under present-day ocean pH (pH 8.1) and predicted end-of-century ocean pH (pH 7.6). Results showed coral growth is significantly impeded by OA under intraspecific competition for five out of six study species. Reduced growth from OA, however, is negligible when growth is already suppressed in the presence of interspecific competition. Using a spatial competition model, our analysis indicates shifts in the competitive hierarchy and a decrease in overall coral cover under lowered pH. Collectively, our case study demonstrates how modified competitive performance under increasing OA will in all likelihood change the composition, structure and functionality of reef coral communities. PMID:28067281
Cook, Alison; Glass, Christy
2015-09-01
Previous research on the effects of leadership diversity on firm outcomes has produced inconsistent and inconclusive findings. While some scholars argue that diversity increases organizational equity and enhances performance, others argue that diversity increases conflict, reduces cooperation and harms performance. This study tests the impact of a variety of compositional factors on firm outcomes. Specifically, we analyze whether and how board composition affects the advancement and mobility of women CEOs and firm performance. Our analysis relies on a unique data set of all Chief Executive Officers (CEOs) and Board of Directors (BODs) in Fortune 500 companies over a ten-year period. We find a marginally significant positive relationship between board diversity and the likelihood of a woman being appointed CEO. We further find that board diversity significantly and positively influences the post-promotion success of women CEOs. Our findings suggest that board composition is critical for the appointment and success of women CEOs, and increasing board diversity should be central to any organizational diversity efforts. Copyright © 2015 Elsevier Inc. All rights reserved.
Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2015-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
The Effect of Chemical Functionalization on Mechanical Properties of Nanotube/Polymer Composites
NASA Technical Reports Server (NTRS)
Odegard, G. M.; Frankland, S. J. V.; Gates, T. S.
2003-01-01
The effects of the chemical functionalization of a carbon nanotube embedded in a nanotube/polyethylene composite on the bulk elastic properties are presented. Constitutive equations are established for both functionalized and non-functionalized nanotube composites systems by using an equivalent-continuum modeling technique. The elastic properties of both composites systems are predicted for various nanotube lengths, volume fractions, and orientations. The results indicate that for the specific composite material considered in this study, most of the elastic stiffness constants of the functionalized composite are either less than or equal to those of the non-functionalized composite.
A new Bayesian Inference-based Phase Associator for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Meier, Men-Andrin; Heaton, Thomas; Clinton, John; Wiemer, Stefan
2013-04-01
State of the art network-based Earthquake Early Warning (EEW) systems can provide warnings for large magnitude 7+ earthquakes. Although regions in the direct vicinity of the epicenter will not receive warnings prior to damaging shaking, real-time event characterization is available before the destructive S-wave arrival across much of the strongly affected region. In contrast, in the case of the more frequent medium size events, such as the devastating 1994 Mw6.7 Northridge, California, earthquake, providing timely warning to the smaller damage zone is more difficult. For such events the "blind zone" of current systems (e.g. the CISN ShakeAlert system in California) is similar in size to the area over which severe damage occurs. We propose a faster and more robust Bayesian inference-based event associator, that in contrast to the current standard associators (e.g. Earthworm Binder), is tailored to EEW and exploits information other than only phase arrival times. In particular, the associator potentially allows for reliable automated event association with as little as two observations, which, compared to the ShakeAlert system, would speed up the real-time characterizations by about ten seconds and thus reduce the blind zone area by up to 80%. We compile an extensive data set of regional and teleseismic earthquake and noise waveforms spanning a wide range of earthquake magnitudes and tectonic regimes. We pass these waveforms through a causal real-time filterbank with passband filters between 0.1 and 50Hz, and, updating every second from the event detection, extract the maximum amplitudes in each frequency band. Using this dataset, we define distributions of amplitude maxima in each passband as a function of epicentral distance and magnitude. For the real-time data, we pass incoming broadband and strong motion waveforms through the same filterbank and extract an evolving set of maximum amplitudes in each passband. We use the maximum amplitude distributions to check whether the incoming waveforms are consistent with amplitude and frequency patterns of local earthquakes by means of a maximum likelihood approach. If such a single-station event likelihood is larger than a predefined threshold value we check whether there are neighboring stations that also have single-station event likelihoods above the threshold. If this is the case for at least one other station, we evaluate whether the respective relative arrival times are in agreement with a common earthquake origin (assuming a simple velocity model and using an Equal Differential Time location scheme). Additionally we check if there are stations where, given the preliminary location, observations would be expected but were not reported ("not-yet-arrived data"). Together, the single-station event likelihood functions and the location likelihood function constitute the multi-station event likelihood function. This function can then be combined with various types of prior information (such as station noise levels, preceding seismicity, fault proximity, etc.) to obtain a Bayesian posterior distribution, representing the degree of belief that the ensemble of the current real-time observations correspond to a local earthquake, rather than to some other signal source irrelevant for EEW. 
In addition to reducing the size of the blind zone, this approach facilitates the eventual development of an end-to-end probabilistic framework for an EEW system that provides systematic real-time assessment of the risk of false alerts, which enables end users of EEW to implement damage mitigation strategies only above a specified certainty level.
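A bare-bones sketch of the single-station amplitude check described above follows. The band means and spreads are made-up placeholders for one trial magnitude-distance pair; the real associator maximizes over magnitude and distance, uses many frequency bands, and then combines multiple stations and arrival times.

```python
# Score observed filter-band maximum amplitudes against assumed log-normal band distributions.
import numpy as np

band_mu = np.array([-3.0, -2.5, -2.0, -2.2])     # mean log10 amplitude per band (placeholder)
band_sigma = np.array([0.4, 0.4, 0.5, 0.5])      # spread per band (placeholder)

def station_log_likelihood(log_amp_max):
    # Gaussian log-likelihood of the observed band maxima for one trial magnitude-distance pair;
    # the full system would maximize this over magnitude and distance.
    z = (log_amp_max - band_mu) / band_sigma
    return np.sum(-0.5 * z ** 2 - np.log(band_sigma * np.sqrt(2 * np.pi)))

observed = np.array([-3.1, -2.4, -2.1, -2.0])
is_event_like = station_log_likelihood(observed) > -6.0   # threshold is illustrative
print(is_event_like)
```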
Geological mapping in northwestern Saudi Arabia using LANDSAT multispectral techniques
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Brown, G. F.; Moik, J. G.
1975-01-01
Various computer enhancement and data extraction systems using LANDSAT data were assessed and used to complement a continuing geologic mapping program. Interactive digital classification techniques using both the parallelepiped and maximum-likelihood statistical approaches achieve very limited success in areas of highly dissected terrain. Computer-enhanced imagery developed by color compositing stretched MSS ratio data was constructed for a test site in northwestern Saudi Arabia. Initial results indicate that several igneous and sedimentary rock types can be discriminated.
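For reference, the maximum-likelihood classification mentioned here assigns each pixel to the class whose fitted multivariate Gaussian gives the highest likelihood. The band values and class names in this sketch are invented for illustration.

```python
# Minimal Gaussian maximum-likelihood classifier for multispectral pixels.
import numpy as np

def train(classes):
    """classes: dict name -> (n_pixels, n_bands) training array. Returns per-class mean and covariance."""
    return {name: (X.mean(axis=0), np.cov(X, rowvar=False)) for name, X in classes.items()}

def classify(pixel, stats):
    best, best_ll = None, -np.inf
    for name, (mu, cov) in stats.items():
        diff = pixel - mu
        ll = -0.5 * (diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov)))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

rng = np.random.default_rng(4)
training = {"limestone": rng.normal([30, 40, 55, 60], 3, size=(200, 4)),
            "basalt":    rng.normal([20, 25, 30, 28], 3, size=(200, 4))}
print(classify(np.array([29.0, 41.0, 54.0, 61.0]), train(training)))  # expected: limestone
```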
Hou, Zhi-hui; Lu, Bin; Gao, Yang; Yu, Fang-fang; Cao, Hui-li; Jiang, Shi-liang; Roy, Sion K; Budoff, Matthew J
2012-11-01
To document the prevalence of coronary artery disease (CAD) and major adverse cardiac events (MACE) in patients younger than 45 years of age with intermediate pretest likelihood of CAD, and to determine whether coronary computed tomography angiography (cCTA) is useful for risk stratification of this cohort. We followed 452 intermediate pretest likelihood (according to Diamond and Forrester) outpatients who were suspected of CAD and underwent cCTA. They were all younger than 45 years old. The endpoint was MACE, defined as composite cardiac death, nonfatal myocardial infarction, or coronary revascularization. Follow-up was completed in 427 patients (94.5%) with a median follow-up period of 1081 days. No plaque was noted in 357 (83.6%) patients. Nonsignificant CAD was noted in 33 (7.7%) individuals and 37 (8.7%) patients with significant CAD. At the end of the follow-up period, 12 (2.8%) patients experienced MACE. The annualized event rate was 0.2% in patients with no plaque, 2.0% in patients with nonsignificant CAD, and 7.3% in patients with significant CAD. Hypertension, smoking, and significant CAD in cCTA were significant predictors of MACE in univariate analysis. Moreover, cCTA remained a predictor (P < .001) of events after multivariate correction (hazard ratio: 8.345, 95% CI: 3.438-17.823, P < .001). The prevalence of CAD and MACE in young adults with an intermediate pretest likelihood of CAD was considerable. cCTA is effective in restratifying patients into either a low or high posttest risk group. These results further emphasize the usefulness of cCTA in this cohort. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
Marcum, Zachary A; Perera, Subashan; Thorpe, Joshua M; Switzer, Galen E; Castle, Nicholas G; Strotmeyer, Elsa S; Simonsick, Eleanor M; Ayonayon, Hilsa N; Phillips, Caroline L; Rubin, Susan; Zucker-Levin, Audrey R; Bauer, Douglas C; Shorr, Ronald I; Kang, Yihuang; Gray, Shelly L; Hanlon, Joseph T
2016-07-01
Few studies have compared the risk of recurrent falls across various antidepressant agents-using detailed dosage and duration data-among community-dwelling older adults, including those who have a history of a fall/fracture. To examine the association of antidepressant use with recurrent falls, including among those with a history of falls/fractures, in community-dwelling elders. This was a longitudinal analysis of 2948 participants with data collected via interview at year 1 from the Health, Aging and Body Composition study and followed through year 7 (1997-2004). Any antidepressant medication use was self-reported at years 1, 2, 3, 5, and 6 and further categorized as (1) selective serotonin reuptake inhibitors (SSRIs), (2) tricyclic antidepressants, and (3) others. Dosage and duration were examined. The outcome was recurrent falls (≥2) in the ensuing 12-month period following each medication data collection. Using multivariable generalized estimating equations models, we observed a 48% greater likelihood of recurrent falls in antidepressant users compared with nonusers (adjusted odds ratio [AOR] = 1.48; 95% CI = 1.12-1.96). Increased likelihood was also found among those taking SSRIs (AOR = 1.62; 95% CI = 1.15-2.28), with short duration of use (AOR = 1.47; 95% CI = 1.04-2.00), and taking moderate dosages (AOR = 1.59; 95% CI = 1.15-2.18), all compared with no antidepressant use. Stratified analysis revealed an increased likelihood among users with a baseline history of falls/fractures compared with nonusers (AOR = 1.83; 95% CI = 1.28-2.63). Antidepressant use overall, SSRI use, short duration of use, and moderate dosage were associated with recurrent falls. Those with a history of falls/fractures also had an increased likelihood of recurrent falls. © The Author(s) 2016.
NASA Technical Reports Server (NTRS)
Paque, Julie M.; Lofgren, Gary E.; Le, Loan
2000-01-01
The observed textures and chemistry of Ca-Al-rich inclusions (CAIs) are presumed to be the culmination of a series of repeated heating and cooling events in the early history of the solar nebula. We have examined the effects of these heating/cooling cycles experimentally on a bulk composition representing an average Type B Ca-Al-rich inclusion composition. We have tested the effect of the nature of the starting material. Although the most recent and/or highest temperature event prior to incorporation into the parent body dominates the texture and chemistry of the CAI, prior events also affect the phase compositions and textures. We have determined that heating precursor grains to about 1275 C prior to the final melting event increases the likelihood of anorthite crystallization in subsequent higher temperature events, and that a prior high-temperature event that produced dendritic melilite results in melilite that shows evidence of rapid crystallization in subsequent lower temperature events. Prior low-temperature pre-crystallization events produce final run products with pyroxene compositions similar to Type B Ca-Al-rich inclusions, and the glass (residual liquid) composition is more anorthitic than in any other experiments to date. The addition of Pt powder to the starting material appears to enhance the ability of anorthite to nucleate from this composition.
Lunar Meteorites Sayh Al Uhaymir 449 and Dhofar 925, 960, and 961: Windows into South Pole
NASA Technical Reports Server (NTRS)
Ziegler, Ryan A.; Jolliff, B. L.; Korotev, R. L.
2013-01-01
In 2003, three lunar meteorites were collected in close proximity to each other in the Dhofar region of Oman: Dhofar 925 (49 g), Dhofar 960 (35 g), and Dhofar 961 (22 g). In 2006, lunar meteorite Sayh al Uhaymir (SaU) 449 (16.5 g) was found about 100 km to the NE. Despite significant differences in the bulk composition of Dhofar 961 relative to Dhofar 925/960 and SaU 449 (which are identical to each other), these four meteorites are postulated to be paired based on their find locations, bulk composition, and detailed petrographic analysis. Hereafter, they will collectively be referred to as the Dhofar 961 clan. Comparison of meteorite and component bulk compositions to Lunar Prospector 5-degree gamma-ray data suggests that the most likely provenance of this meteorite group is within the South Pole-Aitken (SPA) Basin. As the oldest, largest, and deepest recognizable basin on the Moon, the composition of the material within the SPA basin is of particular importance to lunar science. Here we review and expand upon the geochemistry and petrography of the Dhofar 961 clan and assess the likelihood that these meteorites come from within the SPA basin based on their bulk compositions and the compositions and characteristics of the major lithologic components found within the breccia.
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
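As a toy companion to the estimation techniques reviewed above, the sketch below maximizes a plain ML objective for a linear model with non-spherical error covariance by profiling out the regression coefficients with GLS. The covariance basis (white noise plus an AR-like component) and all numbers are assumptions; this is not the authors' VB/VML/ReML code.

```python
# Toy maximum likelihood estimation for a GLM with non-spherical error covariance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 120
X = np.column_stack([np.ones(n), np.sin(np.linspace(0, 6 * np.pi, n))])
Q1 = np.eye(n)                                                     # white-noise basis
Q2 = np.array([[0.5 ** abs(i - j) for j in range(n)] for i in range(n)])  # assumed AR-like basis
beta_true = np.array([1.0, 2.0])
y = rng.multivariate_normal(X @ beta_true, 1.0 * Q1 + 0.5 * Q2)

def neg_log_lik(lam):
    V = np.exp(lam[0]) * Q1 + np.exp(lam[1]) * Q2
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)   # GLS profile of the coefficients
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + r @ Vi @ r)

res = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
print(np.exp(res.x))   # recovered covariance weights, roughly (1.0, 0.5)
```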
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Thorsten
2005-06-17
In this thesis two searches for electroweak single top quark production with the CDF experiment have been presented, a cutbased search and an iterated discriminant analysis. Both searches find no significant evidence for electroweak single top production using a data set corresponding to an integrated luminosity of 162 pb^-1 collected with CDF. Therefore limits on s- and t-channel single top production are determined using a likelihood technique. For the cutbased search a likelihood function based on lepton charge times pseudorapidity of the non-bottom jet was used if exactly one bottom jet was identified in the event. In case of two identified bottom jets a likelihood function based on the total number of observed events was used. The systematic uncertainties have been treated in a Bayesian approach; all sources of systematic uncertainty have been integrated out. An improved signal modeling using the MadEvent Monte Carlo program matched to NLO calculations has been used. The obtained limits for the s- and t-channel single top production cross sections are 13.6 pb and 10.1 pb, respectively. To date, these are the most stringent limits published for the s- and the t-channel single top quark production modes.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
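The maximum-likelihood step and the conversion to an occurrence rate can be sketched as follows; the -Dst values are synthetic stand-ins for the 1957-2012 record, and the one-maximum-per-year convention is an assumption for illustration.

```python
# Log-normal MLE fit to yearly -Dst storm maxima and a tail-rate estimate.
import numpy as np
from scipy.stats import norm

dst_max = np.array([220., 150., 310., 180., 270., 400., 160., 240., 589., 205.])  # synthetic, nT

log_x = np.log(dst_max)
mu, sigma = log_x.mean(), log_x.std()            # log-normal maximum-likelihood estimates

p_exceed = norm.sf((np.log(850.0) - mu) / sigma)  # P(a yearly maximum exceeds 850 nT)
print(100.0 * p_exceed, "expected Carrington-class storms per century (one maximum per year assumed)")
```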
Chan, Vincent; Chu, Michael W A; Leong-Poi, Howard; Latter, David A; Hall, Judith; Thorpe, Kevin E; de Varennes, Benoit E; Quan, Adrian; Tsang, Wendy; Dhingra, Natasha; Yared, Kibar; Teoh, Hwee; Chu, F Victor; Chan, Kwan-Leung; Mesana, Thierry G; Connelly, Kim A; Ruel, Marc; Jüni, Peter; Mazer, C David; Verma, Subodh
2017-05-30
The gold-standard treatment of severe mitral regurgitation (MR) due to degenerative disease is valve repair, which is surgically performed with either a leaflet resection or leaflet preservation approach. Recent data suggest that functional mitral stenosis (MS) may occur following valve repair using a leaflet resection strategy, which adversely affects patient prognosis. A randomised comparison of these two approaches to mitral repair on functional MS has not been conducted. This is a prospective, multicentre randomised controlled trial designed to test the hypothesis that leaflet preservation leads to better preservation of mitral valve geometry, and therefore, will be superior to leaflet resection for the primary outcome of functional MS as assessed by 12-month mean mitral valve gradient at peak exercise. Eighty-eight patients with posterior leaflet prolapse will be randomised intraoperatively once deemed by the operating surgeon to feasibly undergo mitral repair using either a leaflet resection or leaflet preservation approach. Secondary end points include comparison of repair strategies with regard to mitral valve orifice area, leaflet coaptation height, 6 min walk test and a composite major adverse event end point consisting of recurrent MR ≥2+, death or hospital readmission for congestive heart failure within 12 months of surgery. Institutional ethics approval has been obtained from all enrolling sites. Overall, there remains clinical equipoise regarding the mitral valve repair strategy that is associated with the least likelihood of functional MS. This trial hopes to introduce high-quality evidence to help surgical decision making in this context. NCT02552771. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Extended maximum likelihood halo-independent analysis of dark matter direct detection data
Gelmini, Graciela B.; Georgescu, Andreea; Gondolo, Paolo; ...
2015-11-24
We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = -0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = -0.8.
Sohng, Hee Yon; Kuniyuki, Alan; Edelson, Jane; Weir, Rosy Chang; Song, Hui; Tu, Shin-Ping
2013-01-01
Understanding and enhancing change capabilities, including Practice Adaptive Reserve (PAR), of Community Health Centers (CHCs) may mitigate cancer-related health disparities. Using stratified random sampling, we recruited 232 staff from seven CHCs serving Asian Pacific Islander communities to complete a self-administered survey. We performed multilevel regression analyses to examine PAR composite scores by CHC, position type, and number of years worked at their clinic. The mean PAR score was 0.7 (s.d. 0.14). Higher scores were associated with a greater perceived likelihood that clinic staff would participate in an evidence-based intervention (EBI). Constructs such as communication, clinic flow, sensemaking, change valence, and resource availability were positively associated with EBI implementation or trended toward significance. PAR scores are positively associated with perceived likelihood of clinic staff participation in cancer screening EBI. Future research is needed to determine PAR levels most conducive to implementing change and to developing interventions that enhance Adaptive Reserve.
Müller, Christoph; Waha, Katharina; Bondeau, Alberte; Heinke, Jens
2014-08-01
Development efforts for poverty reduction and food security in sub-Saharan Africa will have to consider future climate change impacts. Large uncertainties in climate change impact assessments do not necessarily complicate, but can inform development strategies. The design of development strategies will need to consider the likelihood, strength, and interaction of climate change impacts across biosphere properties. We here explore the spread of climate change impact projections and develop a composite impact measure to identify hotspots of climate change impacts, addressing likelihood and strength of impacts. Overlapping impacts in different biosphere properties (e.g. flooding, yields) will not only claim additional capacity to respond, but will also narrow the options to respond and develop. Regions with severest projected climate change impacts often coincide with regions of high population density and poverty rates. Science and policy need to propose ways of preparing these areas for development under climate change impacts. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.
2017-06-01
In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by a classification method that separates fault data from normal data. Deep belief network (DBN), a technique for deep learning, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated by field data obtained from thermocouple sensors of the fast breeder test reactor.
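The generalized likelihood ratio test for a constant bias fault reduces to a simple statistic when the residual noise is Gaussian with known variance. The sketch below illustrates that special case; the noise level, bias size, and threshold are placeholders rather than values from the plant data.

```python
# GLRT for an unknown constant bias in a Gaussian residual: replace the bias by its MLE.
import numpy as np

def glrt_bias(residual, sigma):
    n = len(residual)
    bias_mle = residual.mean()                      # MLE of the unknown fault magnitude
    return n * bias_mle ** 2 / (2.0 * sigma ** 2)   # log generalized likelihood ratio

rng = np.random.default_rng(6)
healthy = rng.normal(0.0, 0.5, 200)                 # zero-mean residual (no fault)
faulty = rng.normal(1.2, 0.5, 200)                  # residual with a constant bias fault
threshold = 5.0                                     # illustrative decision threshold
print(glrt_bias(healthy, 0.5) > threshold, glrt_bias(faulty, 0.5) > threshold)
```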
Influence of weather, rank, and home advantage on football outcomes in the Gulf region.
Brocherie, Franck; Girard, Olivier; Farooq, Abdulaziz; Millet, Grégoire P
2015-02-01
The objective of this study was to investigate the effects of weather, rank, and home advantage on international football match results and scores in the Gulf Cooperation Council (GCC) region. Football matches (n = 2008) in six GCC countries were analyzed. To determine the weather influence on the likelihood of favorable outcome and goal difference, generalized linear model with a logit link function and multiple regression analysis were performed. In the GCC region, home teams tend to have greater likelihood of a favorable outcome (P < 0.001) and higher goal difference (P < 0.001). Temperature difference was identified as a significant explanatory variable when used independently (P < 0.001) or after adjustment for home advantage and team ranking (P < 0.001). The likelihood of favorable outcome for GCC teams increases by 3% for every 1-unit increase in temperature difference. After inclusion of interaction with opposition, this advantage remains significant only when playing against non-GCC opponents. While home advantage increased the odds of favorable outcome (P < 0.001) and goal difference (P < 0.001) after inclusion of interaction term, the likelihood of favorable outcome for a GCC team decreased (P < 0.001) when playing against a stronger opponent. Finally, the temperature and wet bulb globe temperature approximation were found as better indicators of the effect of environmental conditions than absolute and relative humidity or heat index on match outcomes. In GCC region, higher temperature increased the likelihood of a favorable outcome when playing against non-GCC teams. However, international ranking should be considered because an opponent with a higher rank reduced, but did not eliminate, the likelihood of a favorable outcome.
Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan
2015-07-01
This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. The receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was evaluated in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. Increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and the correlation of the thyrotropin:thyroglobulin ratio to malignancy is higher than that for serum thyroid-stimulating hormone. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
Koffarnus, Mikhail N; Johnson, Matthew W; Thompson-Lake, Daisy G Y; Wesley, Michael J; Lohrenz, Terry; Montague, P Read; Bickel, Warren K
2016-08-01
Cocaine users have a higher incidence of risky sexual behavior and HIV infection than nonusers. Our aim was to measure whether safer sex discount rates-a measure of the likelihood of having immediate unprotected sex versus waiting to have safer sex-differed between controls and cocaine users of varying severity. Of the 162 individuals included in the primary data analyses, 69 met the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR) criteria for cocaine dependence, 29 were recreational cocaine users who did not meet the dependence criteria, and 64 were controls. Participants completed the Sexual Discounting Task, which measures a person's likelihood of using a condom when one is immediately available and how that likelihood decreases as a function of delay to condom availability with regard to 4 images chosen by the participants of hypothetical sexual partners differing in perceived desirability and likelihood of having a sexually transmitted infection. When a condom was immediately available, the stated likelihood of condom use sometimes differed between cocaine users and controls, which depended on the image condition. Even after controlling for rates of condom use when one is immediately available, the cocaine-dependent and recreational users groups were more sensitive to delay to condom availability than controls. Safer sex discount rates were also related to intelligence scores. The Sexual Discounting Task identifies delay as a key variable that impacts the likelihood of using a condom among these groups and suggests that HIV prevention efforts may be differentially effective based on an individual's safer sex discount rate. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated in this respect. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in R language. The influences of the subjective choices of likelihood formulation and acceptability threshold in GLUE on model outputs were also examined. The results indicated the following. (1) Parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) seemed to present better ability in accentuating high likelihood value simulations than the exponential function (L2) did. (3) The combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths and assuring the fitting goodness of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold value to balance the interests between high modeling efficiency and high bracketing efficiency. Results of this study could provide (1) an option to conduct NPS modeling under one single computer platform, (2) important references to the parameter setting for NPS model development in similar regions, (3) useful suggestions for the application of the GLUE method in studies with different emphases according to research interests, and (4) important insights into the watershed P management in similar regions.
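A stripped-down version of the GLUE workflow described above, applied to a stand-in one-parameter model rather than the authors' R phosphorus indicator, looks like the following; the Nash-Sutcliffe likelihood and the 0.55 acceptability threshold mirror the choices discussed in the abstract.

```python
# Bare-bones GLUE: Monte Carlo parameter sampling, NSE likelihood, behavioural threshold.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(50)

def model(theta):                      # placeholder one-parameter model
    return theta * np.exp(-t / 20.0)

observed = model(2.0) + rng.normal(0, 0.05, t.size)

def nse_likelihood(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform(0.5, 4.0, 5000)              # Monte Carlo parameter draws
L = np.array([nse_likelihood(model(th), observed) for th in samples])
behavioural = samples[L > 0.55]                    # keep runs above the acceptability threshold
print(behavioural.min(), behavioural.max())        # reduced parameter range around 2.0
```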
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.
Zierke, Stephanie; Bakos, Jason D
2010-04-12
Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
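The PLF kernel referred to above is essentially the Felsenstein pruning step: for every alignment site, the conditional likelihood vectors of two child nodes are pushed through their branch transition matrices and multiplied. The sketch below shows that inner computation with arbitrary stand-in matrices; it is not the MrBayes or FPGA implementation.

```python
# Core phylogenetic likelihood function step (Felsenstein pruning) for one internal node.
import numpy as np

def plf_combine(P_left, L_left, P_right, L_right):
    """P_*: (4, 4) branch transition matrices; L_*: (n_sites, 4) child conditional likelihoods."""
    # For each site and parent state s: sum_x P_left[s, x] * L_left[x], times the same for the right child.
    return (L_left @ P_left.T) * (L_right @ P_right.T)

rng = np.random.default_rng(8)
P = np.full((4, 4), 0.05) + np.eye(4) * 0.8                 # toy transition matrix (stand-in)
L_tip1 = np.eye(4)[rng.integers(0, 4, 1000)]                # observed tip states as one-hot vectors
L_tip2 = np.eye(4)[rng.integers(0, 4, 1000)]
L_parent = plf_combine(P, L_tip1, P, L_tip2)
print(L_parent.shape)                                       # (1000, 4): one conditional vector per site
```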
Mack, Jennifer W; Cook, E Francis; Wolfe, Joanne; Grier, Holcombe E; Cleary, Paul D; Weeks, Jane C
2007-04-10
Patients often overestimate their chances of surviving cancer. Factors that contribute to accurate understanding of prognosis are not known. We assessed understanding of likelihood of cure and functional outcome among parents of children with cancer and sought to identify factors that place parents at risk for overly optimistic beliefs about prognosis. We conducted a cross-sectional survey of 194 parents of children with cancer (response rate, 70%) who were treated at the Dana-Farber Cancer Institute and Children's Hospital in Boston, MA, and the children's physicians. Parent and physician expectations for likelihood of cure and functional outcome were compared. In 152 accurate or optimistic parents, we determined factors associated with accurate understanding of likelihood of cure compared with optimism. The majority of parents (61%) were more optimistic than physicians about the likelihood of cure. Parents' beliefs about other outcomes of cancer treatment were similar (quality-of-life impairment, P = .70) or more pessimistic (physical impairment, P = .01; intellectual impairment, P = .01) than physicians' beliefs. Parents and physicians were more likely to agree about chances of cure when physicians had confidence in knowledge of prognosis (odds ratio [OR] = 2.55, P = .004) and allowed parents to take their preferred decision-making role (OR = 1.89, P = .019). Parents of children with cancer are overly optimistic about chances of cure but not about other outcomes of cancer therapy. Parents tend to be overly optimistic about cure when physicians have little confidence and when the decision-making process does not meet parents' preferences. These findings suggest that physicians are partly responsible for parents' unrealistic expectations about cure.
NASA Technical Reports Server (NTRS)
Paradella, W. R. (Principal Investigator); Vitorello, I.; Monteiro, M. D.
1984-01-01
Enhancement techniques and thematic classifications were applied to the metasediments of the Bambui Super Group (Upper Proterozoic) in the region of Serra do Ramalho, SW of the state of Bahia. Linear contrast stretch, band-ratios with contrast stretch, and color composites allow lithological discrimination. The effects of human activities and of vegetation cover mask and limit, in several ways, the lithological discrimination with digital MSS data. Principal component images and color composites of linear contrast stretches of these products show lithological discrimination through tonal gradations. This set of products allows the delineation of several metasedimentary sequences at a level superior to reconnaissance mapping. Supervised (maximum likelihood classifier) and unsupervised (K-Means classifier) classification of the limestone sequence, host to fluorite mineralization, shows satisfactory results.
Nutrient balance affects foraging behaviour of a trap-building predator
Mayntz, David; Toft, Søren; Vollrath, Fritz
2009-01-01
Predator foraging may be affected by previous prey capture, but it is unknown how nutrient balance affects foraging behaviour. Here, we use a trap-building predator to test whether nutrients from previous prey captures affect foraging behaviour. We fed orb-weaving spiders (Zygiella x-notata) prey flies of different nutrient composition and in different amounts during their first instar and measured the subsequent frequency of web building and aspects of web architecture. We found that both the likelihood of web building and the number of radii in the web were affected by prey nutrient composition while prey availability affected capture area and mesh height. Our results show that both the balance of nutrients in captured prey and the previous capture rate may affect future foraging behaviour of predators. PMID:19640870
van Heeringen, Kees; Bijttebier, Stijn; Desmyter, Stefanie; Vervaet, Myriam; Baeken, Chris
2014-01-01
Objective: We conducted meta-analyses of functional and structural neuroimaging studies comparing adolescent and adult individuals with a history of suicidal behavior and a psychiatric disorder to psychiatric controls in order to objectify changes in brain structure and function in association with a vulnerability to suicidal behavior. Methods: Magnetic resonance imaging studies published up to July 2013 investigating structural or functional brain correlates of suicidal behavior were identified through computerized and manual literature searches. Activation foci from 12 studies encompassing 475 individuals, i.e., 213 suicide attempters and 262 psychiatric controls, were subjected to meta-analytical study using anatomic or activation likelihood estimation (ALE). Results: Activation likelihood estimation revealed structural deficits and functional changes in association with a history of suicidal behavior. Structural findings included reduced volumes of the rectal gyrus, superior temporal gyrus and caudate nucleus. Functional differences between study groups included an increased reactivity of the anterior and posterior cingulate cortices. Discussion: A history of suicidal behavior appears to be associated with (probably interrelated) structural deficits and functional overactivation in brain areas, which contribute to a decision-making network. The findings suggest that a vulnerability to suicidal behavior can be defined in terms of a reduced motivational control over the intentional behavioral reaction to salient negative stimuli. PMID:25374525
Controlled Fission: Teaching Supercharged Subjects.
ERIC Educational Resources Information Center
Pace, David
2003-01-01
Shaping classroom experiences before controversial material is encountered in a class increases the likelihood that students will maintain higher mental function while examining that material. Presents 10 strategies for planning a course that facilitates quality discussion and thoughtful debate. (SLD)
Quasi-likelihood generalized linear regression analysis of fatality risk data
DOT National Transportation Integrated Search
2009-01-01
Transportation-related fatality risk is a function of many interacting human, vehicle, and environmental factors. Statistically valid analysis of such data is challenged both by the complexity of plausible structural models relating fatality rates t...
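Quasi-likelihood generalized linear regression of rate or count data of this kind is often implemented as a Poisson-style fit with a separately estimated dispersion that inflates the standard errors. The sketch below uses synthetic counts and covariates as placeholders for the fatality data.

```python
# Quasi-Poisson sketch: IRLS coefficient estimates plus a Pearson dispersion estimate.
import numpy as np

rng = np.random.default_rng(9)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.4])
y = rng.poisson(np.exp(X @ beta_true)) + rng.integers(0, 3, n)   # overdispersed counts (synthetic)

beta = np.zeros(2)
for _ in range(25):                                  # IRLS for a log-link mean model
    mu = np.exp(X @ beta)
    W = mu                                           # working weights
    z = X @ beta + (y - mu) / mu                     # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

mu = np.exp(X @ beta)
phi = np.sum((y - mu) ** 2 / mu) / (n - X.shape[1])  # Pearson dispersion estimate
cov = phi * np.linalg.inv(X.T @ (mu[:, None] * X))   # quasi-likelihood covariance of beta
print(beta, phi, np.sqrt(np.diag(cov)))
```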
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
An indecent proposal: the dual functions of indirect speech.
Chakroff, Aleksandr; Thomas, Kyle A; Haque, Omar S; Young, Liane
2015-01-01
People often use indirect speech, for example, when trying to bribe a police officer by asking whether there might be "a way to take care of things without all the paperwork." Recent game theoretic accounts suggest that a speaker uses indirect speech to reduce public accountability for socially risky behaviors. The present studies examine a secondary function of indirect speech use: increasing the perceived moral permissibility of an action. Participants report that indirect speech is associated with reduced accountability for unethical behavior, as well as increased moral permissibility and increased likelihood of unethical behavior. Importantly, moral permissibility was a stronger mediator of the effect of indirect speech on likelihood of action, for judgments of one's own versus others' unethical action. In sum, the motorist who bribes the police officer with winks and nudges may not only avoid public punishment but also maintain the sense that his actions are morally permissible. Copyright © 2014 Cognitive Science Society, Inc.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
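The abstract above maximizes a Monte Carlo-estimated likelihood with "a form of gradient descent designed for stochastic functions." The sketch below uses simultaneous perturbation stochastic approximation (SPSA) as one such method; whether Chroma uses SPSA specifically is an assumption, and noisy_loglike is a stand-in for a Monte Carlo likelihood evaluation at an event hypothesis.

```python
# Sketch: maximizing a noisy (Monte Carlo-estimated) log-likelihood with SPSA.
import numpy as np

rng = np.random.default_rng(1)

def noisy_loglike(theta):
    """Stand-in for a Monte Carlo likelihood: true optimum at (1.0, -2.0)."""
    true = np.array([1.0, -2.0])
    return -np.sum((theta - true) ** 2) + 0.05 * rng.normal()

def spsa_maximize(f, theta0, iters=500, a=0.1, c=0.1):
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101        # standard SPSA gain decay
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        grad = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta + ak * grad                       # ascent on the likelihood
    return theta

print(spsa_maximize(noisy_loglike, [0.0, 0.0]))
```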
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
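The following sketch illustrates the cost-function-minimization idea on a toy first-order system, analogous to extracting a single stability derivative from response data. The model, parameter value, and noise level are illustrative assumptions, not flight data or the paper's actual formulation.

```python
# Sketch: output-error / maximum-likelihood estimation for a toy dynamic system.
import numpy as np
from scipy.optimize import minimize_scalar

dt, n = 0.05, 200
t = np.arange(n) * dt
rng = np.random.default_rng(2)

def simulate(a, x0=1.0):
    """First-order decay x' = -a*x, a stand-in for a dynamic derivative."""
    return x0 * np.exp(-a * t)

measured = simulate(a=1.8) + 0.02 * rng.normal(size=n)   # simulated "flight" data

def cost(a):
    # Gaussian ML with fixed noise variance reduces to a sum-of-squares cost.
    r = measured - simulate(a)
    return 0.5 * np.sum(r ** 2)

fit = minimize_scalar(cost, bounds=(0.1, 5.0), method="bounded")
print("estimated derivative:", fit.x)
```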
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they have become net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future under ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research a developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the model goodness metric on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters that have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood formulation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
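The abstract names four likelihood formulations built from an uncertainty-weighted RMSE but does not give their exact functional forms; the sketch below shows one plausible set of expressions purely for illustration.

```python
# Sketch: alternative goodness-of-fit "likelihoods" built from an
# uncertainty-weighted RMSE. The exact formulas used in the study are not given
# in the abstract, so these expressions are illustrative guesses.
import numpy as np

def weighted_rmse(sim, obs, sigma):
    return np.sqrt(np.mean(((sim - obs) / sigma) ** 2))

def likelihoods(sim, obs, sigma):
    r = weighted_rmse(sim, obs, sigma)
    corr = np.corrcoef(sim, obs)[0, 1]
    return {
        "exponential": np.exp(-r),            # sharply rewards small errors
        "linear": max(0.0, 1.0 - r),          # decays linearly with error
        "quadratic": max(0.0, 1.0 - r ** 2),
        "linear_by_corr": max(0.0, 1.0 - r) * corr,
    }

obs = np.array([2.1, 2.5, 3.0, 2.8, 3.4])
sim = np.array([2.0, 2.6, 2.9, 3.0, 3.3])
print(likelihoods(sim, obs, sigma=0.2))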
In silico identification of functional regions in proteins.
Nimrod, Guy; Glaser, Fabian; Steinberg, David; Ben-Tal, Nir; Pupko, Tal
2005-06-01
In silico prediction of functional regions on protein surfaces, i.e. sites of interaction with DNA, ligands, substrates and other proteins, is of utmost importance in various applications in the emerging fields of proteomics and structural genomics. When a sufficient number of homologs is found, powerful prediction schemes can be based on the observation that evolutionarily conserved regions are often functionally important. Typically, however, only the principal functionally important region of the protein is detected, while secondary functional regions with weaker conservation signals are overlooked. Moreover, it is challenging to unambiguously identify the boundaries of the functional regions. We present a new methodology, called PatchFinder, that automatically identifies patches of conserved residues that are located in close proximity to each other on the protein surface. PatchFinder is based on the following steps: (1) assignment of conservation scores to each amino acid position on the protein surface; (2) assignment of a score to each putative patch, based on its likelihood to be functionally important. The patch of maximum likelihood is considered to be the main functionally important region, and the search is continued for non-overlapping patches of secondary importance. We examined the accuracy of the method using the IGPS enzyme, the SH2 domain and a benchmark set of 112 proteins. These examples demonstrated that PatchFinder is capable of identifying both the main and secondary functional patches. The PatchFinder program is available at: http://ashtoret.tau.ac.il/~nimrodg/
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of the catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on a single-objective likelihood (streamflow or LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and corresponding posterior distributions to examine the parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
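A minimal sketch of the weighted multi-objective likelihood idea follows: a Gaussian log-likelihood for streamflow and one for LAI are combined with weights, as such a term might appear inside an MCMC sampler. The weights, error standard deviations, and variable names are assumptions for illustration only.

```python
# Sketch: weighted multi-objective Gaussian log-likelihood (streamflow + LAI).
import numpy as np

def log_likelihood(sim_q, obs_q, sim_lai, obs_lai,
                   sigma_q=1.0, sigma_lai=0.3, w_q=0.5, w_lai=0.5):
    ll_q = -0.5 * np.sum(((sim_q - obs_q) / sigma_q) ** 2
                         + np.log(2 * np.pi * sigma_q ** 2))
    ll_lai = -0.5 * np.sum(((sim_lai - obs_lai) / sigma_lai) ** 2
                           + np.log(2 * np.pi * sigma_lai ** 2))
    return w_q * ll_q + w_lai * ll_lai    # weighted combination of objectives

obs_q = np.array([3.2, 4.1, 2.8]); sim_q = np.array([3.0, 4.4, 2.9])
obs_lai = np.array([1.5, 2.2]);    sim_lai = np.array([1.4, 2.5])
print(log_likelihood(sim_q, obs_q, sim_lai, obs_lai))
```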
The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data
NASA Technical Reports Server (NTRS)
Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.;
2013-01-01
The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ2/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
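The sketch below shows the generic form of a Gaussian bandpower likelihood of the kind used to compare a model spectrum to measured spectra; the data vector, covariance, and model values are toy stand-ins and not the ACT likelihood software.

```python
# Sketch: Gaussian bandpower likelihood, ln L = -0.5 [ r^T C^-1 r + ln|C| + n ln 2pi ].
import numpy as np

def gaussian_loglike(data_bandpowers, model_bandpowers, covariance):
    r = data_bandpowers - model_bandpowers
    cinv = np.linalg.inv(covariance)
    _, logdet = np.linalg.slogdet(covariance)
    return -0.5 * (r @ cinv @ r + logdet + r.size * np.log(2 * np.pi))

data = np.array([2200.0, 900.0, 250.0, 60.0])     # toy bandpower measurements
model = np.array([2150.0, 930.0, 240.0, 65.0])    # toy CMB + secondaries model
cov = np.diag([80.0, 40.0, 15.0, 6.0]) ** 2       # toy diagonal covariance
print("ln L =", gaussian_loglike(data, model, cov))
```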
Stable-Carbon Isotopic Composition of Maple Sap and Foliage
Leavitt, Steven W.; Long, Austin
1985-01-01
The 13C/12C ratios of Acer grandidentatum sap sugar collected during the dormant period are compared to those of buds, leaves, and wood developed over the following growing season. As the primary carbon source for cellulose manufacture at initiation of annual growth in deciduous trees, sap sucrose would be expected to have an isotopic composition similar to first-formed cellulose. Although constancy in concentration and 13C/12C ratios of the maple sap sugar suggests any gains or losses (e.g. to maintenance metabolism) do not appreciably alter composition, the 13C/12C ratios of cellulose of the enlarging buds in the spring are quite distinct from those of the sap sugar, seemingly precluding a simple direct biochemical pathway of sap sucrose→glucose→cellulose in favor of a more complex pathway with greater likelihood of isotopic fractionation. The 13C/12C ratios of the leaves and in the growth ring were initially similar to the sap sugar but decreased steadily over the growing season. PMID:16664259
Stable-carbon isotopic composition of maple sap and foliage.
Leavitt, S W; Long, A
1985-06-01
The (13)C/(12)C ratios of Acer grandidentatum sap sugar collected during the dormant period are compared to those of buds, leaves, and wood developed over the following growing season. As the primary carbon source for cellulose manufacture at initiation of annual growth in deciduous trees, sap sucrose would be expected to have an isotopic composition similar to first-formed cellulose. Although constancy in concentration and (13)C/(12)C ratios of the maple sap sugar suggests any gains or losses (e.g. to maintenance metabolism) do not appreciably alter composition, the (13)C/(12)C ratios of cellulose of the enlarging buds in the spring are quite distinct from those of the sap sugar, seemingly precluding a simple direct biochemical pathway of sap sucrose-->glucose-->cellulose in favor of a more complex pathway with greater likelihood of isotopic fractionation. The (13)C/(12)C ratios of the leaves and in the growth ring were initially similar to the sap sugar but decreased steadily over the growing season.
A New Lifetime Distribution with Bathtub and Unimodal Hazard Function
NASA Astrophysics Data System (ADS)
Barriga, Gladys D. C.; Louzada-Neto, Francisco; Cancho, Vicente G.
2008-11-01
In this paper we propose a new lifetime distribution which accommodates bathtub-shaped, unimodal, increasing and decreasing hazard functions. Some particular cases are derived, including the standard Weibull distribution. Maximum likelihood estimation is considered for estimating the three parameters present in the model. The methodology is illustrated on a real data set of industrial devices in a life test.
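The paper's specific distribution is not given in the abstract, so the sketch below fits the exponentiated Weibull distribution instead, which also admits bathtub-shaped and unimodal hazards and contains the standard Weibull as a special case; the lifetimes are simulated, not the paper's industrial device data.

```python
# Sketch: maximum-likelihood fit of a three-parameter lifetime model (exponentiated Weibull).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lifetimes = stats.exponweib.rvs(a=0.6, c=2.5, scale=100.0, size=300,
                                random_state=rng)   # toy device lifetimes

# MLE of the two shape parameters (a, c) and the scale, with location fixed at zero.
a_hat, c_hat, loc_hat, scale_hat = stats.exponweib.fit(lifetimes, floc=0)
print(a_hat, c_hat, scale_hat)

# Log-likelihood at the fitted parameters, e.g. for model comparison.
ll = np.sum(stats.exponweib.logpdf(lifetimes, a_hat, c_hat,
                                   loc=loc_hat, scale=scale_hat))
print("log-likelihood:", ll)
```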
Neerhof, H J; Madsen, P; Ducrocq, V P; Vollema, A R; Jensen, J; Korsgaard, I R
2000-05-01
The relationship between mastitis and functional longevity was assessed with survival analysis on data of Danish Black and White dairy cows. Different methods of including the effect of mastitis treatment on the culling decision by a farmer in the model were compared. The model in which mastitis treatment was assumed to have an effect on functional longevity until the end of the lactation had the highest likelihood, and the model in which mastitis treatment had an effect for only a short period had the lowest likelihood. A cow with mastitis had 1.69 times greater risk of being culled than did a healthy herdmate with all other effects being the same. A model without mastitis treatment was used to predict transmitting abilities of bulls for risk of being culled, based on longevity records of their daughters, and was expressed in terms of risk of being culled. The correlation between the risk of being culled and the national evaluations of the bulls for mastitis resistance was approximately -0.4, indicating that resistance against mastitis was genetically correlated with a lower risk of being culled and, thus, a longer functional length of productive life.
Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo
2015-12-01
Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin
2014-01-01
Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874
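The sketch below illustrates the greedy accept/reject rule described above: a candidate connection is toggled only if it improves the cross-validated log-likelihood. The fit_and_score callable is a placeholder for fitting the GLM on training folds and scoring held-out likelihood; it and the toy scoring function are assumptions, not the authors' implementation.

```python
# Sketch: greedy connection selection driven by cross-validated likelihood.
from itertools import product

def greedy_edge_selection(candidate_edges, fit_and_score, n_rounds=3):
    edges = set()
    best = fit_and_score(edges)
    for _ in range(n_rounds):
        improved = False
        for e in candidate_edges:
            trial = edges ^ {e}            # toggle: add if absent, remove if present
            score = fit_and_score(trial)
            if score > best:               # keep only likelihood-improving changes
                edges, best, improved = trial, score, True
        if not improved:
            break
    return edges, best

# Toy usage: the "true" network contains edges (0, 1) and (2, 3).
def toy_score(edges):
    target = {(0, 1), (2, 3)}
    return -len(edges ^ target)

cands = [(i, j) for i, j in product(range(4), range(4)) if i < j]
print(greedy_edge_selection(cands, toy_score))
```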
NASA Astrophysics Data System (ADS)
Baluev, Roman V.
2013-08-01
We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the goals of exoplanet detection, characterization, and basic dynamical N-body simulations. PlanetPack is a command-line interpreter that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with the proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N-body RV fits; (iii) use of more efficient maximum-likelihood periodograms that involve the full multi-planet fitting (sometimes called “residual” or “recursive” periodograms); (iv) easily computable parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) user-friendly fitting under useful functional constraints; (vi) basic tasks of short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss-Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for determining statistical significance. It is planned that further functionality will be added to PlanetPack in the future. During the development of this software, considerable effort was made to improve the computational speed, especially for CPU-demanding tasks. PlanetPack was written in pure C++ (the 1998/2003 standard) and is expected to be compilable and usable on a wide range of platforms.
Factors affecting the reproductive success of dominant male meerkats.
Spong, Göran F; Hodge, Sarah J; Young, Andrew J; Clutton-Brock, Tim H
2008-05-01
Identifying traits that affect the reproductive success of individuals is fundamental for our understanding of evolutionary processes. In cooperative breeders, a dominant male typically restricts mating access to the dominant female for extended periods, resulting in pronounced variation in reproductive success among males. This may result in strong selection for traits that increase the likelihood of dominance acquisition, dominance retention and reproductive rates while dominant. However, despite considerable research on reproductive skew, few studies have explored the factors that influence these three processes among males in cooperative species. Here we use genetic, behavioural and demographic data to investigate the factors affecting reproductive success in dominant male meerkats (Suricata suricatta). Our data show that dominant males sire the majority of all offspring surviving to 1 year. A male's likelihood of becoming dominant is strongly influenced by age, but not by weight. Tenure length and reproductive rate, both important components of dominant male reproductive success, are largely affected by group size and composition, rather than individual traits. Dominant males in large groups have longer tenures, but after this effect is controlled, male tenure length also correlates negatively to the number of adult females in the group. Male reproductive rate also declines as the number of intra- and extra-group competitors increases. As the time spent in the dominant position and reproductive rate while dominant explain > 80% of the total variance in reproductive success, group composition thus has major implications for male reproductive success.
Spatial dependence of extreme rainfall
NASA Astrophysics Data System (ADS)
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Satari, Siti Zanariah; Azman, Muhammad Az-zuhri
2017-05-01
This study aims to model the spatial extreme daily rainfall process using max-stable models. Max-stable models are used to capture the dependence structure of the spatial properties of extreme rainfall. Three max-stable models are considered, namely the Smith, Schlather and Brown-Resnick models. The methods are applied to 12 selected rainfall stations in Kelantan, Malaysia. Most of the extreme rainfall data occur during the wet season, from October to December of 1971 to 2012. This period is chosen to ensure that the available data are sufficient to satisfy the assumption of stationarity. The dependence parameters, including the range and smoothness, are estimated using a composite likelihood approach. Then, the bootstrap approach is applied to generate synthetic extreme rainfall data for all models using the estimated dependence parameters. The goodness of fit between the observed extreme rainfall and the synthetic data is assessed using the composite likelihood information criterion (CLIC). Results show that the Schlather model is the best, followed by the Brown-Resnick and Smith models, based on the smallest CLIC value. Thus, the max-stable model is suitable for modelling extreme rainfall in Kelantan. The study of spatial dependence in extreme rainfall modelling is important to reduce the uncertainties of the point estimates for the tail index. If the spatial dependency is estimated individually, the uncertainties will be large. Furthermore, when the joint return level is of interest, taking the spatial dependence properties into account will improve the estimation process.
Functionally Graded Multifunctional Hybrid Composites for Extreme Environments
2010-02-01
Develop multifunctional FGHC with multiple layers: a ceramic thermal barrier layer, a graded ceramic/metal composite (GCMeC) layer, and a high... (The remainder of this record consists of fragmentary heading text from the AFOSR-MURI Functionally Graded Hybrid Composites program, covering actively cooled PMC work (UIUC), the FGHC fabrication team, graded ceramic-metal composites, and fabrication and characterization of bulk ceramic MAX phase and MAX-metal composites.)
Exact likelihood evaluations and foreground marginalization in low resolution WMAP data
NASA Astrophysics Data System (ADS)
Slosar, Anže; Seljak, Uroš; Makarov, Alexey
2004-06-01
The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favorite cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK2. On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK2, the probability of having the true value above 1200 μK2 (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving αs=-0.022±0.033 for the combined case. The new analysis prefers about a 1σ lower value of Ωm, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified, physical laws into the model, at the cost of a substantial increment of usable uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
Two stochastic models useful in petroleum exploration
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1972-01-01
A model of the petroleum exploration process that tests empirically the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of areal extent, geographic location, and shape of oil deposits.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
Quantifying (dis)agreement between direct detection experiments in a halo-independent way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk
We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
Lateral OFC activity predicts decision bias due to first impressions during ultimatum games.
Kim, Hackjin; Choi, Min-Jo; Jang, In-Ji
2012-02-01
Despite the prevalence and potentially harmful consequences of first impression bias during social decision-making, its precise neural underpinnings remain unclear. Here, on the basis of the fMRI study using ultimatum games, the authors show that the responders' decisions to accept or reject offers were significantly affected by facial trustworthiness of proposers. Analysis using a model-based fMRI method revealed that activity in the right lateral OFC (lOFC) of responders increased as a function of negative decision bias, indicating a greater likelihood of rejecting otherwise fair offers, possibly because of the facial trustworthiness of proposers. In addition, lOFC showed changes in functional connectivity strength with amygdala and insula as a function of decision bias, and individual differences in the strengths of connectivities between lOFC and bilateral insula were also found to predict the likelihood of responders to reject offers from untrustworthy-looking proposers. The present findings emphasize that the lOFC plays a pivotal role in integrating signals related to facial impression and creating signal biasing decisions during social interactions.
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit might be censored: right, left, interval, or partly interval censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model via PIC data. Moreover, several imputation techniques were used, namely: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for the data set indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median showed better results with respect to the estimation of the survival function.
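The sketch below shows the imputation step for interval-censored observations before a standard survival model is fit. Only the midpoint, left, right, and random schemes are shown; the exact definitions of the paper's mean and median imputations are not given in the abstract, so they are omitted. The interval bounds are toy values.

```python
# Sketch: imputing single event times from censoring intervals (L, R).
import numpy as np

rng = np.random.default_rng(4)

def impute_interval(left, right, method="midpoint"):
    left, right = np.asarray(left, float), np.asarray(right, float)
    if method == "midpoint":
        return (left + right) / 2.0
    if method == "left":
        return left
    if method == "right":
        return right
    if method == "random":
        return rng.uniform(left, right)   # uniform draw inside each interval
    raise ValueError(method)

L = [2.0, 5.0, 1.0]   # lower bounds of censoring intervals (months)
R = [4.0, 9.0, 3.0]   # upper bounds
print(impute_interval(L, R, "midpoint"))
print(impute_interval(L, R, "random"))
```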
Statistical Signal Processing and the Motor Cortex
Brockwell, A.E.; Kass, R.E.; Schwartz, A.B.
2011-01-01
Over the past few decades, developments in technology have significantly improved the ability to measure activity in the brain. This has spurred a great deal of research into brain function and its relation to external stimuli, and has important implications in medicine and other fields. As a result of improved understanding of brain function, it is now possible to build devices that provide direct interfaces between the brain and the external world. We describe some of the current understanding of function of the motor cortex region. We then discuss a typical likelihood-based state-space model and filtering based approach to address the problems associated with building a motor cortical-controlled cursor or robotic prosthetic device. As a variation on previous work using this approach, we introduce the idea of using Markov chain Monte Carlo methods for parameter estimation in this context. By doing this instead of performing maximum likelihood estimation, it is possible to expand the range of possible models that can be explored, at a cost in terms of computational load. We demonstrate results obtained applying this methodology to experimental data gathered from a monkey. PMID:21765538
Extending the BEAGLE library to a multi-FPGA platform
2013-01-01
Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
Robust, Adaptive Radar Detection and Estimation
2015-07-21
cost function is not a convex function in R, we apply a transformation of variables, i.e., let X = σ²R⁻¹ and S′ = (1/σ²)S. Then, the revised cost function in... Σᵢ vᵢvᵢᴴ. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank-Constrained Maximum Likelihood: Our... even as almost all available training samples are corrupted. Probability of Detection vs. SNR: We apply three test statistics, the normalized matched...
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ --> e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ --> e+ ν, π+ --> μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
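The sketch below shows the event-by-event mixture-likelihood idea in a reduced form: each event's observable is assigned a probability under each process hypothesis, and the process fractions are fit by maximizing the total likelihood. The real analysis uses five processes and multi-dimensional observables; here a two-component, one-dimensional toy with assumed Gaussian PDFs is used instead.

```python
# Sketch: event-level mixture likelihood with a fitted process fraction.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Toy "energy" observable: two processes with different known PDFs.
pdf_signal = stats.norm(loc=70.0, scale=3.0).pdf       # e.g. a pi -> e nu peak
pdf_background = stats.norm(loc=40.0, scale=15.0).pdf  # e.g. muon-chain events
energies = np.concatenate([rng.normal(70, 3, 120), rng.normal(40, 15, 880)])

def neg_loglike(params):
    frac = params[0]                                    # signal fraction
    like = frac * pdf_signal(energies) + (1 - frac) * pdf_background(energies)
    return -np.sum(np.log(like))

fit = minimize(neg_loglike, x0=[0.5], bounds=[(1e-6, 1 - 1e-6)])
print("fitted signal fraction:", fit.x[0])   # true value is about 0.12
```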
Reyes-Valdés, M H; Stelly, D M
1995-01-01
Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900, through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer-month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
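A minimal sketch of the idea follows: a maximum-likelihood logistic regression relates the probability of a summer drought-flow event to the preceding winter streamflow. The data, threshold, and single predictor are toy assumptions; the published models use basin-specific fits.

```python
# Sketch: logistic regression of summer drought occurrence on winter streamflow.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
winter_flow = rng.lognormal(mean=3.0, sigma=0.5, size=300)       # winter flow, cfs
logit = 6.0 - 2.0 * np.log(winter_flow)                          # assumed true relation
p_drought = 1.0 / (1.0 + np.exp(-logit))
summer_drought = rng.binomial(1, p_drought)                      # 1 = flow below threshold

X = sm.add_constant(np.log(winter_flow))
fit = sm.Logit(summer_drought, X).fit(disp=0)
print(fit.params)                                                # intercept, log-flow slope
print(fit.predict(sm.add_constant(np.log([15.0, 40.0]))))        # predicted probabilities
```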
Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy
2011-07-15
The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function: LL(ρ(x)) = ln[ p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x) ], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
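The function below re-expresses that per-point local log-likelihood as code: a log of a mixture over the region hypotheses (protein, solvent, known motif). The Gaussian conditional densities and the prior probabilities are illustrative choices, not values from the patent.

```python
# Sketch: local log-likelihood of an electron density value at one grid point.
import numpy as np
from scipy import stats

def local_log_likelihood(rho, p_prot, p_solv, p_motif,
                         prot_pdf, solv_pdf, motif_pdf):
    """log[ p(rho|PROT)p_PROT + p(rho|SOLV)p_SOLV + p(rho|H)p_H ] at one point."""
    mix = (prot_pdf(rho) * p_prot
           + solv_pdf(rho) * p_solv
           + motif_pdf(rho) * p_motif)
    return np.log(mix)

# Example: density value at a point with 60% protein / 30% solvent / 10% motif prior.
ll = local_log_likelihood(
    rho=0.8, p_prot=0.6, p_solv=0.3, p_motif=0.1,
    prot_pdf=stats.norm(1.0, 0.4).pdf,     # protein densities tend to be higher
    solv_pdf=stats.norm(0.0, 0.2).pdf,     # solvent densities cluster near zero
    motif_pdf=stats.norm(1.2, 0.3).pdf)    # expected density given the motif
print(ll)
```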
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
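The sketch below illustrates the core computation: principal components analysis of a correlation matrix estimated from an ensemble of superposed structures. A plain sample correlation matrix of toy coordinates stands in for the maximum-likelihood estimate used by the method above.

```python
# Sketch: PCA of an inter-atom correlation matrix from a structural ensemble.
import numpy as np

rng = np.random.default_rng(7)
n_models, n_atoms = 20, 50
# Toy ensemble: per-atom x-coordinates after superposition (models x atoms).
coords = rng.normal(size=(n_models, n_atoms))
coords[:, :10] += rng.normal(size=(n_models, 1))   # a block of correlated atoms

corr = np.corrcoef(coords, rowvar=False)           # atoms x atoms correlations
eigvals, eigvecs = np.linalg.eigh(corr)            # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
principal_modes = eigvecs[:, order[:3]]            # top 3 correlation modes
print("top eigenvalues:", eigvals[order[:3]])
```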
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
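For reference, the conventional binormal ROC curve takes the form ROC(t) = Φ(a + b Φ⁻¹(t)), which is "improper" (it develops a hook and crosses the chance line) whenever b ≠ 1. The sketch below evaluates it for two parameter settings; the values of a and b are arbitrary illustrations.

```python
# Sketch: the binormal ROC curve ROC(t) = Phi(a + b * Phi^{-1}(t)).
import numpy as np
from scipy.stats import norm

def binormal_roc(fpr, a, b):
    return norm.cdf(a + b * norm.ppf(fpr))

fpr = np.linspace(1e-4, 1 - 1e-4, 9)
print(binormal_roc(fpr, a=1.0, b=0.6))    # b != 1: dips below chance near fpr -> 1
print(binormal_roc(fpr, a=1.0, b=1.0))    # b = 1: a proper, concave curve
```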
Prosocial Bystander Behavior in Bullying Dynamics: Assessing the Impact of Social Capital.
Evans, Caroline B R; Smokowski, Paul R
2015-12-01
Individuals who observe a bullying event, but are not directly involved as a bully or victim, are referred to as bystanders. Prosocial bystanders are those individuals who actively intervene in bullying dynamics to support the victim and this prosocial behavior often ends the bullying. The current study examines how social capital in the form of social support, community engagement, mental health functioning, and positive school experiences and characteristics is associated with the likelihood of engaging in prosocial bystander behavior in a large sample (N = 5752; 51.03% female) of racially/ethnically diverse rural youth. It was hypothesized that social capital would be associated with an increased likelihood of engaging in prosocial bystander behavior. Following multiple imputation, an ordered logistic regression with robust standard errors was run. The hypothesis was partially supported and results indicated that social capital in the form of friend and teacher support, ethnic identity, religious orientation, and future optimism were significantly associated with an increased likelihood of engaging in prosocial bystander behavior. Contrary to the hypothesis, a decreased rate of self-esteem was significantly associated with an increased likelihood of engaging in prosocial bystander behavior. The findings highlight the importance of positive social relationships and community engagement in increasing prosocial bystander behavior and ultimately decreasing school bullying. Implications were discussed.
75 FR 65054 - General Motors, LLC, Receipt of Petition for Decision of Inconsequential Noncompliance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-21
... reduce the likelihood of shifting errors.'' Thus, in all but the rarest circumstances, the primary function of the PRNDM display is to inform the driver of gear selection and relative position of the gears...
7 CFR 623.9 - Easement priority.
Code of Federal Regulations, 2011 CFR
2011-01-01
... restored, (e) Wetland function or values, (f) Likelihood of successful restoration of wetland values, (g... AGRICULTURE WATER RESOURCES EMERGENCY WETLANDS RESERVE PROGRAM § 623.9 Easement priority. The State... government expenditure on restoration and easement purchase. The factors for determining the priority for...
Cycles till failure of silver-zinc cells with competing failure modes: Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
One hundred twenty-nine cells were run through charge-discharge cycles until failure. The experiment design was a variant of a central composite factorial in five factors. Preliminary data analysis consisted of response surface estimation of life. Batteries fail under two basic modes: a low-voltage condition and an internal shorting condition. A competing failure modes analysis using maximum likelihood estimation for the extreme value life distribution was performed. Extensive diagnostics such as residual plotting and probability plotting were employed to verify data quality and the choice of model.
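The sketch below illustrates competing-failure-modes maximum likelihood with two Weibull (extreme-value-type) life distributions: each cell's observed cycle count is the minimum of two latent failure times, and the observed mode determines which density enters the likelihood. All distributions and numbers are toy assumptions, not the experiment's data or its exact life model.

```python
# Sketch: MLE for two competing failure modes with Weibull life distributions.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(8)
t_low_volt = stats.weibull_min.rvs(c=2.0, scale=400.0, size=129, random_state=rng)
t_short = stats.weibull_min.rvs(c=1.5, scale=550.0, size=129, random_state=rng)
cycles = np.minimum(t_low_volt, t_short)         # observed cycles to failure
mode = (t_short < t_low_volt).astype(int)        # 0 = low voltage, 1 = short

def neg_loglike(params):
    c1, s1, c2, s2 = params
    if min(params) <= 0:
        return np.inf
    # Failure by one mode contributes its density times the other mode's survival.
    ll0 = stats.weibull_min.logpdf(cycles, c1, scale=s1) \
        + stats.weibull_min.logsf(cycles, c2, scale=s2)
    ll1 = stats.weibull_min.logpdf(cycles, c2, scale=s2) \
        + stats.weibull_min.logsf(cycles, c1, scale=s1)
    return -np.sum(np.where(mode == 0, ll0, ll1))

fit = minimize(neg_loglike, x0=[1.0, 300.0, 1.0, 300.0], method="Nelder-Mead")
print(fit.x)   # estimated (shape, scale) for each failure mode
```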
NASA Astrophysics Data System (ADS)
Zolotov, Mikhail
2018-01-01
The chemical and phase compositions of Venus's surface could reflect the history of gas- and fluid-rock interactions, recent and past climate changes, and a loss of water from the Earth's sister planet. The concept of chemical weathering on Venus through gas-solid type reactions was established in the 1960s after the discovery of the hot and dense CO2-rich atmosphere inferred from Earth-based and Mariner 2 radio emission data. Initial works suggested carbonation, hydration, and oxidation of exposed igneous rocks and a control (buffering) of atmospheric gases by solid-gas type chemical equilibria in the near-surface lithosphere. Calcite, quartz, wollastonite, amphiboles, and Fe oxides were considered likely secondary minerals. Since the late 1970s, measurements of trace gases in the sub-cloud atmosphere by Pioneer Venus and Venera entry probes and by Earth-based infrared spectroscopy have cast doubt on the likelihood of hydration and carbonation. The H2O gas content appeared to be too low to allow the stable existence of hydrated minerals and a majority of OH-bearing minerals. The concentration of SO2 was too high to allow the stability of calcite and Ca-rich silicates with respect to sulfatization to CaSO4. In the 1980s, the supposed ongoing consumption of atmospheric SO2 to sulfates gained support from the detection of an elevated bulk S content at the Venera and Vega landing sites. The inferred composition of the near-surface atmosphere implied oxidation of ferrous minerals to magnetite and hematite, consistent with the infrared reflectance of surface materials. The likelihood of sulfatization and oxidation has been illustrated in modeling experiments at simulated Venus conditions. Venus's surface morphology suggests that hot surface rocks and fines of mainly mafic composition have contacted atmospheric gases during the several hundred million years since a global volcanic resurfacing. Some exposed materials could have reacted at higher and lower temperatures in the presence of diverse gases at different altitudinal, volcanic, impact, and atmospheric settings. On highly deformed tessera terrains, more ancient rocks of unknown composition could reflect interactions with putative water-rich atmospheres and even aqueous solutions. Salt-, Fe oxide-, or silica-rich formations would indicate past aqueous processes. The apparent diversity of affected solids, surface temperatures, pressures, and gas/fluid compositions throughout Venus's history implies multiple signs of chemical alteration, which remain to be investigated. The current understanding of chemical weathering is limited by the uncertain composition of the deep atmosphere, by the lack of direct data on the phase composition of surface materials, and by the uncertain data on the thermodynamics of minerals and their solid solutions. In preparation for further entry probe and lander missions, rock alteration needs to be investigated through chemical kinetic experiments and calculations of solid-gas (fluid) equilibria to constrain past and present processes.
Measuring Work Functioning: Validity of a Weighted Composite Work Functioning Approach.
Boezeman, Edwin J; Sluiter, Judith K; Nieuwenhuijsen, Karen
2015-09-01
To examine the construct validity of a weighted composite work functioning measurement approach. Workers (health-impaired/healthy) (n = 117) completed a composite measure survey that recorded four central work functioning aspects with existing scales: capacity to work, quality of work performance, quantity of work, and recovery from work. Previously derived weights reflecting the relative importance of these aspects of work functioning were used to calculate the composite weighted work functioning score of the workers. Work role functioning, productivity, and quality of life were used for validation. Correlations were calculated and norms applied to examine convergent and divergent construct validity. A t test was conducted and a norm applied to examine discriminative construct validity. Overall, the weighted composite work functioning measure demonstrated construct validity. As predicted, the weighted composite score correlated (p < .001) strongly (r > .60) with work role functioning and productivity (convergent construct validity), and moderately (.30 < r < .60) with physical quality of life and less strongly than work role functioning and productivity with mental quality of life (divergent validity). Further, the weighted composite measure detected, with a large effect size (Cohen's d > .80), that health-impaired workers show significantly worse work functioning than healthy workers (discriminative validity). The weighted composite work functioning measurement approach takes into account the relative importance of the different work functioning aspects and demonstrated good convergent, fair divergent, and good discriminative construct validity.
A Sm-Nd isotopic study of atmospheric dusts and particulates from major river systems
NASA Technical Reports Server (NTRS)
Goldstein, S. L.; Onions, R. K.; Hamilton, P. J.
1984-01-01
Nd-143/Nd-144 ratios, together with Sm and Nd abundances, are given for particulates from major and minor rivers as well as continental sediments and aeolian dusts collected over the Atlantic, Pacific, and Indian Oceans. In combination with data from the literature, the present results have implications for the age, history, and composition of the sedimentary mass and the continental crust. It is noted that the average ratio of Sm/Nd is about 0.19 in the upper continental crust, and has remained so since the early Archean, thereby precluding the likelihood of major mafic-to-felsic or felsic-to-mafic trends in the overall composition of the upper continental crust through earth history. The average 'crustal residence age' of the entire sedimentary mass is about 1.9 Ga.
NASA Technical Reports Server (NTRS)
Temple, Enoch C.
1994-01-01
The space industry has developed many composite materials that have high durability in proportion to their weights. Many of these materials are more prone to flaws than traditional metals. There are also coverings (such as paint) that develop flaws that may adversely affect the performance of the system in which they are used. Therefore, there is a need to monitor the soundness of composite structures. To meet this monitoring need, many nondestructive evaluation (NDE) systems have been developed. An NDE system is designed to detect material flaws and make flaw measurements without destroying the inspected item. The detection operation is also expected to be performed rapidly in a field or production environment. Some of the most recent video-based NDE methodologies are shearography, holography, thermography, and video image correlation.
Aarts, Esther; Roelofs, Ardi; van Turennout, Miranda
2008-04-30
Previous studies have found no agreement on whether anticipatory activity in the anterior cingulate cortex (ACC) reflects upcoming conflict, error likelihood, or actual control adjustments. Using event-related functional magnetic resonance imaging, we investigated the nature of preparatory activity in the ACC. Informative cues told the participants whether an upcoming target would or would not involve conflict in a Stroop-like task. Uninformative cues provided no such information. Behavioral responses were faster after informative than after uninformative cues, indicating cue-based adjustments in control. ACC activity was larger after informative than uninformative cues, as would be expected if the ACC is involved in anticipatory control. Importantly, this activation in the ACC was observed for informative cues even when the information conveyed by the cue was that the upcoming target evokes no response conflict and has low error likelihood. This finding demonstrates that the ACC is involved in anticipatory control processes independent of upcoming response conflict or error likelihood. Moreover, the response of the ACC to the target stimuli was critically dependent on whether the cue was informative or not. ACC activity differed among target conditions after uninformative cues only, indicating ACC involvement in actual control adjustments. Together, these findings argue strongly for a role of the ACC in anticipatory control independent of anticipated conflict and error likelihood, and also show that such control can eliminate conflict-related ACC activity during target processing. Models of frontal cortex conflict-detection and conflict-resolution mechanisms require modification to include consideration of these anticipatory control properties of the ACC.
Functional Recovery in Major Depressive Disorder: Focus on Early Optimized Treatment.
Habert, Jeffrey; Katzman, Martin A; Oluboka, Oloruntoba J; McIntyre, Roger S; McIntosh, Diane; MacQueen, Glenda M; Khullar, Atul; Milev, Roumen V; Kjernisted, Kevin D; Chokka, Pratap R; Kennedy, Sidney H
2016-09-01
This article presents the case that a more rapid, individualized approach to treating major depressive disorder (MDD) may increase the likelihood of achieving full symptomatic and functional recovery for individual patients and that studies show it is possible to make earlier decisions about appropriateness of treatment in order to rapidly optimize that treatment. A PubMed search was conducted using terms including major depressive disorder, early improvement, predictor, duration of untreated illness, and function. English-language articles published before September 2015 were included. Additional studies were found within identified research articles and reviews. Thirty antidepressant studies reporting predictor criteria and outcome measures are included in this review. Studies were reviewed to extract definitions of predictors, outcome measures, and results of the predictor analysis. Results were summarized separately for studies reporting effects of early improvement, baseline characteristics, and duration of untreated depression. Shorter duration of the current depressive episode and duration of untreated depression are associated with better symptomatic and functional outcomes in MDD. Early improvement of depressive symptoms predicts positive symptomatic outcomes (response and remission), and early functional improvement predicts an increased likelihood of functional remission. The approach to treatment of depression that exhibits the greatest potential for achieving full symptomatic and functional recovery is early optimized treatment: early diagnosis followed by rapid individualized treatment. Monitoring symptoms and function early in treatment is crucial to ensuring that patients do not remain on ineffective or poorly tolerated treatment, which may delay recovery and heighten the risk of residual functional deficits. © Copyright 2016 Physicians Postgraduate Press, Inc.
NASA Astrophysics Data System (ADS)
Gupta, Nikhil; Paramsothy, Muralidharan
2014-06-01
The special topic "Metal- and Polymer-Matrix Composites" is intended to capture the state of the art in the research and practice of functional composites. The current set of articles related to metal-matrix composites includes reviews on functionalities such as self-healing, self-lubricating, and self-cleaning capabilities; research results on a variety of aluminum-matrix composites; and investigations on advanced composites manufacturing methods. In addition, the processing and properties of carbon nanotube-reinforced polymer-matrix composites and adhesive bonding of laminated composites are discussed. The literature on functional metal-matrix composites is relatively scarce compared to functional polymer-matrix composites. The demand for lightweight composites in the transportation sector is fueling the rapid development in this field, which is captured in the current set of articles. The possibility of simultaneously tailoring several desired properties is attractive but very challenging, and it requires significant advancements in the science and technology of composite materials. The progress captured in the current set of articles shows promise for developing materials that seem capable of moving this field from laboratory-scale prototypes to actual industrial applications.
Cui, Helen W; Devlies, Wout; Ravenscroft, Samuel; Heers, Hendrik; Freidin, Andrew J; Cleveland, Robin O; Ganeshan, Balaji; Turney, Benjamin W
2017-07-01
Understanding the factors affecting success of extracorporeal shockwave lithotripsy (SWL) would improve informed decision-making on the most appropriate treatment modality for an individual patient. Although stone size and skin-to-stone distance do correlate with fragmentation efficacy, it has been shown that stone composition and architecture, as reflected by structural heterogeneity on CT, are also important factors. This study aims to determine if CT texture analysis (CTTA), a novel, nondestructive, and objective tool that generates statistical metrics reflecting stone heterogeneity, could have utility in predicting the likelihood of SWL success. Seven spontaneously passed, intact renal tract stones were scanned ex vivo using standard CT KUB and micro-CT. The stones were then fragmented in vitro using a clinical lithotripter, after which chemical composition analysis was performed. CTTA was used to generate a number of metrics that were correlated with the number of shocks needed to fragment the stone. CTTA metrics reflected stone characteristics and composition, and predicted ease of SWL fragmentation. The strongest correlations with the number of shocks required to fragment the stone were mean Hounsfield unit (HU) density (r = 0.806, p = 0.028) and a CTTA metric measuring the entropy of the pixel distribution of the stone image (r = 0.804, p = 0.039). Using multiple linear regression analysis, the best model showed that the CTTA metrics of entropy and kurtosis could predict 92% of the outcome of the number of shocks needed to fragment the stone. This was superior to using stone volume or density. The CTTA metrics entropy and kurtosis have been shown in this experimental ex vivo setting to strongly predict fragmentation by SWL. This warrants further investigation in a larger clinical study of the contribution of CT textural metrics, as a measure of stone heterogeneity, along with other known clinical factors, to predicting the likelihood of SWL success.
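As a rough illustration of the kind of texture metrics involved (not the study's CTTA software, images, or outcomes, and using a hypothetical texture_metrics helper), the sketch below computes histogram entropy and kurtosis for synthetic stone regions and regresses shocks-to-fragmentation on them.

```python
# Synthetic stand-in for texture analysis: entropy and kurtosis of the HU
# distribution in a region of interest, used as predictors in a linear model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def texture_metrics(roi_hu):
    """Histogram entropy and kurtosis of an array of Hounsfield units."""
    counts, _ = np.histogram(roi_hu, bins=64)
    p = counts / counts.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return entropy, stats.kurtosis(roi_hu)

# Seven hypothetical stones, each a flat array of HU values, with assumed outcomes.
stones = [rng.normal(800 + 50 * i, 80 + 10 * i, size=500) for i in range(7)]
shocks = np.array([900, 1100, 1300, 1500, 1800, 2100, 2500])

X = np.array([texture_metrics(s) for s in stones])          # columns: entropy, kurtosis
X = np.column_stack([np.ones(len(X)), X])                    # add intercept
beta, *_ = np.linalg.lstsq(X, shocks, rcond=None)            # multiple linear regression
print("intercept, entropy, kurtosis coefficients:", beta)
```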
NASA Astrophysics Data System (ADS)
Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; Barboni Haţiegan, L.
2017-01-01
In this paper, a method is shown that associates a set of characteristic functions with an LDPC code, together with functions that represent the density evolution of the messages passed along the edges of a Tanner graph. Graphic representations of the density evolution are shown, and the likelihood threshold that marks the asymptotic boundary between decodable and undecodable codes was studied and simulated using MathCad V14 software.
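The paper's characteristic-function machinery and MathCad computations are not reproduced here; as a simplified stand-in, the sketch below runs density evolution for a regular LDPC ensemble on the binary erasure channel, where the message density collapses to a single erasure probability, and locates the decoding threshold by bisection. The (3, 6) ensemble and tolerances are assumptions chosen for illustration.

```python
# Simplified density evolution on the binary erasure channel for a regular
# (dv, dc) LDPC ensemble; the full method tracks whole message densities.
def erasure_after_iterations(eps, dv=3, dc=6, iters=1000):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def threshold(dv=3, dc=6, tol=1e-5):
    """Largest channel erasure probability for which decoding still succeeds."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erasure_after_iterations(mid, dv, dc) < 1e-7:
            lo = mid        # decodable: push the boundary up
        else:
            hi = mid
    return lo

print(threshold())          # about 0.429 for the regular (3, 6) ensemble
```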
Baert, Jan M; De Laender, Frederik; Sabbe, Koen; Janssen, Colin R
2016-12-01
There is now ample evidence that biodiversity stabilizes aggregated ecosystem functions, such as primary production, in changing environments. In primary producer systems, this stabilizing effect is found to be driven by higher functional resistance (i.e., reduced changes in functions by environmental changes) rather than through higher functional resilience (i.e., rapid recovery following environmental changes) in more diverse systems. The stability of aggregated ecosystem functions directly depends on changes in species composition and, by consequence, on the species' functional contributions to ecosystem functions. Still, it remains only theoretically explored how biodiversity can stabilize ecosystem functions by affecting compositional stability. Here, we demonstrate how biodiversity effects on compositional stability drive biodiversity effects on functional stability in diatom communities. In a microcosm experiment, we exposed 39 communities of five different levels of species richness (1, 2, 4, 6, and 8 species) to three concentrations of a chemical stressor (0, 25, and 250 μg/L atrazine) for four weeks, after which all communities were transferred to atrazine-free medium for three more weeks. Biodiversity simultaneously increased functional and compositional resistance but decreased functional and compositional resilience. These results confirm the theoretically proposed link between biodiversity effects on functional and compositional stability in primary producer systems, and provide a mechanistic underpinning for observed biodiversity-stability relationships. Finally, we discuss how higher compositional stability can be expected to become increasingly important in stabilizing ecosystem functions under field conditions when multiple environmental stressors fluctuate simultaneously. © 2016 by the Ecological Society of America.
Fischer, Ute; McDonnell, Lori; Orasanu, Judith
2007-05-01
Approaches to mitigating the likelihood of psychosocial problems during space missions emphasize preflight measures such as team training and team composition. Additionally, it may be necessary to monitor team interactions during missions for signs of interpersonal stress. The present research was conducted to identify features in team members' communications indicative of team functioning. Team interactions were studied in the context of six computer-simulated search and rescue missions. There were 12 teams of 4 U.S. men who participated; however, the present analyses contrast the top two teams with the two least successful teams. Communications between team members were analyzed using linguistic analysis software and a coding scheme developed to characterize task-related and social dimensions of team interactions. Coding reliability was established by having two raters independently code three transcripts. Between-rater agreement ranged from 78.1 to 97.9%. Team performance was significantly associated with team members' task-related communications, specifically with the extent to which task-critical information was shared. Successful and unsuccessful teams also showed different interactive patterns, in particular concerning the frequencies of elaborations and no-responses. Moreover, task success was negatively correlated with variability in team members' word count, and positively correlated with the number of positive emotion words and the frequency of assenting relative to dissenting responses. Analyses isolated certain task-related and social features of team communication related to team functioning. Team success was associated with the extent to which team members shared task-critical information, equally participated and built on each other's contributions, showed agreement, and positive affect.
Microbial community assembly patterns under incipient conditions in a basaltic soil system
NASA Astrophysics Data System (ADS)
Sengupta, A.; Stegen, J.; Alves Meira Neto, A.; Wang, Y.; Chorover, J.; Troch, P. A. A.; Maier, R. M.
2017-12-01
In sub-surface environments, the biotic components are critically linked to the abiotic processes. However, there is limited understanding of community establishment, functional associations, and community assembly processes of such microbes in sub-surface environments. This study presents the first analysis of microbial signatures in an incipient terrestrial basalt soil system conducted under controlled conditions. A sub-meter scale sampling of a soil mesocosm revealed the contrasting distribution patterns of simple soil parameters such as bulk density and electrical conductivity. Phylogenetic analysis of the 16S rRNA gene indicated the presence of a total of 40 bacterial and archaeal phyla, with high relative abundance of Actinobacteria on the surface and highest abundance of Proteobacteria throughout the system. Community diversity patterns were inferred to be dependent on depth profile and average water content in the system. Predicted functional gene analysis suggested mixotrophic lifestyles with both autotrophic and heterotrophic metabolisms, the likelihood of a unique salt-tolerant methanogenic pathway with links to novel Euryarchaeota, signatures of an incomplete nitrogen cycle, and predicted enzymes of extracellular iron(II) to iron(III) conversion followed by intracellular uptake, transport, and regulation. Null modeling revealed microbial community assembly was predominantly governed by variable selection, but the influence of the variable selection did not show systematic spatial structure. The presence of significant heterogeneity in predicted functions and ecologically deterministic shifts in community composition in a homogeneous incipient basalt highlights the complexity exhibited by microorganisms even in the simplest of environmental systems. This presents an opportunity to further develop our understanding of how microbial communities establish, evolve, impact, and respond in sub-surface environments.
Hurricane Sandy Exposure and the Mental Health of World Trade Center Responders.
Bromet, Evelyn J; Clouston, Sean; Gonzalez, Adam; Kotov, Roman; Guerrera, Kathryn M; Luft, Benjamin J
2017-04-01
The psychological consequences of a second disaster on populations exposed to an earlier disaster have rarely been studied prospectively. Using a pre- and post-design, we examined the effects of Hurricane Sandy on possible World Trade Center (WTC)-related posttraumatic stress disorder (PTSD Checklist score of ≥ 50) and overall depression (major depressive disorder [MDD]; Patient Health Questionnaire depression score of ≥ 10) among 870 WTC responders with a follow-up monitoring visit at the Long Island WTC Health Program during the 6 months post-Hurricane Sandy. The Hurricane Sandy exposures evaluated were damage to home (8.3%) and to possessions (7.8%), gasoline shortage (24.1%), prolonged power outage (42.7%), and filing a Federal Emergency Management Agency claim (11.3%). A composite exposure score also was constructed. In unadjusted analyses, Hurricane Sandy exposures were associated with a 1.77- to 5.38-fold increased likelihood of PTSD and a 1.58- to 4.13-fold increased likelihood of MDD; odds ratios for ≥ 3 exposures were 6.47 for PTSD and 6.45 for MDD. After adjusting for demographic characteristics, WTC exposure, pre-Hurricane Sandy mental health status, and time between assessments, reporting ≥ 3 Hurricane Sandy exposures was associated with a 3.29- and 3.71-fold increased likelihood of PTSD and MDD, respectively. These findings underscore the importance of assessing the impact of a subsequent disaster in ongoing responder health surveillance programs. Copyright © 2017 International Society for Traumatic Stress Studies.
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
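A minimal sketch of the pooling idea with fabricated study summaries: each study contributes a log-likelihood-ratio curve over a grid of effect sizes, the curves are summed, and the pooled estimate and a likelihood-support interval are read off the combined curve. The normal likelihood per study and the cutoff used here (a drop of 1.92 in log-likelihood, mirroring a 95% interval for a normal likelihood) are illustrative stand-ins for the paper's intrinsic interval.

```python
# Pooling evidence by summing per-study log-likelihood-ratio functions.
import numpy as np

# Hypothetical (estimate, standard error) pairs on the log-odds-ratio scale.
studies = [(0.25, 0.12), (0.10, 0.20), (0.32, 0.15)]

theta = np.linspace(-1.0, 1.0, 4001)
log_lr = np.zeros_like(theta)
for est, se in studies:
    # log L(theta) - log L(0) for a normal likelihood centered at the estimate.
    log_lr += (-(est - theta) ** 2 + est ** 2) / (2 * se ** 2)

pooled = theta[np.argmax(log_lr)]                      # combined effect estimate
support = theta[log_lr >= log_lr.max() - 1.92]         # likelihood-support interval
print(pooled, support.min(), support.max())
```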
Lee, Jinseok; Chon, Ki H
2010-09-01
We present particle filtering (PF) algorithms for accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude, as it corresponds to the respiratory rate. However, when SNR is low, the pole angle with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach using a sequential Monte Carlo method, the particle filter (PF), which is combined with the optimal parameter search (OPS) criterion for accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performance of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that, for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
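A stripped-down sketch of the AR step only, on a synthetic signal with an assumed sampling rate and model order; the particle filter, OPS criterion, and pulse-oximeter preprocessing are not reproduced. It fits an AR model by least squares and converts the angle of the dominant pole to breaths per minute.

```python
# AR pole-angle reading of respiratory rate on a synthetic signal.
import numpy as np

fs = 4.0                                    # assumed sampling rate of the derived signal, Hz
t = np.arange(0, 120, 1 / fs)
rate_true = 18 / 60.0                       # 18 breaths/min expressed in Hz
x = np.sin(2 * np.pi * rate_true * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

p = 8                                       # AR model order (assumed)
# Least-squares AR fit: x[n] = a1*x[n-1] + ... + ap*x[n-p] + e[n]
X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)

poles = np.roots(np.r_[1.0, -a])            # roots of z^p - a1*z^(p-1) - ... - ap
dominant = poles[np.argmax(np.abs(poles))]  # pole with the largest magnitude
rate_hz = abs(np.angle(dominant)) * fs / (2 * np.pi)
print(rate_hz * 60, "breaths/min")
```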
On the complex quantification of risk: systems-based perspective on terrorism.
Haimes, Yacov Y
2011-08-01
This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems-based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality-impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: "What is the likelihood?" and "What are the consequences?" can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences. © 2011 Society for Risk Analysis.
Examination of universal purchase programs as a driver of vaccine uptake among US States, 1995-2014.
Mulligan, Karen; Snider, Julia Thornton; Arthur, Phyllis; Frank, Gregory; Tebeka, Mahlet; Walker, Amy; Abrevaya, Jason
2018-06-01
Immunization against numerous potentially life-threatening illnesses has been a great public health achievement. In the United States, the Vaccines for Children (VFC) program has provided vaccines to uninsured and underinsured children since the early 1990s, increasing vaccination rates. In recent years, some states have adopted Universal Purchase (UP) programs with the stated aim of further increasing vaccination rates. Under UP programs, states also purchase vaccines for privately-insured children at federally-contracted VFC prices and bill private health insurers for the vaccines through assessments. In this study, we estimated the effect of UP adoption in a state on children's vaccination rates using state-level and individual-level data from the 1995-2014 National Immunization Survey. For the state-level analysis, we performed ordinary least squares regression to estimate the state's vaccination rate as a function of whether the state had UP in the given year, state demographic characteristics, other vaccination policies, state fixed effects, and a time trend. For the individual analysis, we performed logistic regression to estimate a child's likelihood of being vaccinated as a function of whether the state had UP in the given year, the child's demographic characteristics, state characteristics and vaccine policies, state fixed effects, and a time trend. We performed separate regressions for each of nine recommended vaccines, as well as composite measures on whether a child was up-to-date on all required vaccines. In both the state-level and individual-level analyses, we found UP had no significant (p < 0.10) effect on any of the vaccines or composite measures in our base case specifications. Results were similar in alternative specifications. We hypothesize that UP was ineffective in increasing vaccination rates. Policymakers seeking to increase vaccination rates would do well to consider other policies such as addressing provider practice issues and vaccine hesitancy. Copyright © 2018. Published by Elsevier Ltd.
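For illustration, a toy version of the individual-level specification with fabricated data, hypothetical states, and an assumed adoption year; it does not reproduce the NIS survey design, weights, or covariate set, only the pattern of a UP indicator plus state fixed effects and a time trend in a logistic regression.

```python
# Logistic regression of vaccination status on a UP indicator, state fixed
# effects, and a linear time trend, on a fabricated toy panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "state": rng.choice(list("ABCDE"), n),
    "year": rng.integers(1995, 2015, n),
})
df["up"] = ((df["state"] == "A") & (df["year"] >= 2005)).astype(int)   # assumed UP adoption
logit_p = -0.5 + 0.02 * (df["year"] - 1995) + 0.0 * df["up"]           # no true UP effect
df["vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("vaccinated ~ up + C(state) + year", data=df).fit(disp=False)
print("UP coefficient:", model.params["up"], "p-value:", model.pvalues["up"])
```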
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
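The report's own automatic selection algorithm and penalized estimator are not reproduced here; the sketch below shows one standard data-driven alternative for the first estimator, choosing the Gaussian kernel scaling factor by maximizing the leave-one-out log-likelihood over a grid. The data and grid are assumptions for illustration.

```python
# Kernel density estimation with leave-one-out likelihood bandwidth selection.
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])
n = x.size

def loo_log_likelihood(h):
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * h * h)) / (np.sqrt(2 * np.pi) * h)
    np.fill_diagonal(K, 0.0)                    # leave each point out of its own estimate
    f_loo = K.sum(axis=1) / (n - 1)
    return np.sum(np.log(f_loo + 1e-300))

grid = np.linspace(0.05, 1.0, 60)
h_best = grid[np.argmax([loo_log_likelihood(h) for h in grid])]
print(f"selected kernel scaling factor: {h_best:.3f}")
```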
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
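A minimal sketch of one member of the family described: multiplicative updates for NMF under the Kullback-Leibler (Poisson) objective, which corresponds to a quasi-likelihood with variance proportional to the mean. The paper's framework covers a much broader class of links and variance functions, which this does not reproduce; the matrix sizes and rank are assumptions.

```python
# KL-divergence (Poisson) NMF via multiplicative updates on a toy count matrix.
import numpy as np

rng = np.random.default_rng(7)
V = rng.poisson(5.0, size=(60, 40)).astype(float)     # non-negative data matrix
k, eps = 4, 1e-12
W = rng.uniform(0.1, 1.0, (V.shape[0], k))
H = rng.uniform(0.1, 1.0, (k, V.shape[1]))

for _ in range(300):
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)   # update H
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)   # update W

kl = np.sum(V * np.log((V + eps) / (W @ H + eps)) - V + W @ H)
print(f"final KL divergence: {kl:.2f}")
```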
The effect of social cues on marketing decisions
NASA Astrophysics Data System (ADS)
Hentschel, H. G. E.; Pan, Jiening; Family, Fereydoon; Zhang, Zhenyu; Song, Yiping
2012-02-01
We address the question as to what extent individuals, when given information in marketing polls on the decisions made by the previous Nr individuals questioned, are likely to change their original choices. The processes can be formulated in terms of a cost function equivalent to a Hamiltonian, which depends on p0, the original likelihood of an individual making a positive decision in the absence of social cues; J, the strength of the social cue; and Nr, the memory size. We find that both positive and negative herding effects are significant. Specifically, if p0 > 1/2, social cues enhance positive decisions, while for p0 < 1/2 they reduce the likelihood of a positive decision.
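The cost-function (Hamiltonian) formulation itself is not reproduced below. As a speculative sketch under an assumed simple model, in which each respondent's log-odds are shifted by J times the net sentiment of the previous Nr answers, the run_poll helper (a hypothetical name) illustrates the reported qualitative effect: herding amplifies the majority view on either side of p0 = 1/2.

```python
# Assumed herding model (not the paper's Hamiltonian): logistic choice with a
# social-cue shift proportional to recent respondents' net sentiment.
import numpy as np

def run_poll(p0, J, Nr, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    base = np.log(p0 / (1 - p0))                   # log-odds without social cues
    answers = []
    for _ in range(n):
        recent = answers[-Nr:]
        cue = (2 * np.mean(recent) - 1) if recent else 0.0   # net sentiment in [-1, 1]
        p = 1 / (1 + np.exp(-(base + J * cue)))
        answers.append(rng.uniform() < p)
    return np.mean(answers)

for p0 in (0.6, 0.4):
    print(p0, run_poll(p0, J=1.0, Nr=10))          # above / below p0, respectively
```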
2009-11-17
set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the...activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the...variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this
A Seakeeping Performance and Affordability Tradeoff Study for the Coast Guard Offshore Patrol Cutter
2016-06-01
List-of-figures excerpt: Index Polar Plot for Sea State 4 (all headings are relative to the wave motion; velocity is given in meters per second); Figure 15, Probability and Cumulative Density Functions of Annual Sea State Occurrences in the Open Ocean, North Pacific. ...criteria at a given sea state. Probability distribution functions are available that describe the likelihood that an operational area will experience
Greenery in the university environment: Students' preferences and perceived restoration likelihood.
van den Bogerd, Nicole; Dijkstra, S Coosje; Seidell, Jacob C; Maas, Jolanda
2018-01-01
A large body of evidence shows that interaction with greenery can be beneficial for human stress reduction, emotional states, and improved cognitive function. It can, therefore, be expected that university students might benefit from greenery in the university environment. Before investing in real-life interventions in a university environment, it is necessary to first explore students' perceptions of greenery in the university environment. This study examined (1) preference for university indoor and outdoor spaces with and without greenery (2) perceived restoration likelihood of university outdoor spaces with and without greenery and (3) if preference and perceived restoration likelihood ratings were modified by demographic characteristics or connectedness to nature in Dutch university students (N = 722). Digital photographic stimuli represented four university spaces (lecture hall, classroom, study area, university outdoor space). For each of the three indoor spaces there were four or five stimuli conditions: (1) the standard design (2) the standard design with a colorful poster (3) the standard design with a nature poster (4) the standard design with a green wall (5) the standard design with a green wall plus interior plants. The university outdoor space included: (1) the standard design (2) the standard design with seating (3) the standard design with colorful artifacts (4) the standard design with green elements (5) the standard design with extensive greenery. Multi-level analyses showed that students gave higher preference ratings to the indoor spaces with a nature poster, a green wall, or a green wall plus interior plants than to the standard designs and the designs with the colorful posters. Students also rated preference and perceived restoration likelihood of the outdoor spaces that included greenery higher than those without. Preference and perceived restoration likelihood were not modified by demographic characteristics, but students with strong connectedness to nature rated preference and perceived restoration likelihood overall higher than students with weak connectedness to nature. The findings suggest that students would appreciate the integration of greenery in the university environment.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
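The Hessian-informed (DILI) construction itself is not sketched here. Instead, the snippet below shows the simpler preconditioned Crank-Nicolson proposal, a dimension-robust special case of the operator-weighted family, on a toy Gaussian-prior inverse problem; the dimension, observation, noise level, and step size are all assumptions.

```python
# Preconditioned Crank-Nicolson (pCN) MCMC on a toy Gaussian-prior problem.
import numpy as np

rng = np.random.default_rng(4)
d, beta = 50, 0.2
y, sigma = 3.0, 0.5                                    # assumed observation and noise level

def neg_log_like(u):
    # Likelihood from a noisy observation of the sum of the components.
    return 0.5 * ((u.sum() - y) / sigma) ** 2

u = rng.normal(size=d)
samples = []
for _ in range(5000):
    # Prior-preserving proposal; acceptance depends only on the likelihood.
    v = np.sqrt(1 - beta ** 2) * u + beta * rng.normal(size=d)
    if np.log(rng.uniform()) < neg_log_like(u) - neg_log_like(v):
        u = v
    samples.append(u.sum())
print(np.mean(samples[1000:]))                         # posterior mean of the observed functional
```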
Code of Federal Regulations, 2011 CFR
2011-04-01
... INVESTIGATIONAL NEW DRUG APPLICATION Expanded Access to Investigational Drugs for Treatment Use § 312.300 General. (a) Scope. This subpart contains the requirements for the use of investigational new drugs and... on such factors as survival, day-to-day functioning, or the likelihood that the disease, if left...
Code of Federal Regulations, 2010 CFR
2010-04-01
... INVESTIGATIONAL NEW DRUG APPLICATION Expanded Access to Investigational Drugs for Treatment Use § 312.300 General. (a) Scope. This subpart contains the requirements for the use of investigational new drugs and... on such factors as survival, day-to-day functioning, or the likelihood that the disease, if left...
The striking similarities between standard, distractor-free, and target-free recognition
Dobbins, Ian G.
2012-01-01
It is often assumed that observers seek to maximize correct responding during recognition testing by actively adjusting a decision criterion. However, early research by Wallace (Journal of Experimental Psychology: Human Learning and Memory 4:441–452, 1978) suggested that recognition rates for studied items remained similar, regardless of whether or not the tests contained distractor items. We extended these findings across three experiments, addressing whether detection rates or observer confidence changed when participants were presented standard tests (targets and distractors) versus “pure-list” tests (lists composed entirely of targets or distractors). Even when observers were made aware of the composition of the pure-list test, the endorsement rates and confidence patterns remained largely similar to those observed during standard testing, suggesting that observers are typically not striving to maximize the likelihood of success across the test. We discuss the implications for decision models that assume a likelihood ratio versus a strength decision axis, as well as the implications for prior findings demonstrating large criterion shifts using target probability manipulations. PMID:21476108
NASA Astrophysics Data System (ADS)
Bovy, Jo; Hogg, David W.; Roweis, Sam T.
2011-06-01
We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
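A minimal sketch of the single-component, fully observed special case on synthetic 2-D data: EM deconvolution of a Gaussian from points that each carry their own known noise covariance. Mixture weights, missing-data projections, priors, and the split-and-merge moves of the full method are omitted, and all data below are fabricated.

```python
# EM for one Gaussian component observed through per-point noise covariances.
import numpy as np

rng = np.random.default_rng(5)
n, d = 500, 2
V_true = np.array([[1.0, 0.6], [0.6, 1.0]])
v = rng.multivariate_normal(np.zeros(d), V_true, size=n)             # underlying points
S = np.array([np.diag(rng.uniform(0.2, 1.0, d)) for _ in range(n)])  # per-point noise covs
w = np.array([rng.multivariate_normal(v[i], S[i]) for i in range(n)])

m, V = w.mean(axis=0), np.cov(w.T)                                   # initialize from noisy data
for _ in range(100):
    b, B = [], []
    for i in range(n):
        K = V @ np.linalg.inv(V + S[i])                              # E-step: condition on w_i
        b.append(m + K @ (w[i] - m))
        B.append(V - K @ V)
    b = np.array(b)
    m = b.mean(axis=0)                                               # M-step updates
    V = sum(np.outer(bi - m, bi - m) + Bi for bi, Bi in zip(b, B)) / n
print(V)    # approaches V_true rather than the noise-inflated sample covariance
```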
Dvorak, Robert D; Kuvaas, Nicholas J; Lamis, Dorian A; Pearson, Matthew R; Stevenson, Brittany L
2015-01-01
Emotional and behavioral regulation has been linked to coping and enhancement motives and associated with different patterns of alcohol use and problems. The current studies examined emotional instability, urgency, and internal drinking motives as predictors of alcohol dependence symptoms as well as the likelihood and severity of Diagnostic and Statistical Manual of Mental Disorders, 5th edition, Alcohol Use Disorder (AUD). In Study 1, college drinkers (n = 621) completed alcohol involvement and behavioral/emotional functioning assessments. There was an indirect association between emotional instability and dependence symptoms via both coping and enhancement drinking motives, which was potentiated by trait urgency. In Study 2, college drinkers (n = 510) completed alcohol involvement, behavioral/emotional functioning, and AUD criteria assessments. A significant indirect effect from emotional instability to the likelihood of meeting AUD criteria via drinking to cope was found, again potentiated by urgency. © The Author(s) 2016.
Case finding of lifestyle and mental health disorders in primary care: validation of the ‘CHAT’ tool
Goodyear-Smith, Felicity; Coupe, Nicole M; Arroll, Bruce; Elley, C Raina; Sullivan, Sean; McGill, Anne-Thea
2008-01-01
Background Primary care is accessible and ideally placed for case finding of patients with lifestyle and mental health risk factors and subsequent intervention. The short self-administered Case-finding and Help Assessment Tool (CHAT) was developed for lifestyle and mental health assessment of adult patients in primary health care. This tool checks for tobacco use, alcohol and other drug misuse, problem gambling, depression, anxiety and stress, abuse, anger problems, inactivity, and eating disorders. It is well accepted by patients, GPs and nurses. Aim To assess criterion-based validity of CHAT against a composite gold standard. Design of study Conducted according to the Standards for Reporting of Diagnostic Accuracy statement for diagnostic tests. Setting Primary care practices in Auckland, New Zealand. Method One thousand consecutive adult patients completed CHAT and a composite gold standard. Sensitivities, specificities, positive and negative predictive values, and likelihood ratios were calculated. Results Response rates for each item ranged from 79.6 to 99.8%. CHAT was sensitive and specific for almost all issues screened, except exercise and eating disorders. Sensitivity ranged from 96% (95% confidence interval [CI] = 87 to 99%) for major depression to 26% (95% CI = 22 to 30%) for exercise. Specificity ranged from 97% (95% CI = 96 to 98%) for problem gambling and problem drug use to 40% (95% CI = 36 to 45%) for exercise. All had high likelihood ratios (3–30), except exercise and eating disorders. Conclusion CHAT is a valid and acceptable case-finding tool for most common lifestyle and mental health conditions. PMID:18186993
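For reference, the reported quantities follow directly from a 2x2 table of screening result against the composite gold standard; the sketch below uses made-up counts, not the CHAT data.

```python
# Sensitivity, specificity, predictive values, and positive likelihood ratio
# from hypothetical 2x2 table counts.
tp, fn, fp, tn = 48, 2, 30, 920           # assumed cell counts

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_positive = sensitivity / (1 - specificity)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"LR+={lr_positive:.1f} PPV={ppv:.2f} NPV={npv:.2f}")
```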
McCool, Brian A; Frye, Gerald D; Pulido, Marisa D; Botting, Shaleen K
2003-02-14
It is well known that the anxiolytic potential of ethanol is maintained during chronic exposure. We have confirmed this using a light-dark box paradigm following chronic ethanol ingestion via a liquid diet. However, cessation from chronic ethanol exposure is known to cause severe withdrawal anxiety. These opposing effects on anxiety likely result from neuro-adaptations of neurotransmitter systems within the brain regions regulating anxiety. Recent work highlights the importance of amygdala ligand-gated chloride channels in the expression of anxiety. We have therefore examined the effects of chronic ethanol exposure on GABA(A) and strychnine-sensitive glycine receptors expressed by acutely isolated adult rat lateral/basolateral amygdala neurons. Chronic ethanol exposure increased the functional expression of GABA(A) receptors in acutely isolated basolateral amygdala neurons without altering strychnine-sensitive glycine receptors. Neither the acute ethanol nor benzodiazepine sensitivity of either receptor system was affected. We explored the likelihood that subunit composition might influence each receptor's response to chronic ethanol. Importantly, when expressed in a mammalian heterologous system, GABA(A) receptors composed of unique alpha subunits were differentially sensitive to acute ethanol. Likewise, the presence of the beta subunit appeared to influence the acute ethanol sensitivity of glycine receptors containing the alpha(2) subunit. Our results suggest that the facilitation of GABA(A) receptors during chronic ethanol exposure may help explain the maintenance of ethanol's anti-anxiety effects during chronic ethanol exposure. Furthermore, the subunit composition of GABA(A) and strychnine-sensitive glycine receptors may ultimately influence the response of each system to chronic ethanol exposure.
Clinical decision making and the expected value of information.
Willan, Andrew R
2007-01-01
The results of the HOPE study, a randomized clinical trial, provide strong evidence that 1) ramipril prevents the composite outcome of cardiovascular death, myocardial infarction or stroke in patients who are at high risk of a cardiovascular event and 2) ramipril is cost-effective at a threshold willingness-to-pay of $10,000 to prevent an event of the composite outcome. In this report, the concept of the expected value of information is used to determine if the information provided by the HOPE study is sufficient for decision making in the US and Canada. Using the cost-effectiveness data from a clinical trial, or from a meta-analysis of several trials, one can determine, based on the number of future patients that would benefit from the health technology under investigation, the expected value of sample information (EVSI) of a future trial as a function of proposed sample size. If the EVSI exceeds the cost for any particular sample size, then the current information is insufficient for decision making and a future trial is indicated. If, on the other hand, there is no sample size for which the EVSI exceeds the cost, then there is sufficient information for decision making and no future trial is required. Using the data from the HOPE study, these concepts are applied for various assumptions regarding the fixed and variable cost of a future trial and the number of patients who would benefit from ramipril. Expected value of information methods provide a decision-analytic alternative to the standard likelihood methods for assessing the evidence provided by cost-effectiveness data from randomized clinical trials.
Qobadi, Mina; Collier, Charlene; Zhang, Lei
2016-11-01
Objectives To determine the prevalence of postpartum depression (PPD) among new mothers in Mississippi during 2009-2011 and evaluate the effects of different stressful life events in the year before delivery on the likelihood of PPD. Methods We used Mississippi Pregnancy Risk Assessment Monitoring System (PRAMS) 2009-2011 data (n = 3695) to evaluate the effects of different stressful life events on PPD. We categorized 13 stressors into 4 groups: financial, relational, trauma-related, and emotional. A composite score of the mothers' responses (≥10) to the three items: "I felt down, depressed, or sad", "I felt hopeless", and "I felt slowed down" was used to measure PPD. The items were rated on a Likert scale from (1) never to 5 (always). Descriptive statistics, Chi square tests, t tests, and logistic regression analyses were conducted using SAS 9.3 Proc Survey procedure (SAS Institute, Cary, NC, USA). Results The overall prevalence of self-reported PPD was 14.8 %. Mothers who experienced high relational with low financial and high trauma related stresses had the highest likelihood of PPD diagnosis after adjusting for confounders (OR = 8.6; 95 % CI, 3.5-21.3), followed by those who reported high relational stress with low financial and low trauma stresses (OR = 5.9; 95 % CI, 3.5-10.2). Those with high financial, low relational, and low trauma had the least likelihood of PPD (OR = 2.2; 95 % CI, 1.6-3.0) compared to women with low stress in all three categories. Conclusion Our findings showed that the likelihood of PPD was higher among women who had high relational stress, indicating that efforts to effectively prevent PPD need to focus on healthy relationships between partners during pregnancy.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
Robinson, Michael J; Sheehan, David; Gaynor, Paula J; Marangell, Lauren B; Tanaka, Yoko; Lipsius, Sarah; Ohara, Fumihiro; Namiki, Chihiro
2013-11-01
The aim of this study was to evaluate the relationship between painful physical symptoms (PPS) and outcomes in major depressive disorder (MDD). Post-hoc analysis of two identically designed 8-week trials compared the efficacy of 60 mg/day duloxetine (N=523) with that of placebo (N=532) in treating PPS associated with MDD. The Montgomery-Åsberg Depression Rating Scale (MADRS) total score, the Brief Pain Inventory (BPI) average pain score, and the Sheehan Disability Scale global functional impairment score assessed depression symptoms, pain, and functioning, respectively. Remission was defined as a MADRS score of 10 or less, and the BPI response subgroup was defined as a 50% or greater reduction from baseline. Path analyses assessed relationships among variables. Duloxetine-treated patients who had a 50% or greater reduction in BPI score at endpoint had higher rates of remission. Path analysis indicated that 16% of likelihood of remission in depression symptoms was because of the direct effect of treatment, 41% because of pain reduction, and 43% because of functional improvement. Path analysis also indicated that 51% of improvement in functioning was attributed to pain improvement and 43% to mood improvement. Results demonstrate that improvement in pain and mood contributes to functional improvement, and pain reduction and functional improvement increase the likelihood of remission of depressive symptoms with duloxetine treatment in patients with both MDD and PPS at baseline.
Elwér, Sofia; Johansson, Klara; Hammarström, Anne
2014-03-10
Health consequences of the gender-segregated labour market have previously been demonstrated in the light of gender composition of occupations and workplaces, with somewhat mixed results. Associations between the gender composition and health status have been suggested to be shaped by the psychosocial work environment. The present study aims to analyse how workplace gender composition is related to psychological distress and to explore the importance of the psychosocial work environment for psychological distress at workplaces with different gender compositions. The study population consisted of participants from the Northern Swedish Cohort with a registered workplace in 2007 when the participants were 42 years old (N=795). Questionnaire data were supplemented with register data on the gender composition of the participants' workplaces divided into three groups: workplaces with more women, mixed workplaces, and workplaces with more men. Associations between psychological distress and gender composition were analysed with multivariate logistic regression analysis adjusting for socioeconomic position, previous psychological distress, psychosocial work environment factors and gender. Logistic regression analyses (including interaction terms for gender composition and each work environment factor) were also used to assess differential associations between psychosocial work factors and psychological distress according to gender composition. Working at workplaces with a mixed gender composition was related to a higher likelihood of psychological distress compared to workplaces with more men, after adjustments for socioeconomic position, psychological distress at age 21, psychosocial work environment factors and gender. Psychosocial work environment factors did not explain the association between gender composition and psychological distress. The association between gender composition and psychological distress cannot be explained by differences in the perception of the psychosocial work environment and thus the work environment hypothesis is not supported. Workplaces with a mixed gender composition need further research attention to explain the negative development of psychological distress during working life for both women and men at these workplaces.
2014-01-01
Background Health consequences of the gender-segregated labour market have previously been demonstrated in light of the gender composition of occupations and workplaces, with somewhat mixed results. Associations between gender composition and health status have been suggested to be shaped by the psychosocial work environment. The present study aims to analyse how workplace gender composition is related to psychological distress and to explore the importance of the psychosocial work environment for psychological distress at workplaces with different gender compositions. Methods The study population consisted of participants from the Northern Swedish Cohort with a registered workplace in 2007 when the participants were 42 years old (N = 795). Questionnaire data were supplemented with register data on the gender composition of the participants’ workplaces divided into three groups: workplaces with more women, mixed workplaces, and workplaces with more men. Associations between psychological distress and gender composition were analysed with multivariate logistic regression analysis adjusting for socioeconomic position, previous psychological distress, psychosocial work environment factors and gender. Logistic regression analyses (including interaction terms for gender composition and each work environment factor) were also used to assess differential associations between psychosocial work factors and psychological distress according to gender composition. Results Working at workplaces with a mixed gender composition was related to a higher likelihood of psychological distress compared to workplaces with more men, after adjustments for socioeconomic position, psychological distress at age 21, psychosocial work environment factors and gender. Psychosocial work environment factors did not explain the association between gender composition and psychological distress. Conclusions The association between gender composition and psychological distress cannot be explained by differences in the perception of the psychosocial work environment, and thus the work environment hypothesis is not supported. Workplaces with a mixed gender composition need further research attention to explain the negative development of psychological distress during working life for both women and men at these workplaces. PMID:24612791
Do aftercare services reduce inpatient psychiatric readmissions?
Foster, E M
1999-01-01
OBJECTIVE: To determine whether aftercare services reduce the likelihood that children and adolescents will be readmitted to inpatient psychiatric facilities. DATA SOURCES/STUDY SETTING: Analyses of data from the Fort Bragg Demonstration. Data were based on 204 sample individuals (children and adolescents), all of whom were discharged from inpatient facilities during the study period. STUDY DESIGN: These analyses use hazard modeling to examine the impact of aftercare services on the likelihood of readmission. Comparisons of individuals for whom the timing of aftercare services differs are adjusted for a wide range of individual characteristics, including client demographics, diagnosis, symptomatology, and psychosocial functioning. DATA COLLECTION/EXTRACTION METHODS: Detailed data on psychopathology, symptomatology, and psychosocial functioning were collected on individuals included in these analyses. This information was taken from structured diagnostic interviews and behavior checklists, including the Child Behavior Checklist and the Diagnostic Interview Schedule for Children, completed by the child and his or her caretaker. Information on the use of mental health services was taken from insurance claims and a management information system, and was used to identify the period from discharge to readmission and to describe the client's use of outpatient therapy, case management, intermediate (or stepdown) services, and residential treatment centers during this period. PRINCIPAL FINDINGS/CONCLUSIONS: Using Cox models that allow for censoring and that include the use of aftercare services as time-varying covariates, we find that aftercare services generally do not influence the likelihood of inpatient readmission. For the lower-middle-class families included in this study, the estimated effect of aftercare is not statistically significant and has limited practical significance. When we look at specific forms of aftercare, we find that outpatient therapy has the largest effect and that stepdown services in intermediate settings have the smallest. We also identify family and individual characteristics that influence the likelihood of readmission. PMID:10445899
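The Cox analysis described above can be sketched with the partial likelihood in counting-process form, where aftercare enters as a time-varying covariate. The Python sketch below is illustrative only: the interval data, covariate coding, and single-predictor model are invented for the example and do not reproduce the Fort Bragg analysis.

```python
import numpy as np
from scipy.optimize import minimize

# Counting-process data: each row is an interval (start, stop] for one child,
# with the covariate value over that interval and an event flag at `stop`.
# Columns: start, stop, event (1 = readmitted at `stop`), aftercare (0/1).
rows = np.array([
    [0.0,  5.0, 1, 0],   # subject A: no aftercare, readmitted at t=5
    [0.0,  2.0, 0, 0],   # subject B, before aftercare started
    [2.0,  8.0, 1, 1],   # subject B: readmitted at t=8 while receiving aftercare
    [0.0,  1.0, 0, 0],   # subject C, before aftercare started
    [1.0, 12.0, 0, 1],   # subject C: censored at t=12
    [0.0,  3.0, 1, 0],   # subject D: readmitted at t=3
    [0.0, 10.0, 0, 0],   # subject E: censored at t=10
])
start, stop, event, aftercare = rows.T
X = aftercare.reshape(-1, 1)               # single time-varying covariate

def neg_log_partial_likelihood(beta):
    """Cox partial likelihood with time-varying covariates (ties not handled)."""
    eta = X @ beta
    nll = 0.0
    for j in np.where(event == 1)[0]:
        t = stop[j]
        at_risk = (start < t) & (t <= stop)   # intervals covering the event time
        nll -= eta[j] - np.log(np.sum(np.exp(eta[at_risk])))
    return nll

fit = minimize(neg_log_partial_likelihood, x0=np.zeros(1), method="BFGS")
print("log hazard ratio for aftercare:", fit.x[0])
```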
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for sufficiently light winds at all incidence angles and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of K_p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.
An exploration of adolescent nonsuicidal self-injury and religious coping.
Westers, Nicholas J; Rehfuss, Mark; Olson, Lynn; Wiemann, Constance M
2014-01-01
Many adolescents who engage in nonsuicidal self-injury (NSSI) self-identify as religious, but the role of religion in their NSSI is not known. This exploratory study examined the relationship between religious coping and religiousness among adolescents who self-injure and the function of their NSSI. Thirty adolescents aged 12-19 years who had engaged in NSSI participated in an interview and completed questionnaires. Multiple regressions were used to examine the relationship between religious coping and NSSI, and Pearson correlations were used to assess the relationship between religiousness and function of NSSI. Greater use of positive religious coping was associated with lower likelihood of engaging in NSSI to rid oneself of unwanted emotions, whereas greater use of negative religious coping was associated with greater likelihood of engaging in NSSI for this reason as well as to avoid punishment or unwanted responsibility. Higher religiousness was associated with greater use of NSSI to communicate with or gain attention from others, whereas lower religiousness was associated with greater use of NSSI to relieve unwanted emotions. Having a greater understanding of how religious constructs are related to the various functions served by NSSI may inform treatment of this population, particularly among religious youth who self-injure.
Influence of gender on Tourette syndrome beyond adolescence.
Lichter, D G; Finnegan, S G
2015-02-01
Although boys are disproportionately affected by tics in Tourette syndrome (TS), this gender bias is attenuated in adulthood and a recent study has suggested that women may experience greater functional interference from tics than men. The authors assessed the gender distribution of adults in a tertiary University-based TS clinic population and the relative influence of gender and other variables on adult tic severity (YGTSS score) and psychosocial functioning (GAF score). We also determined retrospectively the influence of gender on change in global tic severity and overall TS impairment (YGTSS) since adolescence. Females were over-represented in relation to previously published epidemiologic surveys of both TS children and adults. Female gender was associated with a greater likelihood of tic worsening as opposed to tic improvement in adulthood; a greater likelihood of expansion as opposed to contraction of motor tic distribution; and with increased current motor tic severity and tic-related impairment. However, gender explained only a small percentage of the variance of the YGTSS global severity score and none of the variance of the GAF scale score. Psychosocial functioning was influenced most strongly by tic severity but also by a variety of comorbid neuropsychiatric disorders. Published by Elsevier Masson SAS.
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is 1, the negative log-likelihood function is convex; therefore, the optimal solution can be found using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and for multiplying its Hessian matrix by a vector. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
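A minimal numerical sketch of the threshold-1 case described above: each binary pixel reports whether at least one photon arrived, and the intensity behind a block of K binary pixels is recovered by minimizing the (convex) negative log-likelihood. The oversampling factor, the simulated data, and the closed-form check are illustrative assumptions, not part of the original algorithm (which works on full images with filter-bank accelerations).

```python
import numpy as np
from scipy.optimize import minimize_scalar

K = 64                       # binary pixels per conventional pixel (oversampling)
rng = np.random.default_rng(0)
c_true = 20.0                # expected photons over the whole pixel group
b = (rng.poisson(c_true / K, size=K) >= 1).astype(int)   # threshold T = 1
S = b.sum()                  # number of binary pixels that fired

def neg_log_likelihood(c):
    """Convex NLL for threshold-1 binary measurements (c = total intensity)."""
    lam = c / K
    return (K - S) * lam - S * np.log1p(-np.exp(-lam))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10 * K), method="bounded")
c_mle_numeric = res.x
c_mle_closed = -K * np.log(1.0 - S / K)   # closed form, valid when 0 < S < K
print(c_mle_numeric, c_mle_closed)
```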
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia
2014-06-10
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
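The rational-PSD property mentioned above can be illustrated directly: the PSD of a CARMA(p,q) process is sigma^2 |beta(2*pi*i*f)|^2 / |alpha(2*pi*i*f)|^2, and a partial-fraction expansion over the roots of the AR polynomial decomposes it into a sum of Lorentzians. The coefficients below are illustrative, not fitted to any light curve.

```python
import numpy as np

def carma_psd(freqs, ar_coef, ma_coef, sigma):
    """PSD of a CARMA process: sigma^2 |beta(2*pi*i*f)|^2 / |alpha(2*pi*i*f)|^2.

    ar_coef and ma_coef are polynomial coefficients in descending powers
    (numpy.polyval convention); frequencies are in Hz.
    """
    w = 2j * np.pi * freqs
    num = np.abs(np.polyval(ma_coef, w)) ** 2
    den = np.abs(np.polyval(ar_coef, w)) ** 2
    return sigma ** 2 * num / den

# Illustrative CARMA(2,1): alpha(s) = s^2 + 2s + 4, beta(s) = 0.5s + 1
freqs = np.logspace(-3, 1, 500)
psd = carma_psd(freqs, ar_coef=[1.0, 2.0, 4.0], ma_coef=[0.5, 1.0], sigma=1.0)
```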
Yap, John Stephen; Fan, Jianqing; Wu, Rongling
2009-12-01
Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
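A minimal sketch of the modified Cholesky idea invoked above, under simplifying assumptions (balanced data, a plain ridge penalty in place of the paper's penalized-likelihood machinery, no mixture model): each time point is regressed on its predecessors, the fitted coefficients fill a unit lower-triangular matrix T, the residual variances fill a diagonal D, and the covariance estimate follows from T Sigma T' = D.

```python
import numpy as np

def ridge_cholesky_covariance(Y, lam=1.0):
    """Regularized covariance via the modified Cholesky decomposition T Sigma T' = D.

    Y is an (n_subjects, n_times) matrix of repeated measurements (assumed
    centered). Returns the estimated covariance matrix.
    """
    n, m = Y.shape
    T = np.eye(m)                      # unit lower triangular
    d = np.empty(m)
    d[0] = np.var(Y[:, 0])
    for t in range(1, m):
        X, y = Y[:, :t], Y[:, t]
        # ridge regression of y_t on y_1, ..., y_{t-1}
        phi = np.linalg.solve(X.T @ X + lam * np.eye(t), X.T @ y)
        T[t, :t] = -phi
        d[t] = np.mean((y - X @ phi) ** 2)
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(d) @ Tinv.T

# Toy longitudinal data with an AR(1)-like true covariance
rng = np.random.default_rng(1)
true_cov = 0.5 ** np.abs(np.subtract.outer(range(5), range(5)))
Y = rng.multivariate_normal(np.zeros(5), true_cov, size=200)
Sigma_hat = ridge_cholesky_covariance(Y, lam=0.5)
```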
Modeling regional variation in riverine fish biodiversity in the Arkansas-White-Red River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweizer, Peter E; Jager, Yetta
The patterns of biodiversity in freshwater systems are shaped by biogeography, environmental gradients, and human-induced factors. In this study, we developed empirical models to explain fish species richness in subbasins of the Arkansas-White-Red River basin as a function of discharge, elevation, climate, land cover, water quality, dams, and longitudinal position. We used information-theoretic criteria to compare generalized linear mixed models and identified well-supported models. Subbasin attributes that were retained as predictors included discharge, elevation, number of downstream dams, percent forest, percent shrubland, nitrate, total phosphorus, and sediment. The random component of our models, which assumed a negative binomial distribution, included spatial correlation within larger river basins and overdispersed residual variance. This study differs from previous biodiversity modeling efforts in several ways. First, obtaining likelihoods for negative binomial mixed models, and thereby avoiding reliance on quasi-likelihoods, has only recently become practical. We found the ranking of models based on these likelihood estimates to be more believable than that produced using quasi-likelihoods. Second, because we had access to a regional-scale watershed model for this river basin, we were able to include model-estimated water quality attributes as predictors. Thus, the resulting models have potential value as tools with which to evaluate the benefits of water quality improvements to fish.
Turesky, Ted K.; Turkeltaub, Peter E.; Eden, Guinevere F.
2016-01-01
The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders. PMID:27799910
Seymour, Jane W; Polsky, Daniel E; Brown, Elizabeth J; Barbu, Corentin M; Grande, David
2017-07-01
Racial minorities are more likely to live in primary care shortage areas. We sought to understand community health centers' (CHCs) role in reducing disparities. We surveyed all primary care practices in an urban area, identified low access areas, and examined how CHCs influence spatial accessibility. Census tracts with higher rates of public insurance (≥40% vs <10%, odds ratio [OR] = 31.06, P < .001; 30-39% vs 10%, OR = 7.84, P = 0.001) were more likely to be near a CHC and those with moderate rates of uninsurance (10%-19% vs <10%, OR = 0.42, P = .045) were less likely. Racial composition was not associated with proximity. Tracts close to a CHC were less likely (OR = 0.11, P < .0001) to be in a low access area. This association did not differ based on racial composition. Although CHCs were more likely to be in areas with a greater fraction of racial minorities, location was more strongly influenced by public insurance rates. CHCs reduced the likelihood of being in low access areas but the effect did not vary by tract racial composition.
ERIC Educational Resources Information Center
Claus, Christopher J.; Chory, Rebecca M.; Malachowski, Colleen C.
2012-01-01
This study investigated students' perceptions of their instructors' argumentativeness and verbal aggressiveness, classroom justice, and effectiveness of and likelihood of communicating student antisocial behavior alteration techniques (BATs). Results indicate that student perceptions of instructor argumentativeness were not related to their…
CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.
2015-10-20
We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
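A stripped-down sketch of the covariance-aware likelihood described above: a global Matern-3/2 kernel plus diagonal noise, with the Gaussian log-likelihood evaluated through a Cholesky factorization. The local outlier kernels, spectral emulation, and the rest of the Starfish machinery are omitted; the wavelength grid, kernel form, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def matern32(x, amp, ell):
    """Matern-3/2 covariance between all pairs of pixels (x in wavelength units)."""
    r = np.abs(x[:, None] - x[None, :])
    return amp ** 2 * (1 + np.sqrt(3) * r / ell) * np.exp(-np.sqrt(3) * r / ell)

def log_likelihood(resid, wave, sigma_noise, amp, ell):
    """Gaussian log-likelihood of the residual spectrum with correlated covariance."""
    C = matern32(wave, amp, ell) + np.diag(sigma_noise ** 2)
    cf = cho_factor(C, lower=True)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    quad = resid @ cho_solve(cf, resid)
    return -0.5 * (quad + logdet + resid.size * np.log(2 * np.pi))

# Illustrative: a 300-pixel residual (data minus synthetic model spectrum)
rng = np.random.default_rng(2)
wave = np.linspace(5100.0, 5200.0, 300)
resid = rng.normal(0, 0.01, size=wave.size)
print(log_likelihood(resid, wave, sigma_noise=np.full(wave.size, 0.01),
                     amp=0.02, ell=2.0))
```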
Dvorak, Robert D; Simons, Jeffrey S
2014-12-01
Many theories of emotion regulation and alcohol use posit that alcohol is consumed as a way to regulate negative mood. However, the literature has conflicting evidence on mood-alcohol use associations. Understanding how individual differences affect mood-alcohol use associations remains an important area of study. Previous research has suggested that cognitive abilities may affect the relationship between mood and alcohol. The current ecological momentary study examined associations between daytime anxious and positive mood and both (a) the likelihood of alcohol use and (b) the intensity of use on drinking nights as a function of sustained attention, set shifting, and gender. Participants (n = 100) completed assessments of sustained attention and set shifting, then carried palmtop computers for 21 days, reporting mood and alcohol use up to 8 times per day. Results showed that positive mood was consistently associated with both likelihood and intensity of alcohol use, but the association between positive mood and alcohol outcomes was not affected by cognitive abilities. Anxious mood was positively associated with the likelihood of drinking for men with high cognitive abilities. Anxious mood was positively associated with intoxication on drinking nights for men with high sustained attention, but inversely associated with intoxication on drinking nights for women with high sustained attention. Results suggest that variation in mood, executive functioning, and gender interact to contribute to observed differences in drinking behavior. These differences may be the result of gender-specific coping strategies in response to negative emotion.
Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn
2016-01-01
An important characteristic of a screening tool is its discriminant ability, or the measure's accuracy in distinguishing between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of the SDQ at the scale, subscale and item levels, with a view to identifying the items that have the most informant discrepancies and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) intraclass correlations between parent and teacher ratings of children's mental health on the SDQ were fair at the individual child level; b) the SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio, or the likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent, was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.
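For reference, the screening likelihood ratios discussed above follow directly from a 2x2 agreement table; the counts below are invented for illustration and are not the SDQ study data.

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Screening likelihood ratios from a 2x2 table.

    LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens / (1 - spec), (1 - sens) / spec

# Illustrative counts only
lr_pos, lr_neg = likelihood_ratios(tp=25, fp=15, fn=10, tn=249)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
```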
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones favored by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
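A simplified sketch of the mixture-proposal idea behind GMIS: fit a Gaussian mixture to posterior samples (standing in for DREAM output) and use it as a proposal for estimating the marginal likelihood. This is plain importance sampling rather than the bridge-sampling estimator of the paper, and the conjugate toy model is chosen only so the answer can be checked analytically.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Toy inference problem: y ~ N(theta, 1), theta ~ N(0, 1)
y = rng.normal(0.7, 1.0, size=20)

def log_prior(theta):
    return -0.5 * (theta ** 2 + np.log(2 * np.pi))

def log_lik(theta):
    return np.sum(-0.5 * ((y[None, :] - theta[:, None]) ** 2 + np.log(2 * np.pi)),
                  axis=1)

# Stand-in for DREAM output: samples from the (here analytically known) posterior
n, ybar = y.size, y.mean()
post_mean, post_var = n * ybar / (n + 1), 1.0 / (n + 1)
post_samples = rng.normal(post_mean, np.sqrt(post_var), size=5000).reshape(-1, 1)

# Fit a Gaussian mixture to the posterior samples and use it as the proposal
gm = GaussianMixture(n_components=2, random_state=0).fit(post_samples)
prop, _ = gm.sample(20000)
theta = prop.ravel()
log_w = log_lik(theta) + log_prior(theta) - gm.score_samples(prop)
log_evidence = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print("importance-sampling estimate of the log evidence:", log_evidence)
```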
Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei
2012-01-01
Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
A Well-Resolved Phylogeny of the Trees of Puerto Rico Based on DNA Barcode Sequence Data
Muscarella, Robert; Uriarte, María; Erickson, David L.; Swenson, Nathan G.; Zimmerman, Jess K.; Kress, W. John
2014-01-01
Background The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. Methodology/principal findings We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. Conclusions/significance With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and composition of Puerto Rican forests, (ii) provide insight into Caribbean biogeography, and (iii) aid in incorporating evolutionary history into conservation planning. PMID:25386879
A well-resolved phylogeny of the trees of Puerto Rico based on DNA barcode sequence data.
Muscarella, Robert; Uriarte, María; Erickson, David L; Swenson, Nathan G; Zimmerman, Jess K; Kress, W John
2014-01-01
The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and composition of Puerto Rican forests, (ii) provide insight into Caribbean biogeography, and (iii) aid in incorporating evolutionary history into conservation planning.
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
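A toy rejection-ABC sketch of signature-based calibration: a one-parameter linear reservoir stands in for the hydrological model, the flow duration curve is the signature, and parameters are kept when the simulated FDC is close to the observed one. The model, prior, distance, and tolerance are all illustrative assumptions; the paper derives the signature likelihood from the streamflow likelihood rather than using an ad-hoc distance.

```python
import numpy as np

rng = np.random.default_rng(4)

def toy_model(k, n_days=365):
    """Toy linear-reservoir streamflow series with recession constant k (illustrative)."""
    rain = rng.gamma(0.3, 10.0, size=n_days)
    q, s = np.empty(n_days), 0.0
    for t in range(n_days):
        s += rain[t]
        q[t] = k * s
        s -= q[t]
    return q

def fdc(q, probs=np.linspace(0.01, 0.99, 50)):
    """Flow duration curve: flow quantiles as a function of exceedance probability."""
    return np.quantile(q, 1.0 - probs)

obs_fdc = fdc(toy_model(k=0.2))            # pretend these are the observed signatures

# ABC rejection: keep parameters whose simulated FDC is close to the observed one
draws = rng.uniform(0.01, 0.9, size=2000)  # prior on k
dist = np.array([np.sqrt(np.mean((fdc(toy_model(k)) - obs_fdc) ** 2)) for k in draws])
eps = np.quantile(dist, 0.02)              # keep the closest 2% of draws
posterior_k = draws[dist <= eps]
print(posterior_k.mean(), posterior_k.std())
```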
A Selective Overview of Variable Selection in High Dimensional Feature Space
Fan, Jianqing
2010-01-01
High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions about what limits of dimensionality such methods can handle, what role penalty functions play, and what statistical properties they possess rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
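As one concrete instance of penalized likelihood in the p >> n setting, the sketch below minimizes a squared-error loss with an L1 penalty by cyclic coordinate descent with soft-thresholding; the non-concave penalties (e.g. SCAD) emphasized in the article would replace the soft-threshold step and are not implemented here. Data and penalty level are illustrative.

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                               # running residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                # remove feature j's contribution
            b[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(5)
n, p = 100, 500                                # p >> n: high dimensional setting
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)
b_hat = lasso_coordinate_descent(X, y, lam=0.1)
print("selected features:", np.flatnonzero(np.abs(b_hat) > 1e-8))
```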
Welmer, Anna-Karin; Liang, Yajun; Angleman, Sara; Santoni, Giola; Yan, Zhongrui; Cai, Chuanzhu; Qiu, Chengxuan
2014-08-01
Vascular risk factors such as hypertension and obesity have been associated with physical limitations among older adults. The purpose of this study is to examine whether individual and aggregated vascular risk factors (VRFs) are associated with functional dependence and to what extent carotid atherosclerosis (CAS) or peripheral artery disease (PAD) may mediate the possible associations of aggregated VRFs with functional dependence. This cross-sectional study included 1,451 community-living participants aged ≥60 years in the Confucius Hometown Aging Project of China. Data on demographic features, hypertension, high total cholesterol, obesity, smoking, physical inactivity, diabetes, CAS, PAD, and cardiovascular diseases (CVDs) were collected through an interview, a clinical examination, and laboratory tests. Functional dependence was defined as being dependent in at least one activity in the personal or instrumental activities of daily living. Data were analyzed using multiple logistic models controlling for potential confounders. We used the mediation model to explore the potential mediating effect of CAS and PAD on the associations of aggregated VRFs with functional dependence. Of the 1,451 participants, 222 (15.3%) had functional dependence. The likelihood of functional dependence increased linearly with increasing number of VRFs (hypertension, high total cholesterol, abdominal obesity, and physical inactivity) (p for trend <0.002). Mediation analysis showed that, controlling for demographics and CVDs, up to 11% of the total association of functional dependence with the clustering of VRFs was mediated by CAS and PAD. Aggregation of multiple VRFs is associated with an increased likelihood of functional dependence among Chinese older adults; the association is partially mediated by carotid and peripheral artery atherosclerosis independently of CVDs.
Cui, Jiangyu; Zhou, Yumin; Tian, Jia; Wang, Xinwang; Zheng, Jingping; Zhong, Nanshan; Ran, Pixin
2012-12-01
COPD is often underdiagnosed in primary care settings where spirometry is unavailable. This study aimed to develop a simple, economical and applicable model for COPD screening in those settings. First we established a discriminant function model based on Bayes' rule by stepwise discriminant analysis, using data from 243 COPD patients and 112 non-COPD subjects from our COPD survey in urban and rural communities and local primary care settings in Guangdong Province, China. We then used this model to discriminate COPD in an additional 150 subjects (50 non-COPD and 100 COPD) who had been recruited by the same methods as those used to establish the model. All participants completed pre- and post-bronchodilator spirometry and questionnaires. COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criteria. The sensitivity and specificity of the discriminant function model were assessed. The established discriminant function model included nine variables: age, gender, smoking index, body mass index, occupational exposure, living environment, wheezing, cough and dyspnoea. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, accuracy and error rate of the function model to discriminate COPD were 89.00%, 82.00%, 4.94, 0.13, 86.66% and 13.34%, respectively. The accuracy and Kappa value of the function model to predict COPD stages were 70% and 0.61 (95% CI, 0.50 to 0.71). This discriminant function model may be used for COPD screening in primary care settings in China as an alternative option to spirometry.
Predictors of healthy ageing: public health policy targets.
Sowa, Agnieszka; Tobiasz-Adamczyk, Beata; Topór-Mądry, Roman; Poscia, Andrea; la Milia, Daniele Ignazio
2016-09-05
The public health policy agenda oriented towards healthy ageing has become the highest priority for the European countries. The article discusses the healthy ageing concept and its possible determinants, with the aim of identifying behavioral patterns related to healthy ageing in selected European countries. Healthy ageing is assessed based on a composite indicator of self-assessed health, functional capabilities and life meaningfulness. Logistic regression models are used to assess the impact of a healthy lifestyle index, a psycho-social index and socio-economic status on the probability of healthy ageing (i.e. being healthy at older age). The lifestyle and psychosocial indexes are created as a sum of behaviors that might be important for healthy ageing. Models are analyzed for three age groups of older people: 60-67, 68-79 and 80+, as well as for three groups of countries representing Western, Southern and Central-Eastern Europe. The lifestyle index covering vigorous and moderate physical activity, consumption of vegetables and fruits, regular consumption of meals and adequate consumption of liquids is positively related to healthy ageing, increasing the likelihood of being healthy at older age with each of the items specified in the index. The score of the index is found to be significantly higher (on average by 1 point for men and 1.1 for women) for individuals ageing healthily. The psychosocial index covering employment, outdoor social participation, indoor activities and life satisfaction is also found to be significantly related to health, increasing the likelihood of healthy ageing with each point of the index score. There is an educational gradient in healthy ageing in the population below the age of 68 and in Southern and Central-Eastern European countries. In Western European countries, income is positively related to healthy ageing for females. Stimulating physical activity and adequate nutrition are crucial domains for a well-defined public health policy oriented towards healthy ageing. The psychosocial elements related to social participation, engagement, networking and life satisfaction are also found to be beneficial to health.
Some New Estimation Methods for Weighted Regression When There are Possible Outliers.
1985-01-01
about influential points, and to add to our understanding of the structure of the data. In Section 2 we show, by considering the influence function, why... The influence function (Hampel, 1968, 1974) for the maximum likelihood estimator is proportional to (ε² - 1)h(x), where ε = (y - x'β)exp[-h'(x)θ], and is thus unbounded. Since the influence function for the MLE is quadratic in the residual ε, in theory a point with a sufficiently large residual can have an arbitrarily large influence on the estimates.
Modularity-like objective function in annotated networks
NASA Astrophysics Data System (ADS)
Xie, Jia-Rong; Wang, Bing-Hong
2017-12-01
We ascertain the modularity-like objective function whose optimization is equivalent to the maximum likelihood in annotated networks. We demonstrate that the modularity-like objective function is a linear combination of modularity and conditional entropy. In contrast with statistical inference methods, in our method, the influence of the metadata is adjustable; when its influence is strong enough, the metadata can be recovered. Conversely, when it is weak, the detection may correspond to another partition. Between the two, there is a transition. This paper provides a concept for expanding the scope of modularity methods.
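A direct computation of the two terms named above, with an illustrative weight on the entropy term; the sign convention and the value of the weight are assumptions for this sketch, not the combination derived in the paper.

```python
import numpy as np
from collections import Counter

def modularity(adj, comm):
    """Newman modularity of a hard partition (adj: symmetric 0/1 matrix)."""
    k = adj.sum(axis=1)
    two_m = adj.sum()
    same = (comm[:, None] == comm[None, :])
    return np.sum((adj - np.outer(k, k) / two_m) * same) / two_m

def conditional_entropy(meta, comm):
    """H(metadata | community) in nats, averaged over nodes."""
    n = len(meta)
    h = 0.0
    for c, nc in Counter(comm).items():
        counts = Counter(m for m, g in zip(meta, comm) if g == c)
        h -= sum(v / n * np.log(v / nc) for v in counts.values())
    return h

# Toy annotated network: two triangles joined by one edge, metadata matches them
adj = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1
comm = np.array([0, 0, 0, 1, 1, 1])
meta = ["x", "x", "x", "y", "y", "y"]
alpha = 0.5                                    # illustrative metadata weight
objective = modularity(adj, comm) - alpha * conditional_entropy(meta, comm)
print(objective)
```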
Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei
2015-01-21
Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-based sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to maximize the likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary can be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte Carlo simulations and real data.
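The Poisson maximum-likelihood core of such reconstructions can be sketched with the classic MLEM multiplicative update; the dictionary-sparsity penalty and the CT-trained or adaptively learned dictionary from the paper are omitted, and the system matrix and activity image below are toy stand-ins.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM iterations for Poisson data y ~ Poisson(A @ x), with x >= 0."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        ybar = A @ x
        x *= (A.T @ (y / np.maximum(ybar, 1e-12))) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(6)
A = rng.uniform(size=(200, 64))              # toy system matrix (detectors x voxels)
x_true = rng.gamma(2.0, 1.0, size=64)        # toy activity image
y = rng.poisson(A @ x_true)
x_hat = mlem(A, y)
```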
Anatomy of the ATLAS diboson anomaly
NASA Astrophysics Data System (ADS)
Allanach, B. C.; Gripaios, Ben; Sutherland, Dave
2015-09-01
We perform a general analysis of new physics interpretations of the recent ATLAS diboson excesses over standard model expectations in LHC Run I collisions. First, we estimate a likelihood function in terms of the truth signal in the WW, WZ, and ZZ channels, finding that the maximum has zero events in the WZ channel, though the likelihood is sufficiently flat to allow other scenarios. Second, we survey the possible effective field theories containing the standard model plus a new resonance that could explain the data, identifying two possibilities, viz. a vector that is either a left- or right-handed SU(2) triplet. Finally, we compare these models with other experimental data and determine the parameter regions in which they provide a consistent explanation.
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
Local Solutions in the Estimation of Growth Mixture Models
ERIC Educational Resources Information Center
Hipp, John R.; Bauer, Daniel J.
2006-01-01
Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…
USDA-ARS?s Scientific Manuscript database
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
USDA-ARS?s Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
Effects of Differential Family Acculturation on Latino Adolescent Substance Use
ERIC Educational Resources Information Center
Martinez, Charles R., Jr.
2006-01-01
This study examined links between parent-youth differential acculturation and youth substance-use likelihood in a sample of 73 recently immigrated Latino families with middle-school-aged youth. Multiple agents were utilized to assess family functioning and youth outcomes. Findings suggested that a greater level of differential acculturation…
Local Influence Analysis of Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Lee, Sik-Yum; Tang, Nian-Sheng
2004-01-01
By regarding the latent random vectors as hypothetical missing data and based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm, we investigate assessment of local influence of various perturbation schemes in a nonlinear structural equation model. The basic building blocks of local influence analysis…
ERIC Educational Resources Information Center
Formann, Anton K.
1986-01-01
It is shown that for equal parameters explicit formulas exist, facilitating the application of the Newton-Raphson procedure to estimate the parameters in the Rasch model and related models according to the conditional maximum likelihood principle. (Author/LMO)
Data Mining and Knowledge Management in Higher Education -Potential Applications.
ERIC Educational Resources Information Center
Luan, Jing
This paper introduces a new decision support tool, data mining, in the context of knowledge management. The most striking features of data mining techniques are clustering and prediction. The clustering aspect of data mining offers comprehensive characteristics analysis of students, while the predicting function estimates the likelihood for a…
Age-Related Macular Degeneration.
Mehta, Sonia
2015-09-01
Age-related macular degeneration (AMD) is the leading cause of vision loss in the elderly. AMD is diagnosed based on characteristic retinal findings in individuals older than 50. Early detection and treatment are critical in increasing the likelihood of retaining good and functional vision. Copyright © 2015 Elsevier Inc. All rights reserved.
Natale, Livia C; Rodrigues, Marcela C; Alania, Yvette; Chiari, Marina D S; Boaro, Leticia C C; Cotrim, Marycel; Vega, Oscar; Braga, Roberto R
2018-08-01
to verify the effect of the addition of dicalcium phosphate dihydrate (DCPD) particles functionalized with di- or triethylene glycol dimethacrylate (DEGDMA or TEGDMA) on the degree of conversion (DC), post-gel shrinkage (PS), mechanical properties, and ion release of experimental composites. Four composites were prepared containing a BisGMA/TEGDMA matrix and 60 vol% of fillers. The positive control contained only barium glass fillers, while in the other composites 15 vol% of the barium was replaced by DCPD. Besides the functionalized particles, non-functionalized DCPD was also tested. DC after 24 h (n = 3) was determined by FTIR spectroscopy. The strain gage method was used to obtain PS 5 min after photoactivation (n = 5). Flexural strength and modulus (n = 10) were calculated based on the biaxial flexural test results, after specimen storage for 24 h or 60 days in water. The same storage times were used for fracture toughness testing (FT, n = 10). Calcium and phosphate release up to 60 days was quantified by ICP-OES (n = 3). Data were analyzed by ANOVA/Tukey test (alpha: 5%). Composites containing functionalized DCPD presented higher DC than the control (p < 0.001). The material containing DEGDMA-functionalized particles showed higher PS than the other composites (p < 0.001). After 60 days, only the composite with DEGDMA-functionalized DCPD presented fracture strength similar to the control, while for flexural modulus only the composite with TEGDMA-functionalized particles was lower than the control (p < 0.001). FT of all composites containing DCPD was higher than the control after 60 days (p < 0.005). Calcium release was higher for the composite with non-functionalized DCPD at 15 days and no significant reductions were observed for composites with functionalized DCPD during the observation period (p < 0.001). For all the tested composites, phosphate release was higher at 15 days than in the subsequent periods, and no difference among them was recorded at 45 and 60 days (p < 0.001). DCPD functionalization affected all the studied variables. The composite with DEGDMA-functionalized particles was the only material with strength similar to the control after 60 days in water; however, it also presented the highest shrinkage. The presence of DCPD improved FT, regardless of functionalization. DCPD functionalization reduced ion release only during the first 15 days. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ergonomic Redesign of an Industrial Control Panel.
Raeisi, S; Osqueizadeh, R; Maghsoudipour, M; Jafarpisheh, A S
2016-07-01
Operator's role in industrial control centers takes place in time, which is one of the most important determinants of whether an expected action is going to be successful or not. In certain situations, due to the complex nature of the work, the existing interfaces and already prepared procedures do not meet the dynamic requirements of operator's cognitive demands, making the control tasks unnecessarily difficult. This study was conducted to identify ergonomic issues with a specific industrial control panel, and redesign its layout and elements to enhance its usability. Task and link analysis methodologies were implemented. All essential functions and supporting operations were identified at the required trivial levels. Next, the weight of any possible link between the elements of the panel was computed as a composite index of frequency and importance. Finally, all components were rearranged within a new layout, and a computerized mockup was generated. A total of 8 primary tasks was identified, including 4 system failure handling tasks, switching between manual and automated modes, and 3 types of routine vigilance and control tasks. These tasks were broken down into 28 functions and 145 supporting operations, accordingly. Higher link values were observed between hand rest position and 2 elements. Also, 6 other components showed robust linkages. In conclusion, computer modeling can reduce the likelihood of accidents and near misses in industrial control rooms by considering the operators' misperception or mental burden and correcting poor design of the panels and inappropriate task allocation.
See, Emily J; Hawley, Carmel M; Cho, Yeoungjee; Toussaint, Nigel D; Agar, John Wm; Pascoe, Elaine M; Lim, Wai H; Francis, Ross S; Collins, Michael G; Johnson, David W
2018-01-08
Differences in early graft function between kidney transplant recipients previously managed with either haemodialysis (HD) or peritoneal dialysis are well described. However, only two single-centre studies have compared graft and patient outcomes between extended hour and conventional HD patients, with conflicting results. This study compared the outcomes of all extended hour (≥24 hours/week) and conventional HD patients transplanted in Australia and New Zealand between 2000 and 2014. The primary outcome was delayed graft function (DGF), defined in an ordinal manner as either a spontaneous fall in serum creatinine of less than 10% within 24 hours, or the need for dialysis within 72 hours following transplantation. Secondary outcomes included the requirement for dialysis within 72 hours post-transplant, acute rejection, estimated glomerular filtration rate at 12 months, death-censored graft failure, all-cause and cardiovascular mortality, and a composite of graft failure and mortality. A total of 4,935 HD patients (378 extended hour HD, 4,557 conventional HD) received a kidney transplant during the study period. Extended hour HD was associated with an increased likelihood of DGF compared with conventional HD (adjusted proportional odds ratio 1.33; 95% confidence interval 1.06-1.67). There was no significant difference between extended hour and conventional HD in terms of any of the secondary outcomes. Compared to conventional HD, extended hour HD was associated with DGF, although long-term graft and patient outcomes were not different. This article is protected by copyright. All rights reserved.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration is given to the estimation of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
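As a rough illustration of the first technique, the Python sketch below applies a Robbins-Monro-style stochastic-approximation update to the parameters of an assumed exponential correlation model, using point estimates computed from successive records. The model form, step-size schedule, and simulated data are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: recursively update the parameters of an assumed correlation
# model R(k) = sigma2 * exp(-alpha * k) so as to reduce the squared error
# between the model and point estimates from each new data record.

import numpy as np

def point_estimate_autocorr(record, max_lag):
    """Biased sample autocorrelation of one zero-mean record at lags 0..max_lag."""
    n = len(record)
    return np.array([np.dot(record[:n - k], record[k:]) / n for k in range(max_lag + 1)])

def model(theta, lags):
    """Assumed parametric form: R(k) = sigma2 * exp(-alpha * k)."""
    sigma2, alpha = theta
    return sigma2 * np.exp(-alpha * lags)

def sa_update(theta, record, step, max_lag=20):
    """One stochastic-approximation step against the squared-error gradient."""
    lags = np.arange(max_lag + 1)
    resid = point_estimate_autocorr(record, max_lag) - model(theta, lags)
    sigma2, alpha = theta
    grad_sigma2 = -2.0 * np.sum(resid * np.exp(-alpha * lags))
    grad_alpha = 2.0 * np.sum(resid * sigma2 * lags * np.exp(-alpha * lags))
    # Clamp to keep both parameters positive.
    return np.maximum(theta - step * np.array([grad_sigma2, grad_alpha]), 1e-3)

rng = np.random.default_rng(0)
theta = np.array([0.5, 0.5])               # initial guess (sigma^2, alpha)
for i in range(1, 201):                     # successive records
    record = rng.standard_normal(512)       # stand-in for observed data
    theta = sa_update(theta, record, step=1.0 / i)
print(theta)
```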
Menz, Hylton B.; Dufour, Alyssa B.; Riskowski, Jody L.; Hillstrom, Howard J.; Hannan, Marian T.
2014-01-01
Objective To examine the associations of foot posture and foot function to foot pain. Methods Data were collected on 3,378 members of the Framingham Study who completed foot examinations in 2002–2008. Foot pain (generalized and at six locations) was based on the response to the question “On most days, do you have pain, aching or stiffness in either foot?” Foot posture was categorized as normal, planus or cavus using static pressure measurements of the arch index. Foot function was categorized as normal, pronated or supinated using the center of pressure excursion index from dynamic pressure measurements. Sex-specific multivariate logistic regression models were used to examine the effect of foot posture and function on generalized and location-specific foot pain, adjusting for age and weight. Results Planus foot posture was significantly associated with an increased likelihood of arch pain in men (odds ratio [OR] 1.38, 95% confidence interval [CI] 1.01 – 1.90), while cavus foot posture was protective against ball of foot pain (OR 0.74, 95% CI 0.55 – 1.00) and arch pain (OR 0.64, 95% CI 0.48 – 0.85) in women. Pronated foot function was significantly associated with an increased likelihood of generalized foot pain (OR 1.28, 95% CI 1.04 – 1.56) and heel pain (OR 1.54, 95% CI 1.04 – 2.27) in men, while supinated foot function was protective against hindfoot pain in women (OR 0.74, 95% CI 0.55 – 1.00). Conclusion Planus foot posture and pronated foot function are associated with foot symptoms. Interventions that modify abnormal foot posture and function may therefore have a role in the prevention and treatment of foot pain. PMID:23861176
Drake, Birger; Nádai, Béla
1970-03-01
An empirical measure of viscosity, which is often far from being a linear function of composition, was used together with refractive index to build up a function which bears a linear relationship to the composition of tomato paste-water-sucrose mixtures. The new function can be used directly for rapid composition control by linear vector-vector transformation.
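As a hedged illustration of such a linear composition calculation, the sketch below inverts a small linear model that maps composition to the two measured quantities. All coefficients are hypothetical and chosen only to show the vector transformation, not the calibrations reported in the paper.

```python
# Illustrative sketch of "composition control by linear vector-vector
# transformation": if two measured quantities are each (approximately) linear
# in the two unknown composition fractions, a single matrix inverse maps
# measurements back to composition. All coefficients below are hypothetical.

import numpy as np

# measurement = A @ composition + b, composition = (paste fraction, sucrose fraction)
A = np.array([[2.1, 0.8],     # assumed sensitivities of the viscosity-derived function
              [0.05, 0.12]])  # assumed sensitivities of refractive index
b = np.array([0.3, 1.333])    # assumed values for pure water

A_inv = np.linalg.inv(A)

def composition_from_measurements(f_viscosity, refractive_index):
    """Invert the linear model to estimate (paste, sucrose) fractions."""
    return A_inv @ (np.array([f_viscosity, refractive_index]) - b)

print(composition_from_measurements(1.0, 1.36))
```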
A NEW METHOD FOR DERIVING THE STELLAR BIRTH FUNCTION OF RESOLVED STELLAR POPULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennaro, M.; Brown, T. M.; Gordon, K. D.
We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Our method avoids binning stars in the color–magnitude diagram and uses the whole likelihood function for each data point; combining the individual likelihoods allows the computation of the posterior probability for the population's SBF. Within the proposed framework it is possible to include nuisance parameters, such as distance and extinction, by specifying their prior distributions and marginalizing over them. The aim of this paper is to assess the validity of this new approach under a range of assumptions, using only simulated data. Forthcoming work will show applications to real data. Although it has a broad scope of possible applications, we have developed this method to study multi-band Hubble Space Telescope observations of the Milky Way Bulge. Therefore we will focus on simulations with characteristics similar to those of the Galactic Bulge.
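For readers unfamiliar with the PPP framework, the standard log-likelihood it builds on can be sketched as follows (notation assumed here, not necessarily the paper's): lambda_theta denotes the expected density of stars in the multi-band measurement space implied by a candidate SBF after folding in the error and completeness models.

```latex
% Standard log-likelihood of an inhomogeneous Poisson point process over the
% measurement space M (multi-band magnitudes); lambda_theta is the expected
% stellar density implied by the SBF after convolution with the error and
% completeness models.
\log \mathcal{L}(\theta) \;=\; \sum_{i=1}^{N} \log \lambda_\theta(\mathbf{m}_i)
\;-\; \int_{\mathcal{M}} \lambda_\theta(\mathbf{m})\,\mathrm{d}\mathbf{m}
```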
Enabling complex queries to drug information sources through functional composition.
Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier
2013-01-01
Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (API) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules were developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs.
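The interoperability idea can be sketched in a few lines of Python; the function names, semantic types, and return values below are hypothetical placeholders rather than the actual RxNorm or NDF-RT API signatures.

```python
# Conceptual sketch of functional composition between two drug APIs: each
# function is annotated with semantic input/output types, and two functions are
# chained only when the output type of the first matches the input type of the
# second. Names, types, and returned values are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ApiFunction:
    name: str
    input_type: str
    output_type: str
    call: Callable[[str], list[str]]

def composable(f: ApiFunction, g: ApiFunction) -> bool:
    """Interoperability rule: f's output type must be g's expected input type."""
    return f.output_type == g.input_type

def compose(f: ApiFunction, g: ApiFunction) -> Callable[[str], list[str]]:
    if not composable(f, g):
        raise ValueError(f"{f.name} -> {g.name} is not an interoperable pair")
    return lambda x: [z for y in f.call(x) for z in g.call(y)]

# Toy stand-ins for terminology lookups (not real service calls or identifiers)
find_rxcui = ApiFunction("findRxcuiByName", "drug_name", "rxcui",
                         lambda name: ["198440"])
get_interactions = ApiFunction("getInteractions", "rxcui", "interaction",
                               lambda rxcui: ["warfarin interaction"])

query = compose(find_rxcui, get_interactions)
print(query("acetaminophen"))
```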
NASA Astrophysics Data System (ADS)
Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel
2018-04-01
We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).
Hollenbeak, Christopher S
2005-10-15
While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
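For reference, the Box-Cox family referred to above can be written as follows, so that lambda = 0 recovers the natural-log (semi-log) model and lambda = -1 corresponds to a reciprocal-type transformation:

```latex
% The Box-Cox transformation family; lambda = 0 gives the usual semi-log cost
% model, while lambda = -1, the value favoured by the Bayes factors here,
% is a reciprocal-type transformation.
y^{(\lambda)} \;=\;
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[1ex]
\log y, & \lambda = 0.
\end{cases}
```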
Whatman, Chris; Hing, Wayne; Hume, Patria
2012-05-01
To investigate physiotherapist agreement in rating movement quality during lower extremity functional tests using two visual rating methods and physiotherapists with differing clinical experience. Clinical measurement. Six healthy individuals were rated by 44 physiotherapists. These raters were in three groups (inexperienced, novice, experienced). Video recordings of all six individuals performing four lower extremity functional tests were visually rated (dichotomous or ordinal scale) using two rating methods (overall or segment) on two occasions separated by 3-4 weeks. Intra and inter-rater agreement for physiotherapists was determined using overall percentage agreement (OPA) and the first order agreement coefficient (AC1). Intra-rater agreement for overall and segment methods ranged from slight to almost perfect (OPA: 29-96%, AC1: 0.01 to 0.96). AC1 agreement was better in the experienced group (84-99% likelihood) and for dichotomous rating (97-100% likelihood). Inter-rater agreement ranged from fair to good (OPA: 45-79%; AC1: 0.22-0.71). AC1 agreement was not influenced by clinical experience but was again better using dichotomous rating. Physiotherapists' visual rating of movement quality during lower extremity functional tests resulted in slight to almost perfect intra-rater agreement and fair to good inter-rater agreement. Agreement improved with increased level of clinical experience and use of dichotomous rating. Copyright © 2011 Elsevier Ltd. All rights reserved.
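For context, one common form of the first-order agreement coefficient (Gwet's AC1) for two raters and a dichotomous scale is sketched below; the notation is assumed here rather than taken from the paper.

```latex
% Gwet's first-order agreement coefficient (AC1), two-rater dichotomous form:
% p_a is the observed proportion of agreement, p_{1.} and p_{.1} are the two
% raters' marginal proportions for the positive category, and \hat{\pi} is
% their mean.
AC_1 \;=\; \frac{p_a - p_e}{1 - p_e},
\qquad
p_e \;=\; 2\,\hat{\pi}\,(1 - \hat{\pi}),
\qquad
\hat{\pi} \;=\; \frac{p_{1\cdot} + p_{\cdot 1}}{2}
```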
Chen, Minhui; Wang, Jiying; Wang, Yanping; Wu, Ying; Fu, Jinluan; Liu, Jian-Feng
2018-05-18
Currently, genome-wide scans for positive selection signatures in commercial breeds have been investigated. However, few studies have focused on selection footprints of indigenous breeds. The Laiwu pig is an invaluable Chinese indigenous pig breed with an extremely high proportion of intramuscular fat (IMF), and an excellent model for detecting footprints of natural and artificial selection for fat deposition in muscle. In this study, based on GeneSeek Genomic Profiler Porcine HD data, three complementary methods, FST, iHS (integrated haplotype homozygosity score) and CLR (composite likelihood ratio), were implemented to detect selection signatures across the whole genome of Laiwu pigs. In total, 175 candidate selected regions were obtained by at least two of the three methods, covering 43.75 Mb of the genome and corresponding to 1.79% of the genome sequence. Gene annotation of the selected regions revealed a list of functionally important genes for feed intake and fat deposition, reproduction, and immune response. In particular, in accordance with the phenotypic features of Laiwu pigs, we identified among the candidate genes several genes, NPY1R, NPY5R, PIK3R1 and JAKMIP1, involved in the actions of two sets of neurons that are central regulators in maintaining the balance between food intake and energy expenditure. Our results identified a number of regions showing signatures of selection, as well as a list of functional candidate genes with potential effects on phenotypic traits, especially fat deposition in muscle. Our findings provide insights into the mechanisms of artificial selection for fat deposition and further facilitate follow-up functional studies.
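As a minimal illustration of one of the three statistics, the sketch below computes Wright's FST for a single biallelic SNP from allele frequencies in two populations. The frequencies are made-up values; genome scans such as this study's use sample-size-corrected estimators applied over sliding windows.

```python
# Minimal illustration of F_ST for one biallelic SNP across two populations,
# computed from reference-allele frequencies as (H_T - H_S) / H_T.
# The example frequencies are hypothetical.

def fst_two_pops(p1: float, p2: float) -> float:
    """Wright's F_ST from reference-allele frequencies in two populations."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                             # pooled expected heterozygosity
    h_s = (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2)) / 2.0   # mean within-population heterozygosity
    return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t

print(fst_two_pops(0.9, 0.2))   # strongly differentiated SNP
print(fst_two_pops(0.5, 0.5))   # undifferentiated SNP -> 0.0
```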
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, yet remains at least as efficient as those estimators for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite-sample performance of various methods, along with results from the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
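For orientation, one standard unbiased estimating function of this general type, written for a trial with known assignment probability pi, is sketched below. It is illustrative only and not necessarily either of the two functions proposed in the paper.

```latex
% An augmented estimating function for the average treatment effect Delta in a
% randomized trial with known assignment probability pi. m_1 and m_0 are
% treatment-specific working regressions; the function has mean zero at the
% true Delta whether or not m_1 and m_0 are correctly specified.
\psi(Y, A, X; \Delta) \;=\;
\frac{A\,\{Y - m_1(X)\}}{\pi} \;-\; \frac{(1 - A)\,\{Y - m_0(X)\}}{1 - \pi}
\;+\; m_1(X) \;-\; m_0(X) \;-\; \Delta
```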
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
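For comparison, the traditional conditional-logistic contribution for a case-crossover subject, which the fuller likelihood above relaxes, can be written as follows (notation assumed here):

```latex
% Conditional-logistic likelihood for case-crossover data: x_{i0} is the
% exposure in subject i's case (event) period and x_{i1},...,x_{iM} are the
% exposures in the matched referent periods.
L(\beta) \;=\; \prod_{i=1}^{n}
\frac{\exp\!\left(x_{i0}^{\top}\beta\right)}
     {\sum_{j=0}^{M} \exp\!\left(x_{ij}^{\top}\beta\right)}
```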
Ramírez-Vélez, Robinson; García-Hermoso, Antonio; Agostinis-Sobrinho, Cesar; Mota, Jorge; Santos, Rute; Correa-Bautista, Jorge Enrique; Amaya-Tambo, Deisy Constanza; Villa-González, Emilio
2017-09-01
To evaluate the association between cycling to/from school and body composition, physical fitness, and metabolic syndrome among a sample of Colombian children and adolescents. During the 2014-2015 school year, we examined a cross-sectional component of the Association for Muscular Strength with Early Manifestation of Cardiovascular Disease Risk Factors among Colombian Children and Adolescents (FUPRECOL) study. Participants included 2877 youths (54.5% girls) from Bogota, Colombia. A self-reported questionnaire was used to measure the frequency and mode of commuting to school. Four components of physical fitness were measured: (1) anthropometric (height, weight, body mass index, and waist circumference); (2) musculoskeletal (handgrip and standing long jump test); (3) motor (speed-agility test; 4 × 10-meter shuttle run); and (4) cardiorespiratory (20-m shuttle run test [20mSRT]). The prevalence of metabolic syndrome was determined using the definitions provided by the International Diabetes Federation. Twenty-three percent of the sample reported commuting by cycle. Active commuting boys had a lower likelihood of having an unhealthy 4 × 10 m value (OR, 0.72; 95% CI, 0.53-0.98; P = .038) compared with the reference group (passive commuters). Active commuting girls showed a lower likelihood of having an unhealthy 20mSRT value (OR, 0.81; 95% CI, 0.56-0.99; P = .047) and metabolic syndrome (OR, 0.61; 95% CI, 0.35-0.99; P = .048) compared with passive commuters. Regular cycling to school may be associated with better physical fitness and a lower incidence of metabolic syndrome than passive transport, especially in girls. Copyright © 2017 Elsevier Inc. All rights reserved.
Communicating likelihoods and probabilities in forecasts of volcanic eruptions
NASA Astrophysics Data System (ADS)
Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas
2014-02-01
The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear as to how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being of less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in an underestimate of the likelihood of an event occurring ‘today’ leading to potentially inappropriate action choices. We thus present some initial guidelines for communicating such eruption forecasts.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
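A much-simplified sketch of this simulation-based parametric likelihood approximation inside a Metropolis-Hastings sampler is given below. The toy "forest model", its summary statistics, and the tuning constants are stand-ins and not the FORMIND implementation.

```python
# Simplified sketch of a parametric (Gaussian) likelihood approximation built
# from stochastic simulations, used inside Metropolis-Hastings: at each proposed
# parameter value the simulator is run several times, a Gaussian is fitted to
# the replicate summary statistics, and the observed summaries are evaluated
# under that Gaussian. The toy simulator below is a stand-in, not FORMIND.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

def simulate_summaries(theta, n_rep=20):
    """Stand-in stochastic simulator returning replicate summary statistics."""
    growth, mortality = theta
    basal_area = growth / mortality + 0.1 * rng.standard_normal(n_rep)
    stem_count = 10.0 * growth + 0.5 * rng.standard_normal(n_rep)
    return np.column_stack([basal_area, stem_count])

def approx_log_lik(theta, observed):
    """Fit a Gaussian to simulated summaries and evaluate the observed ones."""
    sims = simulate_summaries(theta)
    mean, cov = sims.mean(axis=0), np.cov(sims, rowvar=False) + 1e-6 * np.eye(2)
    return multivariate_normal.logpdf(observed, mean=mean, cov=cov)

observed = np.array([2.0, 5.0])             # synthetic "field data" summaries
theta = np.array([0.5, 0.5])
log_lik = approx_log_lik(theta, observed)
chain = []
for _ in range(2000):                        # Metropolis-Hastings, flat prior on theta > 0
    proposal = theta + 0.05 * rng.standard_normal(2)
    if np.all(proposal > 0):
        prop_log_lik = approx_log_lik(proposal, observed)
        if np.log(rng.uniform()) < prop_log_lik - log_lik:
            theta, log_lik = proposal, prop_log_lik
    chain.append(theta.copy())
print(np.mean(chain[500:], axis=0))
```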
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.