Sample records for background likelihood approximation

  1. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.
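
    The factorization above is specific to the paper; as a rough, hedged illustration of what a Gaussian likelihood profile in r involves, the sketch below simply brute-forces log L(r) on a grid for a toy zero-mean Gaussian model whose pixel covariance is assumed to split as C(r) = r*C_tensor + C_rest. All names and toy covariances are invented, and the paper's factorization avoids exactly this per-r cost.

    ```python
    # Minimal sketch (not the authors' factorization): brute-force Gaussian
    # log-likelihood profile in the tensor-to-scalar ratio r, assuming the
    # pixel covariance decomposes as C(r) = r * C_tensor + C_rest.
    import numpy as np

    def loglike_profile(data, C_tensor, C_rest, r_grid):
        """Return log L(r) for a zero-mean Gaussian model at each r in r_grid."""
        out = []
        for r in r_grid:
            C = r * C_tensor + C_rest
            _, logdet = np.linalg.slogdet(C)
            chi2 = data @ np.linalg.solve(C, data)
            out.append(-0.5 * (logdet + chi2))
        return np.array(out)

    rng = np.random.default_rng(0)
    npix = 50                                    # toy "map" with 50 pixels
    C_tensor, C_rest = 0.1 * np.eye(npix), np.eye(npix)
    data = rng.multivariate_normal(np.zeros(npix), 0.05 * C_tensor + C_rest)
    r_grid = np.linspace(0.0, 0.2, 21)
    ll = loglike_profile(data, C_tensor, C_rest, r_grid)
    print("r with the highest log-likelihood:", r_grid[np.argmax(ll)])
    ```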

  2. Cosmic microwave background likelihood approximation for banded probability distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.

    We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region on cosmological parameters is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
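
    The identity the construction rests on is the standard factorization of a first-order Markov chain into uni- and bivariate marginals; in generic notation (not the CMB-specific expression):

    ```latex
    % Each X_i depends conditionally only on its immediate predecessor X_{i-1}:
    p(x_1,\dots,x_n) = p(x_1)\prod_{i=2}^{n} p(x_i \mid x_{i-1})
                     = \frac{\prod_{i=2}^{n} p(x_{i-1},x_i)}{\prod_{i=2}^{n-1} p(x_i)} .
    ```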

  3. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    A statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photon-counting statistic is derived for a large time-bandwidth product. The detection of a specified optical image in the presence of background light by hypothesis testing is discussed. The ideal detector, based on the likelihood ratio computed from the numbers of photoelectrons ejected from many small areas of the photosensitive surface, is studied and compared with the threshold detector and with a simple detector based on the likelihood ratio obtained by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities of the detectors are carried out by a digital computer.
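
    As a hedged illustration of this kind of detector, the sketch below implements a per-pixel Poisson likelihood-ratio test for a Gaussian-shaped image intensity on a uniform background; the rates, pixel grid, and threshold are invented for the example and are not taken from the thesis.

    ```python
    # Sketch: likelihood-ratio detection of a spatially Gaussian image on a
    # uniform background, from per-pixel photoelectron counts (Poisson statistics).
    import numpy as np

    def log_likelihood_ratio(counts, signal_rate, background_rate):
        """log [ P(counts | signal + background) / P(counts | background only) ].
        For Poisson counts the factorials cancel in the ratio."""
        lam1, lam0 = signal_rate + background_rate, background_rate
        return np.sum(counts * np.log(lam1 / lam0) - (lam1 - lam0))

    rng = np.random.default_rng(1)
    x = np.linspace(-3, 3, 64)                        # 64 small detector areas
    signal_rate = 5.0 * np.exp(-0.5 * x**2)           # Gaussian-shaped image intensity
    background_rate = np.full_like(x, 2.0)            # uniform background light
    counts = rng.poisson(signal_rate + background_rate)   # "signal present" data

    llr = log_likelihood_ratio(counts, signal_rate, background_rate)
    print("signal detected" if llr > 0.0 else "no signal", f"(LLR = {llr:.1f})")
    ```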

  4. Workplace air measurements and likelihood of exposure to manufactured nano-objects, agglomerates, and aggregates

    NASA Astrophysics Data System (ADS)

    Brouwer, Derk H.; van Duuren-Stuurman, Birgit; Berges, Markus; Bard, Delphine; Jankowska, Elzbieta; Moehlmann, Carsten; Pelzer, Johannes; Mark, Dave

    2013-11-01

    Manufactured nano-objects, agglomerates, and aggregates (NOAA) may have adverse effects on human health, but little is known about occupational risks since actual estimates of exposure are lacking. In a large-scale workplace air-monitoring campaign, 19 enterprises were visited and 120 potential exposure scenarios were measured. A multi-metric exposure assessment approach was followed and a decision logic was developed to afford analysis of all results in concert. The overall evaluation was classified by categories of likelihood of exposure. At the task level, about 53% showed increased particle number or surface area concentration compared to the "background" level, whereas 72% of the TEM samples revealed an indication that NOAA were present in the workplace. For 54 out of the 120 task-based exposure scenarios, an overall evaluation could be made based on all parameters of the decision logic. For only 1 exposure scenario (approximately 2%) the highest level of potential likelihood was assigned, whereas in total in 56% of the exposure scenarios the overall evaluation revealed the lowest level of likelihood. However, for the remaining 42%, exposure to NOAA could not be excluded.

  5. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637

  7. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
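
    Stripped of the phylogenetic detail and the variable transforms, the core device is a second-order Taylor surrogate for an expensive log-likelihood, expanded around its maximum and then reused inside the MCMC. A minimal sketch with invented numbers (not the authors' implementation):

    ```python
    # Sketch: quadratic (second-order Taylor) surrogate for an expensive
    # log-likelihood, expanded around the maximum-likelihood point theta_hat.
    import numpy as np

    def taylor_surrogate(theta_hat, loglike_at_hat, hessian):
        """Return l(theta) ~ l(theta_hat) + 0.5 * d^T H d, d = theta - theta_hat,
        with H the (negative-definite) Hessian of the log-likelihood at theta_hat."""
        def approx_loglike(theta):
            d = np.asarray(theta) - theta_hat
            return loglike_at_hat + 0.5 * d @ hessian @ d
        return approx_loglike

    # toy example in two parameters (e.g. two branch lengths)
    theta_hat = np.array([1.0, 2.0])
    H = np.array([[-4.0, 0.5], [0.5, -3.0]])
    f = taylor_surrogate(theta_hat, loglike_at_hat=-120.0, hessian=H)
    print(f([1.1, 2.1]))   # cheap enough to call at every MCMC proposal
    ```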

  8. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
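
    The building block here is the Laplace approximation used to integrate a latent variable out of the likelihood. A minimal one-dimensional illustration of that generic device only (not the multifractal-random-walk machinery or the R package mentioned above):

    ```python
    # Sketch: one-dimensional Laplace approximation to log of integral exp(h(x)) dx,
    # checked against the exact Gaussian integral sqrt(2*pi).
    import numpy as np
    from scipy.optimize import minimize_scalar

    def laplace_log_integral(h, eps=1e-4):
        """Approximate log integral of exp(h(x)) as h(x*) + 0.5*log(2*pi/|h''(x*)|)."""
        x_star = minimize_scalar(lambda x: -h(x)).x                     # mode of h
        h2 = (h(x_star + eps) - 2 * h(x_star) + h(x_star - eps)) / eps**2
        return h(x_star) + 0.5 * np.log(2 * np.pi / abs(h2))

    # h(x) = -x^2/2 gives the exact answer 0.5*log(2*pi) ~ 0.9189
    print(laplace_log_integral(lambda x: -0.5 * x**2), 0.5 * np.log(2 * np.pi))
    ```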

  9. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
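
    For the coherent case, the maximum-likelihood classifier averages the Gaussian likelihood over the equiprobable constellation points of each candidate constellation and picks the larger total; the sketch below assumes exactly that. The noncoherent and partially coherent cases in the paper additionally average over the unknown carrier phase and are omitted, and all parameter values are invented.

    ```python
    # Sketch: coherent average-likelihood classification between PSK orders on an
    # AWGN channel; symbols are assumed equiprobable and the constellation unit-energy.
    import numpy as np

    def psk_loglike(r, M, noise_var):
        """Log-likelihood of complex samples r under an M-PSK hypothesis
        (complex AWGN with total variance noise_var), up to an M-independent constant."""
        s = np.exp(2j * np.pi * np.arange(M) / M)          # M-PSK constellation
        d2 = np.abs(r[:, None] - s[None, :]) ** 2          # |r_k - s_m|^2
        return np.sum(np.log(np.mean(np.exp(-d2 / noise_var), axis=1)))

    rng = np.random.default_rng(2)
    M_true, noise_var, n = 8, 0.05, 200
    symbols = np.exp(2j * np.pi * rng.integers(M_true, size=n) / M_true)
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    r = symbols + noise
    scores = {M: psk_loglike(r, M, noise_var) for M in (2, 4, 8, 16)}
    print("classified as", max(scores, key=scores.get), "-PSK")
    ```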

  10. New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.

    2009-11-01

    A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both the codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both the codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_(l=2) is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.

  11. Multiple continental radiations and correlates of diversification in Lupinus (Leguminosae): testing for key innovation with incomplete taxon sampling.

    PubMed

    Drummond, Christopher S; Eastwood, Ruth J; Miotto, Silvia T S; Hughes, Colin E

    2012-05-01

    Replicate radiations provide powerful comparative systems to address questions about the interplay between opportunity and innovation in driving episodes of diversification and the factors limiting their subsequent progression. However, such systems have been rarely documented at intercontinental scales. Here, we evaluate the hypothesis of multiple radiations in the genus Lupinus (Leguminosae), which exhibits some of the highest known rates of net diversification in plants. Given that incomplete taxon sampling, background extinction, and lineage-specific variation in diversification rates can confound macroevolutionary inferences regarding the timing and mechanisms of cladogenesis, we used Bayesian relaxed clock phylogenetic analyses as well as MEDUSA and BiSSE birth-death likelihood models of diversification, to evaluate the evolutionary patterns of lineage accumulation in Lupinus. We identified 3 significant shifts to increased rates of net diversification (r) relative to background levels in the genus (r = 0.18-0.48 lineages/myr). The primary shift occurred approximately 4.6 Ma (r = 0.48-1.76) in the montane regions of western North America, followed by a secondary shift approximately 2.7 Ma (r = 0.89-3.33) associated with range expansion and diversification of allopatrically distributed sister clades in the Mexican highlands and Andes. We also recovered evidence for a third independent shift approximately 6.5 Ma at the base of a lower elevation eastern South American grassland and campo rupestre clade (r = 0.36-1.33). Bayesian ancestral state reconstructions and BiSSE likelihood analyses of correlated diversification indicated that increased rates of speciation are strongly associated with the derived evolution of perennial life history and invasion of montane ecosystems. Although we currently lack hard evidence for "replicate adaptive radiations" in the sense of convergent morphological and ecological trajectories among species in different clades, these results are consistent with the hypothesis that iteroparity functioned as an adaptive key innovation, providing a mechanism for range expansion and rapid divergence in upper elevation regions across much of the New World.

  12. Prolonged Operative Duration Increases Risk of Surgical Site Infections: A Systematic Review

    PubMed Central

    Chen, Brian Po-Han; Soleas, Ireena M.; Ferko, Nicole C.; Cameron, Chris G.; Hinoul, Piet

    2017-01-01

    Background: The incidence of surgical site infection (SSI) across surgical procedures, specialties, and conditions is reported to vary from 0.1% to 50%. Operative duration is often cited as an independent and potentially modifiable risk factor for SSI. The objective of this systematic review was to provide an in-depth understanding of the relation between operating time and SSI. Patients and Methods: This review included 81 prospective and retrospective studies. Along with study design, likelihood of SSI, mean operative times, time thresholds, effect measures, confidence intervals, and p values were extracted. Three meta-analyses were conducted, whereby odds ratios were pooled by hourly operative time thresholds, increments of increasing operative time, and surgical specialty. Results: Pooled analyses demonstrated that the association between extended operative time and SSI typically remained statistically significant, with close to twice the likelihood of SSI observed across various time thresholds. The likelihood of SSI increased with increasing time increments; for example, a 13%, 17%, and 37% increased likelihood for every 15 min, 30 min, and 60 min of surgery, respectively. On average, across various procedures, the mean operative time was approximately 30 min longer in patients with SSIs compared with those patients without. Conclusions: Prolonged operative time can increase the risk of SSI. Given the importance of SSIs on patient outcomes and health care economics, hospitals should focus efforts to reduce operative time. PMID:28832271
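
    Pooled odds ratios of this kind are typically obtained by inverse-variance weighting of study-level log odds ratios; a generic sketch of that calculation (fixed-effect version, with made-up study numbers rather than data from the review):

    ```python
    # Sketch: fixed-effect (inverse-variance) pooling of study-level odds ratios.
    # The per-study values below are invented for illustration only.
    import numpy as np

    def pool_odds_ratios(ors, ci_low, ci_high):
        """Pool odds ratios given their 95% confidence intervals."""
        log_or = np.log(ors)
        se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE of each log-OR
        w = 1.0 / se**2                                         # inverse-variance weights
        pooled = np.sum(w * log_or) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        return tuple(np.exp([pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se]))

    ors = np.array([1.8, 2.3, 1.5])       # hypothetical study-level odds ratios
    lo = np.array([1.1, 1.4, 0.9])        # lower 95% CI bounds
    hi = np.array([2.9, 3.8, 2.5])        # upper 95% CI bounds
    print(pool_odds_ratios(ors, lo, hi))  # pooled OR with its 95% CI
    ```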

  13. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
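
    A minimal sketch of the general recipe described here: simulate summary statistics at a proposed parameter value, fit a parametric (here Gaussian) density to them, and use its log-density at the observed summaries as the approximate log-likelihood inside a Metropolis-Hastings sampler. The toy stochastic model below stands in for FORMIND; every name and number is illustrative.

    ```python
    # Sketch: simulation-based (parametric) likelihood approximation inside a
    # plain Metropolis-Hastings sampler, with a toy stochastic model.
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_summaries(theta, n_rep=50):
        """Toy stochastic model: n_rep replicate 2-D summary statistics."""
        return rng.normal(loc=[theta, theta**2], scale=[1.0, 2.0], size=(n_rep, 2))

    def approx_loglike(theta, observed):
        sims = simulate_summaries(theta)
        mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
        d = observed - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

    observed = np.array([2.1, 4.3])              # pretend "inventory data" summaries
    theta, ll, chain = 0.0, approx_loglike(0.0, observed), []
    for _ in range(2000):                        # Metropolis-Hastings on the surrogate
        prop = theta + rng.normal(scale=0.3)
        ll_prop = approx_loglike(prop, observed)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)
    print("approximate posterior mean:", np.mean(chain[500:]))
    ```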

  14. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  15. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.

  16. Measurement of the mass difference between t and t̄ quarks.

    PubMed

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bauer, G; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Brisuda, A; Bromberg, C; Brucken, E; Bucciantonio, M; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; De Cecco, S; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hidas, D; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirby, M; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, Q; Liu, T; Lockwitz, S; Lockyer, N S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malik, S; Manca, G; Manousakis-Katsikakis, 
A; Margaroli, F; Marino, C; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rubbo, F; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sartori, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shreyber, I; Simonenko, A; Sinervo, P; Sissakian, A; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stancari, M; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Tu, Y; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamaoka, J; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zucchelli, S

    2011-04-15

    We present a direct measurement of the mass difference between t and t̄ quarks using tt̄ candidate events in the lepton+jets channel, collected with the CDF II detector at Fermilab's 1.96 TeV Tevatron pp̄ Collider. We make an event-by-event estimate of the mass difference to construct templates for top quark pair signal events and background events. The resulting mass difference distribution of the data is compared to templates of signals and background using a maximum likelihood fit. From a sample corresponding to an integrated luminosity of 5.6 fb^-1, we measure a mass difference ΔM_top = M_t - M_t̄ = -3.3 ± 1.4(stat) ± 1.0(syst) GeV/c^2, approximately 2 standard deviations away from the CPT hypothesis of zero mass difference.

  17. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…

  18. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  19. Atmospheric effects on cluster analyses. [for remote sensing application

    NASA Technical Reports Server (NTRS)

    Kiang, R. K.

    1979-01-01

    Ground-reflected radiance, from which information is extracted through techniques of cluster analyses for remote sensing applications, is altered by the atmosphere by the time it reaches the satellite. Therefore it is essential to understand the effects of the atmosphere on Landsat measurements, cluster characteristics and analysis accuracy. A doubling model is employed to compute the effective reflectivity, observed from the satellite, as a function of ground reflectivity, solar zenith angle and aerosol optical thickness for a standard atmosphere. The relation between the effective reflectivity and ground reflectivity is approximately linear. It is shown that for a horizontally homogeneous atmosphere, the classification statistics from a maximum likelihood classifier remain unchanged under these transforms. If inhomogeneity is present, the divergence between clusters is reduced, and correlation between spectral bands increases. Radiance reflected by the background area surrounding the target may also reach the satellite. The influence of background reflectivity on effective reflectivity is discussed.

  20. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ~3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ~3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s^-1 Mpc^-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s^-1 Mpc^-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ~10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ~60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
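
    A sketch of the kind of procedure described here, for a two-component univariate normal mixture: the standard successive-approximations (EM) update is relaxed by a step size omega, with omega = 1 recovering the plain iteration and the paper's convergence result concerning 0 < omega < 2. The data, starting values and omega below are invented for illustration.

    ```python
    # Sketch: step-size-relaxed EM (generalized steepest-ascent style) for a
    # two-component univariate normal mixture; omega = 1 is the plain EM update.
    import numpy as np

    def em_step(x, params):
        p, mu1, mu2, s1, s2 = params
        # E-step: posterior probability that each point belongs to component 1
        d1 = p * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - p) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        w = d1 / (d1 + d2)
        # M-step: weighted updates of proportion, means, and standard deviations
        mu1_new, mu2_new = np.average(x, weights=w), np.average(x, weights=1 - w)
        s1_new = np.sqrt(np.average((x - mu1_new) ** 2, weights=w))
        s2_new = np.sqrt(np.average((x - mu2_new) ** 2, weights=1 - w))
        return np.array([w.mean(), mu1_new, mu2_new, s1_new, s2_new])

    def relaxed_em(x, params, omega=1.0, n_iter=200):
        params = np.asarray(params, dtype=float)
        for _ in range(n_iter):
            params = params + omega * (em_step(x, params) - params)
        return params

    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.5, 200)])
    print(relaxed_em(x, [0.5, -1.0, 5.0, 1.0, 1.0], omega=1.2))
    ```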

  2. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  3. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although this likelihood is not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  4. Multiple Continental Radiations and Correlates of Diversification in Lupinus (Leguminosae): Testing for Key Innovation with Incomplete Taxon Sampling

    PubMed Central

    Drummond, Christopher S.; Eastwood, Ruth J.; Miotto, Silvia T. S.; Hughes, Colin E.

    2012-01-01

    Replicate radiations provide powerful comparative systems to address questions about the interplay between opportunity and innovation in driving episodes of diversification and the factors limiting their subsequent progression. However, such systems have been rarely documented at intercontinental scales. Here, we evaluate the hypothesis of multiple radiations in the genus Lupinus (Leguminosae), which exhibits some of the highest known rates of net diversification in plants. Given that incomplete taxon sampling, background extinction, and lineage-specific variation in diversification rates can confound macroevolutionary inferences regarding the timing and mechanisms of cladogenesis, we used Bayesian relaxed clock phylogenetic analyses as well as MEDUSA and BiSSE birth–death likelihood models of diversification, to evaluate the evolutionary patterns of lineage accumulation in Lupinus. We identified 3 significant shifts to increased rates of net diversification (r) relative to background levels in the genus (r = 0.18–0.48 lineages/myr). The primary shift occurred approximately 4.6 Ma (r = 0.48–1.76) in the montane regions of western North America, followed by a secondary shift approximately 2.7 Ma (r = 0.89–3.33) associated with range expansion and diversification of allopatrically distributed sister clades in the Mexican highlands and Andes. We also recovered evidence for a third independent shift approximately 6.5 Ma at the base of a lower elevation eastern South American grassland and campo rupestre clade (r = 0.36–1.33). Bayesian ancestral state reconstructions and BiSSE likelihood analyses of correlated diversification indicated that increased rates of speciation are strongly associated with the derived evolution of perennial life history and invasion of montane ecosystems. Although we currently lack hard evidence for “replicate adaptive radiations” in the sense of convergent morphological and ecological trajectories among species in different clades, these results are consistent with the hypothesis that iteroparity functioned as an adaptive key innovation, providing a mechanism for range expansion and rapid divergence in upper elevation regions across much of the New World. PMID:22228799

  5. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
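
    As a hedged illustration of the underlying idea only (cosmoabc implements the more efficient Population Monte Carlo variant, and none of its API is used here), a plain ABC rejection sampler for a toy Poisson model:

    ```python
    # Sketch: ABC rejection sampling for a toy Poisson model. Draw parameters
    # from the prior, forward-simulate mock data, and keep draws whose summary
    # statistic lies close to that of the observed data.
    import numpy as np

    rng = np.random.default_rng(5)

    def simulator(theta, n=100):
        """Forward-simulate a mock catalog given a parameter value."""
        return rng.poisson(theta, size=n)

    def distance(sim, obs):
        """Compare synthetic and observed data through a summary statistic."""
        return abs(sim.mean() - obs.mean())

    obs = rng.poisson(3.0, size=100)                   # "observed" data, true theta = 3
    prior_draws = rng.uniform(0.0, 10.0, size=20000)   # flat prior on theta
    accepted = [t for t in prior_draws if distance(simulator(t), obs) < 0.1]
    print(len(accepted), "accepted; approximate posterior mean:", np.mean(accepted))
    ```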

  6. Is treatment adherence consistent across time, across different treatments, and across diagnoses?

    PubMed Central

    Simon, Gregory E; Peterson, Do; Hubbard, Rebecca

    2012-01-01

    Objective: Examine consistency of adherence across depression treatments and consistency of adherence between depression treatments and treatments for chronic medical illness. Methods: For 25,456 health plan members beginning psychotherapy for depression between 2003 and 2008, health plan records were used to examine adherence to all episodes of psychotherapy, antidepressant medication, antihypertensive medication, and lipid-lowering medication. Results: Within treatments, adherence to psychotherapy in one episode predicted approximately 20% greater likelihood of subsequent psychotherapy adherence (OR 2.20, 95% CI 1.83 to 2.64). Similarly, adherence to antidepressant medication in one episode predicted approximately 20% greater likelihood of subsequent antidepressant adherence (OR 1.99, 95% CI 1.74 to 2.28). Across treatments, adherence to antidepressant medication predicted approximately 10% greater likelihood of concurrent or subsequent adherence to psychotherapy (OR 1.52, 95% CI 1.42 to 1.63), a 4% greater likelihood of adherence to antihypertensive medication (OR 1.24, 95% CI 1.14 to 1.37) and a 3% greater likelihood of adherence to lipid-lowering medication (OR 1.16, 95% CI 1.03 to 1.32). Adherence to psychotherapy predicted a 2% greater likelihood of concurrent or subsequent adherence to antihypertensive medication (OR 1.11, 95% CI 1.04 to 1.19) and was not a significant predictor of adherence to lipid-lowering medication (OR 0.99, 95% CI 0.90 to 1.18). Conclusions: Adherence is moderately consistent across episodes of depression treatment. Depression treatment adherence is a statistically significant, but relatively weak, predictor of adherence to antihypertensive or lipid-lowering medication. PMID:23141589

  7. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
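
    The compression step can be illustrated generically: for a Gaussian model with a parameter-dependent mean and fixed covariance, projecting the residuals onto the derivatives of the mean yields one score-like summary per parameter. A toy sketch under that assumption, not the paper's pipeline:

    ```python
    # Sketch: linear compression of a data vector to one summary per parameter,
    # t_a = (dmu/dtheta_a)^T C^{-1} (d - mu_fid), for a toy straight-line model.
    import numpy as np

    def compress(data, mu_fid, dmu_dtheta, C):
        """One compressed summary per parameter for a Gaussian-mean model."""
        return dmu_dtheta @ np.linalg.solve(C, data - mu_fid)

    x = np.linspace(0.0, 1.0, 100)
    C = 0.1 * np.eye(100)                         # fixed noise covariance
    a_fid, b_fid = 1.0, 2.0                       # fiducial parameters of mean = a + b*x
    mu_fid = a_fid + b_fid * x
    dmu = np.vstack([np.ones_like(x), x])         # derivatives of the mean w.r.t. (a, b)

    rng = np.random.default_rng(6)
    data = 1.2 + 1.8 * x + rng.normal(scale=np.sqrt(0.1), size=100)
    print(compress(data, mu_fid, dmu, C))         # two numbers summarizing 100 data points
    ```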

  8. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
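
    A minimal sketch of thermodynamic integration (power posteriors) in a toy conjugate normal model, where the power posterior can be sampled in closed form and the exact marginal likelihood is available for comparison; MIGRATE applies the same identity to far more complex structured-population models, so everything below is illustrative only.

    ```python
    # Sketch: thermodynamic integration, log Z = integral over beta in [0, 1] of
    # E_beta[log L], for data x_i ~ N(theta, 1) with prior theta ~ N(0, 1).
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(1.0, 1.0, size=20)
    n, xbar = len(x), x.mean()

    betas = np.linspace(0.0, 1.0, 41)
    mean_loglike = []
    for b in betas:
        prec = 1.0 + b * n                         # power posterior is N(m, 1/prec)
        m = b * n * xbar / prec
        draws = rng.normal(m, 1.0 / np.sqrt(prec), size=4000)
        ll = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((x[None, :] - draws[:, None]) ** 2, axis=1)
        mean_loglike.append(ll.mean())

    ml = np.array(mean_loglike)
    logZ_ti = np.sum(np.diff(betas) * (ml[1:] + ml[:-1]) / 2.0)   # trapezoid rule over beta

    # exact log marginal likelihood for comparison: x ~ N(0, I + 11^T)
    C = np.eye(n) + np.ones((n, n))
    _, logdet = np.linalg.slogdet(C)
    logZ_exact = -0.5 * (n * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(C, x))
    print(logZ_ti, logZ_exact)
    ```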

  9. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations (a "bundle") replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  10. Cluster and propensity based approximation of a network

    PubMed Central

    2013-01-01

    Background: The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results: Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions: The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering algorithms. The CPBA of a network is implemented in Fortran 95 and bundled in the freely available R package PropClust. PMID:23497424

  11. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximately scale-invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law, although with nonuniversal exponents that depend on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
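
    The estimator in question is, in the continuous case, the standard maximum-likelihood power-law exponent above a lower cutoff. The sketch below applies it to a synthetic mixture of two power laws and shows the cutoff-dependent effective exponent the abstract describes; all numbers are invented.

    ```python
    # Sketch: continuous maximum-likelihood power-law exponent above a cutoff,
    # alpha_hat = 1 + n / sum(log(x_i / xmin)), applied to a two-mechanism mixture.
    import numpy as np

    def powerlaw_mle(x, xmin):
        """MLE of alpha for p(x) proportional to x^(-alpha), x >= xmin (continuous case)."""
        tail = x[x >= xmin]
        return 1.0 + len(tail) / np.sum(np.log(tail / xmin)), len(tail)

    rng = np.random.default_rng(8)
    u = rng.uniform(size=20000)
    # equal mixture of two power laws with exponents 1.8 and 2.6 (inverse-CDF sampling)
    x = np.where(rng.uniform(size=20000) < 0.5, u ** (-1 / 0.8), u ** (-1 / 1.6))
    for xmin in (1.0, 3.0, 10.0, 30.0):
        alpha, ntail = powerlaw_mle(x, xmin)
        print(f"xmin = {xmin:5.1f}  alpha_hat = {alpha:.2f}  (n = {ntail})")
    ```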

  12. L.U.St: a tool for approximated maximum likelihood supertree reconstruction.

    PubMed

    Akanni, Wasiu A; Creevey, Christopher J; Wilkinson, Mark; Pisani, Davide

    2014-06-12

    Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high-level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Here, we present the L.U.St package, a Python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree, given a set of input trees, performs heuristic searches to look for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning sites test allowing ranking of a collection of a priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has Python installed. BitBucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Davide.Pisani@bristol.ac.uk.

  13. Consumer perceptions of specific design characteristics for front-of-package nutrition labels.

    PubMed

    Acton, R B; Vanderlee, L; Roberto, C A; Hammond, D

    2018-04-01

    An increasing number of countries are developing front-of-package (FOP) labels; however, there is limited evidence examining the impact of specific design characteristics for these labels. The current study investigated consumer perceptions of several FOP label design characteristics, including potential differences among sociodemographic sub-groups. Two hundred and thirty-four participants aged 16 years or older completed nine label rating tasks on a laptop at a local shopping mall in Canada. The rating tasks asked participants to rate five primary design characteristics (border, background presence, background colour, 'caution' symbol and government attribution) on their noticeability, readability, believability and likelihood of changing their beverage choice. FOP labels with a border, solid background and contrasting colours increased noticeability. A solid background increased readability, while a contrasting background colour reduced it. Both a 'caution' symbol and a government attribution increased the believability of the labels and the perceived likelihood of influencing beverage choice. The effect of the design characteristics was generally similar across sociodemographic groups, with modest differences in five of the nine outcomes. Label design characteristics, such as the use of a border, colour and symbols can enhance the salience of FOP nutrition labels and may increase the likelihood that FOP labels are used by consumers.

  14. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  15. Computational tools for exact conditional logistic regression.

    PubMed

    Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P

    Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally unfeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.

  16. A 3D approximate maximum likelihood localization solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-09-23

    A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and vocalizing marine mammals to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives and support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
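
    A minimal sketch of the underlying idea follows: with Gaussian time-difference-of-arrival (TDOA) errors, the approximate maximum-likelihood position is a nonlinear least-squares fit of modeled to measured TDOAs. The hydrophone layout, sound speed, and noise level below are illustrative assumptions, not the solver described above.

        import numpy as np
        from scipy.optimize import least_squares

        C = 1500.0                                            # nominal sound speed in water, m/s
        hydrophones = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0],
                                [0, 0, 30], [50, 50, 30]], dtype=float)
        source_true = np.array([20.0, 35.0, 10.0])

        dist = np.linalg.norm(hydrophones - source_true, axis=1)
        tdoa = (dist[1:] - dist[0]) / C                       # TDOAs relative to hydrophone 0
        tdoa += np.random.default_rng(2).normal(0.0, 1e-5, tdoa.shape)   # timing noise

        def residuals(p):                                     # model TDOAs minus measured TDOAs
            d = np.linalg.norm(hydrophones - p, axis=1)
            return (d[1:] - d[0]) / C - tdoa

        fit = least_squares(residuals, x0=np.array([10.0, 10.0, 5.0]))
        print("estimated source position (m):", np.round(fit.x, 2))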

  17. Discoveries far from the lamppost with matrix elements and ranking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debnath, Dipsikha; Gainer, James S.; Matchev, Konstantin T.

    2015-04-01

    The prevalence of null results in searches for new physics at the LHC motivates the effort to make these searches as model-independent as possible. We describe procedures for adapting the Matrix Element Method for situations where the signal hypothesis is not known a priori. We also present general and intuitive approaches for performing analyses and presenting results, which involve the flattening of background distributions using likelihood information. The first flattening method involves ranking events by background matrix element, the second involves quantile binning with respect to likelihood (and other) variables, and the third method involves reweighting histograms by the inverse of the background distribution.

  18. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when the square root of n_i instead of the square root of n̄_i is used to approximate the uncertainties, σ_i, in the data, where n̄_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problems are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately 1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
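
    A direct way to see the low-count point is to fit the linear model by maximizing the exact Poisson likelihood, which remains well defined even when bins hold zero or one count. The sketch below is a generic illustration with synthetic component shapes (the names f and a_true are illustrative), not the HEAO 3 analysis system discussed above.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        nbins, ncomp = 200, 2
        f = np.abs(rng.normal(size=(nbins, ncomp)))           # known component shapes (illustrative)
        a_true = np.array([0.3, 1.2])                         # underlying fluxes to be recovered
        counts = rng.poisson(f @ a_true)                      # observed counts; many bins hold 0 or 1

        def neg_loglike(a):
            mu = np.clip(f @ a, 1e-12, None)                  # expected counts per bin
            return np.sum(mu - counts * np.log(mu))           # Poisson -log L up to a constant in a

        fit = minimize(neg_loglike, x0=np.ones(ncomp), method="L-BFGS-B",
                       bounds=[(0.0, None)] * ncomp)
        print("maximum-likelihood flux estimates:", np.round(fit.x, 3))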

  19. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG

    PubMed Central

    2014-01-01

    Background We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods Following the work of de Munck et al., our approach is based on the maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical to all channels, we evaluate it for each individual channel. Results Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398

  20. Model selection and parameter estimation in structural dynamics using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith

    2018-01-01

    This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be approached in a closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.

  1. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  2. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  3. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    DOE PAGES

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from 210Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we also perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. Finally, we confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  4. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  5. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  6. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
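
    The fixed-point flavour of such successive-approximation schemes can be illustrated with the closely related EM iteration for a fully unlabeled two-component normal mixture. This is a generic sketch under that simplification, not the paper's partially identified-sample procedure or its deflected-gradient step-size analysis.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.5, 600)])

        w = np.array([0.5, 0.5])                              # mixing proportions
        mu = np.array([-1.0, 1.0])                            # component means
        sig = np.array([1.0, 1.0])                            # component standard deviations
        for _ in range(200):
            dens = w * norm.pdf(x[:, None], mu, sig)          # E-step: component responsibilities
            r = dens / dens.sum(axis=1, keepdims=True)
            nk = r.sum(axis=0)                                # M-step: fixed-point parameter update
            w = nk / x.size
            mu = (r * x[:, None]).sum(axis=0) / nk
            sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        print("weights", np.round(w, 3), "means", np.round(mu, 3), "sds", np.round(sig, 3))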

  7. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight on the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance while t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439

  8. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.

  9. Is Immigrant Status Relevant in School Violence Research? An Analysis with Latino Students

    ERIC Educational Resources Information Center

    Peguero, Anthony A.

    2008-01-01

    Background: The role of race and ethnicity is consistently found to be linked to the likelihood of students experiencing school violence-related outcomes; however, the findings are not always consistent. The variation of likelihood, as well as the type, of student-related school violence outcome among the Latino student population may be…

  10. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  11. COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration

    2015-11-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
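
    The simulate-and-compare loop that ABC samplers such as COSMOABC build on can be illustrated with a plain rejection step. The toy Poisson model, flat prior, summary statistic, and tolerance below are all illustrative assumptions; the package's actual Population Monte Carlo machinery and API are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(4)
        observed = rng.poisson(lam=4.0, size=500)             # "observed" counts (true rate 4.0)
        obs_summary = observed.mean()

        def simulate(theta):                                  # forward simulator for a trial rate theta
            return rng.poisson(lam=theta, size=observed.size)

        def distance(sim):                                    # distance between mock and observed summaries
            return abs(sim.mean() - obs_summary)

        accepted = []
        while len(accepted) < 1000:
            theta = rng.uniform(0.0, 10.0)                    # draw a candidate from the flat prior
            if distance(simulate(theta)) < 0.1:               # keep it if the mock matches the data
                accepted.append(theta)

        post = np.array(accepted)
        print("ABC posterior mean/std:", round(post.mean(), 3), round(post.std(), 3))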

  12. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  13. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed-point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.

  14. Information loss in approximately bayesian data assimilation: a comparison of generative and discriminative approaches to estimating agricultural yield

    USDA-ARS?s Scientific Manuscript database

    Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...

  15. Logistic Approximation to the Normal: The KL Rationale

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2006-01-01

    A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…

  16. Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models

    ERIC Educational Resources Information Center

    Chun, So Yeon; Shapiro, Alexander

    2009-01-01

    The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…

  17. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with one another and with an ordinary least squares estimator. The three-stage estimator provides the closer approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases are also discussed. A simulation study is used to evaluate the methods developed in this study.

  18. Age of onset group characteristics in forensic patients with schizophrenia.

    PubMed

    Vinokur, D; Levine, S Z; Roe, D; Krivoy, A; Fischel, T

    2014-03-01

    This study aims to empirically identify age of onset groups and their clinical and background characteristics in forensic patients with schizophrenia. Hospital charts were reviewed of all 138 forensic patients with schizophrenia admitted to Geha Psychiatric Hospital that serves a catchment area of approximately 500,000 people, from 2000 to 2009 inclusive. Admixture analysis empirically identified early- (M=19.99, SD=3.31) and late-onset groups (M=36.13, SD=9.25). Early-onset was associated with more suicide attempts, violence before the age of 15, and early conduct problems, whereas late-onset was associated with a greater likelihood of violence after the age of 18 and marriage (P<0.01). The current findings provide clinicians with a unique direction for risk assessment and indicate differences in violence between early- and late-onset schizophrenia, particularly co-occurrence of harmful behavioral phenotypes. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  19. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
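
    The pipeline of smoothing, seeding initial parameters from the waveform shape, and nonlinear least-squares refinement can be sketched generically as below. For brevity the sketch seeds components from detected peaks and half-power widths rather than the paper's inflection-point and importance-ranking bookkeeping, and it is not the LVIS production code.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.optimize import curve_fit
        from scipy.signal import find_peaks, peak_widths

        def gaussians(t, *p):                                 # p = (A1, c1, s1, A2, c2, s2, ...)
            y = np.zeros_like(t, dtype=float)
            for A, c, s in zip(p[0::3], p[1::3], p[2::3]):
                y += A * np.exp(-0.5 * ((t - c) / s) ** 2)
            return y

        t = np.arange(200.0)
        truth = gaussians(t, 1.0, 60.0, 8.0, 0.6, 120.0, 15.0)        # two reflecting surfaces
        wave = truth + np.random.default_rng(5).normal(0, 0.02, t.size)

        smooth = gaussian_filter1d(wave, sigma=3)                     # smoothed copy of the waveform
        peaks, _ = find_peaks(smooth, height=0.1)
        sigmas = peak_widths(smooth, peaks, rel_height=0.5)[0] / 2.355    # FWHM -> sigma guess

        p0 = []
        for pk, s in zip(peaks, sigmas):
            p0 += [smooth[pk], float(pk), max(s, 1.0)]                # amplitude, centre, width guesses

        popt, _ = curve_fit(gaussians, t, wave, p0=p0)
        print("fitted (amplitude, centre, width) triples:")
        print(np.round(popt, 2).reshape(-1, 3))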

  20. The amplitude and spectral index of the large angular scale anisotropy in the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Ganga, Ken; Page, Lyman; Cheng, Edward; Meyer, Stephan

    1994-01-01

    In many cosmological models, the large angular scale anisotropy in the cosmic microwave background is parameterized by a spectral index, n, and a quadrupolar amplitude, Q. For a Harrison-Peebles-Zel'dovich spectrum, n = 1. Using data from the Far Infrared Survey (FIRS) and a new statistical measure, a contour plot of the likelihood for cosmological models for which -1 < n < 3 and 0 ≤ Q ≤ 50 μK is obtained. Depending upon the details of the analysis, the maximum likelihood occurs at n between 0.8 and 1.4 and Q between 18 and 21 μK. Regardless of Q, the likelihood is always less than half its maximum for n < -0.4 and for n > 2.2, as it is for Q < 8 μK and Q > 44 μK.

  1. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis - M or M' - is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
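
    The phase-sum approximation lends itself to a compact sketch: for each of l trial carrier phases, average the per-sample likelihood over the M equally likely constellation points, then average the resulting likelihoods over the phase grid and compare hypotheses. The code below is such a sketch under simplifying assumptions (unit-energy symbols, known noise variance sigma2 = E|noise|^2), not the implementation described above.

        import numpy as np

        def mpsk_loglike(r, M, sigma2, l=16):
            # quasi-log-likelihood: carrier-phase integral replaced by a sum over l phases
            phases = 2 * np.pi * np.arange(l) / l
            symbols = np.exp(1j * 2 * np.pi * np.arange(M) / M)      # unit-energy M-PSK points
            totals = []
            for phi in phases:
                d2 = np.abs(r[:, None] - symbols[None, :] * np.exp(1j * phi)) ** 2
                # per-sample likelihood marginalized over the M equally likely symbols
                per_sample = np.log(np.mean(np.exp(-d2 / sigma2), axis=1))
                totals.append(per_sample.sum())
            totals = np.array(totals)
            m = totals.max()                                         # log-sum-exp for stability
            return np.log(np.mean(np.exp(totals - m))) + m

        rng = np.random.default_rng(6)
        N, sigma2 = 256, 0.05                                        # sigma2 = E|noise|^2
        tx = np.exp(1j * (2 * np.pi * rng.integers(0, 8, N) / 8 + 0.3))   # 8-PSK with unknown phase
        noise = rng.normal(0, np.sqrt(sigma2 / 2), N) + 1j * rng.normal(0, np.sqrt(sigma2 / 2), N)
        r = tx + noise
        decision = 8 if mpsk_loglike(r, 8, sigma2) > mpsk_loglike(r, 4, sigma2) else 4
        print("classified as", decision, "-PSK")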

  2. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  3. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
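
    The path-sampling identity behind thermodynamic integration, log Z = integral from 0 to 1 of E_beta[log L(theta)] d(beta) with samples drawn from power posteriors p(theta | y, beta) proportional to p(theta) L(theta)^beta, can be checked on a toy conjugate-normal model where each power posterior is exactly normal and the evidence has a closed form. The sketch below is that toy check, not the groundwater-model implementation; the temperature schedule and sample sizes are illustrative.

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.stats import norm, multivariate_normal

        rng = np.random.default_rng(7)
        sigma, tau, n = 1.0, 3.0, 50
        y = rng.normal(0.8, sigma, n)                         # data; prior on theta is N(0, tau^2)

        def log_like(theta):                                  # log L(theta) for an array of thetas
            return norm.logpdf(y[:, None], theta, sigma).sum(axis=0)

        betas = np.linspace(0.0, 1.0, 41) ** 5                # denser near beta = 0, where E[log L] varies fastest
        e_loglike = []
        for b in betas:
            prec = 1.0 / tau**2 + b * n / sigma**2            # power posterior is exactly normal here
            mean = (b * y.sum() / sigma**2) / prec
            theta = rng.normal(mean, 1.0 / np.sqrt(prec), 5000)
            e_loglike.append(log_like(theta).mean())          # Monte Carlo estimate of E_beta[log L]

        log_z_ti = trapezoid(e_loglike, betas)                # path-sampling estimate of log evidence
        log_z_exact = multivariate_normal.logpdf(
            y, mean=np.zeros(n), cov=sigma**2 * np.eye(n) + tau**2 * np.ones((n, n)))
        print("thermodynamic integration:", round(float(log_z_ti), 2),
              "  exact:", round(float(log_z_exact), 2))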

  4. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a_lm are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  5. The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data

    NASA Technical Reports Server (NTRS)

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.

    2013-01-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.

  6. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  7. Predicting likelihood of seeking help through the employee assistance program among salaried and union hourly employees.

    PubMed

    Delaney, W; Grube, J W; Ames, G M

    1998-03-01

    This research investigated belief, social support and background predictors of employee likelihood to use an Employee Assistance Program (EAP) for a drinking problem. An anonymous cross-sectional survey was administered in the home. Bivariate analyses and simultaneous equations path analysis were used to explore a model of EAP use. Survey and ethnographic research were conducted in a unionized heavy machinery manufacturing plant in the central states of the United States. A random sample of 852 hourly and salaried employees was selected. In addition to background variables, measures included: likelihood of going to an EAP for a drinking problem, belief the EAP can help, social support for the EAP from co-workers/others, belief that EAP use will harm employment, and supervisor encourages the EAP for potential drinking problems. Belief in EAP efficacy directly increased the likelihood of going to an EAP. Greater perceived social support and supervisor encouragement increased the likelihood of going to an EAP both directly and indirectly through perceived EAP efficacy. Black and union hourly employees were more likely to say they would use an EAP. Males and those who reported drinking during working hours were less likely to say they would use an EAP for a drinking problem. EAP beliefs and social support have significant effects on likelihood to go to an EAP for a drinking problem. EAPs may wish to focus their efforts on creating an environment where there is social support from coworkers and encouragement from supervisors for using EAP services. Union networks and team members have an important role to play in addition to conventional supervisor intervention.

  8. Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine

    2015-08-01

    We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood as used in CMB analysis to higher frequency, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first approach is based on a simple parametric model which models the cross-frequency power using amplitudes, correlation coefficients, and known multipole dependence. The second approach is based on physical models for galaxy clustering and the evolution of infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of reduced χ² = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.

  9. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  10. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N) P_s, where P_s represents the block error probability, holds for systematic encoding only. Also systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a generator matrix randomly generated, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  11. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter (ΛCDM). Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
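
    In one dimension the Gaussianizing step reduces to a Box-Cox fit, which scipy can perform by maximum likelihood. The sketch below shows only that reduced case and is not the multivariate, rotation-aware transformation search of the paper; the skewed toy sample stands in for a marginal posterior chain.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        sample = rng.gamma(shape=2.0, scale=1.5, size=20000)  # skewed one-dimensional "posterior" draws

        transformed, lam = stats.boxcox(sample)               # lambda chosen by maximum likelihood
        print("Box-Cox lambda:", round(lam, 3))
        print("skewness before:", round(stats.skew(sample), 3),
              " after:", round(stats.skew(transformed), 3))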

  12. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2014-05-01

    lengths in the range of 2^2 to 2^13 and possibly higher. Keywords: DS-CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision...likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS-CDMA signal. As...Theoretical Background: The classification of DS-CDMA signals should not be confused with the problem of multiuser detection. Multiuser detection deals

  13. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa')^-1 = Q^-1 - Q^-1 a a' Q^-1 / (1 + a' Q^-1 a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model... 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  14. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  15. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro

    We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.

  17. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.

    PubMed

    Karabatsos, George

    2018-06-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
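
    A Gaussian synthetic likelihood of the kind referred to above can be sketched in a few lines: simulate summary statistics under a candidate parameter, fit their mean and covariance, and evaluate the observed summaries under that Gaussian. The Python sketch below uses a toy AR(1) simulator and generic summaries as assumptions; the importance-sampling test of the cancellation axioms itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=100):
    # Toy simulator: an AR(1) series with coefficient theta.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.normal()
    return x

def summaries(x):
    # Generic summary statistics: mean, standard deviation, lag-1 autocorrelation.
    return np.array([x.mean(), x.std(), np.corrcoef(x[:-1], x[1:])[0, 1]])

def synthetic_loglik(theta, s_obs, n_sims=100):
    # Gaussian synthetic likelihood: fit the mean and covariance of simulated
    # summaries and evaluate the observed summaries under that Gaussian.
    S = np.array([summaries(simulate(theta)) for _ in range(n_sims)])
    mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
    d = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + d.size * np.log(2 * np.pi))

s_obs = summaries(simulate(0.6))
for theta in (0.2, 0.4, 0.6, 0.8):
    print(theta, round(synthetic_loglik(theta, s_obs), 2))
```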

  18. Evidence and Clinical Trials.

    NASA Astrophysics Data System (ADS)

    Goodman, Steven N.

    1989-11-01

    This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, with traditional methods summarizing evidence at the point of maximum likelihood while assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance--uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.

  19. A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I

    NASA Astrophysics Data System (ADS)

    Carvalho, Pedro; Rocha, Graça; Hobson, M. P.

    2009-03-01

    A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of 100 compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo sampling to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.

  20. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  1. A new model to predict weak-lensing peak counts. II. Parameter constraint strategies

    NASA Astrophysics Data System (ADS)

    Lin, Chieh-An; Kilbinger, Martin

    2015-11-01

    Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining parameters using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.

  2. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-07-01

    We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.

  3. Predicting Rural Practice Using Different Definitions to Classify Medical School Applicants as Having a Rural Upbringing

    ERIC Educational Resources Information Center

    Owen, John A.; Conaway, Mark R.; Bailey, Beth A.; Hayden, Gregory F.

    2007-01-01

    Purpose: This study determines the relationship between a medical school applicant's rural background and the likelihood of rural practice using different definitions of rural background. Methods: Cohort study of 599 physicians who entered the University of Virginia School of Medicine in 1990-1995 and graduated in 1994-1999. The…

  4. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937

  5. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.

    PubMed

    Sokoloski, Sacha

    2017-09-01

    In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.

  6. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
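
    A minimal time-domain version of such a likelihood can be written by building the data covariance from a white component plus a power-law component generated by a fractional-difference filter, and evaluating the Gaussian log-likelihood directly. The Python sketch below is an assumption-laden illustration (a particular filter parameterization and a coarse grid search in place of a proper optimizer), not the implementation discussed in the paper.

```python
import numpy as np

def powerlaw_filter(n, alpha):
    # Lower-triangular filter mapping white noise to 1/f^alpha noise
    # (fractional-difference coefficients with d = alpha / 2).
    h = np.ones(n)
    for i in range(1, n):
        h[i] = h[i - 1] * (i - 1 + alpha / 2.0) / i
    T = np.zeros((n, n))
    for i in range(n):
        T[i, :i + 1] = h[i::-1]
    return T

def neg_loglik(params, r):
    # Gaussian log-likelihood of residuals r under white + power-law noise.
    sig_w, sig_pl, alpha = np.exp(params[0]), np.exp(params[1]), params[2]
    n = r.size
    T = powerlaw_filter(n, alpha)
    C = sig_w ** 2 * np.eye(n) + sig_pl ** 2 * (T @ T.T)
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + r @ np.linalg.solve(C, r) + n * np.log(2 * np.pi))

rng = np.random.default_rng(3)
n = 200
t = np.arange(n, dtype=float)
noise = 0.5 * rng.normal(size=n) + powerlaw_filter(n, 1.0) @ rng.normal(size=n)
y = 0.02 * t + noise
r = y - np.polyval(np.polyfit(t, y, 1), t)        # residuals of a linear trend fit

# A coarse grid search stands in for the gradient-based maximization used in practice.
grid = [(np.log(sw), np.log(sp), a)
        for sw in (0.3, 0.5, 0.8) for sp in (0.5, 1.0, 2.0) for a in (0.5, 1.0, 1.5)]
best = min(grid, key=lambda p: neg_loglik(p, r))
print("best (sig_w, sig_pl, alpha):", np.exp(best[0]), np.exp(best[1]), best[2])
```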

  7. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  8. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), the Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) fall in the formal category. L5 focuses on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost identical effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable parameter estimates due to violation of the residual-error assumptions. Thus, likelihood function L7 provides a credible posterior distribution of the model parameters and can therefore be employed for further applications.
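
    To make the informal/formal distinction concrete, the Python sketch below computes one informal measure (the Nash-Sutcliffe efficiency) and two formal Gaussian log-likelihoods, one with independent residuals and one with AR(1)-decorrelated residuals in the spirit of L7. The toy data and the simplified AR(1) treatment (the first-observation term is ignored) are illustrative assumptions, not the HEC-HMS setup used in the study.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # Informal likelihood measure: Nash-Sutcliffe efficiency (1 = perfect fit).
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gaussian_loglik(res):
    # Formal likelihood: iid Gaussian residuals with the error variance
    # profiled out at its maximum-likelihood value.
    n, sigma2 = res.size, np.mean(res ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def ar1_loglik(res, rho):
    # Formal likelihood with AR(1) residual errors: decorrelate the residuals,
    # then apply the Gaussian likelihood (first-observation term ignored).
    return gaussian_loglik(res[1:] - rho * res[:-1])

rng = np.random.default_rng(4)
obs = 10.0 + np.cumsum(rng.normal(size=50))       # toy "observed" hydrograph
sim = obs + rng.normal(0.0, 1.0, size=50)         # toy model simulation
res = obs - sim
print(f"NS = {nash_sutcliffe(obs, sim):.3f}, "
      f"Gaussian logL = {gaussian_loglik(res):.1f}, "
      f"AR(1) logL = {ar1_loglik(res, 0.5):.1f}")
```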

  9. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    PubMed

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

    In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices. That is, if we assume that the data from the same class are linearly correlated, then the basis vectors learned from the training set of each class will render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary, coefficients, and by simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset have demonstrated that the proposed tracking algorithm performs better than other state-of-the-art trackers.

  10. The retention of health human resources in primary healthcare centers in Lebanon: a national survey

    PubMed Central

    2012-01-01

    Background Critical shortages of health human resources (HHR), associated with high turnover rates, have been a concern in many countries around the globe. Of particular interest is the effect of such a trend on the primary healthcare (PHC) sector; considered a cornerstone in any effective healthcare system. This study is a rare attempt to investigate PHC HHR work characteristics, level of burnout and likelihood to quit as well as the factors significantly associated with staff retention at PHC centers in Lebanon. Methods A cross-sectional design was utilized to survey all health providers at 81 PHC centers dispersed in all districts of Lebanon. The questionnaire consisted of four sections: socio-demographic/ professional background, organizational/institutional characteristics, likelihood to quit and level of professional burnout (using the Maslach-Burnout Inventory). A total of 755 providers completed the questionnaire (60.5% response rate). Bivariate analyses and multinomial logistic regression were used to determine factors associated with likelihood to quit. Results Two out of five respondents indicated likelihood to quit their jobs within the next 1–3 years and an additional 13.4% were not sure about quitting. The top three reasons behind likelihood to quit were poor salary (54.4%), better job opportunities outside the country (35.1%) and lack of professional development (33.7%). A U-shaped relationship was observed between age and likelihood to quit. Regression analysis revealed that high levels of burnout, lower level of education and low tenure were all associated with increased likelihood to quit. Conclusions The study findings reflect an unstable workforce and are not conducive to supporting an expanded role for PHC in the Lebanese healthcare system. While strategies aiming at improving staff retention would be important to develop and implement for all PHC HHR; targeted retention initiatives should focus on the young-new recruits and allied health professionals. Particular attention should be dedicated to enhancing providers’ role satisfaction and sense of job security. Such initiatives are of pivotal importance to stabilize the workforce and ensure its longevity. PMID:23173905

  11. First in line: Prioritizing receipt of Social Security disability benefits based on likelihood of death during adjudication

    PubMed Central

    Rasch, Elizabeth K.; Huynh, Minh; Ho, Pei-Shu; Heuser, Aaron; Houtenville, Andrew; Chan, Leighton

    2014-01-01

    Background: Given the complexity of the adjudication process and volume of applications to Social Security Administration’s (SSA) disability programs, many individuals with serious medical conditions die while awaiting an application decision. Limitations of traditional survival methods called for a new empirical approach to identify conditions resulting in rapid mortality. Objective: To identify health conditions associated with significantly higher mortality than a key reference group among applicants for SSA disability programs. Research design: We identified mortality patterns and generated a survival surface for a reference group using conditions already designated for expedited processing. We identified conditions associated with significantly higher mortality than the reference group and prioritized them by the expected likelihood of death during the adjudication process. Subjects: Administrative records of 29 million Social Security disability applicants, who applied for benefits from 1996 – 2007, were analyzed. Measures: We computed survival spells from time of onset of disability to death, and from date of application to death. Survival data were organized by entry cohort. Results: In our sample, we observed that approximately 42,000 applicants died before a decision was made on their disability claims. We identified 24 conditions with survival profiles comparable to the reference group. Applicants with these conditions were not likely to survive adjudication. Conclusions: Our approach facilitates ongoing revision of the conditions SSA designates for expedited awards and has applicability to other programs where survival profiles are a consideration. PMID:25310524

  12. Fundamentals and Recent Developments in Approximate Bayesian Computation

    PubMed Central

    Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka

    2017-01-01

    Abstract Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922

  13. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of censored survival data from cancer patients following treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. With a gamma prior, the likelihood yields a gamma posterior distribution. The posterior distribution is used to find the Linex estimator of the rate parameter λ, from which estimators of the hazard function and survival function follow. Finally, we compare maximum likelihood estimation (MLE) with the Bayesian Linex approach and select the better method for these data by the smaller MSE. The results show that the MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than MLE.
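
    For a gamma posterior the Linex estimator has a closed form, -(1/c) log E[exp(-c λ)] = (α/c) log(1 + c/β), which the Python sketch below evaluates next to the MLE for a toy exponential sample. Censoring, and the specific prior and loss-shape values, are not taken from the paper; they are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy exponential survival times (censoring is ignored in this sketch).
x = rng.exponential(scale=2.0, size=30)           # true rate lambda = 0.5
n, s = x.size, x.sum()

# Gamma(a, b) prior on the rate (shape a, rate b) -> Gamma(a + n, b + s) posterior.
a, b = 2.0, 1.0
alpha, beta = a + n, b + s

# Linex loss with shape c: the Bayes estimator is -(1/c) * log E[exp(-c * lambda)],
# which for a gamma posterior reduces to (alpha / c) * log(1 + c / beta).
c = 1.0
lam_linex = (alpha / c) * np.log(1.0 + c / beta)
lam_mle = n / s

t = 1.0
for name, lam in [("MLE", lam_mle), ("Bayes-Linex", lam_linex)]:
    print(f"{name}: rate {lam:.3f}, hazard {lam:.3f}, S({t}) = {np.exp(-lam * t):.3f}")
```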

  15. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    DOE PAGES

    Li, Xinya; Deng, Z. Daniel; ...

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
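
    A generic approximate ML localization from time differences of arrival can be sketched as a nonlinear least-squares problem, as below. The hydrophone geometry, noise level and sound speed are hypothetical, and a general-purpose optimizer stands in for the specialized solver described in the paper.

```python
import numpy as np
from scipy.optimize import minimize

SOUND_SPEED = 1484.0                               # assumed speed of sound in water, m/s

# Hypothetical hydrophone positions (x, y, z) in metres.
hydrophones = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                        [10., 10., 0.], [5., 5., -8.]])

def range_differences(pos):
    # Range differences relative to the first hydrophone (metres); a TDOA times
    # the sound speed gives the same quantity.
    d = np.linalg.norm(hydrophones - pos, axis=1)
    return d[1:] - d[0]

def neg_loglik(pos, measured_rd, sigma=0.05):
    # Gaussian model for TDOA-derived range differences -> least-squares misfit.
    r = measured_rd - range_differences(pos)
    return 0.5 * np.sum((r / sigma) ** 2)

rng = np.random.default_rng(6)
true_pos = np.array([3.0, 7.0, -4.0])
measured_rd = range_differences(true_pos) + rng.normal(0.0, 0.05, size=len(hydrophones) - 1)

# Approximate ML solution by numerical minimization from a rough initial guess.
fit = minimize(neg_loglik, x0=np.array([5.0, 5.0, -2.0]), args=(measured_rd,))
print("estimated position (m):", np.round(fit.x, 2))
```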

  16. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  17. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    PubMed Central

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517

  18. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters.

    PubMed

    Li, Xinya; Deng, Z Daniel; Sun, Yannan; Martinez, Jayson J; Fu, Tao; McMichael, Geoffrey A; Carlson, Thomas J

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  19. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Deng, Z. Daniel

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  20. Identification of simple objects in image sequences

    NASA Astrophysics Data System (ADS)

    Geiselmann, Christoph; Hahn, Michael

    1994-08-01

    We present an investigation into the identification and location of simple objects in color image sequences. As an example, the identification of traffic signs is discussed. Three aspects are of special interest. First, regions that may contain the object have to be detected. The separation of those regions from the background can be based on color, motion, and contours. In the experiments all three possibilities are investigated. The second aspect focuses on the extraction of suitable features for the identification of the objects. For that purpose the border line of the region of interest is used. For planar objects a sufficient approximation of perspective projection is affine mapping. In consequence, it is natural to extract affine-invariant features from the border line. The investigation includes invariant features based on Fourier descriptors and moments. Finally, the object is identified by maximum likelihood classification. In the experiments all three basic object types are correctly identified. The probabilities for misclassification have been found to be below 1%.

  1. An Adaptive Buddy Check for Observational Quality Control

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Rukhovets, Leonid; Todling, Ricardo; DaSilva, Arlindo M.; Larson, Jay W.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    An adaptive buddy check algorithm is presented that adjusts tolerances for outlier observations based on the variability of surrounding data. The algorithm derives from a statistical hypothesis test combined with maximum-likelihood covariance estimation. Its stability is shown to depend on the initial identification of outliers by a simple background check. The adaptive feature ensures that the final quality control decisions are not very sensitive to prescribed statistics of first-guess and observation errors, nor to other approximations introduced into the algorithm. The implementation of the algorithm in a global atmospheric data assimilation system is described. Its performance is contrasted with that of a non-adaptive buddy check, for the surface analysis of an extreme storm that took place in Europe on 27 December 1999. The adaptive algorithm allowed the inclusion of many important observations that differed greatly from the first guess and that would have been excluded on the basis of prescribed statistics. The analysis of the storm development was much improved as a result of these additional observations.
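
    A crude Python sketch of the adaptive idea follows: each observation's background departure is compared with the departures of its buddies, and the rejection tolerance is scaled by the local spread so that active regions tolerate larger innovations. The statistical hypothesis test and maximum-likelihood covariance estimation of the actual algorithm are not reproduced; the neighbourhood definition and threshold below are assumptions.

```python
import numpy as np

def adaptive_buddy_check(obs, background, buddy_idx, tol=3.0):
    # Flag observations whose background departure disagrees with its buddies;
    # the tolerance scales with the local spread of the buddy departures.
    departure = obs - background
    flags = np.zeros(obs.size, dtype=bool)
    for i, buddies in enumerate(buddy_idx):
        local = departure[buddies]
        scale = max(local.std(), 1e-6)
        flags[i] = abs(departure[i] - local.mean()) > tol * scale
    return flags

rng = np.random.default_rng(7)
background = np.zeros(20)                          # first-guess field at 20 stations
obs = background + rng.normal(0.0, 1.0, size=20)   # observations with unit error
obs[4] += 8.0                                      # one gross outlier
buddy_idx = [[j for j in range(20) if 0 < abs(j - i) <= 3] for i in range(20)]
print("rejected observations:", np.where(adaptive_buddy_check(obs, background, buddy_idx))[0])
```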

  2. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K -sample distributions. Recognizing that recent statistical software packages do not sufficiently address K -sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p -values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p -value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p -value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.

  3. Field Experiments on Real-Time 1-Gbps High-Speed Packet Transmission in MIMO-OFDM Broadband Packet Radio Access

    NASA Astrophysics Data System (ADS)

    Taoka, Hidekazu; Higuchi, Kenichi; Sawahashi, Mamoru

    This paper presents experimental results in real propagation channel environments of real-time 1-Gbps packet transmission using antenna-dependent adaptive modulation and channel coding (AMC) with 4-by-4 MIMO multiplexing in the downlink Orthogonal Frequency Division Multiplexing (OFDM) radio access. In the experiment, Maximum Likelihood Detection employing QR decomposition and the M-algorithm (QRM-MLD) with adaptive selection of the surviving symbol replica candidates (ASESS) is employed to achieve such a high data rate at a lower received signal-to-interference plus background noise power ratio (SINR). The field experiments, which are conducted at the average moving speed of 30km/h, show that real-time packet transmission of greater than 1Gbps in a 100-MHz channel bandwidth (i.e., 10bits/second/Hz) is achieved at the average received SINR of approximately 13.5dB using 16QAM modulation and turbo coding with the coding rate of 8/9. Furthermore, we show that the measured throughput of greater than 1Gbps is achieved at the probability of approximately 98% in a measurement course, where the maximum distance from the cell site was approximately 300m with the respective transmitter and receiver antenna separation of 1.5m and 40cm with the total transmission power of 10W. The results also clarify that the minimum required receiver antenna spacing is approximately 10cm (1.5 carrier wavelengths) to suppress the loss in the required received SINR at 1-Gbps throughput to within 1dB compared to the case of zero fading correlation between antennas, both under non-line-of-sight (NLOS) and line-of-sight (LOS) conditions.

  4. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.

  5. Cramer-Rao Bound for Gaussian Random Processes and Applications to Radar Processing of Atmospheric Signals

    NASA Technical Reports Server (NTRS)

    Frehlich, Rod

    1993-01-01

    Calculations of the exact Cramer-Rao Bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRBs are derived using the Discrete Fourier Transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high-SNR limits for CRBs are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal to noise ratio and moderate spectral width.
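
    The exact CRB computation for a zero-mean Gaussian process follows from the Fisher information I_ij = 0.5 tr(C^-1 ∂C/∂θ_i C^-1 ∂C/∂θ_j). The Python sketch below applies this recipe to an assumed real-valued toy covariance (signal power, noise level and spectral width); the complex Doppler signal model of the paper would change the covariance and the constant factor, but not the recipe.

```python
import numpy as np

def gaussian_process_crb(C, dC_list):
    # Fisher information for a zero-mean Gaussian process with covariance C(theta):
    # I_ij = 0.5 * tr(C^-1 dC_i C^-1 dC_j); the CRB is the inverse of I.
    Cinv = np.linalg.inv(C)
    k = len(dC_list)
    I = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            I[i, j] = 0.5 * np.trace(Cinv @ dC_list[i] @ Cinv @ dC_list[j])
    return np.linalg.inv(I)

# Toy covariance: signal power P with a Gaussian-shaped correlation of width w,
# plus white noise of level N (a stand-in for a Doppler spectral model).
n, P, N, w = 64, 2.0, 1.0, 0.1
lags = np.subtract.outer(np.arange(n), np.arange(n))
R = np.exp(-2.0 * (np.pi * w * lags) ** 2)
C = P * R + N * np.eye(n)

dC = [R,                                            # dC/dP
      np.eye(n),                                    # dC/dN
      P * R * (-4.0 * np.pi ** 2 * w * lags ** 2)]  # dC/dw
crb = gaussian_process_crb(C, dC)
print("CRB standard deviations (P, N, w):", np.sqrt(np.diag(crb)))
```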

  6. Risk factors for father-daughter incest: data from an anonymous computerized survey.

    PubMed

    Stroebel, Sandra S; Kuo, Shih-Ya; O'Keefe, Stephen L; Beard, Keith W; Swindell, Sam; Kommor, Martin J

    2013-12-01

    Retrospective data from 2,034 female participants, provided anonymously using a computer-assisted self-interview, were used to identify risk factors for father-daughter incest (FDI). A total of 51 participants had reported having experienced FDI. The risk factors identified within the nuclear family by the multiple logistic regression analysis included the following: (a) Having parents whose relationship included verbal or physical fighting or brutality increased the likelihood of FDI by approximately 5 times; (b) family acceptance of father-daughter nudity, as measured by a scale with values ranging from 0 to 4, increased the likelihood of FDI by approximately 2 times for each one-unit increase above 0; (c) demonstrated maternal affection protected against FDI. The likelihood of being a victim of FDI was highest if the participant's mother never kissed or hugged her; it decreased by 0.44 for a 1-unit increase in affection and by 0.19 times for a 2-unit increase; and (d) being in homes headed by single-parent mothers or where divorce or death of the father had resulted in a man other than the biological father living in the home increased the risk of FDI by approximately 3.2 times. The results were consistent with the idea that FDI in many families was the cumulative result of a circular pattern of interactions, a finding that has implications for treatment of the perpetrator, the victim, and the families. The data also suggested it may be possible to design an information program for parents that will result in reducing the risk of FDI in families implementing the program's recommendations.

  7. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707

  8. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  9. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
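
    For a random-intercept logistic model, the marginal likelihood that the Laplace and adaptive Gaussian quadrature methods approximate is a one-dimensional integral per subject, which ordinary Gauss-Hermite quadrature can also evaluate. The Python sketch below does this for two binary observations per subject; it uses plain (non-adaptive) quadrature with an assumed known random-effect standard deviation and a grid search over the fixed effect, so it illustrates the integral rather than either approximation studied in the paper.

```python
import numpy as np

def marginal_loglik(beta, sigma_b, y, n_quad=20):
    # Marginal log-likelihood of a random-intercept logistic model: the
    # N(0, sigma_b^2) intercept of each subject is integrated out with
    # (non-adaptive) Gauss-Hermite quadrature.
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma_b * nodes             # change of variables for N(0, sigma_b^2)
    w = weights / np.sqrt(np.pi)
    p = 1.0 / (1.0 + np.exp(-(beta + b)))          # success probability at each node
    k = y.sum(axis=1, keepdims=True)               # successes per subject
    m = y.shape[1]
    cond = p ** k * (1.0 - p) ** (m - k)           # conditional likelihood per subject/node
    return np.sum(np.log(cond @ w))

rng = np.random.default_rng(8)
n_sub, m, beta_true = 200, 2, 0.5
b_true = rng.normal(0.0, 1.0, size=(n_sub, 1))
prob = 1.0 / (1.0 + np.exp(-(beta_true + b_true)))
y = (rng.uniform(size=(n_sub, m)) < prob).astype(float)

# Grid search over the fixed effect with sigma_b held at its true value,
# standing in for a full optimizer over both parameters.
betas = np.linspace(-1.0, 2.0, 61)
best = betas[np.argmax([marginal_loglik(b, 1.0, y) for b in betas])]
print("fixed-effect estimate:", round(best, 2))
```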

  10. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.

  11. Planck intermediate results. XVI. Profile likelihoods for cosmological parameters

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bonaldi, A.; Bond, J. R.; Bouchet, F. R.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Liddle, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski∗, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rouillé d'Orfeuil, B.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Savelainen, M.; Savini, G.; Spencer, L. D.; Spinelli, M.; Starck, J.-L.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-06-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agreement with the cosmological results from the Bayesian framework is excellent, demonstrating the robustness of the Planck results to the statistical methodology. We investigate the inclusion of neutrino masses, where more significant differences may appear due to the non-Gaussian nature of the posterior mass distribution. By applying the Feldman-Cousins prescription, we again obtain results very similar to those of the Bayesian methodology. However, the profile-likelihood analysis of the cosmic microwave background (CMB) combination (Planck+WP+highL) reveals a minimum well within the unphysical negative-mass region. We show that inclusion of the Planck CMB-lensing information regularizes this issue, and provide a robust frequentist upper limit ∑ mν ≤ 0.26 eV (95% confidence) from the CMB+lensing+BAO data combination.
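
    The profile-likelihood construction used above can be illustrated in miniature: fix the parameter of interest on a grid, re-minimize the likelihood over the remaining (nuisance) parameters at each grid point, and read confidence intervals off the resulting Δχ² curve. The sketch below is a generic toy (Gaussian data, SciPy's bounded minimizer), not the Planck pipeline, which uses Minuit on the full likelihood.

```python
# Toy profile likelihood: at each fixed value of the parameter of interest,
# re-minimize over the nuisance parameter (illustration only).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=1.2, scale=0.8, size=200)   # synthetic data (assumption)

def nll(mu, sigma):
    """Negative log-likelihood of i.i.d. normal data."""
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

def profile_nll(mu):
    """Minimize the negative log-likelihood over sigma at fixed mu."""
    res = minimize_scalar(lambda s: nll(mu, s), bounds=(1e-3, 10.0), method="bounded")
    return res.fun

mu_grid = np.linspace(0.9, 1.5, 61)
prof = np.array([profile_nll(m) for m in mu_grid])
delta_chi2 = 2.0 * (prof - prof.min())            # 2 * Delta(-ln L)
inside = mu_grid[delta_chi2 <= 1.0]               # ~68% interval for 1 parameter
print(f"best-fit mu ~ {mu_grid[prof.argmin()]:.3f}, "
      f"68% interval ~ [{inside.min():.3f}, {inside.max():.3f}]")
```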

  12. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for estimating model parameters from image data, together with the proposed identifiability analysis, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.

  13. Preserving Flow Variability in Watershed Model Calibrations

    EPA Science Inventory

    Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...

  14. Maximum Likelihood Detection of Electro-Optic Moving Targets

    DTIC Science & Technology

    1992-01-16

    indicates intensity. The Infrared Measurements Sensor (IRMS) is a scanning sensor that collects both long wavelength infrared (LWIR, 8 to 12 μm...moving clutter. Nonstationary spatial statistics correspond to the nonuniform intensity of the background scene. An equivalent viewpoint is to...Figure 6 compares theory and experiment for 10 frames of the Longjump LWIR data obtained from the IRMS scanning sensor, which is looking at a background

  15. Probabilistic fusion of stereo with color and contrast for bilayer segmentation.

    PubMed

    Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten

    2006-09-01

    This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.

  16. Planck 2015 results. XV. Gravitational lensing

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present the most significant measurement of the cosmic microwave background (CMB) lensing potential to date (at a level of 40σ), using temperature and polarization data from the Planck 2015 full-mission release. Using a polarization-only estimator, we detect lensing at a significance of 5σ. We cross-check the accuracy of our measurement using the wide frequency coverage and complementarity of the temperature and polarization measurements. Public products based on this measurement include an estimate of the lensing potential over approximately 70% of the sky, an estimate of the lensing potential power spectrum in bandpowers for the multipole range 40 ≤ L ≤ 400, and an associated likelihood for cosmological parameter constraints. We find good agreement between our measurement of the lensing potential power spectrum and that found in the ΛCDM model that best fits the Planck temperature and polarization power spectra. Using the lensing likelihood alone we obtain a percent-level measurement of the parameter combination σ8 Ωm^0.25 = 0.591 ± 0.021. We combine our determination of the lensing potential with the E-mode polarization, also measured by Planck, to generate an estimate of the lensing B-mode. We show that this lensing B-mode estimate is correlated with the B-modes observed directly by Planck at the expected level and with a statistical significance of 10σ, confirming Planck's sensitivity to this known sky signal. We also correlate our lensing potential estimate with the large-scale temperature anisotropies, detecting a cross-correlation at the 3σ level, as expected because of dark energy in the concordance ΛCDM model.

  17. Factors Linked to Substance Use Disorder Counselors’ (Non)Implementation Likelihood of Tobacco Cessation 5 A's, Counseling, and Pharmacotherapy

    PubMed Central

    Laschober, Tanja C.; Muilenburg, Jessica L.; Eby, Lillian T.

    2015-01-01

    Study Background Despite efforts to promote the use of tobacco cessation services (TCS), implementation extensiveness remains limited. This study investigated three factors (cognitive, behavioral, environmental) identified by social cognitive theory as predictors of substance use disorder counselors’ likelihood of use versus non-use of tobacco cessation (TC) 5 A's (ask patients about tobacco use, advise to quit, assess willingness to quit, assist in quitting, arrange for follow-up contact), counseling, and pharmacotherapy with their patients who smoke cigarettes. Methods Data were collected in 2010 from 942 counselors working in 257 treatment programs that offered TCS. Cognitive factors included perceived job competence and TC attitudes. Behavioral factors encompassed TC-related skills and general training. External factors consisted of TC financial resource availability and coworker TC attitudes. Data were analyzed using logistic regression models with nested data. Results Approximately 86% of counselors used the 5 A's, 76% used counseling, and 53% used pharmacotherapy. When counselors had greater TC-related skills and greater general training they were more likely to implement the 5 A's. Implementation of counseling was more likely when counselors had more positive attitudes toward TC treatment, greater general training, greater financial resource availability, and when coworkers had more positive attitudes toward TC treatment. Implementation of pharmacotherapy was more likely when counselors had more positive attitudes toward TC treatment, greater general training, and greater financial resource availability. Conclusion Findings indicate that interventions to promote TCS implementation should consider all three factors simultaneously as suggested by social cognitive theory. PMID:26005696

  18. Descriptive and Injunctive Network Norms Associated with Non Medical Use of Prescription Drugs among Homeless Youth

    PubMed Central

    Barman-Adhikari, Anamika; Al Tayyib, Alia; Begun, Stephanie; Bowen, Elizabeth; Rice, Eric

    2016-01-01

    Background Nonmedical use of prescription drugs (NMUPD) among youth and young adults is being increasingly recognized as a significant public health problem. Homeless youth in particular are more likely to engage in NMUPD compared to housed youth. Studies suggest that network norms are strongly associated with a range of substance use behaviors. However, evidence regarding the association between network norms and NMUPD is scarce. We sought to understand whether social network norms of NMUPD are associated with engagement in NMUPD among homeless youth. Methods 1,046 homeless youth were recruited from three drop-in centers in Los Angeles, CA and were interviewed regarding their individual and social network characteristics. Multivariate logistic regression was employed to evaluate the significance of associations between social norms (descriptive and injunctive) and self-reported NMUPD. Results Approximately 25% of youth reported past 30-day NMUPD. However, a larger proportion of youth (32.28%) believed that their network members engage in NMUPD, perhaps suggesting some pluralistic ignorance bias. Both descriptive and injunctive norms were associated with self-reported NMUPD among homeless youth. However, these varied by network type, with the presence of NMUPD-engaged street-based and home-based peers (descriptive norm) increasing the likelihood of NMUPD, while objections from family members (injunctive norm) decreased that likelihood. Conclusions Our findings suggest that, like other substance use behaviors, NMUPD is also influenced by youths’ perceptions of the behaviors of their social network members. Therefore, prevention and intervention programs designed to influence NMUPD might benefit from taking a social network norms approach. PMID:27563741

  19. Fragment assignment in the cloud with eXpress-D

    PubMed Central

    2013-01-01

    Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available and can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
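
    The batch EM iteration that eXpress-D distributes can be sketched on a toy problem: the E-step splits each ambiguous fragment across compatible transcripts in proportion to the current abundance estimates, and the M-step re-estimates abundances from those fractional counts. The compatibility matrix and equal-length-transcript assumption below are made up for illustration; this is not the eXpress-D/Spark implementation.

```python
import numpy as np

# Toy EM for fragment assignment (illustrative; not the eXpress-D/Spark code).
# compat[i, j] = 1 if fragment i maps to transcript j (equal-length transcripts assumed).
compat = np.array([
    [1, 1, 0],   # fragment 0 maps ambiguously to transcripts 0 and 1
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
], dtype=float)

n_frags, n_tx = compat.shape
theta = np.full(n_tx, 1.0 / n_tx)           # initial relative abundances

for _ in range(100):
    # E-step: posterior assignment probabilities for each fragment.
    weights = compat * theta                # unnormalized responsibilities
    weights /= weights.sum(axis=1, keepdims=True)
    # M-step: abundances proportional to expected fragment counts.
    theta_new = weights.sum(axis=0) / n_frags
    if np.max(np.abs(theta_new - theta)) < 1e-10:
        theta = theta_new
        break
    theta = theta_new

print("estimated relative abundances:", np.round(theta, 4))
```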

  20. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
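
    The invariance claim above, that a non-singular affine transformation y = Ax + b leaves Gaussian maximum-likelihood class assignments unchanged, is easy to check numerically: class means and covariances transform exactly, and the extra log|det A| term is common to all classes. The simulated four-band "spectra" below are an assumption for illustration, not AVIRIS data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two classes of simulated 4-band "spectra" (illustrative data, not AVIRIS).
train0 = rng.normal([1.0, 2.0, 3.0, 4.0], 0.3, size=(50, 4))
train1 = rng.normal([1.5, 1.8, 3.5, 3.6], 0.3, size=(50, 4))
test = rng.normal([1.2, 1.9, 3.2, 3.9], 0.4, size=(30, 4))

def gaussian_ml_labels(tr0, tr1, x):
    """Assign each row of x to the class with the larger Gaussian log-likelihood."""
    def loglik(tr, x):
        mu, cov = tr.mean(axis=0), np.cov(tr, rowvar=False)
        diff = x - mu
        maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        return -0.5 * (maha + np.linalg.slogdet(cov)[1])
    return (loglik(tr1, x) > loglik(tr0, x)).astype(int)

# Non-singular affine transformation y = A x + b (A is almost surely invertible).
A = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)
b = rng.normal(size=4)
affine = lambda x: x @ A.T + b

labels_raw = gaussian_ml_labels(train0, train1, test)
labels_affine = gaussian_ml_labels(affine(train0), affine(train1), affine(test))
print("identical class assignments:", np.array_equal(labels_raw, labels_affine))
```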

  1. Cramer-Rao bound analysis of wideband source localization and DOA estimation

    NASA Astrophysics Data System (ADS)

    Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung

    2002-12-01

    In this paper, we derive the Cramér-Rao Bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristic and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using some algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the sampling angular spacing for the maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidth. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results for the Approximated Maximum Likelihood (AML) algorithm are shown to match the CRB analysis well at high SNR.

  2. Markov-modulated Markov chains and the covarion process of molecular evolution.

    PubMed

    Galtier, N; Jean-Marie, A

    2004-01-01

    The covarion (or site-specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among-site rate variation (ASRV) models.
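
    The discrete Gamma approximation referred to here follows the usual construction: split a mean-one Gamma(α, rate α) rate distribution into K equal-probability bins and use the mean rate of each bin as the category rate. The sketch below is a generic implementation of that construction using SciPy, not the authors' code.

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammainc

def discrete_gamma_rates(alpha, k):
    """Mean rate in each of k equal-probability categories of a mean-one Gamma(alpha)."""
    # Interior bin edges at quantiles 1/k, ..., (k-1)/k of Gamma(shape=alpha, rate=alpha).
    interior = gamma.ppf(np.arange(1, k) / k, a=alpha, scale=1.0 / alpha)
    # Regularized incomplete gamma P(alpha + 1, alpha * x) evaluated at 0, the edges, +inf.
    p = np.concatenate(([0.0], gammainc(alpha + 1, alpha * interior), [1.0]))
    # Conditional mean of each bin, times k because every bin has probability 1/k.
    return k * np.diff(p)

rates = discrete_gamma_rates(alpha=0.5, k=4)
print("4-category rates:", np.round(rates, 4), "  mean:", rates.mean())
```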

  3. Basics of Bayesian methods.

    PubMed

    Ghosh, Sujit K

    2010-01-01

    Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution, which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
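
    The prior-times-likelihood mechanics described here can be made concrete with the standard conjugate Beta-binomial example, where the posterior is available in closed form; the prior and data values below are arbitrary illustrations.

```python
import numpy as np
from scipy.stats import beta

# Conjugate Beta-binomial update: prior knowledge + current data -> posterior.
a_prior, b_prior = 2.0, 8.0          # prior belief: success probability is low
successes, failures = 7, 13          # current data (arbitrary illustration)

a_post, b_post = a_prior + successes, b_prior + failures
posterior = beta(a_post, b_post)

print(f"posterior mean        = {posterior.mean():.3f}")
print(f"95% credible interval = {np.round(posterior.interval(0.95), 3)}")
```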

  4. Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.

  5. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
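
    The ordinary (unstandardized) likelihood ratio statistic underlying such tests can be sketched as follows: under the null hypothesis the group parameters are constrained so that every log-normal mean exp(mu_i + sigma_i^2/2) equals a common value M, and 2(logL_full - logL_null) is referred to a chi-square distribution with k - 1 degrees of freedom. The data and the use of a generic Nelder-Mead optimizer below are illustrative assumptions; this is not the standardized (SLRT) procedure of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(2)
# Made-up log-normal samples from 3 groups (illustration only).
groups = [rng.lognormal(mean=1.0, sigma=s, size=n)
          for s, n in [(0.4, 30), (0.6, 25), (0.5, 40)]]
logs = [np.log(g) for g in groups]
k = len(groups)

def loglik(mu, sigma, x):
    return np.sum(norm.logpdf(x, loc=mu, scale=sigma))

# Unrestricted maximum: per-group sample mean and (MLE) standard deviation of the logs.
l_full = sum(loglik(x.mean(), x.std(), x) for x in logs)

# Null hypothesis: a common log-normal mean M, i.e. mu_i = log(M) - sigma_i**2 / 2.
def neg_l_null(params):
    logM, sigmas = params[0], np.abs(params[1:])
    return -sum(loglik(logM - sig**2 / 2, sig, x) for sig, x in zip(sigmas, logs))

start = np.concatenate(([np.mean([x.mean() for x in logs])], [x.std() for x in logs]))
res = minimize(neg_l_null, start, method="Nelder-Mead")

lrt = 2.0 * (l_full + res.fun)           # 2 * (logL_full - logL_null)
p_value = chi2.sf(lrt, df=k - 1)
print(f"LRT statistic = {lrt:.3f}, p-value = {p_value:.3f}")
```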

  7. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
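
    The discrete Wright-Fisher dynamics mentioned here can be simulated directly when the population is not too large: each generation, selection reweights the mutant frequency and binomial sampling adds drift. The brute-force simulation below (haploid selection, no mutation, made-up parameter values) is only a reference point, not the authors' alternative approximation.

```python
import numpy as np

rng = np.random.default_rng(3)

def wright_fisher(n_pop, p0, s, generations, replicates=100_000):
    """Simulate mutant allele frequencies under the discrete Wright-Fisher model
    with selection coefficient s (haploid, no mutation)."""
    p = np.full(replicates, p0)
    for _ in range(generations):
        # Selection shifts the sampling probability, then binomial drift resamples.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = rng.binomial(n_pop, p_sel) / n_pop
    return p

freqs = wright_fisher(n_pop=1000, p0=0.05, s=0.3, generations=25)
print(f"mean frequency after 25 generations: {freqs.mean():.3f}")
print(f"fixation fraction: {(freqs == 1.0).mean():.3f}")
```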

  8. Evidence-Based Secondary Transition Practices for Enhancing School Completion

    ERIC Educational Resources Information Center

    Test, David W.; Fowler, Catherine H.; White, James; Richter, Sharon; Walker, Allison

    2009-01-01

    Approximately 28% of students with disabilities do not complete high school (National Longitudinal Transition Study-2, 2005). This increases the likelihood that these students will experience low wages, high rates of incarceration, and limited access to postsecondary education. This article reviews evidence-based secondary transition practices…

  9. ABrox-A user-friendly Python module for approximate Bayesian computation with a focus on model comparison.

    PubMed

    Mertens, Ulf Kai; Voss, Andreas; Radev, Stefan

    2018-01-01

    We give an overview of the basic principles of approximate Bayesian computation (ABC), a class of stochastic methods that enable flexible and likelihood-free model comparison and parameter estimation. Our new open-source software called ABrox is used to illustrate ABC for model comparison on two prominent statistical tests, the two-sample t-test and the Levene-Test. We further highlight the flexibility of ABC compared to classical Bayesian hypothesis testing by computing an approximate Bayes factor for two multinomial processing tree models. Last but not least, throughout the paper, we introduce ABrox using the accompanying graphical user interface.
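
    The likelihood-free logic that ABrox wraps can be reduced to a few lines of rejection ABC: draw parameters from the prior, simulate a summary statistic, and keep the draws whose simulated summary lands within a tolerance of the observed one. The sketch below (normal mean with known scale, arbitrary tolerance) is a generic illustration, not ABrox itself.

```python
import numpy as np

rng = np.random.default_rng(4)
observed = rng.normal(2.0, 1.0, size=50)          # "observed" data (illustration)
obs_summary = observed.mean()

# Rejection ABC for the mean of a normal with known scale.
n_draws, epsilon = 200_000, 0.05
theta = rng.normal(0.0, 5.0, size=n_draws)        # prior draws
# Simulate the summary directly from its sampling distribution (mean of 50 N(theta, 1) draws).
sim_summary = rng.normal(theta, 1.0 / np.sqrt(len(observed)))
accepted = theta[np.abs(sim_summary - obs_summary) < epsilon]

print(f"accepted {accepted.size} draws; "
      f"posterior mean ~ {accepted.mean():.3f}, sd ~ {accepted.std():.3f}")
```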

  10. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  11. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.

  12. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses. PMID:24516328

  13. Soft decoding a self-dual (48, 24; 12) code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  14. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
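
    A stripped-down version of the same idea, seasonally varying parameters estimated by maximizing a Gaussian likelihood, can be sketched for a periodic AR(1) model in which each month has its own autoregressive coefficient and innovation standard deviation. The conditional (rather than exact) likelihood and the synthetic monthly series below are simplifying assumptions, not the paper's algorithm for full PARMA models.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
P = 12                                        # period (months)

# Simulate a periodic AR(1): x_t = phi[month] * x_{t-1} + sigma[month] * e_t.
true_phi = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(P) / P)
true_sigma = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(P) / P)
x = np.zeros(30 * P)
for t in range(1, x.size):
    m = t % P
    x[t] = true_phi[m] * x[t - 1] + true_sigma[m] * rng.normal()

def neg_cond_loglik(params):
    """Gaussian log-likelihood of a PAR(1) model, conditional on the first observation."""
    phi, log_sigma = params[:P], params[P:]
    t = np.arange(1, x.size)
    m = t % P
    resid = x[t] - phi[m] * x[t - 1]
    return -np.sum(norm.logpdf(resid, scale=np.exp(log_sigma[m])))

res = minimize(neg_cond_loglik, x0=np.zeros(2 * P), method="L-BFGS-B")
print("estimated phi by month:", np.round(res.x[:P], 2))
print("true phi by month:     ", np.round(true_phi, 2))
```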

  15. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

    Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to easily explore them with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of sciences. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need of evaluating the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.

  16. Retaining the mental health nursing workforce: early indicators of retention and attrition.

    PubMed

    Robinson, Sarah; Murrells, Trevor; Smith, Elizabeth M

    2005-12-01

    In the UK, strategies to improve retention of the mental health workforce feature prominently in health policy. This paper reports on a longitudinal national study into the careers of mental health nurses in the UK. The findings reveal little attrition during the first 6 months after qualification. Investigation of career experiences showed that the main sources of job satisfaction were caregiving opportunities and supportive working relationships. The main sources of dissatisfaction were pay in relation to responsibility, paperwork, continuing education opportunities, and career guidance. Participants were asked whether they predicted being in nursing in the future. Gender and ethnicity were related to the likelihood of remaining in nursing in 5 years' time. Age, having children, educational background, ethnic background, and time in first job were associated with the likelihood of remaining in nursing at 10 years. Associations between elements of job satisfaction (quality of clinical supervision, ratio of qualified to unqualified staff, support from immediate line manager, and paperwork) and anticipated retention are complex and there are likely to be interaction effects because of the complexity of the issues. Sustaining positive experiences, remedying sources of dissatisfaction, and supporting diplomates from all backgrounds should be central to the development of retention strategies.

  17. Estimating the Parameters of the Beta-Binomial Distribution.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1979-01-01

    For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissible…
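
    The estimation problem described here can be sketched with a generic numerical optimizer on the beta-binomial log-likelihood; the Newton-Raphson iteration discussed in the abstract would instead use analytic first and second derivatives. The synthetic score data are an assumption, and the sketch relies on scipy.stats.betabinom (SciPy ≥ 1.4).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

n_items = 40                                  # test length
# Synthetic number-correct scores from a beta-binomial (illustration only).
scores = betabinom.rvs(n_items, 6.0, 4.0, size=500, random_state=123)

def neg_loglik(log_params):
    a, b = np.exp(log_params)                 # log-parametrization keeps a, b > 0
    return -np.sum(betabinom.logpmf(scores, n_items, a, b))

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"maximum likelihood estimates: alpha = {a_hat:.2f}, beta = {b_hat:.2f}")
```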

  18. The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.

    ERIC Educational Resources Information Center

    Ethington, Corinna A.

    1987-01-01

    This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical variables. The analysis of mixed matrices produced estimates that closely approximated the model parameters except where dichotomous variables were…

  19. Robust Estimation of Latent Ability in Item Response Models

    ERIC Educational Resources Information Center

    Schuster, Christof; Yuan, Ke-Hai

    2011-01-01

    Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…

  20. Assessing the risks posed by goldspotted oak borer to California and beyond

    Treesearch

    Robert C. Venette; Tom W. Coleman; Steve Seybold

    2015-01-01

    Goldspotted oak borer, Agrilus auroguttatus, has killed approximately 27,000 mature oaks in southern California. Consequently, the future spread of this insect is a significant concern to many oak woodland managers in California and across the United States. "Risk" reflects the likelihood that A. auroguttatus will...

  1. Old Disease, New Threat: TB Makes an Unwelcome Comeback.

    ERIC Educational Resources Information Center

    Pyszczynski, Dennis R.

    1992-01-01

    Tuberculosis is on the rise in the United States. The article discusses its background, factors contributing to its increase, the likelihood of children and teens developing tuberculosis, how to prevent the disease from spreading, and what schools and states can do. (SM)

  2. Spectral analysis of the gamma-ray background near the dwarf Milky Way satellite Segue 1: Improved limits on the cross section of neutralino dark matter annihilation

    DOE PAGES

    Baushev, A. N.; Federici, S.; Pohl, M.

    2012-09-20

    The indirect detection of dark matter requires that dark matter annihilation products be discriminated from conventional astrophysical backgrounds. We re-analyze GeV-band gamma-ray observations of the prominent Milky Way dwarf satellite galaxy Segue 1, for which the expected astrophysical background is minimal. Here, we explicitly account for the angular extent of the conservatively expected gamma-ray signal and keep the uncertainty in the dark-matter profile external to the likelihood analysis of the gamma-ray data.

  3. Noise correlations in cosmic microwave background experiments

    NASA Technical Reports Server (NTRS)

    Dodelson, Scott; Kosowsky, Arthur; Myers, Steven T.

    1995-01-01

    Many analyses of microwave background experiments neglect the correlation of noise in different frequency or polarization channels. We show that these correlations, should they be present, can lead to severe misinterpretation of an experiment. In particular, correlated noise arising from either electronics or atmosphere may mimic a cosmic signal. We quantify how the likelihood function for a given experiment varies with noise correlation, using both simple analytic models and actual data. For a typical microwave background anisotropy experiment, noise correlations at the level of 1% of the overall noise can seriously reduce the significance of a given detection.
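
    The effect described, correlated noise between channels masquerading as a common sky signal, shows up in a small simulation: two channels observe the same pixels with noise correlation ρ, and an analysis that wrongly assumes independent noise attributes the common component to signal. All numbers below are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
n_pix, sigma_n, rho = 500, 1.0, 0.3     # pixels, noise level, noise correlation (made up)

# Two channels, NO cosmic signal, but correlated noise between the channels.
noise_cov = sigma_n**2 * np.array([[1.0, rho], [rho, 1.0]])
d = rng.multivariate_normal([0.0, 0.0], noise_cov, size=n_pix)   # shape (n_pix, 2)

def neg_loglik(sig2):
    """Gaussian likelihood that (wrongly) assumes uncorrelated noise plus a common signal."""
    cov = sig2 * np.ones((2, 2)) + sigma_n**2 * np.eye(2)
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum("ij,jk,ik->", d, inv, d)
    return 0.5 * (quad + n_pix * logdet)

res = minimize_scalar(neg_loglik, bounds=(0.0, 5.0), method="bounded")
print(f"spurious best-fit signal variance: {res.x:.3f} (true value is 0)")
```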

  4. Math-oriented fields of study and the race gap in graduation likelihoods at elite colleges.

    PubMed

    Gelbgiser, Dafna; Alon, Sigal

    2016-07-01

    This study examines the relationship between chosen field of study and the race gap in college completion among students at elite colleges. Fields of study are characterized by varying institutional arrangements, which impact the academic performance of students in higher education. If the effect of fields on graduation likelihoods is unequal across racial groups, then this may account for part of the overall race gap in college completion. Results from a large sample of students attending elite colleges confirm that fields of study influence the graduation likelihoods of all students, above and beyond factors such as students' academic and social backgrounds. This effect, however, is asymmetrical: relative to white students, the negative effect of the institutional arrangements of math-oriented fields on graduation likelihood is greater for black students. Therefore, the race gap is larger within math-oriented fields than in other fields, which contributes to the overall race gap in graduation likelihoods at these selective colleges. These results indicate that a nontrivial share of the race gap in college completion is generated after matriculation, by the environments that students encounter in college. Consequently, policy interventions that target field of study environments can substantially mitigate racial disparities in college graduation rates. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants, invertebrate, and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four rate category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand a greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
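
    The PSO update applied to the likelihood surface can be shown on a toy objective: each particle tracks its personal best, the swarm tracks a global best, and velocities are nudged toward both with an inertia term. The objective function, swarm size, and coefficients below are generic choices, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(x):
    """Toy stand-in for a negative log-likelihood surface (Rosenbrock function)."""
    return (1 - x[:, 0])**2 + 100 * (x[:, 1] - x[:, 0]**2)**2

n_particles, n_iter, dim = 40, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
pos = rng.uniform(-3, 3, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("PSO minimum found at:", np.round(gbest, 4))
```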

  7. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior a very large number of times.
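
    The kind of posterior the abstract describes can be sketched on a grid for a counting experiment with known expected background b, assuming the limiting prior takes the Jeffreys-type form π(s) ∝ 1/√(s + b); that assumed prior form, the counts, and the grid are illustrative choices rather than the paper's exact expressions.

```python
import numpy as np
from scipy.stats import poisson

n_obs, b = 12, 5.0                      # observed counts, expected background (made up)

# Grid-based posterior for the signal intensity s with the assumed limiting prior.
s = np.linspace(0.0, 30.0, 3001)
ds = s[1] - s[0]
prior = 1.0 / np.sqrt(s + b)            # assumed Jeffreys-type limiting form
likelihood = poisson.pmf(n_obs, s + b)
post = prior * likelihood
post /= post.sum() * ds                 # normalize on the grid

mean = (s * post).sum() * ds
cdf = np.cumsum(post) * ds
upper95 = s[np.searchsorted(cdf, 0.95)]
print(f"posterior mean = {mean:.2f}, 95% upper limit ~ {upper95:.2f}")
```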

  8. Prioritizing the School Environment in School Violence Prevention Efforts

    ERIC Educational Resources Information Center

    Johnson, Sarah Lindstrom; Burke, Jessica G.; Gielen, Andrea C.

    2011-01-01

    Background: Numerous studies have demonstrated an association between characteristics of the school environment and the likelihood of school violence. However, little is known about the relative importance of various characteristics of the school environment or their differential impact on multiple violence outcomes. Methods: Primarily…

  9. Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures.

    PubMed

    Stamatakis, Alexandros; Ott, Michael

    2008-12-27

    The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAXML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.

  10. Stable indications of relic gravitational waves in Wilkinson Microwave Anisotropy Probe data and forecasts for the Planck mission

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Baskaran, D.; Grishchuk, L. P.

    2009-10-01

    The relic gravitational waves are the cleanest probe of the violent times in the very early history of the Universe. They are expected to leave signatures in the observed cosmic microwave background anisotropies. We significantly improved our previous analysis [W. Zhao, D. Baskaran, and L. P. Grishchuk, Phys. Rev. D 79, 023002 (2009)] of the 5-year WMAP TT and TE data at lower multipoles ℓ. This more general analysis returned essentially the same maximum likelihood result (unfortunately, surrounded by large remaining uncertainties): The relic gravitational waves are present and they are responsible for approximately 20% of the temperature quadrupole. We identify and discuss the reasons by which the contribution of gravitational waves can be overlooked in a data analysis. One of the reasons is a misleading reliance on data from very high multipoles ℓ and another a too narrow understanding of the problem as the search for B modes of polarization, rather than the detection of relic gravitational waves with the help of all correlation functions. Our analysis of WMAP5 data has led to the identification of a whole family of models characterized by relatively high values of the likelihood function. Using the Fisher matrix formalism we formulated forecasts for Planck mission in the context of this family of models. We explore in detail various “optimistic,” “pessimistic,” and “dream case” scenarios. We show that in some circumstances the B-mode detection may be very inconclusive, at the level of signal-to-noise ratio S/N=1.75, whereas a smarter data analysis can reveal the same gravitational wave signal at S/N=6.48. The final result is encouraging. Even under unfavorable conditions in terms of instrumental noises and foregrounds, the relic gravitational waves, if they are characterized by the maximum likelihood parameters that we found from WMAP5 data, will be detected by Planck at the level S/N=3.65.

  11. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately...implication of the compactness of the Hessian is that for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This...probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this

  12. A computer program for estimation from incomplete multinomial data

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.

  13. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  14. Case-Deletion Diagnostics for Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Lu, Bin

    2003-01-01

    In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. A one-step pseudo approximation is proposed to reduce the…

  15. Resource Guide: Accountability for English Learners under the ESEA. Non-Regulatory Guidance

    ERIC Educational Resources Information Center

    Office of Elementary and Secondary Education, US Department of Education, 2017

    2017-01-01

    English learners (ELs) are among the fastest-growing populations of students in our nation's public schools. This diverse subgroup of approximately 4.5 million students brings important cultural and linguistic assets to the public education system, but also faces a greater likelihood of lower graduation rates, academic achievement, and college…

  16. Examining Proportional Representation of Ethnic Groups within the SWPBIS Model

    ERIC Educational Resources Information Center

    Jewell, Kelly

    2012-01-01

    The quantitative study seeks to analyze if School-wide Positive Behavior Intervention and Support (SWPBIS) model reduces the likelihood that minority students will receive more individualized supports due to behavior problems. In theory, the SWPBIS model should reflect a 3-tier system with tier 1 representing approximately 80%, tier 2 representing…

  17. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    PubMed

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

    Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of major accidents' relevant data since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.

  18. Family Caregiver Identity: A Literature Review

    ERIC Educational Resources Information Center

    Eifert, Elise K.; Adams, Rebecca; Dudley, William; Perko, Michael

    2015-01-01

    Background: Despite the multitude of available resources, family caregivers of those with chronic disease continually underutilize support services to cope with the demands of caregiving. Several studies have linked self-identification as a caregiver to the increased likelihood of support service use. Purpose: The present study reviewed the…

  19. The Forgotten 90%: Adult Nonparticipation in Education

    ERIC Educational Resources Information Center

    Patterson, Margaret Becker

    2018-01-01

    Despite a highly developed U.S. adult education system, 90% of adults aged 20 years and older considered the least educated did not participate recently in formal or nonformal education. What are nonparticipants' characteristics, learning backgrounds, and skill levels? What predicts their likelihood of "not" participating in recent…

  20. The Effects of Alcohol on Spontaneous Clearance of Acute Hepatitis C Virus Infection in Females versus Males

    PubMed Central

    Tsui, Judith I.; Mirzazadeh, Ali; Hahn, Judith A.; Maher, Lisa; Bruneau, Julie; Grebely, Jason; Hellard, Margaret; Kim, Arthur Y.; Shoukry, Naglaa H.; Cox, Andrea L.; Prins, Maria; Dore, Gregory; Lauer, Georg; Lloyd, Andrew; Page, Kimberly

    2016-01-01

    Background Approximately one quarter of persons exposed to hepatitis C virus (HCV) will spontaneously clear infection. We undertook this study to investigate the impact of alcohol on likelihood of HCV spontaneous viral clearance stratified by sex groups. Methods Pooled data from an international collaboration of prospective observational studies of incident HIV and HCV infection in high-risk cohorts (the InC3 Study) was restricted to 411 persons (or 560.7 person-years of observation) with documented acute HCV infection and data regarding alcohol use. The predictor of interest was self-reported alcohol use at or after estimated date of incident HCV infection and the outcome was HCV spontaneous clearance. Sex stratified Cox proportional hazards models were used to evaluate the association between alcohol and spontaneous clearance, adjusting for age, race/ethnicity, and IFNL4 genotype. Results The median age was 28.5 years, 30.4% were women, 87.2% were white, and 71.8% reported alcohol use at or after incident infection. There were 89 (21.6%) cases of spontaneous clearance observed, 39 (31.2%) among women and 50 (17.5%) in men (p<0.01). Overall, spontaneous clearance occurred less frequently among participants who drank alcohol compared to those who did not drink (18.9% v. 28.5%, p=0.03). After adjustment for other covariates, alcohol was significantly and independently associated with lower relative hazards for spontaneous clearance of HCV in women (AHR=0.35; 95% CI: 0.19-0.66; p=0.001) but not in men (AHR=0.63; 95% CI: 0.36-1.09; p=0.10). Conclusion Results indicate that abstaining from drinking alcohol may increase the likelihood of spontaneous clearance among women. PMID:27816863

  1. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    PubMed Central

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Summary Background and objectives Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict likelihood of missing BCr. Propensity scoring identified 6502 patients with highest likelihood of missing BCr among 13,003 patients with known BCr to simulate a “missing” data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI were compared with that of eGFR 75. Results All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001). Improvements in misclassification were greater in patients with impaired kidney function (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980

  2. Contagion in Mass Killings and School Shootings

    PubMed Central

    Towers, Sherry; Gomez-Lievano, Andres; Khan, Maryam; Mubayi, Anuj; Castillo-Chavez, Carlos

    2015-01-01

    Background Several past studies have found that media reports of suicides and homicides appear to subsequently increase the incidence of similar events in the community, apparently due to the coverage planting the seeds of ideation in at-risk individuals to commit similar acts. Methods Here we explore whether or not contagion is evident in more high-profile incidents, such as school shootings and mass killings (incidents with four or more people killed). We fit a contagion model to recent data sets related to such incidents in the US, with terms that take into account the fact that a school shooting or mass murder may temporarily increase the probability of a similar event in the immediate future, by assuming an exponential decay in contagiousness after an event. Conclusions We find significant evidence that mass killings involving firearms are incented by similar events in the immediate past. On average, this temporary increase in probability lasts 13 days, and each incident incites at least 0.30 new incidents (p = 0.0015). We also find significant evidence of contagion in school shootings, for which an incident is contagious for an average of 13 days, and incites an average of at least 0.22 new incidents (p = 0.0001). All p-values are assessed based on a likelihood ratio test comparing the likelihood of a contagion model to that of a null model with no contagion. On average, mass killings involving firearms occur approximately every two weeks in the US, while school shootings occur on average monthly. We find that state prevalence of firearm ownership is significantly associated with the state incidence of mass killings with firearms, school shootings, and mass shootings. PMID:26135941
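
    A minimal Python sketch of a self-exciting ("contagion") point-process likelihood with exponentially decaying contagiousness, in the spirit of the model described above (the parametrization, toy event times, and starting values are assumptions for illustration, not the study's data or fitted values):

        import numpy as np
        from scipy.optimize import minimize

        def log_likelihood(params, times, T):
            # Log-likelihood of event times on [0, T] for the intensity
            # lambda(t) = mu + (n_sec / tau) * sum_{t_i < t} exp(-(t - t_i) / tau),
            # where n_sec is the mean number of secondary events each event incites.
            mu, n_sec, tau = params
            times = np.asarray(times, dtype=float)
            ll = 0.0
            for j, t in enumerate(times):
                excitation = np.sum(np.exp(-(t - times[:j]) / tau)) / tau
                ll += np.log(mu + n_sec * excitation)
            # The integral of the intensity over [0, T] has a closed form for this kernel.
            ll -= mu * T + n_sec * np.sum(1.0 - np.exp(-(T - times) / tau))
            return ll

        events, T = [5.0, 9.0, 11.0, 40.0, 41.0, 70.0], 100.0   # toy event days
        fit = minimize(lambda p: -log_likelihood(p, events, T), x0=[0.05, 0.2, 13.0],
                       bounds=[(1e-6, None), (0.0, None), (1.0, None)])
        ll_null = log_likelihood([len(events) / T, 0.0, 13.0], events, T)   # homogeneous Poisson, no contagion
        print("likelihood-ratio statistic:", 2.0 * (-fit.fun - ll_null))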

  3. Planck 2015 results: XV. Gravitational lensing

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...

    2016-09-20

    Here, we present the most significant measurement of the cosmic microwave background (CMB) lensing potential to date (at a level of 40σ), using temperature and polarization data from the Planck 2015 full-mission release. Using a polarization-only estimator, we detect lensing at a significance of 5σ. We cross-check the accuracy of our measurement using the wide frequency coverage and complementarity of the temperature and polarization measurements. Public products based on this measurement include an estimate of the lensing potential over approximately 70% of the sky, an estimate of the lensing potential power spectrum in bandpowers for the multipole range 40 ≤ L ≤ 400, and an associated likelihood for cosmological parameter constraints. We find good agreement between our measurement of the lensing potential power spectrum and that found in the ΛCDM model that best fits the Planck temperature and polarization power spectra. Using the lensing likelihood alone we obtain a percent-level measurement of the parameter combination σ_8 Ω_m^0.25 = 0.591 ± 0.021. We combine our determination of the lensing potential with the E-mode polarization, also measured by Planck, to generate an estimate of the lensing B-mode. We show that this lensing B-mode estimate is correlated with the B-modes observed directly by Planck at the expected level and with a statistical significance of 10σ, confirming Planck's sensitivity to this known sky signal. Finally, we also correlate our lensing potential estimate with the large-scale temperature anisotropies, detecting a cross-correlation at the 3σ level, as expected because of dark energy in the concordance ΛCDM model.

  4. Nature or nurture: the effect of undergraduate rural clinical rotations on pre-existent rural career choice likelihood as measured by the SOMERS Index.

    PubMed

    Somers, George T; Spencer, Ryan J

    2012-04-01

    Do undergraduate rural clinical rotations increase the likelihood that medical students will choose a rural career once pre-existent likelihood is accounted for? A prospective, controlled quasi-experiment using self-paired scores on the SOMERS Index of rural career choice likelihood, before and after 3 years of clinical rotations in either mainly rural or mainly urban locations. Monash University medical school, Australia. Fifty-eight undergraduate-entry medical students (35% of the 2002 entry class). The SOMERS Index of rural career choice likelihood and its component indicators. There was an overall decline in SOMERS Index score (22%) and in each of its components (12-41%). Graduating students who attended rural rotations were more likely to choose a rural career on graduation (difference in SOMERS score: 24.1 (95% CI, 15.0-33.3) P<0.0001); however, at entry, students choosing rural rotations had an even greater SOMERS score (difference: 27.1 (95% CI, 18.2-36.1) P<0.0001). Self-paired pre-post reductions in likelihood were not affected by attending mainly rural or urban rotations, nor were there differences based on rural background alone or sex. While rural rotations are an important component of undergraduate medical training, it is the nature of the students choosing to study in rural locations rather than experiences during the course that is the greater influence on rural career choice. In order to address the rural medical workforce crisis, medical schools should attract more students with a pre-existent likelihood of choosing a rural career. The SOMERS Index was found to be a useful tool for this quantitative analysis. © 2012 The Authors. Australian Journal of Rural Health © 2012 National Rural Health Alliance Inc.

  5. Nicotine dependence, "background" and cue-induced craving and smoking in the laboratory.

    PubMed

    Dunbar, Michael S; Shiffman, Saul; Kirchner, Thomas R; Tindle, Hilary A; Scholl, Sarah M

    2014-09-01

    Nicotine dependence has been associated with higher "background" craving and smoking, independent of situational cues. Due in part to conceptual and methodological differences across past studies, the relationship between dependence and cue-reactivity (CR; e.g., cue-induced craving and smoking) remains unclear. 207 daily smokers completed six pictorial CR sessions (smoking, negative affect, positive affect, alcohol, smoking prohibitions, and neutral). Individuals rated craving before (background craving) and after cues, and could smoke following cue exposure. Session videos were coded to assess smoking. Participants completed four nicotine dependence measures. Regression models assessed the relationship of dependence to cue-independent (i.e., pre-cue) and cue-specific (i.e., pre-post cue change for each cue, relative to neutral) craving and smoking (likelihood of smoking, latency to smoke, puff count). Dependence was associated with background craving and smoking, but did not predict change in craving across the entire sample for any cue. Among alcohol drinkers, dependence was associated with greater increases in craving following the alcohol cue. Only one dependence measure (Wisconsin Inventory of Smoking Dependence Motives) was consistently associated with smoking reactivity (higher likelihood of smoking, shorter latency to smoke, greater puff count) in response to cues. While related to cue-independent background craving and smoking, dependence is not strongly associated with laboratory cue-induced craving under conditions of minimal deprivation. Dependence measures that incorporate situational influences on smoking correlate with greater cue-provoked smoking. This may suggest independent roles for CR and traditional dependence as determinants of smoking, and highlights the importance of assessing behavioral CR outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: towards a computationally efficient analysis without informative priors

    NASA Astrophysics Data System (ADS)

    Pellejero-Ibanez, Marcos; Chuang, Chia-Hsun; Rubiño-Martín, J. A.; Cuesta, Antonio J.; Wang, Yuting; Zhao, Gongbo; Ross, Ashley J.; Rodríguez-Torres, Sergio; Prada, Francisco; Slosar, Anže; Vazquez, Jose A.; Alam, Shadab; Beutler, Florian; Eisenstein, Daniel J.; Gil-Marín, Héctor; Grieb, Jan Niklas; Ho, Shirley; Kitaura, Francisco-Shu; Percival, Will J.; Rossi, Graziano; Salazar-Albornoz, Salvador; Samushia, Lado; Sánchez, Ariel G.; Satpathy, Siddharth; Seo, Hee-Jong; Tinker, Jeremy L.; Tojeiro, Rita; Vargas-Magaña, Mariana; Brownstein, Joel R.; Nichol, Robert C.; Olmstead, Matthew D.

    2017-07-01

    We develop a new computationally efficient methodology called double-probe analysis with the aim of minimizing informative priors (those coming from extra probes) in the estimation of cosmological parameters. Using our new methodology, we extract the dark energy model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and Planck cosmic microwave background (CMB) measurements. We measure the mean values and covariance matrix of {R, la, Ωbh2, ns, log(As), Ωk, H(z), DA(z), f(z)σ8(z)}, which give an efficient summary of the Planck data and two-point statistics from the BOSS galaxy sample. The CMB shift parameters are R = √(Ωm H0^2) r(z*) and la = πr(z*)/rs(z*), where z* is the redshift at the last scattering surface, and r(z*) and rs(z*) denote our comoving distance to z* and the sound horizon at z*, respectively; Ωb is the baryon fraction at z = 0. This approximate methodology guarantees that we will not need to put informative priors on the cosmological parameters that galaxy clustering is unable to constrain, i.e., Ωbh2 and ns. The main advantage is that the computational time required for extracting these parameters is decreased by a factor of 60 with respect to exact full-likelihood analyses. The results obtained show no tension with the flat Λ cold dark matter (ΛCDM) cosmological paradigm. By comparing with the full-likelihood exact analysis with fixed dark energy models, on one hand we demonstrate that the double-probe method provides robust cosmological parameter constraints that can be conveniently used to study dark energy models, and on the other hand we provide a reliable set of measurements assuming dark energy models to be used, for example, in distance estimations. We extend our study to measure the sum of the neutrino mass using different methodologies, including double-probe analysis (introduced in this study), full-likelihood analysis and single-probe analysis. From full-likelihood analysis, we obtain Σmν < 0.12 (68 per cent) assuming ΛCDM, and Σmν < 0.20 (68 per cent) assuming owCDM. We also find that there is degeneracy between observational systematics and neutrino masses, which suggests that one should take great care when estimating these parameters in the case of not having control over the systematics of a given sample.

  7. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
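
    A minimal Python sketch of the standard moment-based (inverse normalized variance) estimator of the Nakagami m parameter, applied to synthetic envelope data (the MLE approximations compared in the study are not reproduced here):

        import numpy as np

        def nakagami_m_mbe(envelope):
            # Moment-based estimate: m = E[R^2]^2 / Var(R^2), where R is the envelope.
            r2 = np.asarray(envelope, dtype=float) ** 2
            return np.mean(r2) ** 2 / np.var(r2)

        # Synthetic Nakagami envelopes: if R ~ Nakagami(m, Omega), then R^2 ~ Gamma(shape=m, scale=Omega/m).
        rng = np.random.default_rng(0)
        true_m, omega = 0.8, 1.0
        samples = np.sqrt(rng.gamma(shape=true_m, scale=omega / true_m, size=4000))
        print(nakagami_m_mbe(samples))   # close to 0.8 for large sample sizes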

  8. Oncology Social Workers' Attitudes toward Hospice Care and Referral Behavior

    ERIC Educational Resources Information Center

    Becker, Janet E.

    2004-01-01

    Members of the Association of Oncology Social Workers completed a survey, which included the Hospice Philosophy Scale (HPS) assessing the likelihood of the worker referring a terminally ill patient to hospice, background and experience, and demographics. The respondents held overwhelmingly favorable attitudes toward hospice philosophy and care,…

  9. School Injury among Ottawa-Area Children: A Population-Based Study

    ERIC Educational Resources Information Center

    Josse, Jonathan M.; MacKay, Morag; Osmond, Martin H.; MacPherson, Alison K.

    2009-01-01

    Background: Injuries are the leading cause of death among Canadian children and are responsible for a substantial proportion of hospitalizations and emergency department visits. This investigation sought to identify the factors associated with the likelihood of sustaining an injury at school among Ottawa-area children. Methods: Children presenting…

  10. Determinants of Student Wastage in Higher Education.

    ERIC Educational Resources Information Center

    Johnes, Jill

    1990-01-01

    Statistical analysis of a sample of the 1979 entry cohort to Lancaster University indicates that the likelihood of non-completion is determined by various characteristics including the student's academic ability, gender, marital status, work experience prior to university, school background, and location of home in relation to university.…

  11. Factors Associated with Asian American Students' Choice of STEM Major

    ERIC Educational Resources Information Center

    Lowinger, Robert; Song, Hyun-a

    2017-01-01

    This study explored Asian American students' likelihood of selecting STEM over liberal arts or business college majors using the Education Longitudinal Study of 2002. Student-level variables were the strongest predictors of college major, followed by parent-level variables, and background variables. Academic achievement and interest were the…

  12. Social Background and School Continuation Decisions. Discussion Papers No. 462.

    ERIC Educational Resources Information Center

    Mare, Robert D.

    In this paper, logistic response models of the effects of parental socioeconomic characteristics and family structure on the probability of making selected school transitions for white American males are estimated by maximum likelihood. Transition levels examined include: (1) completion of elementary school; (2) attendance at high school; (3)…

  13. Planck 2015 results: XI. CMB power spectra, likelihoods, and robustness of parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghanim, N.; Arnaud, M.; Ashdown, M.

    This study presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (ℓ < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, ns, is confirmed smaller than unity at more than 5σ from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.

  14. Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombo, L. P. L.; Combet, C.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Gerbino, M.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hamann, J.; Hansen, F. K.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Holmes, W. A.; Hornstrup, A.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Lilley, M.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Meinhold, P. R.; Melchiorri, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Mottet, S.; Munshi, D.; Murphy, J. A.; Narimani, A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rouillé d'Orfeuil, B.; Rubiño-Martín, J. A.; Rusholme, B.; Salvati, L.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Serra, P.; Spencer, L. D.; Spinelli, M.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (ℓ < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, ns, is confirmed smaller than unity at more than 5σ from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck's wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.

  15. Planck 2015 results: XI. CMB power spectra, likelihoods, and robustness of parameters

    DOE PAGES

    Aghanim, N.; Arnaud, M.; Ashdown, M.; ...

    2016-09-20

    This study presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (ℓ < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, ns, is confirmed smaller than unity at more than 5σ from Planck alone. We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK2 level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.

  16. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can be used to provide a solution given any time constraint. There are several simulation methods currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as Junction tree (JT) and likelihood weighting (LW) shows that LGIS-GHM is very promising.
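
    A minimal Python sketch of basic likelihood weighting, the baseline the proposed LGIS algorithm builds on (the two-node network and its probabilities are invented for illustration; the linear-Gaussian importance function itself is not reproduced):

        import numpy as np

        # Hypothetical two-node network A -> B with binary variables.
        p_a = 0.3                                   # P(A = 1)
        p_b_given_a = {0: 0.2, 1: 0.9}              # P(B = 1 | A)

        def likelihood_weighting(evidence_b, n_samples=100_000, seed=0):
            # Estimate P(A = 1 | B = evidence_b): sample the non-evidence node A from its prior
            # and weight each sample by the likelihood of the observed evidence given that sample.
            rng = np.random.default_rng(seed)
            a = rng.random(n_samples) < p_a
            lik = np.where(evidence_b == 1,
                           np.where(a, p_b_given_a[1], p_b_given_a[0]),
                           np.where(a, 1 - p_b_given_a[1], 1 - p_b_given_a[0]))
            return np.sum(lik * a) / np.sum(lik)

        print(likelihood_weighting(evidence_b=1))   # exact answer: 0.9*0.3 / (0.9*0.3 + 0.2*0.7) ~ 0.659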

  17. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
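
    A minimal Python sketch of the Poisson step described above (the rate of 6.5 events per decade for VEI >= 4 follows the abstract; the VEI >= 5 and VEI >= 6 rates below are back-solved from the quoted probabilities and are assumptions, not values taken from the report):

        import math

        def prob_at_least_one(rate_per_decade):
            # P(N >= 1) for a Poisson count with the given expected number of events per decade.
            return 1.0 - math.exp(-rate_per_decade)

        for label, lam in [("VEI >= 4", 6.5), ("VEI >= 5", 0.67), ("VEI >= 6", 0.20)]:
            print(f"{label}: lambda = {lam:.2f}/decade -> P(>=1 in 2000-2009) = {prob_at_least_one(lam):.2f}")
        # Prints roughly 1.00 (>99 percent), 0.49, and 0.18, matching the probabilities quoted in the abstract.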

  18. Likelihood ratio-based differentiation of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in patients with sonographically evident diffuse hashimoto thyroiditis: preliminary study.

    PubMed

    Wang, Liang; Xia, Yu; Jiang, Yu-Xin; Dai, Qing; Li, Xiao-Yi

    2012-11-01

    To assess the efficacy of sonography for discriminating nodular Hashimoto thyroiditis from papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis. This study included 20 patients with 24 surgically confirmed Hashimoto thyroiditis nodules and 40 patients with 40 papillary thyroid carcinoma nodules; all had sonographically evident diffuse Hashimoto thyroiditis. A retrospective review of the sonograms was performed, and significant benign and malignant sonographic features were selected by univariate and multivariate analyses. The combined likelihood ratio was calculated as the product of each feature's likelihood ratio for papillary thyroid carcinoma. We compared the abilities of the original sonographic features and combined likelihood ratios in diagnosing nodular Hashimoto thyroiditis and papillary thyroid carcinoma by their sensitivity, specificity, and Youden index. The diagnostic capabilities of the sonographic features varied greatly, with Youden indices ranging from 0.175 to 0.700. Compared with single features, combinations of features were unable to improve the Youden indices effectively because the sensitivity and specificity usually changed in opposite directions. For combined likelihood ratios, however, the sensitivity improved greatly without an obvious reduction in specificity, which resulted in the maximum Youden index (0.825). With a combined likelihood ratio greater than 7.00 as the diagnostic criterion for papillary thyroid carcinoma, sensitivity reached 82.5%, whereas specificity remained at 100.0%. With a combined likelihood ratio less than 1.00 for nodular Hashimoto thyroiditis, sensitivity and specificity were 90.0% and 92.5%, respectively. Several sonographic features of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in a background of diffuse Hashimoto thyroiditis were significantly different. The combined likelihood ratio may be superior to original sonographic features for discrimination of nodular Hashimoto thyroiditis from papillary thyroid carcinoma; therefore, it is a promising risk index for thyroid nodules and warrants further investigation.
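
    A minimal Python sketch of the combined-likelihood-ratio rule described above (the per-feature likelihood ratios and the example nodule are hypothetical placeholders, not the values estimated in the study; the 7.00 and 1.00 cutoffs follow the abstract):

        import numpy as np

        def combined_lr(feature_lrs, present):
            # Product of per-feature likelihood ratios for the features observed in a nodule;
            # absent features contribute a factor of 1.
            return float(np.prod(np.where(present, feature_lrs, 1.0)))

        feature_lrs = np.array([4.2, 2.5, 0.3, 1.8])          # hypothetical LRs for papillary thyroid carcinoma
        nodule = np.array([True, True, False, True])          # which features this nodule shows

        lr = combined_lr(feature_lrs, nodule)
        print(lr, "-> PTC" if lr > 7.0 else ("-> nodular HT" if lr < 1.0 else "-> indeterminate"))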

  19. Fluctuations in microwave background radiation due to secondary ionization of the intergalactic gas in the universe

    NASA Technical Reports Server (NTRS)

    Sunyayev, R. A.

    1979-01-01

    Secondary heating and ionization of the intergalactic gas at redshifts z approximately 10-30 could lead to a large optical depth of the Universe for Thomson scattering and could smooth the primordial fluctuations formed at z approximately 1500. It is shown that the gas motions connected with the large-scale density perturbations at z approximately 10-15 must lead to the generation of secondary fluctuations of the microwave background. The contribution of rich clusters of galaxies and young galaxies to the fluctuations of the microwave background is also estimated.

  20. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  1. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  2. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  3. An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA

    ERIC Educational Resources Information Center

    Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo

    2012-01-01

    The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…

  4. "Drunkorexia": Understanding Eating and Physical Activity Behaviors of Weight Conscious Drinkers in a Sample of College Students

    ERIC Educational Resources Information Center

    Wilkerson, Amanda H.; Hackman, Christine L.; Rush, Sarah E.; Usdan, Stuart L.; Smith, Corinne S.

    2017-01-01

    Objective: Behaviors of weight conscious drinkers (BWCD) include disordered eating, excessive physical activity (PA), and heavy episodic drinking. Considering that approximately 25% of the college students report BWCD, it is important to investigate what characteristics increase the likelihood of college students engaged in BWCD for both moderate…

  5. Physician Manpower in Florida Series II. Prospects for Meeting the Goals.

    ERIC Educational Resources Information Center

    Florida State Board of Regents, Tallahassee.

    This document analyzes the prospects and likelihood of meeting the goals suggested in the first paper of this series discussing physician manpower in Florida. The first paper indicates that approximately 76% of the mid-1973 pool of licensed physicians living in Florida could be considered in active practice providing care to the civilian…

  6. Influence of weather, rank, and home advantage on football outcomes in the Gulf region.

    PubMed

    Brocherie, Franck; Girard, Olivier; Farooq, Abdulaziz; Millet, Grégoire P

    2015-02-01

    The objective of this study was to investigate the effects of weather, rank, and home advantage on international football match results and scores in the Gulf Cooperation Council (GCC) region. Football matches (n = 2008) in six GCC countries were analyzed. To determine the influence of weather on the likelihood of a favorable outcome and on goal difference, a generalized linear model with a logit link function and multiple regression analysis were performed. In the GCC region, home teams tend to have a greater likelihood of a favorable outcome (P < 0.001) and higher goal difference (P < 0.001). Temperature difference was identified as a significant explanatory variable when used independently (P < 0.001) or after adjustment for home advantage and team ranking (P < 0.001). The likelihood of favorable outcome for GCC teams increases by 3% for every 1-unit increase in temperature difference. After inclusion of interaction with opposition, this advantage remains significant only when playing against non-GCC opponents. While home advantage increased the odds of favorable outcome (P < 0.001) and goal difference (P < 0.001) after inclusion of the interaction term, the likelihood of favorable outcome for a GCC team decreased (P < 0.001) when playing against a stronger opponent. Finally, the temperature and the wet bulb globe temperature approximation were found to be better indicators of the effect of environmental conditions on match outcomes than absolute and relative humidity or the heat index. In the GCC region, higher temperature increased the likelihood of a favorable outcome when playing against non-GCC teams. However, international ranking should be considered because an opponent with a higher rank reduced, but did not eliminate, the likelihood of a favorable outcome.
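
    A minimal Python sketch of a logit-link model of match outcome, in the spirit of the analysis described above (the simulated data, variable names, and effect sizes are placeholders, not the study's data):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 2000
        temp_diff = rng.normal(0.0, 5.0, n)            # temperature difference (degrees C), hypothetical
        home = rng.integers(0, 2, n)                   # 1 if the team plays at home
        rank_gap = rng.normal(0.0, 20.0, n)            # opponent rank minus own rank

        # Simulate favorable outcomes with assumed effect sizes (for illustration only).
        logit = -0.2 + 0.03 * temp_diff + 0.5 * home + 0.01 * rank_gap
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

        X = sm.add_constant(np.column_stack([temp_diff, home, rank_gap]))
        result = sm.GLM(y, X, family=sm.families.Binomial()).fit()
        print(result.params)                           # log-odds effects; exp() gives odds ratios
        print(np.exp(result.params[1]))                # odds multiplier per 1-degree temperature difference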

  7. The Anisotropy of the Microwave Background to l=3500: Mosaic Observations with the Cosmic Background Imager

    NASA Technical Reports Server (NTRS)

    Pearson, T. J.; Mason, B. S.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J. L.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.; hide

    2002-01-01

    Using the Cosmic Background Imager, a 13-element interferometer array operating in the 26-36 GHz frequency band, we have observed 40 deg^2 of sky in three pairs of fields, each approximately 145 x 165 arcminutes, using overlapping pointings (mosaicing). We present images and power spectra of the cosmic microwave background radiation in these mosaic fields. We remove ground radiation and other low-level contaminating signals by differencing matched observations of the fields in each pair. The primary foreground contamination is due to point sources (radio galaxies and quasars). We have subtracted the strongest sources from the data using higher-resolution measurements, and we have projected out the response to other sources of known position in the power-spectrum analysis. The images show features on scales of approximately 6-15 arcminutes, corresponding to masses of approximately 5-80 x 10^14 solar masses at the surface of last scattering, which are likely to be the seeds of clusters of galaxies. The power spectrum estimates have a resolution delta l approximately 200 and are consistent with earlier results in the multipole range l approximately less than 1000. The power spectrum is detected with high signal-to-noise ratio in the range 300 approximately less than l approximately less than 1700. For 1700 approximately less than l approximately less than 3000 the observations are consistent with the results from more sensitive CBI deep-field observations. The results agree with the extrapolation of cosmological models fitted to observations at lower l, and show the predicted drop at high l (the "damping tail").

  8. Copula based flexible modeling of associations between clustered event times.

    PubMed

    Geerdens, Candida; Claeskens, Gerda; Janssen, Paul

    2016-07-01

    Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
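
    A minimal Python sketch of the finite-difference idea described above, applied to a bivariate Clayton copula as a stand-in for the mixed max-infinitely divisible copulas used in the paper:

        import numpy as np

        def clayton_cdf(u, v, theta):
            # Bivariate Clayton copula C(u, v).
            return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

        def copula_density_fd(cdf, u, v, theta, h=1e-4):
            # Approximate c(u, v) = d^2 C / (du dv) by a central finite difference.
            return (cdf(u + h, v + h, theta) - cdf(u + h, v - h, theta)
                    - cdf(u - h, v + h, theta) + cdf(u - h, v - h, theta)) / (4.0 * h * h)

        # Compare against the analytic Clayton density at one point.
        u, v, theta = 0.4, 0.7, 2.0
        analytic = ((1.0 + theta) * (u * v) ** (-theta - 1.0)
                    * (u ** (-theta) + v ** (-theta) - 1.0) ** (-(2.0 * theta + 1.0) / theta))
        print(copula_density_fd(clayton_cdf, u, v, theta), analytic)   # the two values should agree closely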

  9. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  10. Perceived experiences with sexism among adolescent girls.

    PubMed

    Leaper, Campbell; Brown, Christia Spears

    2008-01-01

    This study investigated predictors of adolescent girls' experiences with sexism and feminism. Girls (N = 600; M = 15.1 years, range = 12-18), of varied socioeconomic and ethnic backgrounds, completed surveys of personal experiences with sexual harassment, academic sexism (regarding science, math, and computer technology), and athletics. Most girls reported sexual harassment (90%), academic sexism (52%), and athletic sexism (76%) at least once, with likelihood increasing with age. Socialization influences and individual factors, however, influenced likelihood of all three forms of sexism. Specifically, learning about feminism and gender-conformity pressures were linked to higher perceptions of sexism. Furthermore, girls' social gender identity (i.e., perceived gender typicality and gender-role contentedness) and gender-egalitarian attitudes were related to perceived sexism.

  11. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

    In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratios (SNR) increase. To test these models, 3 observers performed in a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNR's. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
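
    A minimal Python sketch contrasting the two decision variables compared above, for a single trial with two locations and Gaussian internal responses (the cue weights, priors, and signal strength are illustrative assumptions):

        import numpy as np

        def linear_decision(responses, weights):
            # Weighted linear combination of the noisy responses at the two locations.
            return float(np.dot(weights, responses))

        def weighted_likelihood_decision(responses, priors, d_prime):
            # Sum of prior-weighted likelihood ratios for "signal at location i" (Bayesian observer):
            # for unit-variance Gaussians, LR(x) = exp(d' * x - d'^2 / 2).
            lr = np.exp(d_prime * responses - 0.5 * d_prime ** 2)
            return float(np.dot(priors, lr))

        responses = np.array([1.2, -0.3])        # noisy internal responses at cued and uncued locations
        weights = np.array([0.8, 0.2])           # 80% cue validity expressed as linear weights
        priors = np.array([0.8, 0.2])
        print(linear_decision(responses, weights),
              weighted_likelihood_decision(responses, priors, d_prime=1.5))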

  12. Performance of Low-Density Parity-Check Coded Modulation

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
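
    The general LLR equation itself is not reproduced in the abstract; the Python sketch below shows the standard max-log LLR approximation for an arbitrary constellation, which is one simple approximation of this kind (the 8-PSK constellation, bit labeling, and noise variance here are illustrative assumptions):

        import numpy as np

        def max_log_llr(r, constellation, bit_labels, noise_var):
            # Max-log approximation of per-bit LLRs for a received complex sample r:
            # LLR(b_k) ~= (min_{s: b_k=1} |r - s|^2 - min_{s: b_k=0} |r - s|^2) / (2 * noise_var),
            # where noise_var is the per-component (real/imaginary) noise variance.
            d2 = np.abs(r - constellation) ** 2
            n_bits = bit_labels.shape[1]
            llrs = np.empty(n_bits)
            for k in range(n_bits):
                llrs[k] = (d2[bit_labels[:, k] == 1].min() - d2[bit_labels[:, k] == 0].min()) / (2.0 * noise_var)
            return llrs

        # Illustrative 8-PSK constellation with a natural (non-Gray) 3-bit labeling.
        points = np.exp(2j * np.pi * np.arange(8) / 8)
        labels = np.array([[int(b) for b in format(i, "03b")] for i in range(8)])
        print(max_log_llr(r=0.9 + 0.2j, constellation=points, bit_labels=labels, noise_var=0.1))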

  13. Impact of delays to cardiac surgery after failed angioplasty and stenting.

    PubMed

    Lotfi, Mat; Mackie, Karen; Dzavik, Vladimir; Seidelin, Peter H

    2004-02-04

    This study was designed to determine the likelihood of harm in patients having additional delays before urgent coronary artery bypass graft (UCABG) surgery after percutaneous coronary intervention (PCI). Patients who have PCI at hospitals without cardiac surgery have additional delays to surgery when UCABG is indicated. Detailed chart review was performed on all patients who had a failed PCI leading to UCABG at a large tertiary care hospital. A prespecified set of criteria (hemodynamic instability, coronary perforation with significant effusion or tamponade, or severe ischemia) was used to identify patients who would have an increased likelihood of harm with additional delays to surgery. From 1996 to 2000, 6,582 PCIs were performed. There were 45 patients (0.7%) identified to have UCABG. The demographic characteristics of the UCABG patients were similar to the rest of the patients in the PCI database, except for significantly more type C lesions (45.3% vs. 25.0%, p < 0.001) and more urgent cases (66.6% vs. 49.8%, p = 0.03) in patients with UCABG. Myocardial infarction occurred in eight patients (17.0%) after UCABG, with a mean peak creatine kinase of 2,445 +/- 1,212 IU/l. Death during the index hospital admission occurred in two patients. Eleven of the 45 patients (24.4%) were identified by the prespecified criteria to be at high likelihood of harm with additional delays to surgery. The absolute risk of harm is approximately one to two patients per 1,000 PCIs. Approximately one in four patients referred for UCABG would be placed at increased risk of harm if delays to surgery were encountered.

  14. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
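
    A minimal Python sketch of the within-study likelihood choice discussed above, for a single study's sensitivity (the counts, grid, and continuity correction are illustrative assumptions; the full bivariate random-effects models are not reproduced):

        import numpy as np
        from scipy import stats

        def binomial_loglik(logit_sens, tp, fn):
            # Exact binomial within-study log-likelihood of the sensitivity on the logit scale.
            p = 1.0 / (1.0 + np.exp(-logit_sens))
            return stats.binom.logpmf(tp, tp + fn, p)

        def normal_loglik(logit_sens, tp, fn):
            # Normal approximation: observed logit (continuity-corrected if a cell is zero)
            # with variance 1/tp + 1/fn.
            if tp == 0 or fn == 0:
                tp, fn = tp + 0.5, fn + 0.5
            obs = np.log(tp / fn)
            return stats.norm.logpdf(obs, loc=logit_sens, scale=np.sqrt(1.0 / tp + 1.0 / fn))

        grid = np.linspace(0.0, 6.0, 601)
        for tp, fn in [(18, 2), (20, 0)]:              # hypothetical studies; the second has a zero cell
            b = grid[np.argmax([binomial_loglik(g, tp, fn) for g in grid])]
            a = grid[np.argmax([normal_loglik(g, tp, fn) for g in grid])]
            print(f"tp={tp}, fn={fn}: binomial peak at logit {b:.2f}, normal-approximation peak at {a:.2f}")
        # With a zero cell the exact likelihood keeps rising toward a sensitivity of 1 (the grid boundary),
        # while the normal approximation is anchored at the continuity-corrected logit.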

  15. A New Method to Measure the Post-reionization Ionizing Background from the Joint Distribution of Lyα and Lyβ Forest Transmission

    NASA Astrophysics Data System (ADS)

    Davies, Frederick B.; Hennawi, Joseph F.; Eilers, Anna-Christina; Lukić, Zarija

    2018-03-01

    The amplitude of the ionizing background that pervades the intergalactic medium (IGM) at the end of the epoch of reionization provides a valuable constraint on the emissivity of the sources that reionized the universe. While measurements of the ionizing background at lower redshifts rely on a simulation-calibrated mapping between the photoionization rate and the mean transmission of the Lyα forest, at z ≳ 6 the IGM becomes increasingly opaque and transmission arises solely in narrow spikes separated by saturated Gunn–Peterson troughs. In this regime, the traditional approach of measuring the average transmission over large ∼50 Mpc/h regions is less sensitive and suboptimal. In addition, the five times smaller oscillator strength of the Lyβ transition implies that the Lyβ forest is considerably more transparent at z ≳ 6, even in the presence of contamination by foreground z ∼ 5 Lyα forest absorption. In this work we present a novel statistical approach to analyze the joint distribution of transmission spikes in the cospatial z ∼ 6 Lyα and Lyβ forests. Our method relies on approximate Bayesian computation (ABC), which circumvents the necessity of computing the intractable likelihood function describing the highly correlated Lyα and Lyβ transmission. We apply ABC to mock data generated from a large-volume hydrodynamical simulation combined with a state-of-the-art model of ionizing background fluctuations in the post-reionization IGM and show that it is sensitive to higher IGM neutral hydrogen fractions than previous techniques. As a proof of concept, we apply this methodology to a real spectrum of a z = 6.54 quasar and measure the ionizing background from 5.4 ≤ z ≤ 6.4 along this sightline with ∼0.2 dex statistical uncertainties. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
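
    The sketch below is a minimal rejection-ABC loop in the spirit of the method described above; the toy forward model, summary statistic, tolerance, and parameter names are all assumptions for illustration and bear no relation to the hydrodynamical simulations used in the paper.

```python
# Minimal rejection ABC on a toy model of transmission spikes: draw parameters
# from the prior, simulate, and keep draws whose summary statistic lies within a
# tolerance of the observed one. Everything here is a stand-in for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_spikes(gamma, n_pix=2000):
    """Toy forward model: larger photoionization rate gamma -> more transmission."""
    tau = rng.exponential(scale=1.0 / gamma, size=n_pix)   # stand-in optical depths
    return np.exp(-tau)

def summary(flux):
    return np.mean(flux > 0.1)        # fraction of "spike" pixels above a threshold

gamma_true = 0.5
s_obs = summary(simulate_spikes(gamma_true))

prior_draws = rng.uniform(0.05, 2.0, size=20000)
accepted = [g for g in prior_draws
            if abs(summary(simulate_spikes(g)) - s_obs) < 0.02]

print(f"true gamma = {gamma_true}, ABC posterior mean ~ {np.mean(accepted):.2f}, "
      f"accepted {len(accepted)} of {len(prior_draws)}")
```

    A production analysis would use a more informative joint Lyα/Lyβ summary and a more efficient ABC sampler than plain rejection, but the accept/reject structure is the same.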

  16. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets, called anterior and posterior cutsets, and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551
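
    To make the reuse of anterior/posterior tables concrete, here is a heavily simplified sketch on a toy founder-to-offspring chain (not a looped pedigree, and not the paper's general algorithm): partial likelihoods computed once from each end of the chain are combined to give every individual's marginal genotype probabilities, without recomputing a full likelihood per individual. All transition and penetrance values are invented.

```python
# Forward/backward passes on a toy parent->child chain, analogous in spirit to
# anterior and posterior cutsets; all numerical values are made up.
import numpy as np

n_geno = 3                                   # genotypes aa, Aa, AA
trans = np.array([[0.90, 0.10, 0.00],        # P(child genotype | parent genotype), toy values
                  [0.25, 0.50, 0.25],
                  [0.00, 0.10, 0.90]])
pen = np.array([[0.99, 0.01],                # P(phenotype | genotype), toy values
                [0.90, 0.10],
                [0.10, 0.90]])
prior = np.array([0.25, 0.50, 0.25])
phen = [0, 1, 1, 0]                          # observed phenotypes along the chain

# "anterior" pass: partial likelihoods accumulated from the founder forward
ant = [prior * pen[:, phen[0]]]
for y in phen[1:]:
    ant.append((ant[-1] @ trans) * pen[:, y])

# "posterior" pass: partial likelihoods accumulated from the last individual backward
post = [np.ones(n_geno)]
for y in reversed(phen[1:]):
    post.insert(0, trans @ (pen[:, y] * post[0]))

# marginal genotype probabilities: one product of stored tables per individual
for i in range(len(phen)):
    marg = ant[i] * post[i]
    print(i, marg / marg.sum())
```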

  17. Optimism and spontaneous self-affirmation are associated with lower likelihood of cognitive impairment and greater positive affect among cancer survivors

    PubMed Central

    Taber, Jennifer M.; Klein, William M. P.; Ferrer, Rebecca A.; Kent, Erin E.; Harris, Peter R.

    2016-01-01

    Background Optimism and self-affirmation promote adaptive coping, goal achievement, and better health. Purpose To examine the associations of optimism and spontaneous self-affirmation (SSA) with physical, mental, and cognitive health and information seeking among cancer survivors. Methods Cancer survivors (n=326) completed the Health Information National Trends Survey 2013, a national survey of U.S. adults. Participants reported optimism, SSA, cognitive and physical impairment, affect, health status, and information seeking. Results Participants higher in optimism reported better health on nearly all indices examined, even when controlling for SSA. Participants higher in SSA reported lower likelihood of cognitive impairment, greater happiness and hopefulness, and greater likelihood of cancer information seeking. SSA remained significantly associated with greater hopefulness and cancer information seeking when controlling for optimism. Conclusions Optimism and SSA may be associated with beneficial health-related outcomes among cancer survivors. Given the demonstrated malleability of self-affirmation, these findings represent important avenues for future research. PMID:26497697

  18. Early Language Impairments and Developmental Pathways of Emotional Problems across Childhood

    ERIC Educational Resources Information Center

    Goh Kok Yew, Shaun; O'Kearney, Richard

    2015-01-01

    Background: Language impairments are associated with an increased likelihood of emotional difficulties later in childhood or adolescence, but little is known about the impact of LI on the growth of emotional problems. Aims: To examine the link between early language status (language impaired (LI), typical language (TL)) and the pattern and…

  19. University Students' Emotions, Interest and Activities in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Nummenmaa, Minna; Nummenmaa, Lauri

    2008-01-01

    Background: Within academic settings, students experience varied emotions and interest towards learning. Although both emotions and interest can increase students' likelihood to engage in traditional learning, little is known about the influence of emotions and interest in learning activities in a web-based learning environment (WBLE). Aims: This…

  20. Developmental Assets: Profile of Youth in a Juvenile Justice Facility

    ERIC Educational Resources Information Center

    Chew, Weslee; Osseck, Jenna; Raygor, Desiree; Eldridge-Houser, Jennifer; Cox, Carol

    2010-01-01

    Background: Possessing high numbers of developmental assets greatly reduces the likelihood of a young person engaging in health-risk behaviors. Since youth in the juvenile justice system seem to exhibit many high-risk behaviors, the purpose of this study was to assess the presence of external, internal, and social context areas of developmental…

  1. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    PubMed

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format, leading to larger datasets, which complicates the analysis and requires excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both the Gaussian quadrature method and the Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature, and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.
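
    The following sketch illustrates the general idea of fitting a survival model as a Poisson regression, under simplifying assumptions (simulated toy data, a piecewise-constant baseline hazard, and no frailty terms; the actual analysis adds nested cow and herd random effects): each lactation is expanded into interval records and modeled with a Poisson GLM with a log-exposure term.

```python
# Poisson representation of time-to-event data with a piecewise-constant hazard.
# Toy data and variable names are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
parity = rng.integers(1, 4, size=n)
time = rng.exponential(scale=100 / parity)      # higher parity -> earlier event (toy effect)
event = (time < 90).astype(int)                 # administrative censoring at day 90
time = np.minimum(time, 90)

cuts = [0, 30, 60, 90]                          # hazard assumed constant within each interval
rows = []
for t, e, p in zip(time, event, parity):
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        if t <= lo:
            break                               # no exposure beyond the event/censoring time
        rows.append(dict(interval=f"{lo}-{hi}", parity=p,
                         y=int(e and t <= hi),  # event indicator for this interval only
                         exposure=min(t, hi) - lo))
df = pd.DataFrame(rows)

# Poisson GLM with log(exposure) handled via the exposure argument
fit = smf.glm("y ~ C(interval) + parity", data=df,
              family=sm.families.Poisson(), exposure=df["exposure"]).fit()
print(fit.params)
```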

  2. Behaviour and burnout in medical students.

    PubMed

    Cecil, Jo; McHale, Calum; Hart, Jo; Laidlaw, Anita

    2014-01-01

    Background Burnout is prevalent in doctors and can impact on job dissatisfaction and patient care. In medical students, burnout is associated with poorer self-rated health; however, it is unclear what factors influence its development. This study investigated whether health behaviours predict burnout in medical students. Methods Medical students (n=356) at the Universities of St Andrews and Manchester completed an online questionnaire assessing: emotional exhaustion (EE), depersonalisation (DP), personal accomplishment (PA), alcohol use, physical activity, diet, and smoking. Results Approximately 55% (54.8%) of students reported high levels of EE, 34% reported high levels of DP, and 46.6% reported low levels of PA. Linear regression analysis revealed that year of study, physical activity, and smoking status significantly predicted EE whilst gender, year of study, and institution significantly predicted DP. PA was significantly predicted by alcohol binge score, year of study, gender, and physical activity. Conclusions Burnout is present in undergraduate medical students in the United Kingdom, and health behaviours, particularly physical activity, predict components of burnout. Gender, year of study, and institution also appear to influence the prevalence of burnout. Encouraging medical students to make healthier lifestyle choices early in their medical training may reduce the likelihood of the development of burnout.

  3. Behaviour and burnout in medical students

    PubMed Central

    Cecil, Jo; McHale, Calum; Hart, Jo; Laidlaw, Anita

    2014-01-01

    Background Burnout is prevalent in doctors and can impact on job dissatisfaction and patient care. In medical students, burnout is associated with poorer self-rated health; however, it is unclear what factors influence its development. This study investigated whether health behaviours predict burnout in medical students. Methods Medical students (n=356) at the Universities of St Andrews and Manchester completed an online questionnaire assessing: emotional exhaustion (EE), depersonalisation (DP), personal accomplishment (PA), alcohol use, physical activity, diet, and smoking. Results Approximately 55% (54.8%) of students reported high levels of EE, 34% reported high levels of DP, and 46.6% reported low levels of PA. Linear regression analysis revealed that year of study, physical activity, and smoking status significantly predicted EE whilst gender, year of study, and institution significantly predicted DP. PA was significantly predicted by alcohol binge score, year of study, gender, and physical activity. Conclusions Burnout is present in undergraduate medical students in the United Kingdom, and health behaviours, particularly physical activity, predict components of burnout. Gender, year of study, and institution also appear to influence the prevalence of burnout. Encouraging medical students to make healthier lifestyle choices early in their medical training may reduce the likelihood of the development of burnout. PMID:25160716

  4. Measurement of the Forward-Backward Asymmetry in the Production of B+/- Mesons In pp Collisions

    NASA Astrophysics Data System (ADS)

    Hogan, Julie M.

    We present a measurement of the forward-backward asymmetry in the production of B+ mesons, A_FB(B+), using B+ -> J/psi K+ decays in 10.4 inverse femtobarns of p p-bar collisions at sqrt(s) = 1.96 TeV collected by the D0 experiment during Run II of the Tevatron collider. A nonzero asymmetry would indicate a preference for a particular flavor, i.e., b quark or b-bar antiquark, to be produced in the direction of the proton beam. We extract A_FB(B+) from a maximum likelihood fit to the difference between the numbers of forward- and backward-produced B+ mesons, using a boosted decision tree to reduce background. Corrections are made for reconstruction asymmetries of the decay products. We measure an asymmetry consistent with zero: A_FB(B+) = [-0.24 +/- 0.41(stat) +/- 0.19(syst)]%. The standard model estimate from next-to-leading-order Monte Carlo is A_FB(B+) = [2.31 +/- 0.34(stat.) +/- 0.51(syst.)]%. There is a difference of approximately 3 standard deviations between this prediction and our result, which suggests that more rigorous determination of the standard model prediction is needed to interpret these results.
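
    For intuition about the statistical precision of such a measurement, here is a back-of-the-envelope sketch with made-up yields (not the D0 data or analysis, which uses a full likelihood fit with a boosted decision tree and reconstruction-asymmetry corrections):

```python
# Toy estimate: forward-backward asymmetry from forward/backward signal yields,
# with its binomial statistical uncertainty. All numbers are invented.
import numpy as np

n_fwd, n_bwd = 49_880, 50_120          # hypothetical background-subtracted yields
n_tot = n_fwd + n_bwd

a_fb = (n_fwd - n_bwd) / n_tot         # maximum-likelihood estimate for a binomial split
a_err = np.sqrt((1.0 - a_fb**2) / n_tot)
print(f"A_FB = {100 * a_fb:.2f}% +/- {100 * a_err:.2f}% (stat)")
```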

  5. A HECT Ubiquitin-Protein Ligase as a Novel Candidate Gene for Altered Quinine and Quinidine Responses in Plasmodium falciparum

    PubMed Central

    Sanchez, Cecilia P.; Cyrklaff, Marek; Mu, Jianbing; Ferdig, Michael T.; Stein, Wilfred D.; Lanzer, Michael

    2014-01-01

    The emerging resistance to quinine jeopardizes the efficacy of a drug that has been used in the treatment of malaria for several centuries. To identify factors contributing to differential quinine responses in the human malaria parasite Plasmodium falciparum, we have conducted comparative quantitative trait locus analyses on the susceptibility to quinine and also its stereoisomer quinidine, and on the initial and steady-state intracellular drug accumulation levels in the F1 progeny of a genetic cross. These data, together with genetic screens of field isolates and laboratory strains, associated differential quinine and quinidine responses with mutated pfcrt, a segment on chromosome 13, and a novel candidate gene, termed MAL7P1.19 (encoding a HECT ubiquitin ligase). Despite a strong likelihood of association, episomal transfections demonstrated a role for the HECT ubiquitin-protein ligase in quinine and quinidine sensitivity in only a subset of genetic backgrounds, and here the changes in IC50 values were moderate (approximately 2-fold). These data show that quinine responsiveness is a complex genetic trait with multiple alleles playing a role and that more experiments are needed to unravel the role of the contributing factors. PMID:24830312

  6. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with a relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
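
    The numerical point can be illustrated with a short sketch. Assuming the standard marginal Fisher density for inclination-only data (this is an illustration, not the authors' code), the exponentially large sinh and Bessel factors are kept in log space, where they cancel against the exponential term instead of overflowing:

```python
# Stable evaluation of a marginal-Fisher log-likelihood for inclination-only data.
# The form of the density is the standard marginalization of the Fisher
# distribution over declination; the data and kappa value are made up.
import numpy as np
from scipy.special import i0e   # exponentially scaled Bessel I0: i0e(z) = exp(-|z|) * I0(z)

def log_marginal_fisher(inc, inc0, kappa):
    """log f(inc | mean inclination inc0, precision kappa), angles in radians."""
    z = kappa * np.cos(inc) * np.cos(inc0)
    log_bessel = np.abs(z) + np.log(i0e(z))                            # stable log I0(z)
    log_sinh = kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)   # stable log sinh(kappa)
    return (np.log(kappa) - np.log(2.0) - log_sinh
            + np.log(np.cos(inc))
            + kappa * np.sin(inc) * np.sin(inc0)
            + log_bessel)

incs = np.radians([72.0, 75.0, 78.0, 81.0])
print(np.sum(log_marginal_fisher(incs, np.radians(76.0), kappa=500.0)))  # no overflow
```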

  7. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
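
    As a rough sketch of the parameter-space part of this construction, assume a linearized forward model with Gaussian noise and an isotropic Gaussian prior; the likelihood-informed directions are then the leading eigenvectors of the prior-preconditioned Gauss-Newton Hessian (toy dimensions and matrices below, not the paper's examples):

```python
# Likelihood-informed parameter subspace for a toy linear-Gaussian inverse problem.
import numpy as np

rng = np.random.default_rng(0)
d, m = 200, 15                       # parameter and data dimensions (invented)
G = rng.normal(size=(m, d))          # stand-in for the linearized forward model
noise_var, prior_var = 0.1, 1.0

H = G.T @ G / noise_var              # Gauss-Newton Hessian of the log-likelihood
Hp = prior_var * H                   # prior-preconditioned Hessian (isotropic prior)
eigval, eigvec = np.linalg.eigh(Hp)
order = np.argsort(eigval)[::-1]

r = int(np.sum(eigval > 1.0))        # keep directions where the data beat the prior
U = eigvec[:, order[:r]]             # basis of the likelihood-informed subspace
print(f"retained {r} of {d} parameter directions")
```

    The posterior is then approximated in the span of U, while the complementary directions are left at the prior, which is the factorization described in the abstract.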

  8. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of the representative values of parameters, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm to generate a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection based on the Bayes factor. PMID:25089832
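
    A minimal sketch of the population-annealing idea in an ABC setting (toy Gaussian model and invented tolerances; not the paper's implementation): a particle population is reweighted, resampled, and perturbed as the tolerance is tightened, rather than restarting rejection sampling from scratch at each tolerance.

```python
# Toy population-annealing ABC: anneal the tolerance over a particle population.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
s_obs = data.mean()

def distance(theta, n=100):
    """Distance between observed and simulated summary statistic (the mean)."""
    return abs(rng.normal(theta, 1.0, size=n).mean() - s_obs)

n_particles = 2000
particles = rng.uniform(-5.0, 5.0, size=n_particles)        # draws from the prior
dists = np.array([distance(t) for t in particles])

for eps in [1.0, 0.5, 0.25, 0.1]:                            # annealed tolerances
    weights = (dists < eps).astype(float)
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)      # resample survivors
    particles = particles[idx] + rng.normal(0.0, 0.1, n_particles)  # perturb (move step)
    dists = np.array([distance(t) for t in particles])

print("approximate posterior mean:", round(particles.mean(), 2))    # close to 2.0
```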

  9. Confidence Intervals for the Between-Study Variance in Random Effects Meta-Analysis Using Generalised Cochran Heterogeneity Statistics

    ERIC Educational Resources Information Center

    Jackson, Dan

    2013-01-01

    Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…

  10. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

    This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power two and; thus, negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin , where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approximates the CRLB proving that the ML scheme belongs to class of best unbiased estimator of leak localization methods.

  11. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
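
    The core control flow can be sketched with a deliberately crude surrogate (a nearest-neighbour average standing in for the emulator, with invented thresholds; the actual Cosmo++ implementation and its error model are more sophisticated): emulate when the estimated error is small, otherwise fall back to the exact likelihood and grow the training set.

```python
# Learn-as-you-go sketch: emulate cheap, fall back to exact calls when uncertain.
import numpy as np

def exact_loglike(theta):
    """Stand-in for an expensive likelihood evaluation."""
    return -0.5 * np.sum((theta - 1.0) ** 2)

class Emulator:
    def __init__(self, tol=0.05):
        self.x, self.y, self.tol = [], [], tol
    def __call__(self, theta):
        if len(self.x) >= 5:
            d = np.linalg.norm(np.array(self.x) - theta, axis=1)
            idx = np.argsort(d)[:3]
            est = np.mean(np.array(self.y)[idx])
            err = np.std(np.array(self.y)[idx])     # crude emulation-error estimate
            if err < self.tol:                      # trust the emulated value
                return est
        val = exact_loglike(theta)                  # otherwise pay for the exact call
        self.x.append(theta)                        # ...and grow the training set
        self.y.append(val)
        return val

emu = Emulator()
rng = np.random.default_rng(0)
samples = [emu(rng.normal(1.0, 0.3, size=2)) for _ in range(500)]
print("exact evaluations used:", len(emu.x), "of", len(samples))
```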

  12. Inferring social ties from geographic coincidences.

    PubMed

    Crandall, David J; Backstrom, Lars; Cosley, Dan; Suri, Siddharth; Huttenlocher, Daniel; Kleinberg, Jon

    2010-12-28

    We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals' physical locations over time.

  13. Learn-as-you-go acceleration of cosmological parameter estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C., E-mail: g.aslanyan@auckland.ac.nz, E-mail: r.easther@auckland.ac.nz, E-mail: lpri691@aucklanduni.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  14. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
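
    The following sketch reproduces the idea on synthetic data (a one-parameter amplitude fit with invented numbers, not the authors' code): maximize the Poisson log-likelihood of the counts and compare with a naive chi-square fit that assumes sqrt(N) Gaussian errors.

```python
# Poisson maximum-likelihood fit of an amplitude to low-count data, compared with
# a chi-square fit. Model shape, true amplitude, and binning are all invented.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 25)
model = lambda a: a * np.exp(-0.5 * x**2)        # known shape, unknown amplitude
counts = rng.poisson(model(5.0))                 # low-count data (~5 at the peak)

def neg_poisson_loglike(a):
    return -np.sum(stats.poisson.logpmf(counts, model(a)))

ml_fit = optimize.minimize_scalar(neg_poisson_loglike, bounds=(0.1, 50), method="bounded")

def chi2(a):                                     # naive chi-square with sqrt(N) errors
    err = np.sqrt(np.maximum(counts, 1))
    return np.sum(((counts - model(a)) / err) ** 2)

chi2_fit = optimize.minimize_scalar(chi2, bounds=(0.1, 50), method="bounded")
print(f"Poisson ML amplitude: {ml_fit.x:.2f}, chi-square amplitude: {chi2_fit.x:.2f}")
```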

  15. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multi-object tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel contribution of this paper is a detailed analysis of the existing state-of-the-art likelihood functions and a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope, and a twenty degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential breakup is examined.

  16. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    PubMed

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.

  17. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  18. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  19. Approximate Bayesian computation for forward modeling in cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akeret, Joël; Refregier, Alexandre; Amara, Adam

    Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, however, the likelihood function may be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.

  20. Nicotine Dependence, “Background” and Cue-Induced Craving and Smoking in the Laboratory

    PubMed Central

    Dunbar, Michael S.; Shiffman, Saul; Kirchner, Thomas; Tindle, Hilary; Scholl, Sarah

    2014-01-01

    Background Nicotine dependence has been associated with higher “background” craving and smoking, independent of situational cues. Due in part to conceptual and methodological differences across past studies, the relationship between dependence and cue-reactivity (CR; e.g., cue-induced craving and smoking) remains unclear. Methods 207 daily smokers completed six pictorial CR sessions (smoking, negative affect, positive affect, alcohol, smoking prohibitions, and neutral). Individuals rated craving before (background craving) and after cues, and could smoke following cue exposure. Session videos were coded to assess smoking. Participants completed four nicotine dependence measures. Regression models assessed the relationship of dependence to cue-independent (i.e., pre-cue) and cue-specific (i.e., pre-post cue change for each cue, relative to neutral) craving and smoking (likelihood of smoking, latency to smoke, puff count). Results Dependence was associated with background craving and smoking, but did not predict change in craving across the entire sample for any cue. Among alcohol drinkers, dependence was associated with greater increases in craving following the alcohol cue. Only one dependence measure (Wisconsin Inventory of Smoking Dependence Motives) was consistently associated with smoking reactivity (higher likelihood of smoking, shorter latency to smoke, greater puff count) in response to cues. Conclusion While related to cue-independent background craving and smoking, dependence is not strongly associated with laboratory cue-induced craving under conditions of minimal deprivation. Dependence measures that incorporate situational influences on smoking correlate with greater cue-provoked smoking. This may suggest independent roles for CR and traditional dependence as determinants of smoking, and highlights the importance of assessing behavioral CR outcomes. PMID:25028339

  1. From biomedicine to natural history research: EST resources for ambystomatid salamanders

    PubMed Central

    Putta, Srikrishna; Smith, Jeramiah J; Walker, John A; Rondet, Mathieu; Weisrock, David W; Monaghan, James; Samuels, Amy K; Kump, Kevin; King, David C; Maness, Nicholas J; Habermann, Bianca; Tanaka, Elly; Bryant, Susan V; Gardiner, David M; Parichy, David M; Voss, S Randal

    2004-01-01

    Background Establishing genomic resources for closely related species will provide comparative insights that are crucial for understanding diversity and variability at multiple levels of biological organization. We developed ESTs for Mexican axolotl (Ambystoma mexicanum) and Eastern tiger salamander (A. tigrinum tigrinum), species with deep and diverse research histories. Results Approximately 40,000 quality cDNA sequences were isolated for these species from various tissues, including regenerating limb and tail. These sequences and an existing set of 16,030 cDNA sequences for A. mexicanum were processed to yield 35,413 and 20,599 high quality ESTs for A. mexicanum and A. t. tigrinum, respectively. Because the A. t. tigrinum ESTs were obtained primarily from a normalized library, an approximately equal number of contigs were obtained for each species, with 21,091 unique contigs identified overall. The 10,592 contigs that showed significant similarity to sequences from the human RefSeq database reflected a diverse array of molecular functions and biological processes, with many corresponding to genes expressed during spinal cord injury in rat and fin regeneration in zebrafish. To demonstrate the utility of these EST resources, we searched databases to identify probes for regeneration research, characterized intra- and interspecific nucleotide polymorphism, saturated a human – Ambystoma synteny group with marker loci, and extended PCR primer sets designed for A. mexicanum / A. t. tigrinum orthologues to a related tiger salamander species. Conclusions Our study highlights the value of developing resources in traditional model systems where the likelihood of information transfer to multiple, closely related taxa is high, thus simultaneously enabling both laboratory and natural history research. PMID:15310388

  2. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, the Generalized Error Distribution with BC (BC-GED), and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated results of the high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability but small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
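
    As a sketch of a Box-Cox-based likelihood (a generic Gaussian-residual version with an invented toy discharge series; the paper's BC-GED and BC-SGED variants replace the Gaussian with generalized error distributions), residuals are computed on transformed flows and the Jacobian of the transform enters the log-likelihood:

```python
# Gaussian log-likelihood of Box-Cox-transformed residuals, with the Jacobian term.
# Data, lambda, and sigma values are made up for illustration.
import numpy as np
from scipy import stats

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def log_likelihood(obs, sim, lam, sigma):
    resid = boxcox(obs, lam) - boxcox(sim, lam)
    ll = np.sum(stats.norm.logpdf(resid, scale=sigma))
    ll += (lam - 1.0) * np.sum(np.log(obs))      # Jacobian of the Box-Cox transform
    return ll

rng = np.random.default_rng(1)
obs = rng.lognormal(mean=1.0, sigma=0.5, size=200)            # stand-in discharge series
sim = obs * rng.lognormal(mean=0.0, sigma=0.1, size=200)      # stand-in model output
print(log_likelihood(obs, sim, lam=0.3, sigma=0.2))
```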

  3. Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    NASA Astrophysics Data System (ADS)

    Drasco, Steve; Flanagan, Éanna É.

    2003-04-01

    A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.

  4. IMNN: Information Maximizing Neural Networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  5. Speckle attenuation by adaptive singular value shrinking with generalized likelihood matching in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chen, Huaiguang; Fu, Shujun; Wang, Hong; Lv, Hongli; Zhang, Caiming

    2018-03-01

    As a high-resolution imaging mode for biological tissues and materials, optical coherence tomography (OCT) is widely used in medical diagnosis and analysis. However, OCT images are often degraded by annoying speckle noise inherent in the imaging process. Employing the bilateral sparse representation, an adaptive singular value shrinking method is proposed for its highly sparse approximation of image data. Adopting the generalized likelihood ratio as the similarity criterion for block matching and an adaptive feature-oriented backward projection strategy, the proposed algorithm can better restore the underlying layered structures and details of the OCT image, with effective speckle attenuation. The experimental results demonstrate that the proposed algorithm achieves state-of-the-art despeckling performance in terms of both quantitative measurement and visual interpretation.

  6. The Anisotropy of the Microwave Background to l = 3500: Deep Field Observations with the Cosmic Background Imager

    NASA Technical Reports Server (NTRS)

    Mason, B. S.; Pearson, T. J.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.

    2002-01-01

    We report measurements of anisotropy in the cosmic microwave background radiation over the multipole range l ≈ 200-3500 with the Cosmic Background Imager based on deep observations of three fields. These results confirm the drop in power with increasing l first reported in earlier measurements with this instrument, and extend the observations of this decline in power out to l ≈ 2000. The decline in power is consistent with the predicted damping of primary anisotropies. At larger multipoles, l = 2000-3500, the power is 3.1 sigma greater than standard models for intrinsic microwave background anisotropy in this multipole range, and 3.5 sigma greater than zero. This excess power is not consistent with expected levels of residual radio source contamination but, for σ8 ≳ 1, is consistent with predicted levels due to a secondary Sunyaev-Zeldovich anisotropy. Further observations are necessary to confirm the level of this excess and, if confirmed, determine its origin.

  7. Promoting Exercise as Part of a Physiotherapy-Led Falls Pathway Service for Adults with Intellectual Disabilities: A Service Evaluation

    ERIC Educational Resources Information Center

    Crockett, Jennifer; Finlayson, Janet; Skelton, Dawn A.; Miller, Gillian

    2015-01-01

    Background: People with intellectual disabilities experience high rates of falls. Balance and gait problems are common in people with intellectual disabilities, increasing the likelihood of falls; thus, tailored exercise interventions to improve gait and balance are recommended. The present authors set up a physiotherapy-led falls pathway service…

  8. Using Propensity Score Matching to Model Retention of Developmental Math Students in Community Colleges in North Carolina

    ERIC Educational Resources Information Center

    Frye, Bobbie Jean

    2014-01-01

    Traditionally, modeling student retention has been done by deriving student success predictors and measuring the likelihood of success based on several background factors such as age, race, gender, and other pre-college variables, also known as the input-output model. Increasingly, however, researchers have used mediating factors of the student…

  9. Development of Junior High School Students' Fundamental Movement Skills and Physical Activity in a Naturalistic Physical Education Setting

    ERIC Educational Resources Information Center

    Kalaja, Sami Pekka; Jaakkola, Timo Tapio; Liukkonen, Jarmo Olavi; Digelidis, Nikolaos

    2012-01-01

    Background: There is evidence showing that fundamental movement skills and physical activity are related with each other. The ability to perform a variety of fundamental movement skills increases the likelihood of children participating in different physical activities throughout their lives. However, no fundamental movement skill interventions…

  10. Mathematical description and program documentation for CLASSY, an adaptive maximum likelihood clustering method

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Rassbach, M. E.

    1979-01-01

    Discussed in this report is the clustering algorithm CLASSY, including detailed descriptions of its general structure and mathematical background and of the various major subroutines. The report provides a development of the logic and equations used with specific reference to program variables. Some comments on timing and proposed optimization techniques are included.

  11. The Likelihood of Completing a Government-Funded VET Program, 2010-14. Australian Vocational Education and Training Statistics

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2016

    2016-01-01

    The Australian vocational education and training (VET) system provides training across a wide range of subject areas and is delivered through a variety of training institutions and enterprises (including to apprentices and trainees). The system provides training for students of all ages and backgrounds. Students may study individual subjects or…

  12. Differences in Health Behaviors of Overweight or Obese College Students Compared to Healthy Weight Students

    ERIC Educational Resources Information Center

    Harrington, M. Rachel; Ickes, Melinda J.

    2016-01-01

    Background: Obesity continues to be an epidemic in college students, yet research is warranted to determine whether obesity increases the likelihood of risky health behaviors in this population. Purpose: The purpose of this study was to examine the association between body mass index (BMI) and health behaviors in college students. Methods: A…

  13. Improving the School Environment to Reduce School Violence: A Review of the Literature

    ERIC Educational Resources Information Center

    Johnson, Sarah Lindstrom

    2009-01-01

    Background: School violence can impact the social, psychological, and physical well-being of both students and teachers and disrupt the learning process. This review focuses on a new area of research, the mechanisms by which the school environment determines the likelihood of school violence. Methods: A search for peer-reviewed articles was made…

  14. Predicting Rank Attainment in Political Science: What Else besides Publications Affects Promotion?

    ERIC Educational Resources Information Center

    Hesli, Vicki L.; Lee, Jae Mook; Mitchell, Sara McLaughlin

    2012-01-01

    We report the results of hypotheses tests about the effects of several measures of research, teaching, and service on the likelihood of achieving the ranks of associate and full professor. In conducting these tests, we control for institutional and individual background characteristics. We focus our tests on the link between productivity and…

  15. Australian Vocational Education and Training Statistics: The Likelihood of Completing a VET Qualification, 2005-08

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2011

    2011-01-01

    The Australian vocational education and training (VET) system provides training across a wide range of subject areas and is delivered through a variety of training institutions and enterprises (including to apprentices and trainees). The system provides training for students of all ages and backgrounds. Students may study individual subjects or…

  16. School Context Matters: The Impacts of Concentrated Poverty and Racial Segregation on Childhood Obesity

    ERIC Educational Resources Information Center

    Piontak, Joy Rayanne; Schulman, Michael D.

    2016-01-01

    Background: Schools are important sites for interventions to prevent childhood obesity. This study examines how variables measuring the socioeconomic and racial composition of schools and counties affect the likelihood of obesity among third to fifth grade children. Methods: Body mass index data were collected from third to fifth grade public…

  17. The Relationship between Fundamental Movement Skills and Self-Reported Physical Activity during Finnish Junior High School

    ERIC Educational Resources Information Center

    Jaakkola, Timo; Washington, Tracy

    2013-01-01

    Background: Previous studies have shown that fundamental movement skills (FMS) and physical activity are related. Specifically, earlier studies have demonstrated that the ability to perform a variety of FMS increases the likelihood of children participating in a range of physical activities throughout their lives. To date, however, there have not…

  18. Using the Extended Parallel Process Model to Examine Teachers' Likelihood of Intervening in Bullying

    ERIC Educational Resources Information Center

    Duong, Jeffrey; Bradshaw, Catherine P.

    2013-01-01

    Background: Teachers play a critical role in protecting students from harm in schools, but little is known about their attitudes toward addressing problems like bullying. Previous studies have rarely used theoretical frameworks, making it difficult to advance this area of research. Using the Extended Parallel Process Model (EPPM), we examined the…

  19. Relative Immaturity and ADHD: Findings from Nationwide Registers, Parent- and Self-Reports

    ERIC Educational Resources Information Center

    Halldner, Linda; Tillander, Annika; Lundholm, Cecilia; Boman, Marcus; Långström, Niklas; Larsson, Henrik; Lichtenstein, Paul

    2014-01-01

    Background: We addressed if immaturity relative to peers reflected in birth month increases the likelihood of ADHD diagnosis and treatment. Methods: We linked nationwide Patient and Prescribed Drug Registers and used prospective cohort and nested case-control designs to study 6-69 year-old individuals in Sweden from July 2005 to December 2009…

  20. Applying ecological concepts to the management of widespread grass invasions [Chapter 7]

    Treesearch

    Carla M. D'Antonio; Jeanne C. Chambers; Rhonda Loh; J. Tim Tunison

    2009-01-01

    The management of plant invasions has typically focused on the removal of invading populations or control of existing widespread species to unspecified but lower levels. Invasive plant management typically has not involved active restoration of background vegetation to reduce the likelihood of invader reestablishment. Here, we argue that land managers could benefit...

  1. A Struggling School Receives an "F" That: Connecting Moral Luck and Educational Technology Will Build Hope

    ERIC Educational Resources Information Center

    Thomas, Joy; Brevetti, Melissa

    2017-01-01

    With the recent publication of "Our Kids: The American Dream in Crisis," Robert Putnam, renowned social scientist, exposed the new class divide as an urgent issue, based upon new quantitative and qualitative data (2015). Research shows that students from struggling backgrounds have little likelihood of graduating from colleges and…

  2. Predictors of Academic Performance for Finance Students: Women at Higher Education in the UAE

    ERIC Educational Resources Information Center

    El Massah, Suzanna Sobhy; Fadly, Dalia

    2017-01-01

    Purpose: The study uses data drawn from a senior finance major cohort of 78 female undergraduates at Zayed University (ZU)-UAE to investigate factors, which increase the likelihood of achieving better academic performance in an Islamic finance course based on information about socioeconomic background of female students. The paper aims to discuss…

  3. Parameter inference in small world network disease models with approximate Bayesian Computational methods

    NASA Astrophysics Data System (ADS)

    Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael

    2010-02-01

    Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
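
    The rejection-sampling flavour of Approximate Bayesian Computation referred to above can be sketched in a few lines. The toy SIR-style simulator, the summary statistics, the uniform priors, and the tolerance below are illustrative assumptions, not the authors' small-world network model.

    ```python
    # Minimal ABC rejection sampler, assuming a toy stochastic epidemic simulator.
    # The simulator, prior ranges and tolerance are illustrative, not the paper's model.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_epidemic(beta, gamma, n_days=60, n_pop=1000, i0=5):
        """Very simple stochastic SIR-style simulator returning daily new cases."""
        s, i = n_pop - i0, i0
        new_cases = []
        for _ in range(n_days):
            infections = rng.binomial(s, 1.0 - np.exp(-beta * i / n_pop))
            recoveries = rng.binomial(i, 1.0 - np.exp(-gamma))
            s -= infections
            i += infections - recoveries
            new_cases.append(infections)
        return np.array(new_cases)

    def summary(cases):
        """Summary statistics: total size, peak height, peak timing."""
        return np.array([cases.sum(), cases.max(), cases.argmax()])

    # Pretend these are the observed daily case counts.
    observed = simulate_epidemic(0.4, 0.15)
    s_obs = summary(observed)

    accepted, tolerance = [], 0.3
    for _ in range(20000):
        beta = rng.uniform(0.05, 1.0)    # prior on transmission rate
        gamma = rng.uniform(0.05, 0.5)   # prior on recovery rate
        s_sim = summary(simulate_epidemic(beta, gamma))
        # Accept the draw if the normalised distance between summaries is small.
        dist = np.linalg.norm((s_sim - s_obs) / (s_obs + 1e-9))
        if dist < tolerance:
            accepted.append((beta, gamma))

    if accepted:
        accepted = np.array(accepted)
        print("accepted draws:", len(accepted))
        print("posterior mean (beta, gamma):", accepted.mean(axis=0))
    else:
        print("no draws accepted; loosen the tolerance")
    ```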

  4. HLA Match Likelihoods for Hematopoietic Stem-Cell Grafts in the U.S. Registry

    PubMed Central

    Gragert, Loren; Eapen, Mary; Williams, Eric; Freeman, John; Spellman, Stephen; Baitty, Robert; Hartzman, Robert; Rizzo, J. Douglas; Horowitz, Mary; Confer, Dennis; Maiers, Martin

    2018-01-01

    Background Hematopoietic stem-cell transplantation (HSCT) is a potentially lifesaving therapy for several blood cancers and other diseases. For patients without a suitable related HLA-matched donor, unrelated-donor registries of adult volunteers and banked umbilical cord–blood units, such as the Be the Match Registry operated by the National Marrow Donor Program (NMDP), provide potential sources of donors. Our goal in the present study was to measure the likelihood of finding a suitable donor in the U.S. registry. Methods Using human HLA data from the NMDP donor and cord-blood-unit registry, we built population-based genetic models for 21 U.S. racial and ethnic groups to predict the likelihood of identifying a suitable donor (either an adult donor or a cord-blood unit) for patients in each group. The models incorporated the degree of HLA matching, adult-donor availability (i.e., ability to donate), and cord-blood-unit cell dose. Results Our models indicated that most candidates for HSCT will have a suitable (HLA-matched or minimally mismatched) adult donor. However, many patients will not have an optimal adult donor — that is, a donor who is matched at high resolution at HLA-A, HLA-B, HLA-C, and HLA-DRB1. The likelihood of finding an optimal donor varies among racial and ethnic groups, with the highest probability among whites of European descent, at 75%, and the lowest probability among blacks of South or Central American descent, at 16%. Likelihoods for other groups are intermediate. Few patients will have an optimal cord-blood unit — that is, one matched at the antigen level at HLA-A and HLA-B and matched at high resolution at HLA-DRB1. However, cord-blood units mismatched at one or two HLA loci are available for almost all patients younger than 20 years of age and for more than 80% of patients 20 years of age or older, regardless of racial and ethnic background. Conclusions Most patients likely to benefit from HSCT will have a donor. Public investment in donor recruitment and cord-blood banks has expanded access to HSCT. (Funded by the Office of Naval Research, Department of the Navy, and the Health Resources and Services Administration, Department of Health and Human Services.) PMID:25054717

  5. Epidemiology of Suicide Attempts among Youth Transitioning to Adulthood.

    PubMed

    Thompson, Martie P; Swartout, Kevin

    2018-04-01

    Suicide is the second leading cause of death for older adolescents and young adults. Although empirical literature has identified important risk factors of suicidal behavior, it is less well understood whether changes in risk factors correspond with changes in suicide risk. To address this knowledge gap, we assessed whether there were different trajectories of suicidal behavior as youth transition into young adulthood and determined what time-varying risk factors predicted these trajectories. This study used four waves of data spanning approximately 13 years from the National Longitudinal Study of Adolescent Health. The sample included 9027 respondents who were 12-18 years old (M = 15.26; SD = 1.76) at Wave 1, 50% male, 17% Hispanic, and 58% White. The results indicated that 93.6% of the sample had a low likelihood for suicide attempts across time, 5.1% had an elevated likelihood of attempting suicide in adolescence but not young adulthood, and 1.3% had an elevated likelihood of attempting suicide during adolescence and adulthood. The likelihood of a suicide attempt corresponded with changes in depression, impulsivity, delinquency, alcohol problems, family and friend suicide history, and experience with partner violence. Determining how suicide risk changes as youth transition into young adulthood and what factors predict these changes can help prevent suicide. Interventions targeting these risk factors could lead to reductions in suicide attempts.

  6. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    NASA Astrophysics Data System (ADS)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
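
    The GLUE recipe described above (sample parameter sets from priors, score each against observed heads with a likelihood measure, keep and weight the behavioural sets) can be sketched compactly. The placeholder flow model, the inverse error-variance likelihood measure, and the top-5% behavioural cutoff are illustrative assumptions rather than the study's exact choices.

    ```python
    # Minimal GLUE sketch: Monte Carlo parameter sampling weighted by a likelihood measure.
    # run_flow_model(), the priors and the behavioural cutoff are illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    obs_heads = np.array([101.2, 100.8, 99.5, 98.9])   # hypothetical observed heads (m)

    def run_flow_model(log10_K, anisotropy, interbasin_flow):
        """Placeholder for the groundwater model: returns simulated heads at 4 wells."""
        base = np.array([101.0, 100.5, 99.7, 99.0])
        return base + 0.8 * (log10_K + 6.5) - 0.3 * (anisotropy - 1.0) + 0.05 * interbasin_flow

    n_sets, kept = 5000, []
    for _ in range(n_sets):
        theta = (rng.uniform(-8.0, -5.0),   # log10 hydraulic conductivity (m/s)
                 rng.uniform(0.1, 10.0),    # vertical anisotropy ratio
                 rng.uniform(0.0, 5.0))     # interbasin inflow (arbitrary units)
        sim = run_flow_model(*theta)
        err_var = np.mean((sim - obs_heads) ** 2)
        likelihood = 1.0 / err_var          # inverse error-variance likelihood measure
        kept.append((likelihood, theta))

    # Retain the "behavioural" sets (here: top 5% by likelihood measure) and
    # normalise their likelihoods into weights for prediction uncertainty.
    kept.sort(key=lambda t: t[0], reverse=True)
    behavioural = kept[: n_sets // 20]
    weights = np.array([l for l, _ in behavioural])
    weights /= weights.sum()
    mean_log10_K = sum(w * th[0] for w, (_, th) in zip(weights, behavioural))
    print("GLUE-weighted mean log10(K):", mean_log10_K)
    ```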

  7. CFHTLenS: a Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics

    NASA Astrophysics Data System (ADS)

    Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.

    2015-05-01

    We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find a good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^0.64 = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
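
    For reference, the Gaussian likelihood approximation discussed above reduces to a chi-squared form once a data vector, a model prediction, and a covariance estimate are in hand. The vectors and covariance in this sketch are synthetic placeholders, not CFHTLenS measurements.

    ```python
    # Gaussian log-likelihood (chi-squared form) for a data vector with covariance C:
    #   -2 ln L = (d - m)^T C^{-1} (d - m) + ln det C + const.
    # The data vector, model and covariance here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 8
    data = rng.normal(size=n)
    model = np.zeros(n)
    A = rng.normal(size=(n, n))
    cov = A @ A.T + n * np.eye(n)          # a well-conditioned synthetic covariance

    def gaussian_loglike(d, m, C):
        resid = d - m
        # Solve C x = resid instead of inverting C explicitly (more stable).
        chi2 = resid @ np.linalg.solve(C, resid)
        sign, logdet = np.linalg.slogdet(C)
        return -0.5 * (chi2 + logdet + len(d) * np.log(2.0 * np.pi))

    print("ln L =", gaussian_loglike(data, model, cov))
    ```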

  8. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
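
    The one-step-prediction construction of the likelihood mentioned at the end of the abstract can be illustrated on a scalar Ornstein-Uhlenbeck SDE observed with Gaussian noise, where the (extended) Kalman filter is exact. The parameter values and data are synthetic, and the mixed-effects layer and nonlinear PK/PD structure handled by PSM are not shown.

    ```python
    # One-step prediction likelihood for a scalar linear SDE (Ornstein-Uhlenbeck process)
    #   dx = -a * x dt + sigma dW,   y_k = x(t_k) + e_k,  e_k ~ N(0, r)
    # observed at equal time steps dt. For this linear model the Kalman filter is exact
    # (the EKF reduces to it); parameters and data below are synthetic.
    import numpy as np

    def ou_loglik(y, dt, a, sigma, r, x0=0.0, p0=1.0):
        phi = np.exp(-a * dt)                        # discrete-time transition
        q = sigma**2 * (1.0 - phi**2) / (2.0 * a)    # exact process-noise variance
        x, p, loglik = x0, p0, 0.0
        for yk in y:
            # one-step prediction
            x_pred = phi * x
            p_pred = phi**2 * p + q
            # innovation and its variance give the likelihood contribution
            v = yk - x_pred
            s = p_pred + r
            loglik += -0.5 * (np.log(2.0 * np.pi * s) + v**2 / s)
            # measurement update
            k = p_pred / s
            x = x_pred + k * v
            p = (1.0 - k) * p_pred
        return loglik

    rng = np.random.default_rng(3)
    dt, a, sigma, r = 0.5, 0.8, 1.0, 0.2**2
    # simulate a short synthetic record
    x, ys = 0.0, []
    for _ in range(50):
        x = np.exp(-a * dt) * x + rng.normal(0.0, np.sqrt(sigma**2 * (1 - np.exp(-2 * a * dt)) / (2 * a)))
        ys.append(x + rng.normal(0.0, np.sqrt(r)))
    print("log-likelihood at true parameters:", ou_loglik(np.array(ys), dt, a, sigma, r))
    ```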

  9. Hybrid Stochastic Models for Remaining Lifetime Prognosis

    DTIC Science & Technology

    2004-08-01

    literature for techniques and comparisons. Osogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of...and type of PH-distribution approximation for c² > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that...moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson

  10. The Power of Positioning: The Stories of National Hispanic Scholars' Lives and Their Mothers' Careful Placement to Enhance the Likelihood of Academic Success

    ERIC Educational Resources Information Center

    Ulibarri-Nasio, Crystal S.

    2010-01-01

    Established in 1983 by the College Board, the National Hispanic Recognition Program annually recognizes approximately 3,300 Hispanic students who scored the highest on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT). These top-performing high school students are recruited by U.S. universities as National Hispanic…

  11. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barrier, average width, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as the parameter vector x) given prior information on these parameters and a likelihood, which gives the probability density function of observing a data set given x. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function or Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Application to resolved resonance, unresolved resonance and continuum evaluation as well as multigroup cross section data assimilation will be presented.
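
    A minimal random-walk Metropolis sketch of the rule pdf(posterior) ∝ pdf(prior) × likelihood is given below. The forward model, the pseudo-experimental data, and the prior widths are placeholders; a real evaluation would call resonance or optical-model codes inside the likelihood.

    ```python
    # Minimal Bayesian Monte Carlo (random-walk Metropolis) sketch of
    #   pdf(posterior) ∝ pdf(prior) × likelihood.
    # The forward model, pseudo-data and prior widths are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(4)

    def model(theta, energies):
        """Placeholder forward model: a smooth cross-section-like curve."""
        scale, slope = theta
        return scale * np.exp(-slope * energies)

    energies = np.linspace(0.1, 5.0, 20)
    true_theta = np.array([10.0, 0.6])
    sigma_exp = 0.3
    data = model(true_theta, energies) + rng.normal(0.0, sigma_exp, energies.size)

    prior_mean = np.array([8.0, 0.5])
    prior_sd = np.array([4.0, 0.3])

    def log_post(theta):
        log_prior = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)
        resid = data - model(theta, energies)
        log_like = -0.5 * np.sum((resid / sigma_exp) ** 2)
        return log_prior + log_like

    theta, lp, samples = prior_mean.copy(), log_post(prior_mean), []
    for _ in range(20000):
        proposal = theta + rng.normal(0.0, [0.2, 0.02])
        lp_new = log_post(proposal)
        if np.log(rng.uniform()) < lp_new - lp:   # Metropolis acceptance rule
            theta, lp = proposal, lp_new
        samples.append(theta)
    samples = np.array(samples[5000:])            # drop burn-in
    print("posterior mean:", samples.mean(axis=0))
    print("posterior sd:  ", samples.std(axis=0))
    ```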

  12. Neyman Pearson detection of K-distributed random variables

    NASA Astrophysics Data System (ADS)

    Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.

    2010-04-01

    In this paper a new detection method for sonar imagery is developed in K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived for the Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.
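
    One common parameterisation of the K-distribution amplitude pdf is p(x) = (2c/Γ(ν)) (cx/2)^ν K_{ν-1}(cx). The sketch below evaluates its log-likelihood next to a Rayleigh reference, which is the kind of comparison the abstract describes; the parameterisation and the synthetic data are generic textbook choices and may differ from the paper's.

    ```python
    # Log-likelihood of amplitude data under a K-distribution versus a Rayleigh model.
    # K pdf used here (one common parameterisation, possibly not the paper's):
    #   p(x; nu, c) = (2c / Gamma(nu)) * (c x / 2)**nu * K_{nu-1}(c x),  x > 0
    import numpy as np
    from scipy.special import gammaln, kve

    def k_loglik(x, nu, c):
        # kve is the exponentially scaled Bessel K: kv(v, z) = kve(v, z) * exp(-z)
        log_bessel = np.log(kve(nu - 1.0, c * x)) - c * x
        return np.sum(np.log(2.0 * c) - gammaln(nu)
                      + nu * (np.log(c * x) - np.log(2.0))
                      + log_bessel)

    def rayleigh_loglik(x, sigma):
        return np.sum(np.log(x) - 2.0 * np.log(sigma) - x**2 / (2.0 * sigma**2))

    rng = np.random.default_rng(5)
    # Synthetic K-distributed amplitudes: Rayleigh speckle modulated by a Gamma texture.
    nu_true = 1.5
    texture = rng.gamma(nu_true, 1.0 / nu_true, size=2000)
    amp = np.sqrt(texture) * rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=2000)

    print("K   log-lik:", k_loglik(amp, nu=1.5, c=np.sqrt(4.0 * nu_true)))
    print("Ray log-lik:", rayleigh_loglik(amp, sigma=np.sqrt(np.mean(amp**2) / 2.0)))
    ```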

  13. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpG's, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-cpu hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
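
    As a self-contained illustration of the phylogeny-based maximum-likelihood calculations the abstract refers to (deliberately not using the PyEvolve API), the sketch below evaluates and maximises the Jukes-Cantor likelihood of the evolutionary distance between two aligned sequences.

    ```python
    # Jukes-Cantor (JC69) maximum-likelihood distance between two aligned DNA sequences.
    # This illustrates the kind of likelihood calculation such toolkits automate; it does
    # not use the PyEvolve API, and the sequences are made up.
    import numpy as np

    def jc69_loglik(t, n_same, n_diff):
        """Log-likelihood of branch length t given matching/mismatching site counts."""
        p_diff = 0.75 * (1.0 - np.exp(-4.0 * t / 3.0))   # prob. a site differs
        p_same = 1.0 - p_diff
        return n_same * np.log(p_same) + n_diff * np.log(p_diff)

    seq1 = "ACGTACGTACGTACGTACGA"
    seq2 = "ACGTACGAACGTACGTACTA"
    n_diff = sum(a != b for a, b in zip(seq1, seq2))
    n_same = len(seq1) - n_diff

    # Analytic MLE for JC69, valid when the observed proportion of differences < 3/4.
    p_hat = n_diff / len(seq1)
    t_hat = -0.75 * np.log(1.0 - 4.0 * p_hat / 3.0)

    # Confirm by profiling the likelihood over a grid of distances.
    grid = np.linspace(1e-4, 1.0, 2000)
    t_grid = grid[np.argmax(jc69_loglik(grid, n_same, n_diff))]
    print(f"analytic MLE t = {t_hat:.4f}, grid MLE t = {t_grid:.4f}")
    ```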

  14. A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Xia, Mingliang; Xuan, Li

    2013-10-01

    The microvasculature network of retina plays an important role in the study and diagnosis of retinal diseases (age-related macular degeneration and diabetic retinopathy for example). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels and the background noises all make it difficult for diagnosis. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels with different likelihood. Firstly, we use isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edge of the retinal vessels. Finally, we combine the results of matched filtering method and gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and the intensities of vessel images exactly represent the likelihood of the vessels.

  15. Universal Decoder for PPM of any Order

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.

    2010-01-01

    A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
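
    On the Poisson channel described above, the slot-count likelihoods have a simple form: up to terms common to all hypotheses, the log-likelihood that the pulse occupied slot j is k_j·ln(1 + ns/nb) − ns. The sketch below turns slot counts into per-symbol log-likelihoods and per-bit LLRs; the counts, PPM order, and natural-binary bit mapping are illustrative, and the iterative soft decoder itself is not shown.

    ```python
    # Per-symbol log-likelihoods and per-bit LLRs for M-ary PPM on a Poisson channel.
    # Mean background count nb per slot, mean signal count ns added in the pulsed slot.
    # The counts and the natural-binary bit mapping below are illustrative only.
    import numpy as np
    from scipy.special import logsumexp

    M, ns, nb = 8, 5.0, 0.2
    m_bits = int(np.log2(M))

    rng = np.random.default_rng(6)
    true_symbol = 3
    counts = rng.poisson(nb, size=M)
    counts[true_symbol] += rng.poisson(ns)

    # Up to a constant shared by all hypotheses, log P(counts | pulse in slot j)
    # differs only through the pulsed slot:  k_j * ln(1 + ns/nb) - ns.
    symbol_loglik = counts * np.log(1.0 + ns / nb) - ns

    # Per-bit LLRs, marginalising over the symbols consistent with each bit value.
    bit_labels = np.array([[(sym >> b) & 1 for b in range(m_bits)] for sym in range(M)])
    for b in range(m_bits):
        llr = (logsumexp(symbol_loglik[bit_labels[:, b] == 0])
               - logsumexp(symbol_loglik[bit_labels[:, b] == 1]))
        print(f"bit {b}: LLR = {llr:+.2f}")
    print("ML symbol:", int(np.argmax(symbol_loglik)), "(true:", true_symbol, ")")
    ```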

  16. Treatment Recommendations for Single-Unit Crowns: Findings from The National Dental Practice-Based Research Network

    PubMed Central

    McCracken, Michael S.; Louis, David R.; Litaker, Mark S.; Minyé, Helena M.; Mungia, Rahma; Gordan, Valeria V.; Marshall, Don G.; Gilbert, Gregg H.

    2016-01-01

    Background Objectives were to: (1) quantify practitioner variation in likelihood to recommend a crown; and (2) test whether certain dentist, practice, and clinical factors are significantly associated with this likelihood. Methods Dentists in the National Dental Practice-Based Research Network completed a questionnaire about indications for single-unit crowns. In four clinical scenarios, practitioners ranked their likelihood of recommending a single-unit crown. These responses were used to calculate a dentist-specific “Crown Factor” (CF; range 0–12). A higher score implies a higher likelihood to recommend a crown. Certain characteristics were tested for statistically significant associations with the CF. Results 1,777 of 2,132 eligible dentists responded (83%). Practitioners were most likely to recommend crowns for teeth that were fractured, cracked, endodontically-treated, or had a broken restoration. Practitioners overwhelmingly recommended crowns for posterior teeth treated endodontically (94%). Practice owners, Southwest practitioners, and practitioners with a balanced work load were more likely to recommend crowns, as were practitioners who use optical scanners for digital impressions. Conclusions There is substantial variation in the likelihood of recommending a crown. While consensus exists in some areas (posterior endodontic treatment), variation dominates in others (size of an existing restoration). Recommendations varied by type of practice, network region, practice busyness, patient insurance status, and use of optical scanners. Practical Implications Recommendations for crowns may be influenced by factors unrelated to tooth and patient variables. A concern for tooth fracture -- whether from endodontic treatment, fractured teeth, or large restorations -- prompted many clinicians to recommend crowns. PMID:27492046

  17. The Associations Between the Religious Background, Social Supports, and Do-Not-Resuscitate Orders in Taiwan

    PubMed Central

    Lin, Kuan-Han; Chen, Yih-Sharng; Chou, Nai-Kuan; Huang, Sheng-Jean; Wu, Chau-Chung; Chen, Yen-Yuan

    2016-01-01

    Abstract Prior studies have demonstrated important implications related to religiosity and a do-not-resuscitate (DNR) decision. However, the association between patients’ religious background and DNR decisions is vague. In particular, the association between the religious background of Buddhism/Daoism and DNR decisions has never been examined. The objective of this study was to examine the association between patients’ religious background and their DNR decisions, with a particular focus on Buddhism/Daoism. The medical records of the patients who were admitted to the 3 surgical intensive care units (SICU) in a university-affiliated medical center located at Northern Taiwan from June 1, 2011 to December 31, 2013 were retrospectively collected. We compared the clinical/demographic variables of DNR patients with those of non-DNR patients using the Student t test or χ2 test depending on the scale of the variables. We used multivariate logistic regression analysis to examine the association between the religious backgrounds and DNR decisions. A sample of 1909 patients was collected: 122 patients had a DNR order; and 1787 patients did not have a DNR order. Old age (P = 0.02), unemployment (P = 0.02), admission diagnosis of “nonoperative, cardiac failure/insufficiency” (P = 0.03), and severe acute illness at SICU admission (P < 0.01) were significantly associated with signing of DNR orders. Patients’ religious background of Buddhism/Daoism (P = 0.04), married marital status (P = 0.02), and admission diagnosis of “postoperative, major surgery” (P = 0.02) were less likely to have a DNR order written during their SICU stay. Furthermore, patients with poor social support, as indicated by marital and working status, were more likely to consent to a DNR order during SICU stay. This study showed that the religious background of Buddhism/Daoism was significantly associated with a lower likelihood of consenting to a DNR, and poor social support was significantly associated with a higher likelihood of having a DNR order written during SICU stay. PMID:26817913

  18. The Associations Between the Religious Background, Social Supports, and Do-Not-Resuscitate Orders in Taiwan: An Observational Study.

    PubMed

    Lin, Kuan-Han; Chen, Yih-Sharng; Chou, Nai-Kuan; Huang, Sheng-Jean; Wu, Chau-Chung; Chen, Yen-Yuan

    2016-01-01

    Prior studies have demonstrated important implications related to religiosity and a do-not-resuscitate (DNR) decision. However, the association between patients' religious background and DNR decisions is vague. In particular, the association between the religious background of Buddhism/Daoism and DNR decisions has never been examined. The objective of this study was to examine the association between patients' religious background and their DNR decisions, with a particular focus on Buddhism/Daoism. The medical records of the patients who were admitted to the 3 surgical intensive care units (SICU) in a university-affiliated medical center located at Northern Taiwan from June 1, 2011 to December 31, 2013 were retrospectively collected. We compared the clinical/demographic variables of DNR patients with those of non-DNR patients using the Student t test or χ² test depending on the scale of the variables. We used multivariate logistic regression analysis to examine the association between the religious backgrounds and DNR decisions. A sample of 1909 patients was collected: 122 patients had a DNR order; and 1787 patients did not have a DNR order. Old age (P = 0.02), unemployment (P = 0.02), admission diagnosis of "nonoperative, cardiac failure/insufficiency" (P = 0.03), and severe acute illness at SICU admission (P < 0.01) were significantly associated with signing of DNR orders. Patients' religious background of Buddhism/Daoism (P = 0.04), married marital status (P = 0.02), and admission diagnosis of "postoperative, major surgery" (P = 0.02) were less likely to have a DNR order written during their SICU stay. Furthermore, patients with poor social support, as indicated by marital and working status, were more likely to consent to a DNR order during SICU stay. This study showed that the religious background of Buddhism/Daoism was significantly associated with a lower likelihood of consenting to a DNR, and poor social support was significantly associated with a higher likelihood of having a DNR order written during SICU stay.

  19. Is Primary Prostate Cancer Treatment Influenced by Likelihood of Extraprostatic Disease? A Surveillance, Epidemiology and End Results Patterns of Care Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmes, Jordan A.; Wang, Andrew Z.; University of North Carolina-Lineberger Comprehensive Cancer Center, Chapel Hill, NC

    2012-09-01

    Purpose: To examine the patterns of primary treatment in a recent population-based cohort of prostate cancer patients, stratified by the likelihood of extraprostatic cancer as predicted by disease characteristics available at diagnosis. Methods and Materials: A total of 157,371 patients diagnosed from 2004 to 2008 with clinically localized and potentially curable (node-negative, nonmetastatic) prostate cancer, who have complete information on prostate-specific antigen, Gleason score, and clinical stage, were included. Patients with clinical T1/T2 disease were grouped into categories of <25%, 25%-50%, and >50% likelihood of having extraprostatic disease using the Partin nomogram. Clinical T3/T4 patients were examined separately as the highest-risk group. Logistic regression was used to examine the association between patient group and receipt of each primary treatment, adjusting for age, race, year of diagnosis, marital status, Surveillance, Epidemiology and End Results database region, and county-level education. Separate models were constructed for primary surgery, external-beam radiotherapy (RT), and conservative management. Results: On multivariable analysis, increasing likelihood of extraprostatic disease was significantly associated with increasing use of RT and decreased conservative management. Use of surgery also increased. Patients with >50% likelihood of extraprostatic cancer had almost twice the odds of receiving prostatectomy as those with <25% likelihood, and T3-T4 patients had 18% higher odds. Prostatectomy use increased in recent years. Patients aged 76-80 years were likely to be managed conservatively, even those with a >50% likelihood of extraprostatic cancer (34%) and clinical T3-T4 disease (24%). The proportion of patients who received prostatectomy or conservative management was approximately 50% or slightly higher in all groups. Conclusions: There may be underutilization of RT in older prostate cancer patients and those with likely extraprostatic disease. Because more than half of prostate cancer patients do not consult with a radiation oncologist, a multidisciplinary consultation may affect the treatment decision-making process.

  20. Utilization of Lean Methodology to Refine Hiring Practices in a Clinical Research Center Setting

    ERIC Educational Resources Information Center

    Johnson, Marcus R.; Bullard, A. Jasmine; Whitley, R. Lawrence

    2018-01-01

    Background & Aims: Lean methodology is a continuous process improvement approach that is used to identify and eliminate unnecessary steps (or waste) in a process. It increases the likelihood that the highest level of value possible is provided to the end-user, or customer, in the form of the product delivered through that process. Lean…

  1. Dopamine and Serotonin Transporter Genotypes Moderate Sensitivity to Maternal Expressed Emotion: The Case of Conduct and Emotional Problems in Attention Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Sonuga-Barke, Edmund J. S.; Oades, Robert D.; Psychogiou, Lamprini; Chen, Wai; Franke, Barbara; Buitelaar, Jan; Banaschewski, Tobias; Ebstein, Richard P.; Gil, Michael; Anney, Richard; Miranda, Ana; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Steinhausen, Hans Christoph; Thompson, Margaret; Asherson, Philip; Faraone, Stephen V.

    2009-01-01

    Background: Mothers' positive emotions expressed about their children with attention deficit/hyperactivity disorder (ADHD) are associated with a reduced likelihood of comorbid conduct problems (CP). We examined whether this association with CP, and one with emotional problems (EMO), is moderated by variants within three genes, previously reported…

  2. Sexting, Risk Behavior, and Mental Health in Adolescents: An Examination of 2015 Pennsylvania Youth Risk Behavior Survey Data

    ERIC Educational Resources Information Center

    Frankel, Anne S.; Bass, Sarah Bauerle; Patterson, Freda; Dai, Ting; Brown, Deanna

    2018-01-01

    Background: Sexting, the sharing of sexually suggestive photos, may be a gateway behavior to early sexual activity and increase the likelihood of social ostracism. Methods: Youth Risk Behavior Survey (N = 6021) data from 2015 among Pennsylvania 9th-12th grade students were used to examine associations between consensual and nonconsensual sexting…

  3. 2005 Workplace and Equal Opportunity Survey of Active-Duty Members Administration, Datasets, and Codebook

    DTIC Science & Technology

    2007-06-01

    Results ... Table 7. E-mail Address Availability by Active-Duty Service ...the following ten topic areas: 1. Background Information—Service, gender, paygrade, race/ethnicity, ethnic ancestry, and education. 2. Family and...likelihood to stay on active duty, spouse/family support to stay on active duty, years spent in military service, willingness to recommend military

  4. Poisson mixture model for measurements using counting.

    PubMed

    Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz

    2010-03-01

    Starting with the basic Poisson statistical model of a counting measurement process, 'extraPoisson' variance or 'overdispersion' are included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
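
    The classical decision level mentioned at the end of the abstract can be computed from the Poisson survival function once the expected background count is fixed: the alarm point is the smallest gross count whose false-positive probability stays below a chosen α. The sketch below does exactly that, ignoring the lognormal and shared-uncertainty components the paper treats; the background expectation and α are example values.

    ```python
    # Classical decision level (gross-count alarm point) for a Poisson background:
    # the smallest integer threshold whose false-positive probability is below alpha.
    # Background expectation and alpha below are example values only, and the
    # lognormal/shared-uncertainty structure of the paper is not modelled here.
    from scipy.stats import poisson

    def decision_level(mean_background, alpha=0.05):
        k = 0
        # P(N >= k | background) = sf(k - 1); increase k until it drops below alpha.
        while poisson.sf(k - 1, mean_background) > alpha:
            k += 1
        return k

    mu_bkg = 3.7       # expected background counts in the counting interval
    dl = decision_level(mu_bkg, alpha=0.05)
    print(f"alarm point: {dl} counts "
          f"(false-positive rate {poisson.sf(dl - 1, mu_bkg):.4f})")
    ```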

  5. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

  6. Parenting Efficacy and Support in Mothers With Dual Disorders in a Substance Abuse Treatment Program.

    PubMed

    Brown, Suzanne; Hicks, Laurel M; Tracy, Elizabeth M

    2016-01-01

    Approximately 73% of women entering treatment for substance use disorders are mothers of children younger than 18, and the high rate of mental health disorders among mothers with substance use disorders increases their vulnerability to poor parenting practices. Parenting efficacy and social support for parenting have emerged as significant predictors of positive parenting practices among families at risk for child maltreatment. The purpose of the current study was to examine the impact of parenting support and parenting efficacy on the likelihood of out-of-home placement and custody status among the children of mothers with dual substance use and mental health disorders. This study examined the impact of parenting efficacy and assistance with childcare on the likelihood of child out-of-home placement and custody status among 175 mothers with diagnosed dual substance and mental health disorder and in treatment for substance dependence. Logistic regression was utilized to assess the contributions of parenting efficacy and the number of individuals in mothers' social networks who assist with childcare to the likelihood of out-of-home placement and custody loss of children. Parenting efficacy was also examined as a mediator using bootstrapping in PROCESS for SPSS. Greater parenting efficacy was associated with lower likelihood of having at least one child in out-of-home placement (B = -.064, SE = .029, p = .027) and lower likelihood of loss of child custody (B = -.094, SE = .034, p = .006). Greater number of children in the 6 to 18 age range predicted greater likelihood of having at least one child in the custody of someone else (B = .409, SE = .171, p = .017) and in out-of-home placement (B = .651, SE = .167, p < .001). In addition, mothers who identified as African American were less likely to have a child in out-of-home placement (B = .927, SE = .382, p = .015) or to have lost custody of a child (B = -1.31, SE = .456, p = .004). Finally, parenting efficacy mediated the relationship between parenting support and likelihood of out-of-home placement (effect = -.0604, SE = .0297, z = 2.035, p = .042) and between parenting support and likelihood of custody loss (effect = -.0332, SE = .0144, z = -2.298, p = .022). Implications for practice include the utilization of personal network interventions, such as increased assistance with childcare, and increased attention to efficacy among mothers with dual disorders.

  7. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  8. Contextual effects of noise on vocalization encoding in primary auditory cortex

    PubMed Central

    Ni, Ruiye; Bender, David A.; Shanechi, Amirali M.; Gamble, Jeffrey R.

    2016-01-01

    Robust auditory perception plays a pivotal function for processing behaviorally relevant sounds, particularly with distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons change their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting the low likelihood of a unique group of noise-invariant neurons across different background conditions in A1. Regarding noise influence on neural activities, the brittle response group showed addition of spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed spiking activity suppression within phrases, and the alteration between phrases was noise dependent. Overall, the variable single-unit responses, yet consistent response types, imply that primate A1 performs scene analysis through the collective activity of multiple neurons. NEW & NOTEWORTHY The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper, we systematically investigated neuronal coding of multiple vocalizations degraded by two distinct noises at various signal-to-noise ratios in nonhuman primates. In the process, we uncovered heterogeneity of single-unit representations for different auditory scenes yet homogeneity of responses across the population. PMID:27881720

  9. Contextual effects of noise on vocalization encoding in primary auditory cortex.

    PubMed

    Ni, Ruiye; Bender, David A; Shanechi, Amirali M; Gamble, Jeffrey R; Barbour, Dennis L

    2017-02-01

    Robust auditory perception plays a pivotal function for processing behaviorally relevant sounds, particularly with distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons change their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting the low likelihood of a unique group of noise-invariant neurons across different background conditions in A1. Regarding noise influence on neural activities, the brittle response group showed addition of spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed spiking activity suppression within phrases, and the alteration between phrases was noise dependent. Overall, the variable single-unit responses, yet consistent response types, imply that primate A1 performs scene analysis through the collective activity of multiple neurons. The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper, we systematically investigated neuronal coding of multiple vocalizations degraded by two distinct noises at various signal-to-noise ratios in nonhuman primates. In the process, we uncovered heterogeneity of single-unit representations for different auditory scenes yet homogeneity of responses across the population. Copyright © 2017 the American Physiological Society.

  10. Simulation-based Bayesian inference for latent traits of item response models: Introduction to the ltbayes package for R.

    PubMed

    Johnson, Timothy R; Kuhn, Kristine M

    2015-12-01

    This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
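
    The posterior the package works with can be written down directly for a two-parameter logistic (2PL) model: the likelihood of the response pattern times a standard-normal prior on the latent trait. The sketch below profiles that posterior on a grid and reports the mode and mean; it is a conceptual Python illustration with made-up item parameters, not the ltbayes R interface.

    ```python
    # Grid approximation to the posterior of a latent trait under a 2PL IRT model
    # with a standard-normal prior. Item parameters and responses are made up;
    # this illustrates the idea rather than the ltbayes R interface.
    import numpy as np

    a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])    # discriminations (hypothetical)
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # difficulties (hypothetical)
    u = np.array([1, 1, 1, 0, 0])              # observed response pattern

    theta = np.linspace(-4.0, 4.0, 801)

    def log_posterior(th):
        p = 1.0 / (1.0 + np.exp(-a * (th - b)))            # item response probabilities
        log_like = np.sum(u * np.log(p) + (1 - u) * np.log(1.0 - p))
        log_prior = -0.5 * th**2                            # N(0, 1) prior, up to a constant
        return log_like + log_prior

    logpost = np.array([log_posterior(th) for th in theta])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()                                      # normalise as grid weights

    print("posterior mode:", theta[np.argmax(post)])
    print("posterior mean:", float((theta * post).sum()))
    ```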

  11. Finding shelter: two-year housing trajectories among homeless youth.

    PubMed

    Tevendale, Heather D; Comulada, W Scott; Lightfoot, Marguerita A

    2011-12-01

    The aim of this study was to (1) identify trajectories of homeless youth remaining sheltered or returning to shelter over a period of 2 years, and (2) to identify predictors of these trajectories. A sample of 426 individuals aged 14-24 years receiving services at homeless youth serving agencies completed six assessments over 2 years. Latent class growth analysis was applied to the reports of whether youth had been inconsistently sheltered (i.e., living on the street or in a squat, abandoned building, or automobile) or consistently sheltered (i.e., not living in any of those settings) during the past 3 months. Three trajectories of homeless youth remaining sheltered or returning to shelter were identified: consistently sheltered (approximately 41% of the sample); inconsistently sheltered, short-term (approximately 20%); and inconsistently sheltered, long-term (approximately 39%). Being able to go home and having not left of one's own accord predicted greater likelihood of membership in the short-term versus the long-term inconsistently sheltered trajectory. Younger age, not using drugs other than alcohol or marijuana, less involvement in informal sector activities, being able to go home, and having been homeless for <1 year predicted membership in the consistently sheltered groups versus the long-term inconsistently sheltered groups in the multivariate analyses. Findings suggest that being able to return home is more important than the degree of individual impairment (e.g., substance use or mental health problems) when determining the likelihood that a homeless youth follows a more or a less chronically homeless pathway. Copyright © 2011. Published by Elsevier Inc.

  12. Work Status and Return to the Workforce after Coronary Artery Bypass Grafting and/or Heart Valve Surgery: A One-Year-Follow Up Study.

    PubMed

    Fonager, Kirsten; Lundbye-Christensen, Søren; Andreasen, Jan Jesper; Futtrup, Mikkel; Christensen, Anette Luther; Ahmad, Khalil; Nørgaard, Martin Agge

    2014-01-01

    Background. Several characteristics appear to be important for estimating the likelihood of reentering the workforce after surgery. The aim of the present study was to describe work status in a two-year time period around the time of cardiac surgery and estimate the probability of returning to the workforce. Methods. We included 681 patients undergoing coronary artery bypass grafting and/or heart valve procedures from 2003 to 2007 in the North Denmark Region. We linked hospital data to data in the DREAM database which holds information of everyone receiving social benefits. Results. At the time of surgery 17.3% were allocated disability pension and 2.3% were allocated a permanent part-time benefit. Being unemployed one year before surgery reduced the likelihood of return to the workforce (RR = 0.74 (0.60-0.92)) whereas unemployment at the time of surgery had no impact on return to the workforce (RR = 0.96 (0.78-1.18)). Sickness absence before surgery reduced the likelihood of return to the workforce. Conclusion. This study found the work status before surgery to be associated with the likelihood of return to the workforce within one year after surgery. Before surgery one-fifth of the population either was allocated disability pension or received a permanent part-time benefit.

  13. Work Status and Return to the Workforce after Coronary Artery Bypass Grafting and/or Heart Valve Surgery: A One-Year-Follow Up Study

    PubMed Central

    Fonager, Kirsten; Lundbye-Christensen, Søren; Andreasen, Jan Jesper; Futtrup, Mikkel; Christensen, Anette Luther; Ahmad, Khalil; Nørgaard, Martin Agge

    2014-01-01

    Background. Several characteristics appear to be important for estimating the likelihood of reentering the workforce after surgery. The aim of the present study was to describe work status in a two-year time period around the time of cardiac surgery and estimate the probability of returning to the workforce. Methods. We included 681 patients undergoing coronary artery bypass grafting and/or heart valve procedures from 2003 to 2007 in the North Denmark Region. We linked hospital data to data in the DREAM database which holds information of everyone receiving social benefits. Results. At the time of surgery 17.3% were allocated disability pension and 2.3% were allocated a permanent part-time benefit. Being unemployed one year before surgery reduced the likelihood of return to the workforce (RR = 0.74 (0.60–0.92)) whereas unemployment at the time of surgery had no impact on return to the workforce (RR = 0.96 (0.78–1.18)). Sickness absence before surgery reduced the likelihood of return to the workforce. Conclusion. This study found the work status before surgery to be associated with the likelihood of return to the workforce within one year after surgery. Before surgery one-fifth of the population either was allocated disability pension or received a permanent part-time benefit. PMID:25024848

  14. Does the Company Programme Have the Same Impact on Young Women and Men? A Study of Entrepreneurship Education in Norwegian Upper Secondary Schools

    ERIC Educational Resources Information Center

    Johansen, Vegard

    2017-01-01

    This paper asks whether a European entrepreneurship programme called the "Company Programme" (CP) has an impact on young women and men with regard to career preferences, and perceptions of business skills and the likelihood of having a company. CP is taught to 270,000 students in 39 European countries, and approximately 15% of Norwegian…

  15. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  16. Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.

    PubMed

    DuVall, Scott L; Kerber, Richard A; Thomas, Alun

    2010-02-01

    Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
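
    To make the weighting idea concrete, below is a minimal Python sketch of how an approximate comparator can be folded into Fellegi-Sunter field weights. It assumes the common interpolation scheme in which a partial string-similarity score shifts the agreement weight toward the disagreement weight; the exact extension used by DuVall et al. is not reproduced in the abstract, and the m/u probabilities and the toy similarity function below are illustrative only.

        import math

        def field_weights(m_prob, u_prob):
            """Classical Fellegi-Sunter agreement/disagreement weights (log2 likelihood ratios)."""
            w_agree = math.log2(m_prob / u_prob)
            w_disagree = math.log2((1.0 - m_prob) / (1.0 - u_prob))
            return w_agree, w_disagree

        def toy_similarity(a, b):
            """Placeholder approximate comparator in [0, 1]; a real linkage system would use
            Jaro-Winkler or an edit-distance-based similarity."""
            if not a or not b:
                return 0.0
            matches = sum(1 for x, y in zip(a.lower(), b.lower()) if x == y)
            return matches / max(len(a), len(b))

        def approximate_field_weight(a, b, m_prob, u_prob):
            """Interpolate between agreement and disagreement weights by the similarity score."""
            w_agree, w_disagree = field_weights(m_prob, u_prob)
            return w_disagree + toy_similarity(a, b) * (w_agree - w_disagree)

        # Illustrative m/u probabilities for a first-name field.
        print(approximate_field_weight("Jon", "John", m_prob=0.95, u_prob=0.01))
        print(approximate_field_weight("Jon", "Mary", m_prob=0.95, u_prob=0.01))

    In this sketch a near-match such as "Jon" versus "John" earns most of the agreement weight instead of the full disagreement penalty, which is the intuition behind the reported reduction in misclassified pairs.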

  17. Analytical approximations to the Hotelling trace for digital x-ray detectors

    NASA Astrophysics Data System (ADS)

    Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.

    2001-06-01

    The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal, and the background.
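
    The figure of merit in question is the ideal-linear-observer SNR, SNR² = sᵀK⁻¹s for a known signal s and data covariance K. The sketch below contrasts the exact matrix-inversion evaluation with a Fourier-domain evaluation that exploits stationarity, on a one-dimensional toy model with a circulant covariance; it illustrates the general idea rather than the paper's detector model, and all numbers are made up.

        import numpy as np

        n = 64                                    # detector pixels in a 1-D toy model
        x = np.arange(n)

        # Known signal and a stationary (circulant) background-plus-noise covariance.
        signal = np.exp(-0.5 * ((x - n / 2) / 2.0) ** 2)
        lag = np.abs(x[:, None] - x[None, :])
        lag = np.minimum(lag, n - lag)
        cov = 0.2 * np.exp(-lag / 3.0) + 0.05 * np.eye(n)

        # Exact Hotelling SNR^2: s^T K^{-1} s, via direct solution of the covariance system.
        snr2_exact = signal @ np.linalg.solve(cov, signal)

        # Stationary approximation: a circulant covariance is diagonalized by the DFT,
        # with eigenvalues given by the FFT of its first row (a discrete power spectrum).
        power_spectrum = np.real(np.fft.fft(cov[0]))
        snr2_fourier = np.sum(np.abs(np.fft.fft(signal)) ** 2 / power_spectrum) / n

        print(snr2_exact, snr2_fourier)           # agree closely for this exactly-circulant toy

    For a finite detector the covariance is no longer exactly circulant, which is the regime where the paper's comparison between the analytical approximation and the matrix-inversion calculation becomes non-trivial.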

  18. Factors associated with seeking treatment for postpartum morbidities in rural India

    PubMed Central

    Singh, Aditya; Kumar, Abhishek

    2014-01-01

    OBJECTIVES: To understand the prevalence of postpartum morbidities and factors associated with treatment-seeking behaviour among currently married women aged 15-49 residing in rural India. METHODS: We used data from the nationally representative District Level Household Survey from 2007-2008. Cross-tabulation was used to understand the differentials for the prevalence of postpartum morbidities and treatment-seeking behaviours across selected background characteristics. Two-level binary logistic regression was applied to understand the factors associated with treatment-seeking behaviour. RESULTS: Approximately 39.8% of rural women suffered from at least one of the six postpartum morbidities including high fever, lower abdominal pain, foul-smelling vaginal discharge, excessive bleeding, convulsions, and severe headache. Morbidities were more prevalent among poor, illiterate, Muslim, and high-parity women. About 55.1% of these rural women sought treatment/consultation for their problems. The odds of seeking treatment/consultation increased as economic status and years of schooling among both the woman and her husband increased. Poor, uneducated, unemployed, Hindu, and tribal women were less likely to seek treatment/consultation for postpartum morbidities than their counterparts were. The odds of seeking treatment/consultation decreased as the distance to the nearest private health facility increased. Most women visited a private hospital (46.3%) or a friend/family member’s home (20.8%) for treatment/consultation. Only a small percentage visited publicly funded health institutions such as a primary health centre (8.8%), community health centre (6.5%), health sub-centre (2.8%), or district hospital (13.1%). Rural women from the northeast region of India were 50% less likely to seek treatment/consultation than women from the central region were. CONCLUSIONS: Providing antenatal and delivery care, and ensuring nearby government healthcare facilities are available to serve rural women might increase the likelihood of care-seeking for postpartum morbidities. Targeted interventions for vulnerable groups should be considered in future policies to increase the likelihood women will seek treatment or advice postpartum. PMID:25358467

  19. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
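
    A minimal sketch of the two ingredients described here: computing the decorrelation D = 1 - CC between an observed and a modelled waveform, and evaluating a log-normal noise model for D. The log-normal parameters in the example are placeholders; in the study they are calibrated against SNR and station geometry from the reference set of source solutions.

        import numpy as np

        def decorrelation(observed, modelled):
            """D = 1 - CC, with CC the zero-lag normalized cross-correlation coefficient."""
            o = observed - observed.mean()
            m = modelled - modelled.mean()
            return 1.0 - float(np.dot(o, m) / (np.linalg.norm(o) * np.linalg.norm(m)))

        def lognormal_loglike(d, mu, sigma):
            """Log-density of a log-normal noise model for the decorrelation D."""
            if d <= 0.0:
                return -np.inf
            z = (np.log(d) - mu) / sigma
            return -0.5 * z ** 2 - np.log(d * sigma * np.sqrt(2.0 * np.pi))

        t = np.linspace(0.0, 60.0, 1200)
        observed = np.sin(0.5 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
        modelled = np.sin(0.5 * t)
        d = decorrelation(observed, modelled)
        # mu and sigma are placeholders; in the study they depend on SNR and station geometry.
        print(d, lognormal_loglike(d, mu=-3.0, sigma=1.0))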

  20. A survey of quality assurance practices in biomedical open source software projects.

    PubMed

    Koru, Günes; El Emam, Khaled; Neisa, Angelica; Umarji, Medha

    2007-05-07

    Open source (OS) software is continuously gaining recognition and use in the biomedical domain, for example, in health informatics and bioinformatics. Given the mission critical nature of applications in this domain and their potential impact on patient safety, it is important to understand to what degree and how effectively biomedical OS developers perform standard quality assurance (QA) activities such as peer reviews and testing. This would allow the users of biomedical OS software to better understand the quality risks, if any, and the developers to identify process improvement opportunities to produce higher quality software. A survey of developers working on biomedical OS projects was conducted to examine the QA activities that are performed. We took a descriptive approach to summarize the implementation of QA activities and then examined some of the factors that may be related to the implementation of such practices. Our descriptive results show that 63% (95% CI, 54-72) of projects did not include peer reviews in their development process, while 82% (95% CI, 75-89) did include testing. Approximately 74% (95% CI, 67-81) of developers did not have a background in computing, 80% (95% CI, 74-87) were paid for their contributions to the project, and 52% (95% CI, 43-60) had PhDs. A multivariate logistic regression model to predict the implementation of peer reviews was not significant (likelihood ratio test = 16.86, 9 df, P = .051) and neither was a model to predict the implementation of testing (likelihood ratio test = 3.34, 9 df, P = .95). Less attention is paid to peer review than testing. However, the former is a complementary, and necessary, QA practice rather than an alternative. Therefore, one can argue that there are quality risks, at least at this point in time, in transitioning biomedical OS software into any critical settings that may have operational, financial, or safety implications. Developers of biomedical OS applications should invest more effort in implementing systemic peer review practices throughout the development and maintenance processes.

  1. ANTARES constrains a blazar origin of two IceCube PeV neutrino events

    NASA Astrophysics Data System (ADS)

    ANTARES Collaboration; Adrián-Martínez, S.; Albert, A.; André, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Baret, B.; Barrios, J.; Basa, S.; Bertin, V.; Biagi, S.; Bogazzi, C.; Bormuth, R.; Bou-Cabo, M.; Bouwhuis, M. C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Chiarusi, T.; Circella, M.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; De Rosa, G.; Dekeyser, I.; Deschamps, A.; De Bonis, G.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti, Q.; Drouhin, D.; Dumas, A.; Eberl, T.; Enzenhöfer, A.; Escoffier, S.; Fehn, K.; Felis, I.; Fermani, P.; Folger, F.; Fusco, L. A.; Galatà, S.; Gay, P.; Geißelsöder, S.; Geyer, K.; Giordano, V.; Gleixner, A.; Gómez-González, J. P.; Gracia-Ruiz, R.; Graf, K.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Herrero, A.; Hößl, J.; Hofestädt, J.; Hugon, C.; James, C. W.; de Jong, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kooijman, P.; Kouchner, A.; Kulikovskiy, V.; Lahmann, R.; Lattuada, D.; Lefèvre, D.; Leonora, E.; Loehner, H.; Loucatos, S.; Mangano, S.; Marcelin, M.; Margiotta, A.; Martínez-Mora, J. A.; Martini, S.; Mathieu, A.; Michael, T.; Migliozzi, P.; Neff, M.; Nezri, E.; Palioselitis, D.; Păvălaş, G. E.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Racca, C.; Riccobene, G.; Richter, R.; Roensch, K.; Rostovtsev, A.; Saldaña, M.; Samtleben, D. F. E.; Sánchez-Losa, A.; Sanguineti, M.; Sapienza, P.; Schmid, J.; Schnabel, J.; Schulte, S.; Schüssler, F.; Seitz, T.; Sieger, C.; Spies, A.; Spurio, M.; Steijger, J. J. M.; Stolarczyk, Th.; Taiuti, M.; Tamburini, C.; Tayalati, Y.; Trovato, A.; Tselengidou, M.; Tönnis, C.; Vallage, B.; Vallée, C.; Van Elewyck, V.; Visser, E.; Vivolo, D.; Wagner, S.; de Wolf, E.; Yepes, H.; Zornoza, J. D.; Zúñiga, J.; TANAMI Collaboration; Krauß, F.; Kadler, M.; Mannheim, K.; Schulz, R.; Trüstedt, J.; Wilms, J.; Ojha, R.; Ros, E.; Baumgartner, W.; Beuchert, T.; Blanchard, J.; Bürkel, C.; Carpenter, B.; Edwards, P. G.; Eisenacher Glawion, D.; Elsässer, D.; Fritsch, U.; Gehrels, N.; Gräfe, C.; Großberger, C.; Hase, H.; Horiuchi, S.; Kappes, A.; Kreikenbohm, A.; Kreykenbohm, I.; Langejahn, M.; Leiter, K.; Litzinger, E.; Lovell, J. E. J.; Müller, C.; Phillips, C.; Plötz, C.; Quick, J.; Steinbring, T.; Stevens, J.; Thompson, D. J.; Tzioumis, A. K.

    2015-04-01

    Context. The source(s) of the neutrino excess reported by the IceCube Collaboration is unknown. The TANAMI Collaboration recently reported on the multiwavelength emission of six bright, variable blazars which are positionally coincident with two of the most energetic IceCube events. Objects like these are prime candidates to be the source of the highest-energy cosmic rays, and thus of associated neutrino emission. Aims: We present an analysis of neutrino emission from the six blazars using observations with the ANTARES neutrino telescope. Methods: The standard methods of the ANTARES candidate list search are applied to six years of data to search for an excess of muons - and hence their neutrino progenitors - from the directions of the six blazars described by the TANAMI Collaboration, and which are possibly associated with two IceCube events. Monte Carlo simulations of the detector response to both signal and background particle fluxes are used to estimate the sensitivity of this analysis for different possible source neutrino spectra. A maximum-likelihood approach, using the reconstructed energies and arrival directions of through-going muons, is used to identify events with properties consistent with a blazar origin. Results: Both blazars predicted to be the most neutrino-bright in the TANAMI sample (1653-329 and 1714-336) have a signal flux fitted by the likelihood analysis corresponding to approximately one event. This observation is consistent with the blazar-origin hypothesis of the IceCube event IC 14 for a broad range of blazar spectra, although an atmospheric origin cannot be excluded. No ANTARES events are observed from any of the other four blazars, including the three associated with IceCube event IC20. This excludes at a 90% confidence level the possibility that this event was produced by these blazars unless the neutrino spectrum is flatter than -2.4. Figures 2, 3 and Appendix A are available in electronic form at http://www.aanda.org
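
    The abstract does not spell out the likelihood, so the sketch below shows only the generic unbinned point-source form commonly used in neutrino-telescope searches: each event contributes a mixture of a signal term and a background term weighted by the fitted number of signal events n_s, and the fit maximizes the summed log-likelihood over n_s. The per-event PDF values here are placeholders; a real analysis would build them from the detector point-spread function around the source direction and the reconstructed muon energy.

        import numpy as np

        def point_source_loglike(ns, signal_pdf, background_pdf):
            """Unbinned mixture likelihood: sum_i log[(ns/N) S_i + (1 - ns/N) B_i]."""
            n_tot = signal_pdf.size
            mix = (ns / n_tot) * signal_pdf + (1.0 - ns / n_tot) * background_pdf
            return float(np.sum(np.log(mix)))

        rng = np.random.default_rng(4)
        n_events = 500
        # Placeholder per-event PDF values; a real search derives S_i from the point-spread
        # function and energy response, and B_i from scrambled or off-source data.
        background_pdf = np.full(n_events, 1.0 / (4.0 * np.pi))
        signal_pdf = background_pdf * rng.lognormal(0.0, 1.0, n_events)

        ns_grid = np.linspace(0.0, 20.0, 201)
        loglikes = [point_source_loglike(ns, signal_pdf, background_pdf) for ns in ns_grid]
        print("best-fit number of signal events:", ns_grid[int(np.argmax(loglikes))])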

  2. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  3. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets > or =4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak

  4. Branching-ratio approximation for the self-exciting Hawkes process

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bouchaud, Jean-Philippe

    2014-12-01

    We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
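
    One commonly quoted form of such a moment-based estimator uses the fact that, for a stationary Hawkes process with branching ratio n and a counting window much longer than the kernel support, Var(N)/E[N] approaches 1/(1-n)². The sketch below implements that form; whether it matches the paper's exact expression should be checked against the original, and the Poisson test stream is only a sanity check.

        import numpy as np

        def branching_ratio_estimate(event_times, window):
            """Moment-based branching-ratio estimate for a self-exciting (Hawkes) process.

            Uses n_hat = 1 - sqrt(mean/variance) of window counts, which follows from
            Var(N)/E[N] -> 1/(1 - n)^2 for windows much longer than the kernel support."""
            edges = np.arange(0.0, event_times.max(), window)
            counts, _ = np.histogram(event_times, bins=edges)
            return 1.0 - np.sqrt(counts.mean() / counts.var(ddof=1))

        # Sanity check on a plain Poisson stream, whose true branching ratio is zero.
        rng = np.random.default_rng(0)
        poisson_times = np.cumsum(rng.exponential(1.0, size=20000))
        print(branching_ratio_estimate(poisson_times, window=100.0))   # close to 0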

  5. Estimation of channel parameters and background irradiance for free-space optical link.

    PubMed

    Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk

    2013-05-10

    Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of channel parameters and the scintillation index (SI) depends on perfect removal of background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution. While the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated, from low- to high-SI regimes, using both simulation data and experimental measurements. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
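
    The abstract names the MV, MP, and ML estimators without giving their formulas, so the sketch below only illustrates the simplest minimum-value idea, reading the smallest received sample as the background estimate, and shows how an uncorrected background biases the scintillation index. It is an illustrative reading of the method names, not the paper's estimators.

        import numpy as np

        def scintillation_index(samples):
            """SI = <I^2>/<I>^2 - 1 for background-free irradiance samples."""
            return float(samples.var() / samples.mean() ** 2)

        def mv_background_removal(received):
            """Minimum-value idea: subtract the smallest received sample as the background
            estimate (illustrative; it over-subtracts when the fading never reaches zero)."""
            return received - received.min()

        rng = np.random.default_rng(2)
        fading = rng.lognormal(mean=0.0, sigma=0.4, size=5000)   # toy scintillation samples
        received = fading + 0.5                                  # add a constant background
        print("true SI:             ", scintillation_index(fading))
        print("SI, background kept:  ", scintillation_index(received))
        print("SI, after MV removal: ", scintillation_index(mv_background_removal(received)))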

  6. Bayesian penalized-likelihood reconstruction algorithm suppresses edge artifacts in PET reconstruction based on point-spread-function.

    PubMed

    Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro

    2018-03-01

    The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses a relative difference penalty as a regularization function to control image noise and the degree of edge-preservation in PET images. The present study aimed to determine how Q.Clear suppresses the edge artifacts caused by point-spread-function (PSF) correction. Spheres of a cylindrical phantom contained a background of 5.3 kBq/mL of [18F]FDG and sphere-to-background ratios (SBR) of 16, 8, 4 and 2. The background also contained water and spheres containing 21.2 kBq/mL of [18F]FDG as non-background. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment, and profile curves, edge variability, and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and non-background when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, a higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. A higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Rice University observations of the galactic center

    NASA Technical Reports Server (NTRS)

    Meegan, C. A.

    1978-01-01

    The most sensitive of the four balloon flight observations of the galactic center made by Rice University was conducted in 1974 from Rio Cuarto, Argentina, at a float altitude of 4 mbar. The count rate spectrum of the observed background and the energy spectrum of the galactic center region are discussed. The detector used consists of a 6 inch NaI(Tl) central detector collimated to approximately 15 deg FWHM by a NaI(Tl) anticoincidence shield. The shield is at least two interaction mean free paths thick at all gamma ray energies. The instrumental resolution is approximately 11% FWHM at 662 keV. Pulses from the central detector are analyzed by two 256 channel PHA's covering the energy range approximately 20 keV to approximately 12 MeV. The detector is equatorially mounted and pointed by command from the ground. Observations are made by measuring source and background alternately for 10 minute periods. Background is measured by rotating the detector 180 deg about the azimuthal axis.

  8. A two-fluid approximation for calculating the cosmic microwave background anisotropies

    NASA Technical Reports Server (NTRS)

    Seljak, Uros

    1994-01-01

    We present a simplified treatment for calculating the cosmic microwave background anisotropy power spectrum in adiabatic models. It consists of solving for the evolution of a two-fluid model until the epoch of recombination and then integrating over the sources to obtain the cosmic microwave background (CMB) anisotropy power spectrum. The approximation is useful both for a physical understanding of CMB anisotropies as well as for a quantitative analysis of cosmological models. Comparison with exact calculations shows that the accuracy is typically 10%-20% over a large range of angles and cosmological models, including those with curvature and cosmological constant. Using this approximation we investigate the dependence of the CMB anisotropy on the cosmological parameters. We identify six dimensionless parameters that uniquely determine the anisotropy power spectrum within our approximation. CMB experiments on different angular scales could in principle provide information on all these parameters. In particular, mapping of the Doppler peaks would allow an independent determination of baryon mass density, matter mass density, and the Hubble constant.

  9. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case that considers four alternative models postulated from different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
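
    A minimal sketch of the thermodynamic (power-posterior) identity behind the method, log p(y) = ∫₀¹ E_β[log p(y|θ)] dβ, on a toy conjugate-normal model where each heated posterior can be sampled directly and the exact marginal likelihood is available for comparison. In the groundwater applications described here, each β rung would instead be its own MCMC run; the model and all numbers below are illustrative only.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy conjugate model: y_i ~ N(theta, sigma^2) with theta ~ N(0, tau^2).
        sigma, tau, n = 1.0, 2.0, 20
        y = 1.5 + sigma * rng.standard_normal(n)

        betas = np.linspace(0.0, 1.0, 21)
        expected_ll = []
        for beta in betas:
            # Power posterior p_beta(theta | y) ~ p(y | theta)^beta p(theta) is Gaussian here,
            # so it can be sampled directly; in general each rung is a separate MCMC chain.
            prec = 1.0 / tau ** 2 + beta * n / sigma ** 2
            mean = (beta * y.sum() / sigma ** 2) / prec
            thetas = mean + rng.standard_normal(5000) / np.sqrt(prec)
            ll = (-0.5 * ((y[None, :] - thetas[:, None]) ** 2).sum(axis=1) / sigma ** 2
                  - 0.5 * n * np.log(2.0 * np.pi * sigma ** 2))
            expected_ll.append(ll.mean())

        # Thermodynamic integration: trapezoid rule over E_beta[log p(y | theta)].
        log_evidence_ti = sum(0.5 * (expected_ll[i] + expected_ll[i - 1]) * (betas[i] - betas[i - 1])
                              for i in range(1, len(betas)))

        # Exact log marginal likelihood for comparison: y ~ N(0, sigma^2 I + tau^2 11^T).
        cov = sigma ** 2 * np.eye(n) + tau ** 2 * np.ones((n, n))
        _, logdet = np.linalg.slogdet(cov)
        log_evidence_exact = -0.5 * (y @ np.linalg.solve(cov, y) + logdet + n * np.log(2.0 * np.pi))

        print(log_evidence_ti, log_evidence_exact)   # should agree closely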

  10. Factors associated with the hospital admission of consumer product-related injuries treated in U.S. hospital emergency departments.

    PubMed

    Schroeder, Thomas J; Rodgers, Gregory B

    2013-10-01

    While unintentional injuries and hazard patterns involving consumer products have been studied extensively in recent years, little attention has focused on the characteristics of those who are hospitalized after treatment in emergency departments, as opposed to those treated and released. This study quantifies the impact of the age and sex of the injury victims, and other factors, on the likelihood of hospitalization. The analysis focuses on consumer product injuries, and was based on approximately 400,000 injury cases reported through the U.S. Consumer Product Safety Commission's National Electronic Injury Surveillance System, a national probability sample of U.S. hospital emergency departments. Logistic regression was used to quantify the factors associated with the likelihood of hospitalization. The analysis suggests a smooth U-shaped relationship between the age of the victim and the likelihood of hospitalization, declining from about 3.4% for children under age 5 years to 1.9% for 15-24 year-olds, but then rising to more than 25% for those ages 75 years and older. The likelihood of hospitalization was also significantly affected by the victim's sex, as well as by the types of products involved, fire involvement, and the size and type of hospital at which the injury was treated. This study shows that the probability of hospitalization is strongly correlated with the characteristics of those who are injured, as well as other factors. Published by Elsevier Ltd.

  11. The retention of health human resources in primary healthcare centers in Lebanon: a national survey.

    PubMed

    Alameddine, Mohamad; Saleh, Shadi; El-Jardali, Fadi; Dimassi, Hani; Mourad, Yara

    2012-11-22

    Critical shortages of health human resources (HHR), associated with high turnover rates, have been a concern in many countries around the globe. Of particular interest is the effect of such a trend on the primary healthcare (PHC) sector; considered a cornerstone in any effective healthcare system. This study is a rare attempt to investigate PHC HHR work characteristics, level of burnout and likelihood to quit as well as the factors significantly associated with staff retention at PHC centers in Lebanon. A cross-sectional design was utilized to survey all health providers at 81 PHC centers dispersed in all districts of Lebanon. The questionnaire consisted of four sections: socio-demographic/ professional background, organizational/institutional characteristics, likelihood to quit and level of professional burnout (using the Maslach-Burnout Inventory). A total of 755 providers completed the questionnaire (60.5% response rate). Bivariate analyses and multinomial logistic regression were used to determine factors associated with likelihood to quit. Two out of five respondents indicated likelihood to quit their jobs within the next 1-3 years and an additional 13.4% were not sure about quitting. The top three reasons behind likelihood to quit were poor salary (54.4%), better job opportunities outside the country (35.1%) and lack of professional development (33.7%). A U-shaped relationship was observed between age and likelihood to quit. Regression analysis revealed that high levels of burnout, lower level of education and low tenure were all associated with increased likelihood to quit. The study findings reflect an unstable workforce and are not conducive to supporting an expanded role for PHC in the Lebanese healthcare system. While strategies aiming at improving staff retention would be important to develop and implement for all PHC HHR; targeted retention initiatives should focus on the young-new recruits and allied health professionals. Particular attention should be dedicated to enhancing providers' role satisfaction and sense of job security. Such initiatives are of pivotal importance to stabilize the workforce and ensure its longevity.

  12. Controlling behavior, power relations within intimate relationships and intimate partner physical and sexual violence against women in Nigeria

    PubMed Central

    2011-01-01

    Background Controlling behavior is more common and can be equally or more threatening than physical or sexual violence. This study sought to determine the role of husband/partner controlling behavior and power relations within intimate relationships in the lifetime risk of physical and sexual violence in Nigeria. Methods This study used secondary data from a cross-sectional nationally-representative survey collected by face-to-face interviews from women aged 15-49 years in the 2008 Nigeria Demographic and Health Survey. Utilizing a stratified two-stage cluster sample design, data were collected from 19,216 eligible women with the DHS domestic violence module, which is based on the Conflict Tactics Scale (CTS). Multivariate logistic regression analysis was used to determine the role of husband/partner controlling behavior in the risk of ever experiencing physical and sexual violence among 2877 women aged 15-49 years who were currently or formerly married or cohabiting with a male partner. Results Women who reported controlling behavior by husband/partner had a higher likelihood of experiencing physical violence (RR = 3.04; 95% CI: 2.50-3.69), and women resident in rural areas and working in low status occupations had increased likelihood of experiencing physical IPV. Controlling behavior by husband/partner was associated with higher likelihood of experiencing sexual violence (RR = 4.01; 95% CI: 2.54-6.34). In addition, women who justified wife beating and earned more than their husband/partner were at higher likelihood of experiencing physical and sexual violence. In contrast, women who had decision-making autonomy had lower likelihood of experiencing physical and sexual violence. Conclusion Controlling behavior by husband/partner significantly increases the likelihood of physical and sexual IPV, thus acting as a precursor to violence. Findings emphasize the need to adopt a proactive integrated approach to controlling behavior and intimate partner violence within the society. PMID:21714854

  13. Expansion of health insurance in Moldova and associated improvements in access and reductions in direct payments

    PubMed Central

    Hone, Thomas; Habicht, Jarno; Domente, Silviu; Atun, Rifat

    2016-01-01

    Background Moldova is the poorest country in Europe. Economic constraints mean that Moldova faces challenges in protecting individuals from excessive costs, improving population health and securing health system sustainability. The Moldovan government has introduced a state benefit package and expanded health insurance coverage to reduce the burden of health care costs for citizens. This study examines the effects of expanded health insurance by examining factors associated with health insurance coverage, likelihood of incurring out-of-pocket (OOP) payments for medicines or services, and the likelihood of forgoing health care when unwell. Methods Using publicly available databases and the annual Moldova Household Budgetary Survey, we examine trends in health system financing, health care utilization, health insurance coverage, and costs incurred by individuals for the years 2006-2012. We perform logistic regression to assess the likelihood of having health insurance, incurring a cost for health care, and forgoing health care when ill, controlling for socio-economic and demographic covariates. Findings Private expenditure accounted for 55.5% of total health expenditures in 2012. Of private health expenditures, 83.2% are OOP payments, especially for medicines. Healthcare utilization is in line with EU averages of 6.93 outpatient visits per person. Being uninsured is associated with those aged 25-49 years, the self-employed, unpaid family workers, and the unemployed, although we find a lower likelihood of being uninsured for some of these groups over time. Over time, the likelihood of OOP for medicines increased (odds ratio OR = 1.422 in 2012 compared to 2006), but fell for health care services (OR = 0.873 in 2012 compared to 2006). No insurance, being older, and being male were associated with increased likelihood of forgoing health care when sick, and we found the likelihood of forgoing health care to be increasing over time (OR = 1.295 in 2012 compared to 2009). Conclusions Moldova has achieved improvements in health insurance coverage with reductions in OOP for services, which are modest but are eroded by the increasing likelihood of OOP for medicines. Insurance coverage was an important determinant of health care costs incurred by patients and of patients forgoing health care. Improvements notwithstanding, there is an unfinished agenda of attaining universal health coverage in Moldova to protect individuals from health care costs. PMID:27909581

  14. Variations on Bayesian Prediction and Inference

    DTIC Science & Technology

    2016-05-09

    There are a number of statistical inference problems that are not generally formulated via a full probability model. In the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.

  15. Nutritional and developmental status among 6- to 8-month-old children in southwestern Uganda: a cross-sectional study

    PubMed Central

    Muhoozi, Grace K. M.; Atukunda, Prudence; Mwadime, Robert; Iversen, Per Ole; Westerberg, Ane C.

    2016-01-01

    Background Undernutrition continues to pose challenges to Uganda's children, but there is limited knowledge on its association with physical and intellectual development. Objective In this cross-sectional study, we assessed the nutritional status and milestone development of 6- to 8-month-old children and associated factors in two districts of southwestern Uganda. Design Five hundred and twelve households with mother–infant (6–8 months) pairs were randomly sampled. Data about background variables (e.g. household characteristics, poverty likelihood, and child dietary diversity scores (CDDS)) were collected using questionnaires. Bayley Scales of Infant and Toddler Development (BSID III) and Ages and Stages questionnaires (ASQ) were used to collect data on child development. Anthropometric measures were used to determine z-scores for weight-for-age (WAZ), length-for-age (LAZ), weight-for-length (WLZ), head circumference (HCZ), and mid-upper arm circumference. Chi-square tests, correlation coefficients, and linear regression analyses were used to relate background variables, nutritional status indicators, and infant development. Results The prevalence of underweight, stunting, and wasting was 12.1, 24.6, and 4.7%, respectively. Household head education, gender, sanitation, household size, maternal age and education, birth order, poverty likelihood, and CDDS were associated (p<0.05) with WAZ, LAZ, and WLZ. Regression analysis showed that gender, sanitation, CDDS, and likelihood to be below the poverty line were predictors (p<0.05) of undernutrition. BSID III indicated development delay of 1.3% in cognitive and language, and 1.6% in motor development. The ASQ indicated delayed development of 24, 9.1, 25.2, 12.2, and 15.1% in communication, fine motor, gross motor, problem solving, and personal social ability, respectively. All nutritional status indicators except HCZ were positively and significantly associated with development domains. WAZ was the main predictor for all development domains. Conclusion Undernutrition among infants living in impoverished rural Uganda was associated with household sanitation, poverty, and low dietary diversity. Development domains were positively and significantly associated with nutritional status. Nutritional interventions might add value to improvement of child growth and development. PMID:27238555

  16. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-07

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  17. Time-of-flight PET image reconstruction using origin ensembles

    NASA Astrophysics Data System (ADS)

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  18. Driving Behaviors in Iran: A Descriptive Study Among Drivers of Mashhad City in 2014

    PubMed Central

    Bazzaz, Mojtaba Mousavi; Zarifian, Ahmadreza; Emadzadeh, Maryam; Vakili, Veda

    2015-01-01

    Background: Driver-related behaviors are substantial causes of motor vehicle accidents. It has been estimated that about 95% of all accidents are due to driver-related dangerous behaviors and approximately 60% of accidents are directly caused by driving behaviors. The aim of this study was to assess driving behaviors and their possible related factors among drivers in Mashhad city, Iran. Method: In a cross-sectional design, a total of 514 drivers in Mashhad, Iran, were surveyed. The 50-question Manchester driver behavior questionnaire evaluated dangerous driving behaviors in 4 categories: “aggressive violations”, “ordinary violations”, “errors” and “lapses”. Results: In this study, the median age of drivers was 31. In addition, 58.2% of men reported a history of driving accidents. Our study indicated smoking and alcohol drinking as risk factors for having more accidents. Hookah abuse is a predictor of aggressive violations and errors. Conclusion: This is the first study to assess the relation of a personal car and its market value to the likelihood of having accidents. Due to the major influences of driving fines, cigarette smoking, alcohol consumption and addiction on violations and errors, we recommend pivotal measures to be taken by road safety practitioners regarding driving surveillance. PMID:26153202

  19. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504

  20. Can obviously intoxicated patrons still easily buy alcohol at on-premise establishments?

    PubMed Central

    Toomey, Traci L.; Lenk, Kathleen M.; Nederhoff, Dawn M.; Nelson, Toben F.; Ecklund, Alexandra M.; Horvath, Keith J.; Erickson, Darin J.

    2015-01-01

    Background Excessive alcohol consumption at licensed alcohol establishments (i.e., bars and restaurants) has been directly linked to alcohol-related problems such as traffic crashes and violence. Historically, alcohol establishments have had a high likelihood of selling alcohol to obviously intoxicated patrons (also referred to as “overservice”) despite laws prohibiting these sales. Given the risks associated with overservice and the need for up-to-date data, it is critical that we monitor the likelihood of sales to obviously intoxicated patrons. Methods To assess the current likelihood of a licensed alcohol establishment selling alcohol to an obviously intoxicated patron, we conducted pseudo-intoxicated purchase attempts (i.e., actors attempt to purchase alcohol while acting out obvious signs of intoxication) at 340 establishments in one Midwestern metropolitan area. We also measured characteristics of the establishments, the pseudo-intoxicated patrons, the servers, the managers, and the neighborhoods to assess whether these characteristics were associated with the likelihood of sales to obviously intoxicated patrons. We assessed these associations with bivariate and multivariate regression models. Results Pseudo-intoxicated buyers were able to purchase alcohol at 82% of the establishments. In the fully adjusted multivariate regression model, only one of the characteristics we assessed was significantly associated with the likelihood of selling to intoxicated patrons: establishments owned by a corporate entity had 3.6 times greater odds of selling alcohol to a pseudo-intoxicated buyer compared to independently-owned establishments. Discussion Given the risks associated with overservice of alcohol, more resources should be devoted first to identifying effective interventions for decreasing overservice of alcohol and then to educating practitioners who are working in their communities to address this public health problem. PMID:26891204

  1. Do Major Field of Study and Cultural Familiarity Affect TOEFL[R] iBT Reading Performance? A Confirmatory Approach to Differential Item Functioning

    ERIC Educational Resources Information Center

    Liu, Ou Lydia

    2011-01-01

    The TOEFL[R] iBT has increased the length of each reading passage to better approximate academic reading at North American universities, resulting in a reduction in the number of passages on the reading section of the test. One of the concerns brought about by this change is whether the decrease in topic variety increases the likelihood that an…

  2. Optimism following a tornado disaster.

    PubMed

    Suls, Jerry; Rose, Jason P; Windschitl, Paul D; Smith, Andrew R

    2013-05-01

    Effects of exposure to a severe weather disaster on perceived future vulnerability were assessed in college students, local residents contacted through random-digit dialing, and community residents of affected versus unaffected neighborhoods. Students and community residents reported being less vulnerable than their peers at 1 month, 6 months, and 1 year after the disaster. In Studies 1 and 2, absolute risk estimates were more optimistic with time, whereas comparative vulnerability was stable. Residents of affected neighborhoods (Study 3), surprisingly, reported less comparative vulnerability and lower "gut-level" numerical likelihood estimates at 6 months, but later their estimates resembled the unaffected residents. Likelihood estimates (10%-12%), however, exceeded the 1% risk calculated by storm experts, and gut-level versus statistical-level estimates were more optimistic. Although people believed they had approximately a 1-in-10 chance of injury from future tornadoes (i.e., an overestimate), they thought their risk was lower than peers.

  3. Multigene analysis of lophophorate and chaetognath phylogenetic relationships.

    PubMed

    Helmkampf, Martin; Bruchhaus, Iris; Hausdorf, Bernhard

    2008-01-01

    Maximum likelihood and Bayesian inference analyses of seven concatenated fragments of nuclear-encoded housekeeping genes indicate that Lophotrochozoa is monophyletic, i.e., the lophophorate groups Bryozoa, Brachiopoda and Phoronida are more closely related to molluscs and annelids than to Deuterostomia or Ecdysozoa. Lophophorates themselves, however, form a polyphyletic assemblage. The hypotheses that they are monophyletic and more closely allied to Deuterostomia than to Protostomia can be ruled out with both the approximately unbiased test and the expected likelihood weights test. The existence of Phoronozoa, a putative clade including Brachiopoda and Phoronida, has also been rejected. According to our analyses, phoronids instead share a more recent common ancestor with bryozoans than with brachiopods. Platyhelminthes is the sister group of Lophotrochozoa. Together these two constitute Spiralia. Although Chaetognatha appears as the sister group of Priapulida within Ecdysozoa in our analyses, alternative hypothesis concerning chaetognath relationships could not be rejected.

  4. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  5. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  6. ON THE PROPER USE OF THE REDUCED SPEED OF LIGHT APPROXIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov

    I show that the reduced speed of light (RSL) approximation, when used properly (i.e., as originally designed—only for local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the “Cosmic Reionization on Computers” project are insensitive to the adopted value of the RSL for as long as that value does not fall below about 10% of the true speed of light. A recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too and hence illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.

  7. Constraints to Dark Energy Using PADE Parameterizations

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.

    2017-07-01

    We put constraints on dark energy (DE) properties using PADE parameterization, and compare it to the same constraints using Chevallier-Polarski-Linder (CPL) and ΛCDM, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of PADE expansion. Unlike CPL parameterization, PADE approximation provides different forms of the equation of state parameter that avoid the divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of PADE parameterizations and compare them with CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current PADE parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in PADE cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, namely $\gamma_{\infty} = \frac{3(w_{\infty}-1)}{6 w_{\infty}-5}$, while in the case of clustered DE, we obtain $\gamma_{\infty} \simeq \frac{3 w_{\infty}(3 w_{\infty}-5)}{(6 w_{\infty}-5)(3 w_{\infty}-1)}$. Finally, we generalize the growth index analysis in the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in PADE parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
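
    As a quick consistency check (added here; it is not part of the abstract), the homogeneous-DE expression reduces to the familiar ΛCDM value of the growth index in the limit $w_{\infty} \to -1$:

        $\gamma_{\infty} = \frac{3(w_{\infty}-1)}{6 w_{\infty}-5} \;\to\; \frac{3(-1-1)}{6(-1)-5} = \frac{-6}{-11} = \frac{6}{11} \approx 0.545.$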

  8. Validation of the diagnostic score for acute lower abdominal pain in women of reproductive age.

    PubMed

    Jearwattanakanok, Kijja; Yamada, Sirikan; Suntornlimsiri, Watcharin; Smuthtai, Waratsuda; Patumanond, Jayanton

    2014-01-01

    Background. The differential diagnoses of acute appendicitis, obstetric and gynecological conditions (OB-GYNc), or nonspecific abdominal pain in young adult females with lower abdominal pain are clinically challenging. The present study aimed to validate the recently developed clinical score for the diagnosis of acute lower abdominal pain in females of reproductive age. Method. Medical records of reproductive age women (15-50 years) who were admitted for acute lower abdominal pain were collected. Validation data were obtained from patients admitted during a different period from the development data. Result. There were 302 patients in the validation cohort. For appendicitis, the score had a sensitivity of 91.9%, a specificity of 79.0%, and a positive likelihood ratio of 4.39. The sensitivity, specificity, and positive likelihood ratio in diagnosis of OB-GYNc were 73.0%, 91.6%, and 8.73, respectively. The areas under the receiver operating characteristic (ROC) curves and the positive likelihood ratios for appendicitis and OB-GYNc in the validation data were not significantly different from those in the development data, implying similar performance. Conclusion. The clinical score developed for the diagnosis of acute lower abdominal pain in females of reproductive age may be applied to guide differential diagnoses in these patients.
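
    The positive likelihood ratios reported above follow directly from sensitivity and specificity via LR+ = sensitivity / (1 − specificity); a quick check of the quoted figures (small differences are expected because the published sensitivity and specificity are themselves rounded):

```python
# Positive likelihood ratio from sensitivity and specificity: LR+ = sens / (1 - spec).
def lr_positive(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

print(round(lr_positive(0.919, 0.790), 2))  # appendicitis: ~4.38 (reported as 4.39)
print(round(lr_positive(0.730, 0.916), 2))  # OB-GYNc:      ~8.69 (reported as 8.73)
```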

  9. 8D likelihood effective Higgs couplings extraction framework in h → 4ℓ

    DOE PAGES

    Chen, Yi; Di Marco, Emanuele; Lykken, Joe; ...

    2015-01-23

    We present an overview of a comprehensive analysis framework aimed at performing direct extraction of all possible effective Higgs couplings to neutral electroweak gauge bosons in the decay to electrons and muons, the so-called ‘golden channel’. Our framework is based primarily on a maximum likelihood method constructed from analytic expressions of the fully differential cross sections for h → 4l and for the dominant irreducible $$q\overline{q}$$ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included by an explicit convolution of these analytic expressions with the appropriate transfer function over all center-of-mass variables. Utilizing the full set of observables, we construct an unbinned detector-level likelihood which is continuous in the effective couplings. We consider possible ZZ, Zγ, and γγ couplings simultaneously, allowing for general CP odd/even admixtures. A broad overview is given of how the convolution is performed and we discuss the principles and theoretical basis of the framework. This framework can be used in a variety of ways to study Higgs couplings in the golden channel using data obtained at the LHC and other future colliders.

  10. Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gondolo, Paolo; Scopel, Stefano, E-mail: paolo.gondolo@utah.edu, E-mail: scopel@sogang.ac.kr

    2017-09-01

    We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame from which the unmodulated signal descends. Then we describe a mathematically-sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.

  11. Extreme data compression for the CMB

    NASA Astrophysics Data System (ADS)

    Zablocki, Alan; Dodelson, Scott

    2016-04-01

    We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.

  12. Using latent class analysis to model prescription medications in the measurement of falling among a community elderly population

    PubMed Central

    2013-01-01

    Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique that could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community-dwelling elderly. Methods Using a retrospective data-set, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous population at risk for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639

  13. A Study of the Correlation between STEM Career Knowledge, Mathematics Self-Efficacy, Career Interests, and Career Activities on the Likelihood of Pursuing a STEM Career among Middle School Students

    ERIC Educational Resources Information Center

    Blotnicky, Karen A.; Franz-Odendaal, Tamara; French, Frederick; Joy, Phillip

    2018-01-01

    Background: A sample of 1448 students in grades 7 and 9 was drawn from public schools in Atlantic Canada to explore students' knowledge of science and mathematics requirements for science, technology, engineering, and mathematics (STEM) careers. Also explored were their mathematics self-efficacy (MSE), their future career interests, their…

  14. The Effects of Observation and Intervention on the Judgment of Causal and Correlational Relationships

    DTIC Science & Technology

    2009-07-28

    further referred to as normative models of causation. A second type of model, which is based on Pavlovian classical conditioning, is associative... conditions of high cognitive load), the likelihood of the accuracy of the perception is compromised. If an inaccurate perception translates to an inaccurate...correlation and causation detection in specific military operations and under conditions of operational stress. Background Models of correlation

  15. The Impact of Full Time Enrollment in the First Semester on Community College Transfer Rates: New Evidence from Texas with Pre-College Determinants

    ERIC Educational Resources Information Center

    Park, Toby J.

    2015-01-01

    Background/Context: Recent developments in state-level policy have begun to require, incentivize, and/or encourage students at community colleges to enroll full time in an effort to increase the likelihood that students will persist and transfer to four-year institution where they will be able to complete their bachelor's degree. Often, these…

  16. Disability and the impact of need for periodontal care on quality of life: A cross-sectional study

    PubMed Central

    AlAgl, Adel

    2017-01-01

    Objective The need for periodontal care may negatively impact daily life. We compared the need for periodontal care and its impact on daily life between disabled and healthy adults in the Eastern Province, Saudi Arabia. Methods In this cross-sectional study of 819 adults, a questionnaire was used to assess personal background factors; the impact of periodontitis on pain, avoiding foods, embarrassment, sleeplessness, work absence, and discontinuing daily activities; and risk factors (smoking, diabetes, toothbrushing, insurance, professional tooth cleaning, and dental visits). The outcome was clinically assessed need for periodontal care impacting daily life. The relationship between the outcome and risk factors adjusted for personal background and disability was assessed using ordinal regression. Results Healthy and disabled persons had a high need for periodontal care (66.8%). Current smokers had a higher likelihood and health-insured persons had a lower likelihood of need for periodontal care impacting daily life regardless of whether disability was considered. Conclusions Most adults needed periodontal care, and disabled persons experienced a greater impact on life. Current smokers and uninsured persons were more likely to need periodontal care impacting daily life. Our findings are important for the prevention of periodontitis through tobacco cessation and extending insurance coverage. PMID:28635358

  17. Parenting Efficacy and Support in Mothers with Dual Disorders in a Substance Abuse Treatment Program

    PubMed Central

    Brown, Suzanne; Hicks, Laurel M.; Tracy, Elizabeth M.

    2016-01-01

    Objective Approximately 73% of women entering treatment for substance use disorders are mothers of children under the age of 18 (SAMHSA, 2009), and the high rate of mental health disorders among mothers with substance use disorders increases their vulnerability to poor parenting practices. Parenting efficacy and social support for parenting have emerged as significant predictors of positive parenting practices among families at risk for child maltreatment. The purpose of the current study was to examine the impact of parenting support and parenting efficacy on the likelihood of out-of-home placement and custody status among the children of mothers with dual substance use and mental health disorders. Methods This study examined the impact of parenting efficacy, and assistance with child-care on the likelihood of child out-of-home placement and custody status among 175 mothers diagnosed with a dual substance and mental health disorder and in treatment for substance dependence. Logistic regression was utilized to assess the contributions of parenting efficacy, and the number of individuals in mothers’ social networks who assist with child-care, to the likelihood of out-of-home placement and custody loss of children. Parenting efficacy was also examined as a mediator using bootstrapping in PROCESS for SPSS. Results Greater parenting efficacy was associated with lower likelihood of having at least one child in out-of-home placement (B = −.064, SE =.029, p = .027), and lower likelihood of loss of child custody (B = −.094, SE =.034, p = .006). Greater number of children in the 6–18 age range predicted greater likelihood of having at least one child in the custody of someone else (B = .409, SE = .171, p = .017) and in out-of-home placement (B = .651, SE = .167, p < .001). Additionally, mothers who identified as African-American were less likely to have a child in out-of-home placement (B = .927, SE = .382, p = .015) or to have lost custody of a child (B = −1.31, SE = .456, p = .004). Finally, parenting efficacy mediated the relationship between parenting support and likelihood of out-of-home placement (Effect = −.0604, SE = .0297, z = 2.035, p = .042), and between parenting support and likelihood of custody loss (Effect = −.0332, SE = .0144, z = −2.298, p = .022). Conclusion Implications for practice include the utilization of personal network interventions, such as increased assistance with child-care, and increased attention to efficacy among mothers with dual disorders. PMID:27739932

  18. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
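
    A minimal sketch of the first (point-estimate matching) technique: each new record yields point estimates of the correlation at a few lags, and a Robbins-Monro step nudges the model parameter toward the minimum of the squared mismatch. The exponential correlation form, the AR(1) test process, and the 1/k gain sequence below are assumptions for illustration, not the authors' exact recursion.

```python
# Robbins-Monro sketch of the first technique: recursively fit rho(tau) = exp(-a*|tau|)
# to point estimates of the autocorrelation from successive records. The exponential
# form, the AR(1) test process, and the 1/k gain sequence are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_a, n, lags = 0.5, 2000, np.arange(1, 6)
phi = np.exp(-true_a)                         # AR(1) coefficient giving rho(l) = exp(-true_a*l)

def simulate_record():
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + np.sqrt(1.0 - phi ** 2) * rng.standard_normal()
    return x

a_hat = 1.0                                   # initial guess
for k in range(1, 201):                       # successive records
    x = simulate_record()
    rho_hat = np.array([np.corrcoef(x[:-l], x[l:])[0, 1] for l in lags])
    model = np.exp(-a_hat * lags)
    grad = np.sum(2.0 * (rho_hat - model) * lags * model)   # d/da of the squared mismatch
    a_hat -= grad / k                         # decreasing-gain stochastic-approximation step
print("estimated a:", a_hat, " true a:", true_a)
```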

  19. New cosmic microwave background constraint to primordial gravitational waves.

    PubMed

    Smith, Tristan L; Pierpaoli, Elena; Kamionkowski, Marc

    2006-07-14

    Primordial gravitational waves (GWs) with frequencies ≳ 10^-15 Hz contribute to the radiation density of the Universe at the time of decoupling of the cosmic microwave background (CMB). This affects the CMB and matter power spectra in a manner identical to massless neutrinos, unless the initial density perturbation for the GWs is nonadiabatic, as may occur if such GWs are produced during inflation or some post-inflation phase transition. In either case, current observations provide a constraint to the GW amplitude that competes with that from big-bang nucleosynthesis (BBN), although it extends to much lower frequencies (approximately 10^-15 Hz rather than the approximately 10^-10 Hz from BBN): at 95% confidence level, Ω_gw h²

  20. Opting for rural practice: the influence of medical student origin, intention and immersion experience.

    PubMed

    Playford, Denese; Ngo, Hanh; Gupta, Surabhi; Puddey, Ian B

    2017-08-21

    To compare the influence of rural background, rural intent at medical school entry, and Rural Clinical School (RCS) participation on the likelihood of later participation in rural practice. Analysis of linked data from the Medical School Outcomes Database Commencing Medical Students Questionnaire (CMSQ), routinely collected demographic information, and the Australian Health Practitioner Regulation Agency database on practice location. University of Western Australia medical students who completed the CMSQ during 2006-2010 and were practising medicine in 2016. Medical practice in rural areas (ASGC-RAs 2-5) during postgraduate years 2-5. Full data were available for 508 eligible medical graduates. Rural background (OR, 3.91; 95% CI, 2.12-7.21; P < 0.001) and experience in an RCS (OR, 1.93; 95% CI, 1.05-3.54; P = 0.034) were significant predictors of rural practice in the multivariate analysis of all potential factors. When interactions between intention, origin, and RCS experience were included, RCS participation significantly increased the likelihood of graduates with an initial rural intention practising in a rural location (OR, 3.57; 95% CI, 1.25-10.2; P = 0.017). The effect of RCS participation was not significant if there was no pre-existing intention to practise rurally (OR, 1.38; 95% CI, 0.61-3.16; P = 0.44). For students who entered medical school with the intention to later work in a rural location, RCS experience was the deciding factor for realising this intention. Background, intent and RCS participation should all be considered if medical schools are to increase the proportion of graduates working rurally.

  1. On the proper use of the reduced speed of light approximation

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-12-07

    I show that the Reduced Speed of Light (RSL) approximation, when used properly (i.e. as originally designed - only for the local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the "Cosmic Reionization On Computers" (CROC) project are insensitive to the adopted value of the reduced speed of light for as long as that value does not fall below about 10% of the true speed of light. Here, a recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too, and, hence, illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.

  3. Hardware Implementation of Serially Concatenated PPM Decoder

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop. Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows a sum of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
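
    The max-star operation mentioned above can be written very compactly; the snippet below is the floating-point equivalent (in the FPGA the correction term comes from a small lookup table rather than a logarithm).

```python
# The max-star operation: log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|)).
# In the FPGA the correction term comes from a small lookup table; this is the
# floating-point equivalent.
import math

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

a, b = 2.3, -0.7
print(max_star(a, b), math.log(math.exp(a) + math.exp(b)))   # the two agree
```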

  4. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    NASA Astrophysics Data System (ADS)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
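
    A minimal sketch of the basic step described above, with a synthetic test series: a cubic spline with equidistant interior knots approximates the background, and the residuals carry the superimposed fluctuations. The repeating-spline refinement is not reproduced here.

```python
# Cubic spline with equidistant interior knots ("spline sampling points") as a
# background estimate; the residuals carry the superimposed fluctuations. The test
# series below is synthetic; the repeating-spline refinement is not reproduced here.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 500)
background = 0.5 * x + np.sin(0.3 * x)                  # slowly varying part
y = background + 0.2 * np.sin(6.0 * x) + 0.05 * rng.standard_normal(x.size)

n_points = 8                                            # number of spline sampling points
knots = np.linspace(x[0], x[-1], n_points + 2)[1:-1]    # equidistant interior knots
spline = LSQUnivariateSpline(x, y, knots, k=3)

residuals = y - spline(x)                               # fluctuations (plus possible artifacts)
print("rms of residuals:", residuals.std())
```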

  5. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    PubMed

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
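
    The Bartlett principle behind the proposed correction can be illustrated with a simple mean rescaling: generate replicate values of the statistic under the fitted model and rescale so that the empirical mean matches the nominal degrees of freedom. The paper's empirical correction is more refined (it predicts the factor from N and p); the replicates below are synthetic stand-ins.

```python
# Mean-rescaling (Bartlett-type) correction: scale a likelihood-ratio-type statistic
# so its empirical mean matches the nominal chi-square degrees of freedom. The
# replicates here are synthetic stand-ins; in practice they would come from data
# simulated under the fitted SEM model, and the paper predicts the factor from N and p.
import numpy as np

rng = np.random.default_rng(3)
df = 20
T_replicates = 1.25 * rng.chisquare(df, size=5000)   # an inflated statistic, mimicking T_ML

c = df / T_replicates.mean()                          # empirical Bartlett-type factor
print("mean before:", T_replicates.mean(),
      " mean after:", (c * T_replicates).mean(), " target:", df)
```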

  6. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
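
    The "likelihood-free" idea can be illustrated with a bare-bones ABC rejection sampler; this is only a sketch of the principle, not astroABC's Sequential Monte Carlo sampler or its API, and the Gaussian toy model is an assumption for the example.

```python
# Minimal ABC rejection sampler (illustration of the "likelihood-free" idea only;
# astroABC itself is a parallel Sequential Monte Carlo sampler with adaptive tolerances).
import numpy as np

rng = np.random.default_rng(4)
observed = rng.normal(1.7, 1.0, size=200)                 # "observed" data
s_obs = observed.mean()                                   # summary statistic

def simulate(mu):
    return rng.normal(mu, 1.0, size=200)                  # forward-model simulator

tolerance, accepted = 0.05, []
while len(accepted) < 500:
    mu = rng.uniform(-5.0, 5.0)                           # draw from a flat prior
    if abs(simulate(mu).mean() - s_obs) < tolerance:      # keep draws close to the data
        accepted.append(mu)

print("approximate posterior mean:", np.mean(accepted), " true value: 1.7")
```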

  7. The Climate Science Special Report: Perspectives on Climate Change Mitigation

    NASA Astrophysics Data System (ADS)

    DeAngelo, B. J.

    2017-12-01

    This chapter of CSSR provides scientific context for key issues regarding the long-term mitigation of climate change. Policy analysis and recommendations are beyond the scope of CSSR. Limiting and stabilizing warming to any level implies that there is an upper limit to the cumulative amount of CO2 that can be added to the atmosphere. Eventually stabilizing the global temperature requires CO2 emissions to approach zero. For a 3.6°F (2°C) or any desired global mean temperature target, an estimated range of allowable cumulative CO2 emissions from the current period onward can be calculated. Accounting for the temperature effects of non-CO2 species, cumulative CO2 emissions are required to stay below about 800 GtC in order to provide a two-thirds likelihood of preventing 3.6°F (2°C) of warming, meaning approximately 230 GtC more could be emitted globally. Assuming global emissions follow the range between the RCP8.5 and RCP4.5 scenarios, emissions could continue for approximately two decades before this cumulative carbon threshold is exceeded. Meeting a 2.7°F (1.5°C) target implies much tighter constraints. Mitigation of non-CO2 species contributes substantially to near-term cooling benefits but cannot be relied upon for ultimate stabilization goals. Successful implementation of the first round of Nationally Determined Contributions associated with the Paris Agreement will provide some likelihood of meeting the long-term temperature goal of limiting global warming to "well below" 3.6°F (2°C) above preindustrial levels; the likelihood depends strongly on the magnitude of global emission reductions after 2030. If interest in geoengineering increases, interest will also increase in assessments of the technical feasibilities, costs, risks, co-benefits, and governance challenges of these additional measures, which are as yet unproven at scale.

  8. Approximate Bayesian Computation Using Markov Chain Monte Carlo Simulation: Theory, Concepts, and Applications

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Vrugt, J. A.

    2013-12-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a much stronger and compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we will introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM, and takes advantage of a simple computational trick to resolve discontinuity problems with the application of set-theoretic summary statistics. We will also demonstrate a set of summary statistics that are rather insensitive to errors in the forcing data. This enhances prospects of detecting model structural deficiencies.
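
    In contrast to the rejection sampler shown earlier, ABC can also be embedded in a Markov chain, which is the flavor used here; the sketch below shows the simplest such acceptance rule (an indicator on the summary-statistic distance), not the DREAM algorithm, and the Gaussian toy problem is an assumption for the example.

```python
# ABC-MCMC sketch: a Metropolis chain whose acceptance requires the simulated summary
# statistic to lie within a tolerance of the observed one (flat prior, symmetric
# proposal). Illustration only; not the DREAM algorithm referenced above.
import numpy as np

rng = np.random.default_rng(5)
observed = rng.normal(2.0, 1.0, size=300)
s_obs = observed.mean()

def summary_distance(theta):
    return abs(rng.normal(theta, 1.0, size=300).mean() - s_obs)

epsilon, theta, chain = 0.05, s_obs, []     # start from a point consistent with the data
for _ in range(20000):
    proposal = theta + 0.2 * rng.standard_normal()
    if -5.0 <= proposal <= 5.0 and summary_distance(proposal) < epsilon:
        theta = proposal                     # accept; otherwise keep the current state
    chain.append(theta)

print("ABC-MCMC posterior mean:", np.mean(chain[5000:]), " true value: 2.0")
```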

  9. The cultural influences on help-seeking among a national sample of victimized Latino women.

    PubMed

    Sabina, Chiara; Cuevas, Carlos A; Schally, Jennifer L

    2012-06-01

    The current study examined the influence of legal status and cultural variables (i.e., acculturation, gender role ideology and religious coping) on the formal and informal help-seeking efforts of Latino women who experienced interpersonal victimization. The sample was drawn from the Sexual Assault Among Latinas (SALAS) Study that surveyed 2,000 self-identified adult Latino women. The random digit dial methodology employed in high-density Latino neighborhoods resulted in a cooperation rate of 53.7%. Women who experienced lifetime victimization (n = 714) reported help-seeking efforts in response to their most distressful victimization event that occurred in the US. Approximately one-third of the women reported formal help-seeking and about 70% of women reported informal help-seeking. Help-seeking responses were generally not predicted by the cultural factors measured, with some exceptions. Anglo orientation and negative religious coping increased the likelihood of formal help-seeking. Positive religious coping, masculine gender role and Anglo acculturation increased the likelihood of specific forms of informal help-seeking. Latino orientation decreased the likelihood of talking to a sibling. Overall, these findings reinforce the importance of bilingual culturally competent services as cultural factors shape the ways in which women respond to victimization either formally or within their social networks.

  10. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
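
    The edge-preserving Huber penalty mentioned above is quadratic for small differences and linear for large ones, which is what keeps jumps sharp while smoothing noise. The sketch below applies it to a 1-D piecewise-constant signal with a Gaussian data term; both simplifications of the paper's tomographic Penalized Likelihood reconstruction, and the two parameters (lam, delta) play the role of the regularization knobs described above.

```python
# Huber-penalized least-squares sketch for a 1-D piecewise-constant signal; the
# Gaussian data term and the 1-D setting are simplifications of the tomographic
# Penalized Likelihood reconstruction described above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
truth = np.concatenate([np.zeros(40), np.ones(40)])      # signal with one sharp edge
data = truth + 0.15 * rng.standard_normal(truth.size)

def huber(t, delta):
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

def objective(x, lam=2.0, delta=0.1):
    fidelity = 0.5 * np.sum((x - data) ** 2)             # data term
    roughness = np.sum(huber(np.diff(x), delta))         # edge-preserving penalty
    return fidelity + lam * roughness                    # lam and delta: the two knobs

recon = minimize(objective, data.copy(), method="L-BFGS-B").x
print("rms error noisy:", np.sqrt(np.mean((data - truth) ** 2)),
      " reconstructed:", np.sqrt(np.mean((recon - truth) ** 2)))
```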

  11. Simpson's paradox - aggregating and partitioning populations in health disparities of lung cancer patients.

    PubMed

    Fu, P; Panneerselvam, A; Clifford, B; Dowlati, A; Ma, P C; Zeng, G; Halmos, B; Leidner, R S

    2015-12-01

    It is well known that non-small cell lung cancer (NSCLC) is a heterogeneous group of diseases. Previous studies have demonstrated genetic variation among different ethnic groups in the epidermal growth factor receptor (EGFR) in NSCLC. Research by our group and others has recently shown a lower frequency of EGFR mutations in African Americans with NSCLC, as compared to their White counterparts. In this study, we use our original study data of EGFR pathway genetics in African American NSCLC as an example to illustrate that univariate analyses based on aggregation versus partition of data leads to contradictory results, in order to emphasize the importance of controlling statistical confounding. We further investigate analytic approaches in logistic regression for data with separation, as is the case in our example data set, and apply appropriate methods to identify predictors of EGFR mutation. Our simulation shows that with separated or nearly separated data, penalized maximum likelihood (PML) produces estimates with the smallest bias and approximately maintains the nominal type I error level, with statistical power equal to or better than that from maximum likelihood and exact conditional likelihood methods. Application of the PML method in our example data set shows that race and EGFR-FISH are independently significant predictors of EGFR mutation. © The Author(s) 2011.
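
    A minimal sketch of penalized maximum likelihood for logistic regression under separation, using the Jeffreys-prior (Firth-type) penalty 0.5·log|XᵀWX|, one common form of PML; this is a generic illustration, not the exact procedure or software used in the study.

```python
# Firth-type penalized maximum likelihood (Jeffreys-prior penalty) for logistic
# regression with separated data; a generic illustration, not the study's software.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.standard_normal(60)
y = (x > 0).astype(float)                     # perfectly separated outcome
X = np.column_stack([np.ones(x.size), x])     # design matrix: intercept + predictor

def neg_penalized_loglik(beta):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    p = np.clip(p, 1e-10, 1.0 - 1e-10)
    loglik = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    fisher = X.T @ (X * (p * (1.0 - p))[:, None])      # X^T W X
    penalty = 0.5 * np.linalg.slogdet(fisher)[1]       # Jeffreys-prior term
    return -(loglik + penalty)

fit = minimize(neg_penalized_loglik, np.zeros(2), method="BFGS")
print("penalized-ML coefficients:", fit.x)    # finite despite perfect separation
```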

  12. Role of the speech-language pathologist: augmentative and alternative communication for acute care patients with severe communication impairments.

    PubMed

    Vento-Wilson, Margaret T; McGuire, Anthony; Ostergren, Jennifer A

    2015-01-01

    Severe communication deficits occur frequently in acute care. Augmentative and alternative communication (AAC) may improve patient-nurse communication, yet it remains underutilized. The objective of this study was to assess the impact of training student nurses (SNs) in acute and critical care on the use of AAC with regard to confidence levels and likelihood of implementation of AAC by SNs in acute care. Training in AAC techniques was provided to SNs. A pretraining and posttraining assessment was completed along with follow-up surveys conducted after the SNs had an opportunity to use AAC. A 6-fold increase in confidence (P < .01) was reported by the SNs after AAC training, as was an approximately 3-fold increase in likelihood of use (P < .01). The reliable yes/no was the most reported AAC technique (34.7% of the students). Providing SNs with AAC tools accompanied by brief training increases their confidence in the use of AAC and the likelihood that they will use them. Inclusion of AAC education in nursing curricula and nursing orientations could be an important step in risk reduction among patients with severe communication disorders. Further study is needed of the relationship between training student nurses in the use of AAC as a way to change practice and improve communication outcomes.

  13. The role of family social background and inheritance in later life volunteering: Evidence from SHARE-Israel

    PubMed Central

    Youssim, Iaroslav; Hank, Karsten; Litwin, Howard

    2014-01-01

    Building on a tripartite model of capitals necessary to perform productive activities and on work suggesting that cumulative (dis-) advantage processes are important mechanisms for life-course inequalities, our study set out to investigate the potential role of family social background and inheritance in later-life volunteering. We hypothesized that older individuals who inherited work-relevant economic and cultural capitals from their family of origin are more likely to be engaged in voluntary activities than their counterparts with a less advantageous family social background. Our main findings from the analysis of a representative sample of community-dwelling Israelis aged 50 and over provide strong support for this hypothesis: the likelihood to volunteer is significantly higher among those who received substantial financial transfers from their family of origin (‘inherited economic capital’) and among those having a ‘white collar’ parental background (‘inherited cultural capital’). We conclude with perspectives for future research. PMID:25651548

  14. The role of family social background and inheritance in later life volunteering: evidence from SHARE-Israel.

    PubMed

    Youssim, Iaroslav; Hank, Karsten; Litwin, Howard

    2015-01-01

    Building on a tripartite model of capitals necessary to perform productive activities and on work suggesting that cumulative (dis-)advantage processes are important mechanisms for life course inequalities, our study set out to investigate the potential role of family social background and inheritance in later life volunteering. We hypothesized that older individuals who inherited work-relevant economic and cultural capitals from their family of origin are more likely to be engaged in voluntary activities than their counterparts with a less advantageous family social background. Our main findings from the analysis of a representative sample of community-dwelling Israelis aged 50 and over provide strong support for this hypothesis: the likelihood to volunteer is significantly higher among those who received substantial financial transfers from their family of origin ("inherited economic capital") and among those having a "white collar" parental background ("inherited cultural capital"). We conclude with perspectives for future research. © The Author(s) 2014.

  15. The TileCal Online Energy Estimation for the Next LHC Operation Period

    NASA Astrophysics Data System (ADS)

    Sotto-Maior Peralva, B.; ATLAS Collaboration

    2015-05-01

    The ATLAS Tile Calorimeter (TileCal) is the detector used in the reconstruction of hadrons, jets and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It covers the central part of the ATLAS detector (|η| < 1.6). The energy deposited by the particles is read out by approximately 5,000 cells, with double readout channels. The signal provided by the readout electronics for each channel is digitized at 40 MHz and its amplitude is estimated by an optimal filtering algorithm, which expects a single signal with a well-defined shape. However, the LHC luminosity is expected to increase, leading to pile-up that deforms the signal of interest. Due to limited resources, the current hardware setup, which is based on Digital Signal Processors (DSP), does not allow the implementation of sophisticated energy estimation methods that deal with the pile-up. Therefore, the technique to be employed for online energy estimation in TileCal for the next LHC operation period must be based on fast filters such as the Optimal Filter (OF) and the Matched Filter (MF). Both the OF and MF methods envisage the use of the background second-order statistics in their design, more precisely the covariance matrix. However, the identity matrix has been used to describe this quantity. Although this approximation can be valid for low LHC luminosity, it leads to biased estimators under pile-up conditions. Since most of the TileCal cells present low occupancy, the pile-up, which is often modeled by a non-Gaussian distribution, can be seen as outlier events. Consequently, the classical covariance matrix estimation does not correctly describe the second-order statistics of the background for the majority of the events, as this approach is very sensitive to outliers. As a result, the OF (or MF) coefficients are miscalculated, leading to a larger variance and a biased energy estimator. This work evaluates the usage of a robust covariance estimator, namely the Minimum Covariance Determinant (MCD) algorithm, to be applied in the OF design. The goal of the MCD estimator is to find a number of observations whose classical covariance matrix has the lowest determinant. Hence, this procedure avoids taking into account low-likelihood events to describe the background. It is worth mentioning that the background covariance matrix as well as the OF coefficients for each TileCal channel are computed offline and stored for both online and offline use. In order to evaluate the impact of the MCD estimator on the performance of the OF, simulated data sets were used. Different average numbers of interactions per bunch crossing and bunch spacings were tested. The results show that the estimation of the background covariance matrix through MCD significantly improves the final energy resolution with respect to the identity matrix which is currently used. In particular, for high-occupancy cells, the final energy resolution is improved by more than 20%. Moreover, the use of the classical covariance matrix degrades the energy resolution for the majority of TileCal cells.
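
    A toy version of the idea: a single-parameter optimal filter with weights w = C⁻¹g / (gᵀC⁻¹g) for an assumed pulse shape g and background covariance C, where C can be estimated classically or with scikit-learn's MinCovDet. The pulse shape, occupancy, and pile-up model below are hypothetical and much simpler than the TileCal OF (which fits amplitude, phase, and pedestal).

```python
# Toy optimal-filter amplitude estimate with a robust (MCD) background covariance.
# Pulse shape, occupancy, and the pile-up model below are illustrative assumptions,
# not the TileCal simulation; the single-parameter filter is a_hat = w @ x with
# w = C^-1 g / (g^T C^-1 g) for pulse shape g and background covariance C.
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(8)
g = np.array([0.0, 0.3, 1.0, 0.7, 0.3, 0.1, 0.0])        # assumed pulse shape (7 samples)
n_events, amp_true = 5000, 10.0

# Background: Gaussian electronic noise plus sporadic out-of-time pile-up pulses.
noise = rng.normal(0.0, 1.0, size=(n_events, g.size))
pileup = rng.random(n_events) < 0.1                       # 10% of events contaminated
for i in np.where(pileup)[0]:
    noise[i] += 5.0 * np.roll(g, rng.integers(-3, 4))

def of_weights(cov):
    cinv_g = np.linalg.solve(cov, g)
    return cinv_g / (g @ cinv_g)                          # unbiased: weights @ g == 1

w_classic = of_weights(EmpiricalCovariance().fit(noise).covariance_)
w_robust = of_weights(MinCovDet(random_state=0).fit(noise).covariance_)

signal = amp_true * g + noise                             # events with a true pulse on top
clean = ~pileup                                           # resolution on low-occupancy events
print("classic OF resolution (clean events):", np.std(signal[clean] @ w_classic))
print("robust  OF resolution (clean events):", np.std(signal[clean] @ w_robust))
```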

  16. The NuSTAR Extragalactic Surveys: The Number Counts Of Active Galactic Nuclei And The Resolved Fraction Of The Cosmic X-ray Background

    NASA Technical Reports Server (NTRS)

    Harrison, F. A.; Aird, J.; Civano, F.; Lansbury, G.; Mullaney, J. R.; Ballentyne, D. R.; Alexander, D. M.; Stern, D.; Ajello, M.; Barret, D.

    2016-01-01

    We present the 3-8 keV and 8-24 keV number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33%-39% of the X-ray background in the 8-24 keV band, directly identifying AGNs with obscuring columns up to approximately 10^25 cm^-2. In the softer 3-8 keV band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range 5 × 10^-15 ≲ S(3-8 keV)/(erg s^-1 cm^-2) ≲ 10^-12 probed by NuSTAR. In the hard 8-24 keV band NuSTAR probes fluxes over the range 2 × 10^-14 ≲ S(8-24 keV)/(erg s^-1 cm^-2) ≲ 10^-12, a factor of approximately 100 fainter than previous measurements. The 8-24 keV number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15-55 keV number counts measured at higher fluxes (S(15-55 keV) ≳ 10^-11 erg s^-1 cm^-2), reflecting the evolution of the AGN population between the Swift/BAT local (redshift < 0.1) sample and NuSTAR's redshift ≈ 1 sample. Cosmic X-ray Background (CXB) synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumier, Michael J.

    This thesis discusses the process of extracting the longitudinal asymmetry, $$A_L^{W^{\pm}}$$, describing W → μ production in forward kinematic regimes. This asymmetry is used to constrain our understanding of the polarized parton distribution functions characterizing $$\bar{u}$$ and $$\bar{d}$$ sea quarks in the proton. This asymmetry will be used to constrain the overall contribution of the sea quarks to the total proton spin. The asymmetry is evaluated over the pseudorapidity range of the PHENIX Muon Arms, 2.1 < |η| < 2.6, for longitudinally polarized proton-proton collisions at √s = 510 GeV. In particular, I discuss the statistical methods used to characterize real muonic W decays and the various background processes, including likelihood event selection and the Extended Unbinned Maximum Likelihood fit. These statistical methods serve to estimate the yields of W muonic decays, which are used to calculate the longitudinal asymmetry.
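
    An extended unbinned maximum-likelihood fit of this general kind estimates signal and background yields from per-event densities; the one-dimensional toy below is a generic sketch (the observable, pdf shapes, and yields are hypothetical, not the analysis variables of the thesis).

```python
# Generic extended unbinned maximum-likelihood fit of signal and background yields.
# The observable, pdf shapes, and yields are hypothetical, not the thesis' analysis
# variables: -log L = (n_s + n_b) - sum_i log(n_s f_s(x_i) + n_b f_b(x_i)).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(5.0, 0.5, size=300),        # Gaussian "signal" peak
                    rng.exponential(3.0, size=700)])        # falling "background"

def nll(params):
    n_s, n_b = params
    if n_s < 0 or n_b < 0:
        return np.inf
    dens = n_s * norm.pdf(x, 5.0, 0.5) + n_b * expon.pdf(x, scale=3.0)
    dens = np.clip(dens, 1e-300, None)                      # numerical safety
    return (n_s + n_b) - np.sum(np.log(dens))

fit = minimize(nll, x0=[500.0, 500.0], method="Nelder-Mead")
print("fitted yields (signal, background):", fit.x)         # roughly (300, 700)
```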

  18. Efficient Exploration of the Space of Reconciled Gene Trees

    PubMed Central

    Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent

    2013-01-01

    Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510

  19. Challenges in Species Tree Estimation Under the Multispecies Coalescent Model

    PubMed Central

    Xu, Bo; Yang, Ziheng

    2016-01-01

    The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902

  20. Navy Ford (CVN-78) Class Aircraft Carrier Program: Background and Issues for Congress

    DTIC Science & Technology

    2013-10-22

    states: The CVN 78 is experiencing cost growth due to “first of class” material availability (i.e., valves, actuators), construction labor...assessment during IOT&E [initial operational test and evaluation]. • The current TEMP [test and evaluation master plan] does not adequately address...developmental testing significantly raises the likelihood of the discovery of platform-level problems during IOT&E. • The Navy plans to deliver CVN-78 in

  1. Estimating the Effects of Pre-College Education on College Performance

    DTIC Science & Technology

    2013-05-10

    background information from many variables into a single measure of the expected likelihood of a person receiving treatment. This leads into a discussion of...but do not directly affect outcome variables like academic order of merit, graduation rates, or academic grades. Our model had to not only include the...both indicator variables for whether the individual’s parents ever served in any of the armed forces. High School Quality Measure is a variable

  2. Herschel Key Program Heritage: a Far-Infrared Source Catalog for the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta; Babler, Brian; Engelbracht, Charles W.; Gordon, Karl; Hony, Sacha; Misselt, Karl; Montiel, Edward; Okumura, Koryo; Panuzzo, Pasquale; Roman-Duval, Julia; Sauvage, Marc; Boyer, Martha L.; Chen, C.-H. Rosie; Indebetouw, Remy; Matsuura, Mikako; Oliveira, Joana M.; Srinivasan, Sundar; van Loon, Jacco Th.; Whitney, Barbara; Woods, Paul M.

    2014-12-01

    Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the FIR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) survey of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC. There are far fewer far-IR bright YSOs in the SMC than the LMC due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high (“probable”) and moderate (“possible”) likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution including extreme and post-asymptotic giant branch, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate ample FIR photometry and are unclassified.

  4. Data Series Subtraction with Unknown and Unmodeled Background Noise

    NASA Technical Reports Server (NTRS)

    Vitale, Stefano; Congedo, Giuseppe; Dolesi, Rita; Ferroni, Valerio; Hueller, Mauro; Vetrugno, Daniele; Weber, William Joseph; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo

    2014-01-01

    LISA Pathfinder (LPF), the precursor mission to a gravitational wave observatory of the European Space Agency, will measure the degree to which two test masses can be put into free fall, aiming to demonstrate a suppression of disturbance forces corresponding to a residual relative acceleration with a power spectral density (PSD) below (30 fm/s²/√Hz)² around 1 mHz. In LPF data analysis, the disturbance forces are obtained as the difference between the acceleration data and a linear combination of other measured data series. In many circumstances, the coefficients for this linear combination are obtained by fitting these data series to the acceleration, and the disturbance forces appear then as the data series of the residuals of the fit. Thus the background noise or, more precisely, its PSD, whose knowledge is needed to build up the likelihood function in ordinary maximum likelihood fitting, is here unknown, and its estimate constitutes instead one of the goals of the fit. In this paper we present a fitting method that does not require the knowledge of the PSD of the background noise. The method is based on the analytical marginalization of the posterior parameter probability density with respect to the background noise PSD, and returns an estimate both for the fitting parameters and for the PSD. We show that both these estimates are unbiased, and that, when using averaged Welch's periodograms for the residuals, the estimate of the PSD is consistent, as its error tends to zero with the inverse square root of the number of averaged periodograms. Additionally, we find that the method is equivalent to some implementations of iteratively reweighted least-squares fitting. We have tested the method both on simulated data of known PSD and on data from several experiments performed with the LISA Pathfinder end-to-end mission simulator.
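
    As a rough illustration of the iteratively reweighted least-squares equivalence noted above, the sketch below alternates a weighted least-squares fit of the data series against the acceleration with a re-estimate of the noise PSD from the residual periodogram. This is a minimal sketch under simplified assumptions (a single unwindowed periodogram, hypothetical function and variable names), not the LISA Pathfinder pipeline.

        import numpy as np

        def irls_fit(y, X, n_iter=20, floor=1e-30):
            # y: measured acceleration series (n,)
            # X: other measured data series to be subtracted, as columns (n, p)
            Y = np.fft.rfft(y)
            Xf = np.fft.rfft(X, axis=0)
            S = np.ones(Y.size)                        # flat initial PSD guess
            for _ in range(n_iter):
                w = 1.0 / np.sqrt(S + floor)           # inverse-PSD weights
                A = np.concatenate([(Xf * w[:, None]).real, (Xf * w[:, None]).imag])
                b = np.concatenate([(Y * w).real, (Y * w).imag])
                theta, *_ = np.linalg.lstsq(A, b, rcond=None)   # weighted fit coefficients
                R = Y - Xf @ theta                     # frequency-domain residuals
                S = np.abs(R) ** 2                     # periodogram as the new PSD estimate
            return theta, S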

  5. Communicating likelihoods and probabilities in forecasts of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas

    2014-02-01

    The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather, often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information, as any skew in perceived event likelihood towards the end of a forecast time window may result in an underestimate of the likelihood of an event occurring ‘today’, leading to potentially inappropriate action choices. We thus present some initial guidelines for communicating such eruption forecasts.

  6. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.

  7. Exact one-sided confidence limits for the difference between two correlated proportions.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2007-08-15

    We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.

  8. Counselling pregnant women at the crossroads of Europe and Asia: effect of Teratology Information Service in Turkey.

    PubMed

    Kaplan, Yusuf Cem; Karadaş, Barış; Küçüksolak, Gözde; Ediz, Bartu; Demir, Ömer; Sozmen, Kaan; Nordeng, Hedvig

    2017-08-01

    Background Previous studies from western countries demonstrated the effectiveness of Teratology Information Service (TIS) counselling in reducing the teratogenic risk perception of pregnant women. Objective To assess whether TIS counselling would be effective in reducing the teratogenic risk perception of Turkish pregnant women. Setting A TIS (Terafar) operating in a university hospital in Turkey. Methods A cross-sectional survey study. Pregnant women with non-teratogenic medication exposures were asked to assign scores on visual analogue scales (VAS) in response to the questions aiming to measure their teratogenic risk perception. The mean scores before and after counselling were compared and the associations with maternal socio-demographic characteristics were analysed using SPSS (Version 20.0). Main outcome measures The differences in the mean scores of the perception regarding the baseline risk of pregnancy, own teratogenic risk and the likelihood of termination of pregnancy before and after counselling and their possible associations with maternal socio-demographic characteristics. Results 102 pregnant women participated in the study. The counselling significantly reduced the mean own teratogenic risk perception score and the mean score for the likelihood of termination of pregnancy, whereas the mean baseline risk perception score was not significantly changed. Pregnancy week <8 and the exposed number of active ingredients <3 were significantly associated with the difference in the mean score for the likelihood of termination of pregnancy. Conclusions TIS counselling lowers the teratogenic risk perception of Turkish pregnant women and increases their likelihood of continuing the pregnancy, as it does in western countries.

  9. Extreme data compression for the CMB

    DOE PAGES

    Zablocki, Alan; Dodelson, Scott

    2016-04-28

    We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. Furthermore, after showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
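
    The weighting-vector idea can be illustrated with a generic MOPED-style Karhunen-Loève compression: one weight vector per parameter, built from the data covariance and the derivatives of the mean data vector, so that the compressed numbers preserve the Fisher information for each parameter in turn. The sketch below assumes Gaussian data with a parameter-independent covariance and illustrates only the general technique, not the exact scheme of the paper.

        import numpy as np

        def compression_weights(dmu, cov):
            # dmu: (n_params, n_data) derivatives of the mean data vector, e.g. dC_l/dtheta_a
            # cov: (n_data, n_data) data covariance
            cinv = np.linalg.inv(cov)
            weights = []
            for m in range(dmu.shape[0]):
                num = cinv @ dmu[m]
                den = dmu[m] @ cinv @ dmu[m]
                for b in weights:                      # orthogonalize against earlier weight vectors
                    num = num - (dmu[m] @ b) * b
                    den = den - (dmu[m] @ b) ** 2
                weights.append(num / np.sqrt(den))
            return np.array(weights)                   # compressed data: y_a = weights[a] @ data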

  10. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  11. Inference of epidemiological parameters from household stratified data

    PubMed Central

    Walker, James N.; Ross, Joshua V.

    2017-01-01

    We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters—governing within-household transmission, recovery, and between-household transmission—from data of the day upon which each individual became infectious and the household in which each infection occurred, as might be available from First Few Hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be a good approximation and remains computationally efficient as the amount of data is increased. PMID:29045456

  12. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
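
    As a concrete, hypothetical illustration of the random effects idea, the sketch below applies the standard DerSimonian-Laird moment estimator to a set of annual estimates (for example, survival rates) with known sampling standard errors, returning an overall mean and an among-parameter variance; the quasi-likelihood refinements discussed in the paper are not shown.

        import numpy as np

        def random_effects_moments(estimates, se):
            # estimates: annual parameter estimates; se: their sampling standard errors
            w = 1.0 / se ** 2
            mu_fixed = np.sum(w * estimates) / np.sum(w)
            Q = np.sum(w * (estimates - mu_fixed) ** 2)        # heterogeneity statistic
            k = estimates.size
            tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_re = 1.0 / (se ** 2 + tau2)                      # random-effects weights
            mu_re = np.sum(w_re * estimates) / np.sum(w_re)
            return mu_re, tau2                                 # hyperparameters: overall mean, among-parameter variance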

  13. Optical characterization limits of nanoparticle aggregates at different wavelengths using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Eriçok, Ozan Burak; Ertürk, Hakan

    2018-07-01

    Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a different lower size limit of reliable characterization corresponding to the wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from ultraviolet to near infrared (266-1064 nm), relying on numerical light-scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates made up of same-sized particles, and one with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light-scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation approach, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that, over the 266-1064 nm wavelength range, the successful characterization limit varies from 21 to 62 nm in effective radius for monodisperse and polydisperse soot aggregates.
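
    The likelihood-free idea can be conveyed with the simplest rejection form of Approximate Bayesian Computation; the study itself uses the more elaborate Adaptive Population Monte Carlo variant, and the simulator, prior sampler, and tolerance below are placeholders.

        import numpy as np

        def abc_rejection(observed_summary, simulate, sample_prior, tolerance, n_draws=10000):
            # simulate(theta) -> summary statistics of a synthetic light-scattering experiment
            # sample_prior() -> a draw of the aggregate parameters (e.g., effective radius)
            accepted = []
            for _ in range(n_draws):
                theta = sample_prior()
                summary = np.asarray(simulate(theta))
                if np.linalg.norm(summary - np.asarray(observed_summary)) < tolerance:
                    accepted.append(theta)
            return np.array(accepted)                  # draws from the approximate posterior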

  14. Contact with child and adolescent psychiatric services among self-harming and suicidal adolescents in the general population: a cross sectional study

    PubMed Central

    2014-01-01

    Background Studies have shown that adolescents with a history of both suicide attempts and non-suicidal self-harm report more mental health problems and other psychosocial problems than adolescents who report only one or none of these types of self-harm. The current study aimed to examine the use of child and adolescent psychiatric services by adolescents with both suicide attempts and non-suicidal self-harm, compared to other adolescents, and to assess the psychosocial variables that characterize adolescents with both suicide attempts and non-suicidal self-harm who report contact. Methods Data on lifetime self-harm, contact with child and adolescent psychiatric services, and various psychosocial risk factors were collected in a cross-sectional sample (response rate = 92.7%) of 11,440 adolescents aged 14–17 years who participated in a school survey in Oslo, Norway. Results Adolescents who reported any self-harm were more likely than other adolescents to have used child and adolescent psychiatric services, with a particularly elevated likelihood among those with both suicide attempts and non-suicidal self-harm (OR = 9.3). This finding remained significant even when controlling for psychosocial variables. In adolescents with both suicide attempts and non–suicidal self-harm, symptoms of depression, eating problems, and the use of illicit drugs were associated with a higher likelihood of contact with child and adolescent psychiatric services, whereas a non-Western immigrant background was associated with a lower likelihood. Conclusions In this study, adolescents who reported self-harm were significantly more likely than other adolescents to have used child and adolescent psychiatric services, and adolescents who reported a history of both suicide attempts and non-suicidal self-harm were more likely to have used such services, even after controlling for other psychosocial risk factors. In this high-risk subsample, various psychosocial problems increased the probability of contact with child and adolescent psychiatric services, naturally reflecting the core tasks of the services, confirming that they represents an important area for interventions that aim to reduce self-harming behaviour. Such interventions should include systematic screening for early recognition of self-harming behaviours, and treatment programmes tailored to the needs of teenagers with a positive screen. Possible barriers to receive mental health services for adolescents with immigrant backgrounds should be further explored. PMID:24742154

  15. VarBin, a novel method for classifying true and false positive variants in NGS data

    PubMed Central

    2013-01-01

    Background Variant discovery for rare genetic diseases using Illumina genome or exome sequencing involves screening of up to millions of variants to find only the one or few causative variant(s). Sequencing or alignment errors create "false positive" variants, which are often retained in the variant screening process. Methods to remove false positive variants often retain many false positive variants. This report presents VarBin, a method to prioritize variants based on a false positive variant likelihood prediction. Methods VarBin uses the Genome Analysis Toolkit variant calling software to calculate the variant-to-wild type genotype likelihood ratio at each variant change and position divided by read depth. The resulting Phred-scaled, likelihood-ratio by depth (PLRD) was used to segregate variants into 4 Bins with Bin 1 variants most likely true and Bin 4 most likely false positive. PLRD values were calculated for a proband of interest and 41 additional Illumina HiSeq, exome and whole genome samples (proband's family or unrelated samples). At variant sites without apparent sequencing or alignment error, wild type/non-variant calls cluster near -3 PLRD and variant calls typically cluster above 10 PLRD. Sites with systematic variant calling problems (evident by variant quality scores and biases as well as displayed on the iGV viewer) tend to have higher and more variable wild type/non-variant PLRD values. Depending on the separation of a proband's variant PLRD value from the cluster of wild type/non-variant PLRD values for background samples at the same variant change and position, the VarBin method's classification is assigned to each proband variant (Bin 1 to Bin 4). Results To assess VarBin performance, Sanger sequencing was performed on 98 variants in the proband and background samples. True variants were confirmed in 97% of Bin 1 variants, 30% of Bin 2, and 0% of Bin 3/Bin 4. Conclusions These data indicate that VarBin correctly classifies the majority of true variants as Bin 1 and Bin 3/4 contained only false positive variants. The "uncertain" Bin 2 contained both true and false positive variants. Future work will further differentiate the variants in Bin 2. PMID:24266885
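
    A minimal sketch of the PLRD statistic described above, with hypothetical names and genotype likelihoods taken as plain (non-log) probabilities; the subsequent binning step depends on the background-sample clusters and is not shown.

        import math

        def plrd(variant_likelihood, wildtype_likelihood, read_depth):
            # Phred-scaled variant-to-wild-type likelihood ratio divided by read depth
            return 10.0 * math.log10(variant_likelihood / wildtype_likelihood) / read_depth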

  16. Features in the primordial spectrum from WMAP: A wavelet analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun; Manimaran, P.

    2007-06-15

    Precise measurements of the anisotropies in the cosmic microwave background enable an accurate study of the form of the primordial power spectrum for a given set of cosmological parameters. In a previous paper [A. Shafieloo and T. Souradeep, Phys. Rev. D 70, 043523 (2004)], we implemented an improved (error sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the first year of WMAP data to determine the primordial power spectrum assuming a concordance cosmological model. This recovered spectrum has a likelihood far better than scale-invariant or 'best fit' scale-free spectra (Δln L ≈ 25 with respect to the Harrison-Zeldovich spectrum, and Δln L ≈ 11 with respect to the power-law spectrum with n_s = 0.95). In this paper we use the discrete wavelet transform (DWT) to decompose the local features of the recovered spectrum individually to study their effect and significance on the recovered angular power spectrum and hence the likelihood. We show that besides the infrared cutoff at the horizon scale, the associated features of the primordial power spectrum around the horizon have a significant effect on improving the likelihood. The strong features are localized at the horizon scale.
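
    For reference, a plain Richardson-Lucy iteration for recovering a non-negative primordial spectrum from an angular power spectrum related to it by a known kernel is sketched below; the error-sensitive improvement used in the paper and the wavelet decomposition are omitted.

        import numpy as np

        def richardson_lucy(data, kernel, n_iter=200):
            # data ~ kernel @ spectrum, with non-negative entries throughout
            spectrum = np.ones(kernel.shape[1])
            norm = kernel.sum(axis=0)                  # column sums, i.e. kernel^T 1
            for _ in range(n_iter):
                model = kernel @ spectrum
                spectrum = spectrum * (kernel.T @ (data / model)) / norm
            return spectrum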

  17. Background Selection in Partially Selfing Populations

    PubMed Central

    Roze, Denis

    2016-01-01

    Self-fertilizing species often present lower levels of neutral polymorphism than their outcrossing relatives. Indeed, selfing automatically increases the rate of coalescence per generation, but also enhances the effects of background selection and genetic hitchhiking by reducing the efficiency of recombination. Approximations for the effect of background selection in partially selfing populations have been derived previously, assuming tight linkage between deleterious alleles and neutral loci. However, loosely linked deleterious mutations may have important effects on neutral diversity in highly selfing populations. In this article, I use a general method based on multilocus population genetics theory to express the effect of a deleterious allele on diversity at a linked neutral locus in terms of moments of genetic associations between loci. Expressions for these genetic moments at equilibrium are then computed for arbitrary rates of selfing and recombination. An extrapolation of the results to the case where deleterious alleles segregate at multiple loci is checked using individual-based simulations. At high selfing rates, the tight linkage approximation underestimates the effect of background selection in genomes with moderate to high map length; however, another simple approximation can be obtained for this situation and provides accurate predictions as long as the deleterious mutation rate is not too high. PMID:27075726

  18. The age profile of the location decision of Australian general practitioners.

    PubMed

    Mu, Chunzhou

    2015-10-01

    The unbalanced distribution of general practitioners (GPs) across geographic areas has been acknowledged as a problem in many countries around the world. Quantitative information regarding GPs' location decision over their lifecycle is essential in developing effective initiatives to address the unbalanced distribution and retention of GPs. This paper describes the age profile of GPs' location decision and relates it to individual characteristics. I use the Medicine in Australia: Balancing Employment and Life (MABEL) survey of doctors (2008-2012) with a sample size of 5810 male and 5797 female GPs. I employ a mixed logit model to estimate GPs' location decision. The results suggest that younger GPs are more prepared to go to rural and remote areas but they tend to migrate back to urban areas as they age. Coming from a rural background increases the likelihood of choosing rural areas, but with heterogeneity: While male GPs from a rural background tend to stay in rural and remote areas regardless of age, female GPs from a rural background are willing to migrate to urban areas as they age. GPs who obtain basic medical degrees overseas are likely to move back to urban areas in the later stage of their careers. Completing a basic medical degree at an older age increases the likelihood of working outside major cities. I also examine factors influencing GPs' location transition patterns and the results further confirm the association of individual characteristics and GPs' location-age profile. The findings can help target GPs who are most likely to practise and remain in rural and remote areas, and tailor policy initiatives to address the undesirable distribution and movement of GPs according to the identified heterogeneous age profile of their location decisions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A case–control study examining whether neurological deficits and PTSD in combat veterans are related to episodes of mild TBI

    PubMed Central

    Riechers, Ronald George; Wang, Xiao-Feng; Piero, Traci; Ruff, Suzanne Smith

    2012-01-01

    Background Mild traumatic brain injury (mTBI) is a common injury among military personnel serving in Iraq or Afghanistan. The impact of repeated episodes of combat mTBI is unknown. Objective To evaluate relationships among mTBI, post-traumatic stress disorder (PTSD) and neurological deficits (NDs) in US veterans who served in Iraq or Afghanistan. Methods This was a case–control study. From 2091 veterans screened for traumatic brain injury, the authors studied 126 who sustained mTBI with one or more episodes of loss of consciousness (LOC) in combat. Comparison groups: 21 combat veterans who had definite or possible episodes of mTBI without LOC and 21 veterans who sustained mTBI with LOC as civilians. Results Among combat veterans with mTBI, 52% had NDs, 66% had PTSD and 50% had PTSD and an ND. Impaired olfaction was the most common ND, found in 65 veterans. The prevalence of an ND or PTSD correlated with the number of mTBI exposures with LOC. The prevalence of an ND or PTSD was >90% for more than five episodes of LOC. Severity of PTSD and impairment of olfaction increased with number of LOC episodes. The prevalence of an ND for the 34 combat veterans with one episode of LOC (4/34=11.8%) was similar to that of the 21 veterans of similar age and educational background who sustained civilian mTBI with one episode of LOC (2/21=9.5%, p-NS). Conclusions Impaired olfaction was the most frequently recognised ND. Repeated episodes of combat mTBI were associated with increased likelihood of PTSD and an ND. Combat setting may not increase the likelihood of an ND. Two possible connections between mTBI and PTSD are (1) that circumstances leading to combat mTBI likely involve severe psychological trauma and (2) that altered cerebral functioning following mTBI may increase the likelihood that a traumatic event results in PTSD. PMID:22431700

  20. Compact groups in theory and practice - IV. The connection to large-scale structure

    NASA Astrophysics Data System (ADS)

    Mendel, J. Trevor; Ellison, Sara L.; Simard, Luc; Patton, David R.; McConnachie, Alan W.

    2011-12-01

    We investigate the properties of photometrically selected compact groups (CGs) in the Sloan Digital Sky Survey. In this paper, the fourth in a series, we focus on understanding the characteristics of our observed CG sample with particular attention paid to quantifying and removing contamination from projected foreground or background galaxies. Based on a simple comparison of pairwise redshift likelihoods, we find that approximately half of CGs in the parent sample contain one or more projected (interloping) members; our final clean sample contains 4566 galaxies in 1086 CGs. We show that half of the remaining CGs are associated with rich groups (or clusters), i.e. they are embedded sub-structure. The other half have spatial distributions and number-density profiles consistent with the interpretation that they are either independently distributed structures within the field (i.e. they are isolated) or associated with relatively poor structures. Comparisons of late-type and red-sequence fractions in radial annuli show that galaxies around apparently isolated CGs resemble the field population by 300 to 500 kpc from the group centre. In contrast, the galaxy population surrounding embedded CGs appears to remain distinct from the field out beyond 1 to 2 Mpc, consistent with results for rich groups. We take this as additional evidence that the observed distinction between CGs, i.e. isolated versus embedded, is a separation between different host environments.

  1. Prevalence and correlates of indoor tanning and sunless tanning product use among female teens in the United States

    PubMed Central

    Quinn, Megan; Alamian, Arsham; Hillhouse, Joel; Scott, Colleen; Turrisi, Rob; Baker, Katie

    2014-01-01

    Background Indoor tanning (IT) before the age of 35 increases melanoma risk by 75%. Nevertheless, IT and sunless tanning product (STP) use have gained popularity among youth. However, there are limited data on the prevalence and sociodemographic correlates of both IT and STP use in a representative sample of American teens. Methods Teenage females (N = 778) aged 12–18 years were recruited as part of an on-going longitudinal study conducted between May 2011 and May 2013. Descriptive statistics explored IT and STP usage in teen females at baseline. Logistic regression was used to determine sociodemographic correlates of IT and STP use. Results Approximately 16% of female teens engaged in IT behavior and 25% engaged in using STPs. Female teens living in non-metropolitan areas were 82% more likely to indoor tan compared to those in metropolitan areas (OR = 1.82, 95% CI: 1.07–3.10). Age, geographic regions, and race increased the likelihood of IT and STP use. Conclusions Results indicate a significant proportion of teen females engage in IT and STP use. There was evidence that in teens that have never used IT before, STP use precedes IT initiation. Given the evidence for increased IT in rural populations, research focused on rural tanning bed use is needed. PMID:25621199

  2. Evidence-Based Occupational Hearing Screening I: Modeling the Effects of Real-World Noise Environments on the Likelihood of Effective Speech Communication.

    PubMed

    Soli, Sigfrid D; Giguère, Christian; Laroche, Chantal; Vaillancourt, Véronique; Dreschler, Wouter A; Rhebergen, Koenraad S; Harkins, Kevin; Ruckstuhl, Mark; Ramulu, Pradeep; Meyers, Lawrence S

    The objectives of this study were to (1) identify essential hearing-critical job tasks for public safety and law enforcement personnel; (2) determine the locations and real-world noise environments where these tasks are performed; (3) characterize each noise environment in terms of its impact on the likelihood of effective speech communication, considering the effects of different levels of vocal effort, communication distances, and repetition; and (4) use this characterization to define an objective normative reference for evaluating the ability of individuals to perform essential hearing-critical job tasks in noisy real-world environments. Data from five occupational hearing studies performed over a 17-year period for various public safety agencies were analyzed. In each study, job task analyses by job content experts identified essential hearing-critical tasks and the real-world noise environments where these tasks are performed. These environments were visited, and calibrated recordings of each noise environment were made. The extended speech intelligibility index (ESII) was calculated for each 4-sec interval in each recording. These data, together with the estimated ESII value required for effective speech communication by individuals with normal hearing, allowed the likelihood of effective speech communication in each noise environment for different levels of vocal effort and communication distances to be determined. These likelihoods provide an objective norm-referenced and standardized means of characterizing the predicted impact of real-world noise on the ability to perform essential hearing-critical tasks. A total of 16 noise environments for law enforcement personnel and eight noise environments for corrections personnel were analyzed. Effective speech communication was essential to hearing-critical tasks performed in these environments. Average noise levels ranged from approximately 70 to 87 dBA in law enforcement environments and 64 to 80 dBA in corrections environments. The likelihood of effective speech communication at communication distances of 0.5 and 1 m was often less than 0.50 for normal vocal effort. Likelihood values often increased to 0.80 or more when raised or loud vocal effort was used. Effective speech communication at and beyond 5 m was often unlikely, regardless of vocal effort. ESII modeling of nonstationary real-world noise environments may prove an objective means of characterizing their impact on the likelihood of effective speech communication. The normative reference provided by these measures predicts the extent to which hearing impairments that increase the ESII value required for effective speech communication also decrease the likelihood of effective speech communication. These predictions may provide an objective evidence-based link between the essential hearing-critical job task requirements of public safety and law enforcement personnel and ESII-based hearing assessment of individuals who seek to perform these jobs.

  3. Comparison of two different size needles in endoscopic ultrasound-guided fine-needle aspiration for diagnosing solid pancreatic lesions

    PubMed Central

    Xu, Mei-Mei; Jia, Hong-Yu; Yan, Li-Li; Li, Shan-Shan; Zheng, Yue

    2017-01-01

    Abstract Background: This meta-analysis aimed to provide a pooled analysis of prospective controlled trials comparing the diagnostic accuracy of 22-G and 25-G needles in endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) of solid pancreatic masses. Methods: We established a rigorous study protocol according to Cochrane Collaboration recommendations. We systematically searched the PubMed and Embase databases to identify articles to include in the meta-analysis. Sensitivity, specificity, and corresponding 95% confidence intervals were calculated for 22-G and 25-G needles of individual studies from the contingency tables. Results: Eleven prospective controlled trials included a total of 837 patients (412 with 22-G vs 425 with 25-G). Our outcomes revealed that 25-G needles (92% [95% CI, 89%–95%]) have higher sensitivity than 22-G needles (88% [95% CI, 84%–91%]) in solid pancreatic mass EUS-FNA (P = 0.046). However, there were no significant differences between the 2 groups in overall diagnostic specificity (P = 0.842). The pooled positive likelihood ratio was 12.61 (95% CI, 5.65–28.14), and the negative likelihood ratio was 0.16 (95% CI, 0.12–0.21) for the 22-G needle. The pooled positive likelihood ratio was 8.44 (95% CI, 3.87–18.42), and the negative likelihood ratio was 0.13 (95% CI, 0.09–0.18) for the 25-G needle. The area under the summary receiver operating characteristic curve was 0.97 for the 22-G needle and 0.96 for the 25-G needle. Conclusion: Compared with 22-G needles, 25-G needles showed superior sensitivity in the evaluation of solid pancreatic lesions by EUS-FNA. PMID:28151856
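
    For readers unfamiliar with the quoted statistics, the diagnostic likelihood ratios follow from pooled sensitivity and specificity in the usual way; the helper below is purely illustrative.

        def diagnostic_likelihood_ratios(sensitivity, specificity):
            # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
            return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity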

  4. Common Household Chemicals and the Allergy Risks in Pre-School Age Children

    PubMed Central

    Choi, Hyunok; Schmidbauer, Norbert; Sundell, Jan; Hasselgren, Mikael; Spengler, John; Bornehag, Carl-Gustaf

    2010-01-01

    Background The risk of indoor exposure to volatile organic compounds (VOCs) on allergic airway diseases in children remains unknown. Objective We examined the residential concentrations of VOCs, emitted from building materials, paints, furniture, and other lifestyle practices and the risks of multiple allergic diseases as well as the IgE-sensitization in pre-school age children in Sweden. Methods In a case-control investigation (198 case children with asthma and allergy and 202 healthy controls), air samples were collected in the room where the child slept. The air samples were analyzed for the levels of eight classes of VOCs. Results A natural-log unit of summed propylene glycol and glycol ethers (PGEs) in bedroom air (equal to interquartile range, or 3.43 – 15.65 µg/m3) was associated with 1.5-fold greater likelihood of being a case (95% CI, 1.1 – 2.1), 1.5-fold greater likelihood of asthma (95% CI, 1.0 – 2.3), 2.8-fold greater likelihood of rhinitis (95% CI, 1.6 – 4.7), and 1.6-fold greater likelihood of eczema (95% CI, 1.1 – 2.3), accounting for gender, secondhand smoke, allergies in both parents, wet cleaning with chemical agents, construction period of the building, limonene, cat and dog allergens, butyl benzyl phthalate (BBzP), and di(2-ethylhexyl)phthalate (DEHP). When the analysis was restricted to the cases, the same unit concentration was associated with 1.8-fold greater likelihood of IgE-sensitization (95% CI, 1.1 – 2.8) compared to the non-IgE sensitized cases. No similar associations were found for the other classes of VOCs. Conclusion We propose a novel hypothesis that PGEs in indoor air exacerbate and/or induce the multiple allergic symptoms, asthma, rhinitis and eczema, as well as IgE sensitization respectively. PMID:20976153

  5. Who Takes Precautionary Action in the Face of the New H1N1 Influenza? Prediction of Who Collects a Free Hand Sanitizer Using a Health Behavior Model

    PubMed Central

    Reuter, Tabea; Renner, Britta

    2011-01-01

    Background In order to fight the spread of the novel H1N1 influenza, health authorities worldwide called for a change in hygiene behavior. Within a longitudinal study, we examined who collected a free bottle of hand sanitizer towards the end of the first swine flu pandemic wave in December 2009. Methods 629 participants took part in a longitudinal study assessing perceived likelihood and severity of an H1N1 infection, and H1N1 influenza related negative affect (i.e., feelings of threat, concern, and worry) at T1 (October 2009, week 43–44) and T2 (December 2009, week 51–52). Importantly, all participants received a voucher for a bottle of hand sanitizer at T2 which could be redeemed in a university office newly established for this occasion at T3 (ranging between 1–4 days after T2). Results Both a sequential longitudinal model (M2) as well as a change score model (M3) showed that greater perceived likelihood and severity at T1 (M2) or changes in perceived likelihood and severity between T1 and T2 (M3) did not directly drive protective behavior (T3), but showed a significant indirect impact on behavior through H1N1 influenza related negative affect. Specifically, increases in perceived likelihood (β = .12), severity (β = .24) and their interaction (β = .13) were associated with a more pronounced change in negative affect (M3). The more threatened, concerned and worried people felt (T2), the more likely they were to redeem the voucher at T3 (OR = 1.20). Conclusions Affective components need to be considered in health behavior models. Perceived likelihood and severity of an influenza infection represent necessary but not sufficient self-referential knowledge for paving the way for preventive behaviors. PMID:21789224

  6. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.

  7. Maximizing the significance in Higgs boson pair analyses [Mad-Maximized Higgs Pair Analyses

    DOE PAGES

    Kling, Felix; Plehn, Tilman; Schichtel, Peter

    2017-02-22

    Here, we study Higgs pair production with a subsequent decay to a pair of photons and a pair of bottoms at the LHC. We use the log-likelihood ratio to identify the kinematic regions which either allow us to separate the di-Higgs signal from backgrounds or to determine the Higgs self-coupling. We find that both regions are separate enough to ensure that details of the background modeling will not affect the determination of the self-coupling. Assuming dominant statistical uncertainties we determine the best precision with which the Higgs self-coupling can be probed in this channel. We finally comment on the same questions at a future 100 TeV collider.
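
    For a simple counting experiment, the discovery significance implied by such a log-likelihood ratio reduces to the familiar Asimov formula; the sketch below only illustrates the kind of statistic being optimized and is not the analysis code of the paper.

        import math

        def asimov_significance(s, b):
            # expected significance for s signal events on top of b background events
            return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))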

  9. Complication Rates, Hospital Size, and Bias in the CMS Hospital-Acquired Condition Reduction Program.

    PubMed

    Koenig, Lane; Soltoff, Samuel A; Demiralp, Berna; Demehin, Akinluwa A; Foster, Nancy E; Steinberg, Caroline Rossi; Vaz, Christopher; Wetzel, Scott; Xu, Susan

    In 2016, Medicare's Hospital-Acquired Condition Reduction Program (HAC-RP) will reduce hospital payments by $364 million. Although observers have questioned the validity of certain HAC-RP measures, less attention has been paid to the determination of low-performing hospitals (bottom quartile) and the assignment of penalties. This study investigated possible bias in the HAC-RP by simulating hospitals' likelihood of being in the worst-performing quartile for 8 patient safety measures, assuming identical expected complication rates across hospitals. Simulated likelihood of being a poor performer varied with hospital size. This relationship depended on the measure's complication rate. For 3 of 8 measures examined, the equal-quality simulation identified poor performers similarly to empirical data (c-statistic approximately 0.7 or higher) and explained most of the variation in empirical performance by size (Efron's R² > 0.85). The Centers for Medicare & Medicaid Services could address potential bias in the HAC-RP by stratifying by hospital size or using a broader "all-harm" measure.
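
    The size dependence of being flagged can be reproduced with a short simulation in which every hospital shares the same true complication rate; the function below is a sketch of that bias mechanism (names and defaults are placeholders), not the CMS scoring algorithm.

        import numpy as np

        def worst_quartile_probability(case_counts, true_rate, n_sim=5000, seed=0):
            # probability that each hospital's observed rate lands in the worst (highest) quartile
            rng = np.random.default_rng(seed)
            counts = np.asarray(case_counts)
            flagged = np.zeros(counts.size)
            for _ in range(n_sim):
                observed = rng.binomial(counts, true_rate) / counts
                flagged += observed > np.quantile(observed, 0.75)
            return flagged / n_sim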

  10. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the region of the PSR is usually hard to determine and can be easily affected by subjective judgement. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provide a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
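
    The core MLEM update used in such statistical reconstructions is compact; the sketch below shows the textbook multiplicative update for a Poisson model and omits the SP_N forward model and the filtering step that define the paper's method.

        import numpy as np

        def mlem(system_matrix, measurements, n_iter=50):
            # measurements ~ Poisson(system_matrix @ source)
            source = np.ones(system_matrix.shape[1])
            sensitivity = system_matrix.sum(axis=0)    # back projection of ones, system_matrix^T 1
            for _ in range(n_iter):
                forward = system_matrix @ source
                source = source / sensitivity * (system_matrix.T @ (measurements / forward))
            return source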

  11. Sensing multiple ligands with single receptor

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Nemenman, Ilya

    2015-03-01

    Cells use surface receptors to measure concentrations of external ligand molecules. Limits on the accuracy of such sensing are well-known for the scenario where concentration of one molecular species is being determined by one receptor [Endres]. However, in more realistic scenarios, a cognate (high-affinity) ligand competes with many non-cognate (low-affinity) ligands for binding to the receptor. We analyze effects of this competition on the accuracy of sensing. We show that maximum-likelihood statistical inference allows determination of concentrations of multiple ligands, cognate and non-cognate, by the same receptor concurrently. While it is unclear if traditional biochemical circuitry downstream of the receptor can implement such inference exactly, we show that an approximate inference can be performed by coupling the receptor to a kinetic proofreading cascade. We characterize the accuracy of such kinetic proofreading sensing in comparison to the exact maximum-likelihood approach. We acknowledge the support from the James S. McDonnell Foundation and the Human Frontier Science Program.

  12. Polar bears in the Beaufort Sea: A 30-year mark-recapture case history

    USGS Publications Warehouse

    Amstrup, Steven C.; McDonald, T.L.; Stirling, I.

    2001-01-01

    Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite capturing over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit and suggested a growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.

  13. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  14. A baseline-free procedure for transformation models under interval censorship.

    PubMed

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under an interval censoring framework. The baseline-free procedure results in a simplified and stable computational algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  15. A Measurement of Gravitational Lensing of the Cosmic Microwave Background by Galaxy Clusters Using Data from the South Pole Telescope

    DOE PAGES

    Baxter, E. J.; Keisler, R.; Dodelson, S.; ...

    2015-06-22

    Clusters of galaxies are expected to gravitationally lens the cosmic microwave background (CMB) and thereby generate a distinct signal in the CMB on arcminute scales. Measurements of this effect can be used to constrain the masses of galaxy clusters with CMB data alone. Here we present a measurement of lensing of the CMB by galaxy clusters using data from the South Pole Telescope (SPT). We also develop a maximum likelihood approach to extract the CMB cluster lensing signal and validate the method on mock data. We quantify the effects on our analysis of several potential sources of systematic error and find that they generally act to reduce the best-fit cluster mass. It is estimated that this bias to lower cluster mass is roughly 0.85σ in units of the statistical error bar, although this estimate should be viewed as an upper limit. Furthermore, we apply our maximum likelihood technique to 513 clusters selected via their Sunyaev–Zeldovich (SZ) signatures in SPT data, and rule out the null hypothesis of no lensing at 3.1σ. The lensing-derived mass estimate for the full cluster sample is consistent with that inferred from the SZ flux: M_200,lens = 0.83 (+0.38/-0.37) M_200,SZ (68% C.L., statistical error only).

  16. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.
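
    In practice an SSF is fitted as a conditional logit in which each observed step is compared with a matched set of available steps; the negative log-likelihood below is a minimal sketch of that standard setup (array shapes and names are assumptions), and it can be minimized with any generic optimizer.

        import numpy as np

        def ssf_negative_log_likelihood(beta, chosen, available):
            # chosen: (n_steps, p) covariates of the observed steps
            # available: (n_steps, n_alternatives, p) covariates of the matched available steps
            score_chosen = chosen @ beta
            scores = np.concatenate([score_chosen[:, None], available @ beta], axis=1)
            log_denominator = np.logaddexp.reduce(scores, axis=1)
            return -np.sum(score_chosen - log_denominator)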

  17. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    PubMed

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

    Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
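
    The kernel the abstract refers to can be illustrated with a minimal sketch of the per-node, per-site phylogenetic likelihood computation (a Felsenstein pruning step); since every site is independent, the loop has no cross-iteration dependencies, which is the parallelism the FPGA design exploits. This is illustrative Python, not the MrBayes or FPGA code, and all matrices and sizes are hypothetical.

```python
# Minimal sketch (not the MrBayes/FPGA code): the core phylogenetic likelihood
# kernel for one internal node -- combining two children's conditional
# likelihoods through their branch transition matrices. Each site is
# independent, which is the parallelism the abstract exploits.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_states = 1000, 4                     # e.g., DNA alignment

# Conditional likelihood vectors of the two child nodes (hypothetical values).
L_left = rng.random((n_sites, n_states))
L_right = rng.random((n_sites, n_states))

# Branch transition probability matrices P(child state | parent state); rows sum to 1.
P_left = np.full((n_states, n_states), 0.05) + 0.80 * np.eye(n_states)
P_right = np.full((n_states, n_states), 0.10) + 0.60 * np.eye(n_states)

def plf_node(L_left, L_right, P_left, P_right):
    """Conditional likelihoods at the parent node, one row per site."""
    return (L_left @ P_left.T) * (L_right @ P_right.T)

L_parent = plf_node(L_left, L_right, P_left, P_right)

# At the root, the site likelihood is the dot product with the state frequencies.
pi = np.full(n_states, 0.25)
log_likelihood = np.log(L_parent @ pi).sum()
print(log_likelihood)
```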

  18. Pre-test probability of obstructive coronary stenosis in patients undergoing coronary CT angiography: Comparative performance of the modified Diamond-Forrester algorithm versus methods incorporating cardiovascular risk factors.

    PubMed

    Ferreira, António Miguel; Marques, Hugo; Tralhão, António; Santos, Miguel Borges; Santos, Ana Rita; Cardoso, Gonçalo; Dores, Hélder; Carvalho, Maria Salomé; Madeira, Sérgio; Machado, Francisco Pereira; Cardim, Nuno; de Araújo Gonçalves, Pedro

    2016-11-01

    Current guidelines recommend the use of the Modified Diamond-Forrester (MDF) method to assess the pre-test likelihood of obstructive coronary artery disease (CAD). We aimed to compare the performance of the MDF method with two contemporary algorithms derived from multicenter trials that additionally incorporate cardiovascular risk factors: the calculator-based 'CAD Consortium 2' method, and the integer-based CONFIRM score. We assessed 1069 consecutive patients without known CAD undergoing coronary CT angiography (CCTA) for stable chest pain. Obstructive CAD was defined as the presence of coronary stenosis ≥50% on 64-slice dual-source CT. The three methods were assessed for calibration, discrimination, net reclassification, and changes in proposed downstream testing based upon calculated pre-test likelihoods. The observed prevalence of obstructive CAD was 13.8% (n=147). Overestimations of the likelihood of obstructive CAD were 140.1%, 9.8%, and 18.8%, respectively, for the MDF, CAD Consortium 2 and CONFIRM methods. The CAD Consortium 2 showed greater discriminative power than the MDF method, with a C-statistic of 0.73 vs. 0.70 (p<0.001), while the CONFIRM score did not (C-statistic 0.71, p=0.492). Reclassification of pre-test likelihood using the 'CAD Consortium 2' or CONFIRM scores resulted in a net reclassification improvement of 0.19 and 0.18, respectively, which would change the diagnostic strategy in approximately half of the patients. Newer risk factor-encompassing models allow for a more precise estimation of pre-test probabilities of obstructive CAD than the guideline-recommended MDF method. Adoption of these scores may improve disease prediction and change the diagnostic pathway in a significant proportion of patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
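
    The discrimination and reclassification metrics used in the comparison can be sketched as follows; the data, decision threshold, and score models below are hypothetical stand-ins, not the study's cohort.

```python
# Minimal sketch (hypothetical data, not the study's): the C-statistic and a
# two-category net reclassification improvement (NRI) comparing an old and a
# new pre-test probability model.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
y = rng.binomial(1, 0.14, size=n)                       # obstructive CAD yes/no
p_old = np.clip(0.30 * y + rng.beta(2, 6, n), 0, 1)     # hypothetical MDF-like scores
p_new = np.clip(0.35 * y + rng.beta(2, 8, n), 0, 1)     # hypothetical newer scores

def c_statistic(p, y):
    """Probability that a random event gets a higher score than a random non-event."""
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def nri(p_old, p_new, y, threshold=0.15):
    """Two-category NRI at a single decision threshold."""
    up = (p_new >= threshold) & (p_old < threshold)
    down = (p_new < threshold) & (p_old >= threshold)
    events, nonevents = y == 1, y == 0
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents

print("C old:", c_statistic(p_old, y), "C new:", c_statistic(p_new, y))
print("NRI:", nri(p_old, p_new, y))
```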

  19. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
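
    The inefficiency described above can be illustrated with a minimal rejection sampler that uses the neutral Dirichlet distribution as the proposal for a selection-weighted target; the mutation and selection parameters are hypothetical, and this is not the authors' direct-simulation method.

```python
# Minimal sketch (illustrative only): rejection sampling of allele frequencies
# from a selection-weighted model, using the neutral Dirichlet distribution as
# the proposal. When the selection term concentrates mass away from the neutral
# model, the acceptance rate collapses -- the inefficiency the abstract notes.
import numpy as np

rng = np.random.default_rng(3)
theta = np.array([1.0, 1.0, 1.0])        # neutral mutation parameters (hypothetical)
sigma = 20.0                             # selection intensity (hypothetical)

def selection_score(p):
    """Bounded selection term; here it favours near-fixation of allele 0."""
    return sigma * p[0] ** 2             # maximum possible value is sigma

def sample_non_neutral(n_samples):
    accepted, tries = [], 0
    while len(accepted) < n_samples:
        p = rng.dirichlet(theta)         # neutral proposal
        tries += 1
        # Accept with probability exp(score - max_score) <= 1.
        if np.log(rng.random()) < selection_score(p) - sigma:
            accepted.append(p)
    return np.array(accepted), tries

samples, tries = sample_non_neutral(100)
print("acceptance rate:", 100 / tries)
```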

  20. Factors Associated With High-Risk Alcohol Consumption Among LGB Older Adults: The Roles of Gender, Social Support, Perceived Stress, Discrimination, and Stigma.

    PubMed

    Bryan, Amanda E B; Kim, Hyun-Jun; Fredriksen-Goldsen, Karen I

    2017-02-01

    Lesbian, gay, and bisexual (LGB) adults have elevated rates of high-risk alcohol consumption compared with heterosexual adults. Although drinking tends to decline with age in the general population, we know little about LGB older adults' drinking. Using 2014 data from Aging with Pride: National Health, Aging, and Sexuality/Gender Study (NHAS), we aimed to identify factors associated with high-risk drinking in LGB older adults. A U.S. sample of 2,351 LGB adults aged 50-98 years completed a survey about personal and social experiences, substance use, and health. Multinomial logistic regression was conducted to identify predictors of past-month high-risk alcohol consumption. Approximately one fifth (20.6%) of LGB older adults reported high-risk drinking, with nonsignificantly different rates between men (22.4%) and women (18.4%). For women, current smoking and greater social support were associated with greater likelihood of high-risk drinking; older age, higher income, recovery from addiction, and greater perceived stress were associated with lower likelihood. For men, higher income, current smoking, and greater day-to-day discrimination were associated with greater likelihood of high-risk drinking; transgender identity and recovery from addiction were associated with lower likelihood. Social contexts and perceived drinking norms may encourage higher levels of alcohol consumption in LGB older women, whereas men's drinking may be linked with discrimination-related stress. Prevention and intervention with this population should take into account gender differences and sexual minority-specific risk factors. With future waves of data, we will be able to examine LGB older adults' drinking trajectories over time. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Cyg X-3: Not seen in high-energy gamma rays by COS-B

    NASA Technical Reports Server (NTRS)

    Hermsen, W.; Bennett, K.; Bignami, G. F.; Bloemen, J. B. G. M.; Buccheri, R.; Caraveo, P. A.; Mayer-Hasselwander, H. A.; Oezel, M. E.; Pollock, A. M. T.; Strong, A. W.

    1985-01-01

    COS-B had Cyg X-3 within its field of view during 7 observation periods between 1975 and 1982, for a total of approximately 300 days. In the skymaps (70 MeV < E < 5000 MeV) of the Cyg-X region produced for each of these observations and in the summed map, a broad complex structure is visible in the region 72 deg ≲ l ≲ 85 deg, |b| ≲ 5 deg. No resolved source structure is visible at the position of Cyg X-3, but a weak signal from Cyg X-3 could be hidden in the structured gamma-ray background. Therefore, the data have been searched for a 4.8 h timing signature, as well as for a source signal in the sky map in addition to the diffuse background structure as estimated from tracers of atomic and molecular gas.
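
    One simple way to search arrival times for such a periodic signature is a Rayleigh test at the trial period, sketched below on simulated photon times; this is illustrative only and not the COS-B analysis pipeline.

```python
# Minimal sketch (not the COS-B analysis): a Rayleigh test for a periodic
# signature in photon arrival times, e.g. the 4.8 h orbital period of Cyg X-3.
import numpy as np

rng = np.random.default_rng(4)
period = 4.8 * 3600.0                                  # trial period in seconds

# Hypothetical arrival times: mostly uniform background plus a weak modulation.
n = 5000
t_background = rng.uniform(0, 30 * 86400, size=int(0.97 * n))
phases_signal = rng.vonmises(mu=0.0, kappa=2.0, size=n - t_background.size)
t_signal = (rng.integers(0, 150, size=phases_signal.size) +
            (phases_signal + np.pi) / (2 * np.pi)) * period
times = np.concatenate([t_background, t_signal])

def rayleigh_power(times, period):
    """Rayleigh statistic Z; under the no-signal hypothesis 2Z ~ chi^2 with 2 dof."""
    phi = 2 * np.pi * (times % period) / period
    n = times.size
    return (np.cos(phi).sum() ** 2 + np.sin(phi).sum() ** 2) / n

z = rayleigh_power(times, period)
# Chance probability of obtaining a value this large from an unmodulated source.
p_value = np.exp(-z)
print(f"Rayleigh Z = {z:.1f}, chance probability = {p_value:.2e}")
```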

  2. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
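
    The integral that the likelihood-based procedures approximate can be sketched, for a single cluster of a random-intercept logistic model, using (non-adaptive) Gauss-Hermite quadrature; the data and parameter values below are hypothetical.

```python
# Minimal sketch (illustrative, non-adaptive): Gauss-Hermite approximation of a
# cluster's marginal likelihood in a random-intercept logistic model,
#   L_i = integral of prod_j p(y_ij | x_ij, b) * Normal(b; 0, sigma^2) db,
# the kind of per-cluster integral that glmer / Proc NLMIXED approximate (adaptively).
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_marginal_loglik(y, x, beta0, beta1, sigma, n_nodes=15):
    nodes, weights = hermgauss(n_nodes)              # for integral of f(t) exp(-t^2) dt
    b = np.sqrt(2.0) * sigma * nodes                 # change of variables b = sqrt(2)*sigma*t
    eta = beta0 + beta1 * x[:, None] + b[None, :]    # (n_obs, n_nodes)
    p = 1.0 / (1.0 + np.exp(-eta))
    # Conditional likelihood of the cluster's responses at each quadrature node.
    cond = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log((weights * cond).sum() / np.sqrt(np.pi))

# Hypothetical cluster with 5 observations.
y = np.array([1, 0, 1, 1, 0])
x = np.array([0.2, -1.0, 0.5, 1.3, -0.4])
print(cluster_marginal_loglik(y, x, beta0=-0.3, beta1=0.8, sigma=1.2))
```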

  3. A Glossy Simultaneous Contrast: Conjoint Measurements of Gloss and Lightness

    PubMed Central

    Mamassian, Pascal

    2017-01-01

    Interactions between the albedo and the gloss on a surface are commonplace. Darker surfaces are perceived glossier (contrast gloss) than lighter surfaces and darker backgrounds can enhance perceived lightness of surfaces. We used maximum likelihood conjoint measurements to simultaneously quantify the strength of those effects. We quantified the extent to which albedo can influence perceived gloss and physical gloss can influence perceived lightness. We modeled the contribution of lightness and gloss and found that increasing lightness reduced perceived gloss by about 32% whereas gloss had a much weaker influence on perceived lightness of about 12%. Moreover, we also investigated how different backgrounds contribute to the perception of lightness and gloss of a surface placed in front. We found that a glossy background slightly reduces the perceived lightness of the center and simultaneously enhances its perceived gloss. Lighter backgrounds reduce perceived gloss and perceived lightness. Conjoint measurements lead us to a better understanding of the contextual effects in gloss and lightness perception. Not only do we confirm the importance of contrast in gloss perception and the reduction of the simultaneous contrast with glossy backgrounds, but we also quantify precisely the strength of those effects. PMID:28203352

  4. Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.

  5. Bayesian modelling of the emission spectrum of the Joint European Torus Lithium Beam Emission Spectroscopy system.

    PubMed

    Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C

    2016-02-01

    A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of Li line without doing a separate background subtraction through modulation of the Li beam.
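
    The analytic step the abstract relies on can be sketched with a generic Gaussian linear inversion: for a linear forward model with a Gaussian prior and Gaussian noise, the posterior mean and covariance of the intensities are available in closed form. The forward model, shapes, and numbers below are hypothetical, not the JET system.

```python
# Minimal sketch (hypothetical forward model and data, not the JET analysis):
# Bayesian linear inversion with Gaussian prior and Gaussian likelihood, for
# which the posterior mean and covariance of the line intensities are analytic.
import numpy as np

rng = np.random.default_rng(7)
n_pixels, n_params = 50, 3        # spectrum pixels; [Li line, background, offset]

# Hypothetical linear forward model: detected spectrum = A @ intensities + noise.
A = np.column_stack([
    np.exp(-0.5 * ((np.arange(n_pixels) - 25) / 3.0) ** 2),   # instrumental line shape
    np.linspace(1.0, 0.8, n_pixels),                          # Bremsstrahlung shape
    np.ones(n_pixels),                                        # constant offset
])
truth = np.array([100.0, 20.0, 5.0])
noise_sigma = 2.0
d = A @ truth + rng.normal(0, noise_sigma, n_pixels)

# Gaussian prior on the intensities and Gaussian noise covariance.
prior_mean = np.zeros(n_params)
prior_cov = np.diag([1e4, 1e4, 1e4])
noise_cov = noise_sigma ** 2 * np.eye(n_pixels)

# Analytic posterior (standard Gaussian linear-inversion formulas).
gain = prior_cov @ A.T @ np.linalg.inv(A @ prior_cov @ A.T + noise_cov)
post_mean = prior_mean + gain @ (d - A @ prior_mean)
post_cov = prior_cov - gain @ A @ prior_cov
print("posterior mean:", post_mean)
print("posterior std:", np.sqrt(np.diag(post_cov)))
```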

  6. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neumann-Pearson detection.
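
    The two-threshold Wald rule itself can be sketched for the simpler white-noise case (the abstract treats correlated noise); the thresholds follow Wald's approximations and all signal and noise parameters are hypothetical.

```python
# Minimal sketch (white-noise simplification of the abstract's correlated-noise
# problem): Wald's two-threshold sequential test for a known signal s[k] in
# Gaussian noise, accumulating the log-likelihood ratio sample by sample.
import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 0.01, 0.01                  # false-alarm and miss probabilities
log_A = np.log((1 - beta) / alpha)        # upper (accept "signal") threshold
log_B = np.log(beta / (1 - alpha))        # lower (accept "noise") threshold

sigma = 1.0
signal = 0.4 * np.ones(1000)              # hypothetical known signal samples

def sprt(observations, signal, sigma):
    """Return the decision and the number of samples used."""
    llr = 0.0
    for k, x in enumerate(observations):
        # log LR of one Gaussian sample under signal+noise vs noise only.
        llr += (signal[k] * x - 0.5 * signal[k] ** 2) / sigma ** 2
        if llr >= log_A:
            return "signal present", k + 1
        if llr <= log_B:
            return "noise only", k + 1
    return "undecided", len(observations)

x_h1 = signal + rng.normal(0, sigma, size=signal.size)   # data with signal
x_h0 = rng.normal(0, sigma, size=signal.size)            # noise only
print(sprt(x_h1, signal, sigma))
print(sprt(x_h0, signal, sigma))
```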

  7. Maximum angular accuracy of pulsed laser radar in photocounting limit.

    PubMed

    Elbaum, M; Diament, P; King, M; Edelson, W

    1977-07-01

    To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.

  8. Association of Attention-Deficit/Hyperactivity Disorder and Conduct Disorder with Early Tobacco and Alcohol Use

    PubMed Central

    Brinkman, William B.; Epstein, Jeffery N.; Auinger, Peggy; Tamm, Leanne; Froehlich, Tanya E.

    2014-01-01

    Background The association of attention-deficit/hyperactivity disorder (ADHD) and conduct disorder (CD) with tobacco and alcohol use has not been assessed in a young adolescent sample representative of the U.S. population. Methods Data are from the 2000–2004 National Health and Nutrition Examination Survey, a cross-sectional sample representative of the U.S. population. Participants were age 12–15 years (N=2517). Exposure variables included diagnosis of ADHD and CD, and counts of ADHD and CD symptoms based on caregiver responses to the Diagnostic Interview Schedule for Children. Primary outcomes were adolescent-report of any use of tobacco or alcohol and age of initiating use. Multivariate logistic regression and Cox proportional hazard models were conducted. Results Adolescents with ADHD+CD diagnoses had a 3- to 5-fold increased likelihood of using tobacco and alcohol and initiated use at a younger age compared to those with neither disorder. Having ADHD alone was associated with an increased likelihood of tobacco use but not alcohol use. Hyperactive-impulsive symptom counts were not independently associated with any outcome, while every one symptom increase in inattention increased the likelihood of tobacco and alcohol use by 8–10%. Although participants with a diagnosis of CD alone (compared to those without ADHD or CD) did not have a higher likelihood of tobacco or alcohol use, for every one symptom increase in CD symptoms the odds of tobacco use increased by 31%. Conclusions ADHD and CD diagnoses and symptomatology are linked to higher risk for a range of tobacco and alcohol use outcomes among young adolescents in the U.S. PMID:25487225

  9. The Fecal Microbiota Profile and Bronchiolitis in Infants

    PubMed Central

    Linnemann, Rachel W.; Mansbach, Jonathan M.; Ajami, Nadim J.; Espinola, Janice A.; Petrosino, Joseph F.; Piedra, Pedro A.; Stevenson, Michelle D.; Sullivan, Ashley F.; Thompson, Amy D.; Camargo, Carlos A.

    2016-01-01

    BACKGROUND: Little is known about the association of gut microbiota, a potentially modifiable factor, with bronchiolitis in infants. We aimed to determine the association of fecal microbiota with bronchiolitis in infants. METHODS: We conducted a case–control study. As a part of multicenter prospective study, we collected stool samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 115 age-matched healthy controls. By applying 16S rRNA gene sequencing and an unbiased clustering approach to these 155 fecal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. RESULTS: Overall, the median age was 3 months, 55% were male, and 54% were non-Hispanic white. Unbiased clustering of fecal microbiota identified 4 distinct profiles: Escherichia-dominant profile (30%), Bifidobacterium-dominant profile (21%), Enterobacter/Veillonella-dominant profile (22%), and Bacteroides-dominant profile (28%). The proportion of bronchiolitis was lowest in infants with the Enterobacter/Veillonella-dominant profile (15%) and highest in the Bacteroides-dominant profile (44%), corresponding to an odds ratio of 4.59 (95% confidence interval, 1.58–15.5; P = .008). In the multivariable model, the significant association between the Bacteroides-dominant profile and a greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Enterobacter/Veillonella-dominant profile, 4.24; 95% confidence interval, 1.56–12.0; P = .005). In contrast, the likelihood of bronchiolitis in infants with the Escherichia-dominant or Bifidobacterium-dominant profile was not significantly different compared with those with the Enterobacter/Veillonella-dominant profile. CONCLUSIONS: In this case–control study, we identified 4 distinct fecal microbiota profiles in infants. The Bacteroides-dominant profile was associated with a higher likelihood of bronchiolitis. PMID:27354456

  10. Traumatic physical health consequences of intimate partner violence against women: what is the role of community-level factors?

    PubMed Central

    2011-01-01

    Background Intimate partner violence (IPV) against women is a serious public health issue with recognizable direct health consequences. This study assessed the association between IPV and traumatic physical health consequences on women in Nigeria, given that communities exert significant influence on the individuals that are embedded within them, with the nature of influence varying between communities. Methods Cross-sectional nationally-representative data of women aged 15 - 49 years in the 2008 Nigeria Demographic and Health Survey was used in this study. Multilevel logistic regression analysis was used to assess the association between IPV and several forms of physical health consequences. Results Bruises were the most common form of traumatic physical health consequences. In the adjusted models, the likelihood of sustaining bruises (OR = 1.91, 95% CI = 1.05 - 3.46), wounds (OR = 2.54, 95% CI = 1.31 - 4.95), and severe burns (OR = 3.20, 95% CI = 1.63 - 6.28) was significantly higher for women exposed to IPV compared to those not exposed to IPV. However, after adjusting for individual- and community-level factors, women with husbands/partners with controlling behavior, those with primary or no education, and those resident in communities with high tolerance for wife beating had a higher likelihood of experiencing IPV, whilst mean community-level education and women 24 years or younger were at lower likelihood of experiencing IPV. Conclusions Evidence from this study shows that exposure to IPV is associated with increased likelihood of traumatic physical consequences for women in Nigeria. Education and justification of wife beating were significant community-level factors associated with traumatic physical consequences, suggesting the importance of increasing women's levels of education and changing community norms that justify controlling behavior and IPV. PMID:22185323

  11. Maximum likelihood inference implies a high, not a low, ancestral haploid chromosome number in Araceae, with a critique of the bias introduced by ‘x’

    PubMed Central

    Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.

    2012-01-01

    Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850

  12. CT Pulmonary Angiography: Increasingly Diagnosing Less Severe Pulmonary Emboli

    PubMed Central

    Schissler, Andrew J.; Rozenshtein, Anna; Kulon, Michal E.; Pearson, Gregory D. N.; Green, Robert A.; Stetson, Peter D.; Brenner, David J.; D'Souza, Belinda; Tsai, Wei-Yann; Schluger, Neil W.; Einstein, Andrew J.

    2013-01-01

    Background It is unknown whether the observed increase in computed tomography pulmonary angiography (CTPA) utilization has resulted in increased detection of pulmonary emboli (PEs) with a less severe disease spectrum. Methods Trends in utilization, diagnostic yield, and disease severity were evaluated for 4,048 consecutive initial CTPAs performed in adult patients in the emergency department of a large urban academic medical center between 1/1/2004 and 10/31/2009. Transthoracic echocardiography (TTE) findings and peak serum troponin levels were evaluated to assess for the presence of PE-associated right ventricular (RV) abnormalities (dysfunction or dilatation) and myocardial injury, respectively. Statistical analyses were performed using multivariate logistic regression. Results 268 CTPAs (6.6%) were positive for acute PE, and 3,780 (93.4%) demonstrated either no PE or chronic PE. There was a significant increase in the likelihood of undergoing CTPA per year during the study period (odds ratio [OR] 1.05, 95% confidence interval [CI] 1.04–1.07, P<0.01). There was no significant change in the likelihood of having a CTPA diagnostic of an acute PE per year (OR 1.03, 95% CI 0.95–1.11, P = 0.49). The likelihood of diagnosing a less severe PE on CTPA with no associated RV abnormalities or myocardial injury increased per year during the study period (OR 1.39, 95% CI 1.10–1.75, P = 0.01). Conclusions CTPA utilization has risen with no corresponding change in diagnostic yield, resulting in an increase in PE detection. There is a concurrent rise in the likelihood of diagnosing a less clinically severe spectrum of PEs. PMID:23776522

  13. Income and Subjective Well-Being: New Insights from Relatively Healthy American Women, Ages 49-79.

    PubMed

    Wyshak, Grace

    2016-01-01

    The interest of economists, psychologists, social scientists and others in the relations of income, demographics, religion and subjective well-being has generated a vast global literature. It is apparent that biomedical research has focused largely on white men. The Women's Health Initiative Observational Study (WHI OS) was initiated in 1992. The OS represents the scientific need for social priorities to improve the health and welfare of women; it includes 93,676 relatively healthy postmenopausal women, aged 49 to 79, from diverse backgrounds. The objective of this study is to examine how lifestyle and other factors influence women's health. Data from the WHI OS questionnaire were analyzed. Statistical methods included descriptive statistics, chi-square, correlations, linear regression and analyses of covariance (GLM). New findings and insights relate primarily to general health, religion, club attendance, and likelihood of depression. The most important predictor of excellent or very good health is quality of life, and general health is a major predictor of quality of life. A great deal of strength and comfort from religion was reported by 62.98% of the women, with little variation by denomination. Greater strength and comfort from religion was related to poorer health and a lower likelihood of depression. The associations of religion with lower income are in accord with findings of cross-country studies. Attendance at clubs was associated with religion and with all factors associated with religion, except income. Though general health and likelihood of depression are highly correlated, better health is associated with higher income; however, likelihood of depression is not associated with income--contrary to conventional wisdom about socioeconomic disparities and mental health. Subjective well-being variables, with the exception of quality of life, were not associated with income. Social networks--religion and clubs--among a diverse population warrant further attention from economists, psychologists, sociologists, and others.

  14. The background model in the energy range from 0.1 MeV up to several MeV for low altitude and high inclination satellites.

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, I. V.; Arkhangelskiy, A. I.

    2016-02-01

    The physical origin of the gamma-ray background for low-altitude orbits is defined by diffuse cosmic gamma emission, atmospheric gamma rays, gamma emission formed in interactions of charged particles (both prompt and activation), and transient events such as electron precipitations and solar flares. The background conditions in the energy range from 0.1 MeV up to several MeV for low-altitude orbits differ according to the frequency of passes through the Earth Radiation Belts (ERBs, including the South Atlantic Anomaly, SAA) and the cosmic-ray rigidity. The detectors and satellite structural elements are activated by charged particles trapped in the ERBs and moving along magnetic field lines. We therefore propose a simplified polynomial model, treated separately for the polar and equatorial parts of the orbit: the background count-rate temporal profile is approximated by 4th-5th order polynomials in equatorial regions, and by linear approximations, parabolas or constants in the polar caps. The polynomial coefficients are assumed to be similar for identical spectral channels in each analyzed equatorial part, taking into account normalization coefficients derived from a study of Kp indexes within periods in which the calibration coefficients are approximately constant. The described model was successfully applied to studies of the hard X-ray and gamma-ray emission characteristics of solar flares using AVS-F apparatus data onboard the CORONAS-F satellite.
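
    A minimal sketch of this piecewise background description, fitting a 4th-order polynomial to an equatorial segment and a constant to a polar cap, is given below on hypothetical count-rate data (not AVS-F/CORONAS-F data).

```python
# Minimal sketch (hypothetical count-rate data, not AVS-F/CORONAS-F data):
# piecewise background model of the kind described -- a 4th-order polynomial
# over an equatorial orbit segment and a constant level over a polar cap.
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical background count-rate temporal profiles (counts/s vs time in s).
t_eq = np.linspace(0, 2000, 400)
rate_eq = 120 + 0.05 * t_eq - 4e-5 * t_eq ** 2 + rng.normal(0, 3, t_eq.size)
t_polar = np.linspace(0, 600, 120)
rate_polar = 80 + rng.normal(0, 3, t_polar.size)

# Equatorial segment: 4th-order polynomial approximation of the temporal profile.
eq_model = np.polynomial.Polynomial.fit(t_eq, rate_eq, deg=4)
# Polar cap: a constant (0th-order) approximation is often sufficient.
polar_level = rate_polar.mean()

residual_eq = rate_eq - eq_model(t_eq)
print("equatorial fit rms residual:", residual_eq.std())
print("polar cap level:", polar_level)
```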

  15. Inter-annual rainfall variations and suicide in New South Wales, Australia, 1964-2001

    NASA Astrophysics Data System (ADS)

    Nicholls, Neville; Butler, Colin D.; Hanigan, Ivan

    2006-01-01

    The suicide rate in New South Wales is shown to be related to annual precipitation, supporting a widespread and long-held assumption that drought in Australia increases the likelihood of suicide. The relationship, although statistically significant, is not especially strong and is confounded by strong, long-term variations in the suicide rate not related to precipitation variations. A decrease in precipitation of about 300 mm would lead to an increase in the suicide rate of approximately 8% of the long-term mean suicide rate.

  16. Process for estimating likelihood and confidence in post detonation nuclear forensics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.; Craft, Charles M.

    2014-07-01

    Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.

  17. Surface-mediated nucleation in the solid-state polymorph transformation of terephthalic acid.

    PubMed

    Beckham, Gregg T; Peters, Baron; Starbuck, Cindy; Variankaval, Narayan; Trout, Bernhardt L

    2007-04-18

    A molecular mechanism for nucleation for the solid-state polymorph transformation of terephthalic acid is presented. New methods recently developed in our group, aimless shooting and likelihood maximization, are employed to construct a model for the reaction coordinate for the two system sizes studied. The reaction coordinate approximation is validated using the committor probability analysis. The transformation proceeds via a localized, elongated nucleus along the crystal edge formed by fluctuations in the supramolecular synthons, suggesting a nucleation and growth mechanism in the macroscopic system.

  18. STANSBURY ROADLESS AREAS, UTAH.

    USGS Publications Warehouse

    Sorensen, Martin L.; Kness, Richard F.

    1984-01-01

    A mineral-resource survey of the Stansbury Roadless Areas, Utah was conducted and showed that there is little likelihood for the occurrence of metallic mineral resources in the areas. Limestone and dolomite underlie approximately 50 acres in the roadless areas and constitute a nonmetallic mineral resource of undetermined value. The oil and gas potential is not known and cannot be assessed without exploratory geophysical and drilling programs. There are no known geothermal resources. An extensive program of geophysical exploration and exploratory drilling would be necessary to determine the potential for oil and gas in the Stansbury Roadless Areas.

  19. Sleep Problems, Suicidality and Depression among American Indian Youth

    PubMed Central

    Arnold, Elizabeth Mayfield; McCall, Vaughn W; Anderson, Andrea; Bryant, Alfred; Bell, Ronny

    2014-01-01

    Study background Mental health and sleep problems are important public health concerns among adolescents yet little is known about the relationship between sleep, depressive symptoms, and suicidality among American Indian youth. Methods This study examined the impact of sleep and other factors on depressive symptoms and suicidality among Lumbee American Indian adolescents (N=80) ages 11–18. Results At the bivariate level, sleepiness was associated with depression but not with suicidality. Time in bed (TIB) was not associated with depression, but more TIB decreased the likelihood of suicidality. Higher levels of depressive symptoms were associated with increased likelihood of suicidality. At the multivariate level, sleepiness, suicidality, and self-esteem were associated with depression. TIB and depressive symptoms were the only variables associated with suicidality. Conclusion In working with American Indian youth, it may be helpful to consider sleep patterns as part of a comprehensive assessment process for youth who have or are at risk for depression and suicide. PMID:25309936

  20. The perceived employability of ex-prisoners and offenders.

    PubMed

    Graffam, Joseph; Shinkfield, Alison J; Hardcastle, Lesley

    2008-12-01

    A large-scale study was conducted to examine the perceived employability of ex-prisoners and offenders. Four participant groups comprising 596 (50.4%) employers, 234 (19.8%) employment service workers, 176 (14.9%) corrections workers, and 175 (14.8%) prisoners and offenders completed a questionnaire assessing the likelihood of a hypothetical job seeker's both obtaining and maintaining employment; the importance of specific skills and characteristics to employability; and the likelihood that ex-prisoners, offenders, and the general workforce exhibit these skills and characteristics. Apart from people with an intellectual or psychiatric disability, those with a criminal background were rated as being less likely than other disadvantaged groups to obtain and maintain employment. In addition, ex-prisoners were rated as being less likely than offenders and the general workforce to exhibit the skills and characteristics relevant to employability. Implications for the preparation and support of ex-prisoners and offenders into employment are discussed, together with broader community-wide initiatives to promote reintegration.

  1. Automated detection of exudates for diabetic retinopathy screening

    NASA Astrophysics Data System (ADS)

    Fleming, Alan D.; Philip, Sam; Goatman, Keith A.; Williams, Graeme J.; Olson, John A.; Sharp, Peter F.

    2007-12-01

    Automated image analysis is being widely sought to reduce the workload required for grading images resulting from diabetic retinopathy screening programmes. The recognition of exudates in retinal images is an important goal for automated analysis since these are one of the indicators that the disease has progressed to a stage requiring referral to an ophthalmologist. Candidate exudates were detected using a multi-scale morphological process. Based on local properties, the likelihoods of a candidate being a member of classes exudate, drusen or background were determined. This leads to a likelihood of the image containing exudates which can be thresholded to create a binary decision. Compared to a clinical reference standard, images containing exudates were detected with sensitivity 95.0% and specificity 84.6% in a test set of 13 219 images of which 300 contained exudates. Depending on requirements, this method could form part of an automated system to detect images showing either any diabetic retinopathy or referable diabetic retinopathy.
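
    A simplified sketch of the last step, turning per-candidate exudate probabilities into an image-level likelihood that is then thresholded, is given below; the noisy-OR combination rule and the threshold are assumptions for illustration, not necessarily the authors' rule.

```python
# Minimal sketch (the combination rule is an assumption, not necessarily the
# authors'): turning per-candidate class probabilities into an image-level
# probability of containing exudates, then thresholding for referral.
import numpy as np

def image_exudate_probability(candidate_probs):
    """Noisy-OR combination: probability that at least one candidate is a true exudate."""
    candidate_probs = np.asarray(candidate_probs, dtype=float)
    return 1.0 - np.prod(1.0 - candidate_probs)

# Hypothetical per-candidate probabilities from an (exudate, drusen, background)
# classifier -- only the exudate-class probability is used here.
candidates = [0.05, 0.12, 0.63, 0.08]
p_image = image_exudate_probability(candidates)

threshold = 0.5          # operating point chosen to trade sensitivity vs specificity
print(p_image, "-> refer" if p_image >= threshold else "-> no exudates detected")
```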

  2. Walking and Metabolic Syndrome in Older Adults

    PubMed Central

    Strath, Scott; Swartz, Ann; Parker, Sarah; Miller, Nora; Cieslik, Linda

    2010-01-01

    Background Little data exists describing the impact that walking has on metabolic syndrome (MetS) in a multicultural sample of older adults. Methods Walking was measured via pedometer in 150 older adults from 4 different ethnic categories. Steps per day were classified as low (<3100 steps/d) or high (≥3100 steps/d) for statistical analyses. Results Occurrence of MetS was lower in the white (33%) versus non-white population (50%). Low steps/d were related to an increase in MetS for both white (OR = 96.8, 95% CI 12.3–764.6) and non-white individuals (OR = 4.5, 95% CI 1.8–11.3). Low steps/d also increased the odds for selected components of MetS in both the white and non-white groups. Conclusion Low levels of walking increase the likelihood of having MetS in both white and non-white older adults. Efforts to increase walking in older adults may decrease the likelihood of developing this clustering of disease risk factors. PMID:18209231

  3. Legume genomics: Understanding biology through DNA and RNA sequencing

    USDA-ARS?s Scientific Manuscript database

    Background. The legume family (Leguminosae) consists of approximately 17,000 species. A few of these species including, but not limited to; Phaseolus vulgaris, Cicer arietinum, and Cajanus cajan, are important dietary components, providing the dietary protein for approximately 300 million people wor...

  4. Numerical Demons in Monte Carlo Estimation of Bayesian Model Evidence with Application to Soil Respiration Models

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2016-12-01

    Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between the goodness-of-fit and the model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling space. Estimating BME using the Monte Carlo numerical methods is preferred, as the methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although few studies alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can become a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating point number that a computer can represent) and corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, which are the prior sampling arithmetic mean (AM) and posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME by investing in computational effort, we show that arithmetic underflow can hamper AM resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These research results are useful for MC simulations to estimate Bayesian model evidence.
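
    The underflow mechanism and its standard remedy can be sketched for the prior-sampling arithmetic-mean estimator: with log-likelihoods around -900, a naive mean of exponentials underflows to zero, while the log-sum-exp form stays finite. The log-likelihood values below are hypothetical.

```python
# Minimal sketch (hypothetical log-likelihood values): the prior-sampling
# arithmetic-mean estimator of Bayesian model evidence, computed naively and
# with the log-sum-exp trick that avoids the arithmetic underflow the abstract
# describes.
import numpy as np

rng = np.random.default_rng(6)
# Log-likelihoods of prior samples for a high-dimensional problem are often
# hugely negative, so exp() underflows to zero in double precision.
log_lik = rng.normal(loc=-900.0, scale=5.0, size=10_000)

# Naive arithmetic mean: every exp underflows, giving log BME = -inf.
with np.errstate(divide="ignore"):
    naive = np.log(np.mean(np.exp(log_lik)))

def log_mean_exp(x):
    """log(mean(exp(x))) computed stably via the log-sum-exp trick."""
    m = np.max(x)
    return m + np.log(np.mean(np.exp(x - m)))

print("naive log BME:", naive)                 # -inf (underflow)
print("stable log BME:", log_mean_exp(log_lik))
```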

  5. Essays on pricing dynamics, price dispersion, and nested logit modelling

    NASA Astrophysics Data System (ADS)

    Verlinda, Jeremy Alan

    The body of this dissertation comprises three standalone essays, presented in three respective chapters. Chapter One explores the possibility that local market power contributes to the asymmetric relationship observed between wholesale costs and retail prices in gasoline markets. I exploit an original data set of weekly gas station prices in Southern California from September 2002 to May 2003, and take advantage of highly detailed station and local market-level characteristics to determine the extent to which spatial differentiation influences price-response asymmetry. I find that brand identity, proximity to rival stations, bundling and advertising, operation type, and local market features and demographics each influence a station's predicted asymmetric relationship between prices and wholesale costs. Chapter Two extends the existing literature on the effect of market structure on price dispersion in airline fares by modeling the effect at the disaggregate ticket level. Whereas past studies rely on aggregate measures of price dispersion such as the Gini coefficient or the standard deviation of fares, this paper estimates the entire empirical distribution of airline fares and documents how the shape of the distribution is determined by market structure. Specifically, I find that monopoly markets favor a wider distribution of fares with more mass in the tails while duopoly and competitive markets exhibit a tighter fare distribution. These findings indicate that the dispersion of airline fares may result from the efforts of airlines to practice second-degree price discrimination. Chapter Three adopts a Bayesian approach to the problem of tree structure specification in nested logit modelling, which requires a heavy computational burden in calculating marginal likelihoods. I compare two different techniques for estimating marginal likelihoods: (1) the Laplace approximation, and (2) reversible jump MCMC. I apply the techniques to both a simulated and a travel mode choice data set, and find that model selection is invariant to prior specification, while model derivatives like willingness-to-pay are notably sensitive to model choice. I also find that the Laplace approximation is remarkably accurate in spite of the potential for nested logit models to have irregular likelihood surfaces.
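
    The first of the two marginal-likelihood techniques can be sketched on a toy conjugate normal model (standing in for the nested logit), where the Laplace approximation can be checked against the exact value; all data and priors below are hypothetical.

```python
# Minimal sketch (toy model, not the nested-logit application): the Laplace
# approximation to a marginal likelihood,
#   log m(y) ~ log p(y|theta*) + log p(theta*) + (d/2) log(2*pi) - 0.5 log|H|,
# where theta* is the posterior mode and H is the negative Hessian of the log
# posterior at theta*. Here theta is the mean of a normal model with known variance.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(8)
y = rng.normal(1.5, 1.0, size=50)          # data, y_i ~ N(theta, 1) given theta
prior_mu, prior_sd = 0.0, 2.0              # prior: theta ~ N(0, 2^2)

def log_joint(theta):
    """Unnormalised log posterior: log likelihood + log prior."""
    return norm.logpdf(y, theta, 1.0).sum() + norm.logpdf(theta, prior_mu, prior_sd)

# Posterior mode and curvature are analytic for this conjugate toy model.
precision = y.size + 1.0 / prior_sd ** 2                 # -(d^2/dtheta^2) of log joint
theta_star = (y.sum() + prior_mu / prior_sd ** 2) / precision

laplace = log_joint(theta_star) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(precision)

# Exact log marginal likelihood (y is jointly normal); the Laplace value matches
# exactly here because the posterior is Gaussian -- a convenient check.
cov = np.eye(y.size) + prior_sd ** 2 * np.ones((y.size, y.size))
exact = multivariate_normal.logpdf(y, mean=np.full(y.size, prior_mu), cov=cov)
print("Laplace:", laplace, "exact:", exact)
```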

  6. Evidence for the associated production of a W boson and a top quark at ATLAS

    NASA Astrophysics Data System (ADS)

    Koll, James

    This thesis discusses a search for the Standard Model single top Wt-channel process. An analysis has been performed searching for the Wt-channel process using 4.7 fb^-1 of integrated luminosity collected with the ATLAS detector at the Large Hadron Collider. A boosted decision tree is trained using machine learning techniques to increase the separation between signal and background. A profile likelihood fit is used to measure the cross-section of the Wt-channel process at sigma(pp → Wt + X) = 16.8 ± 2.9 (stat) ± 4.9 (syst) pb, consistent with the Standard Model prediction. This fit is also used to generate pseudoexperiments to calculate the significance, finding an observed (expected) 3.3 sigma (3.4 sigma) excess over background.

  7. Extending the Li&Ma method to include PSF information

    NASA Astrophysics Data System (ADS)

    Nievas-Rosillo, M.; Contreras, J. L.

    2016-02-01

    The so-called Li&Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work a straightforward extension of the method for point sources that profits from the good imaging capabilities of current instruments is proposed. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations and its sensitivity is compared to standard methods which do not incorporate PSF information. The gain of significance that can be attributed to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
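
    For reference, the baseline that the proposed likelihood-ratio method extends is the Li & Ma (1983, Eq. 17) significance, sketched below with hypothetical on/off counts.

```python
# Minimal sketch: the standard Li & Ma (1983, Eq. 17) significance that the
# abstract's likelihood-ratio method extends with PSF information.
# N_on, N_off and alpha (on/off exposure ratio) are hypothetical numbers.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Significance (in sigma) of an on-source excess over background."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=130, n_off=500, alpha=0.2))   # ~ 2.6 sigma
```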

  8. Neighborhood disorder and screen time among 10-16 year old Canadian youth: A cross-sectional study

    PubMed Central

    2012-01-01

    Background Screen time activities (e.g., television, computers, video games) have been linked to several negative health outcomes among young people. In order to develop evidence-based interventions to reduce screen time, the factors that influence the behavior need to be better understood. High neighborhood disorder, which may encourage young people to stay indoors where screen time activities are readily available, is one potential factor to consider. Methods Results are based on 15,917 youth in grades 6-10 (aged 10-16 years old) who participated in the Canadian 2009/10 Health Behaviour in School-aged Children Survey (HBSC). Total hours per week of television, video games, and computer use were reported by the participating students in the HBSC student questionnaire. Ten items of neighborhood disorder including safety, neighbors taking advantage, drugs/drinking in public, ethnic tensions, gangs, crime, conditions of buildings/grounds, abandoned buildings, litter, and graffiti were measured using the HBSC student questionnaire, the HBSC administrator questionnaire, and Geographic Information Systems. Based upon these 10 items, social and physical neighborhood disorder variables were derived using principal component analysis. Multivariate multilevel logistic regression analyses were used to examine the relationship between social and physical neighborhood disorder and individual screen time variables. Results High (top quartile) social neighborhood disorder was associated with approximately 35-45% increased risk of high (top quartile) television, computer, and video game use. Physical neighborhood disorder was not associated with screen time activities after adjusting for social neighborhood disorder. However, high social and physical neighborhood disorder combined was associated with approximately 40-60% increased likelihood of high television, computer, and video game use. Conclusion High neighborhood disorder is one environmental factor that may be important to consider for future public health interventions and strategies aiming to reduce screen time among youth. PMID:22651908

  9. Cosmic background radiation anisotropy in an open inflation, cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Kamionkowski, Marc; Ratra, Bharat; Spergel, David N.; Sugiyama, Naoshi

    1994-01-01

    We compute the cosmic background radiation anisotropy, produced by energy-density fluctuations generated during an early epoch of inflation, in an open cosmological model based on the cold dark matter scenario. At Omega_0 ≈ 0.3-0.4, the Cosmic Background Explorer (COBE) normalized open model appears to be consistent with most observations.

  10. Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.

    PubMed

    Tvedebrink, Torben; Morling, Niels

    2015-12-01

    The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded. This approach may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original as several authors previously have published results accounting for close familial relationships. However, we revisit the discussion to increase the awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant for the Danish society suggests that the threshold of likelihood ratios should approximately be between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept registries will imply larger thresholds on the likelihood ratio as the monozygotic twin explanation gets less probable. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
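
    A heavily simplified illustration of such a truncation (an assumption for illustration, not the authors' derivation) caps the reported likelihood ratio at roughly one over the probability of an unrecognised monozygotic twin; the prevalence and non-recognition figures below are hypothetical.

```python
# Simplified illustration (an assumption, not the authors' derivation): if a
# monozygotic (MZ) twin of the suspect could exist unrecognised, the effective
# likelihood ratio cannot meaningfully exceed roughly 1 over the probability of
# that event, so the reported LR is truncated at that ceiling.
def truncated_lr(lr, p_mz_twin=0.004, p_unrecognised=0.0005):
    """Cap the DNA likelihood ratio at ~1 / P(unrecognised MZ twin)."""
    ceiling = 1.0 / (p_mz_twin * p_unrecognised)
    return min(lr, ceiling)

# A profile-match LR of 1e12 would be reported as at most ~5e5 under these
# hypothetical prevalence and non-recognition figures.
print(truncated_lr(1e12))
```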

  11. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects, are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters, obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
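
    The 'directly from the likelihood landscape' idea can be sketched with a one-parameter profile-likelihood interval for a simple logistic dose-response model standing in for the CV NTCP model; the data, model, and grid below are hypothetical.

```python
# Minimal sketch (hypothetical data; a simple logistic dose-response stands in
# for the critical-volume NTCP model): a profile-likelihood confidence interval
# read directly from the likelihood landscape, the approach the abstract finds
# most reliable.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical dose-response data: dose (Gy), number irradiated, number with complication.
dose = np.array([20, 30, 40, 50, 60, 70, 80], dtype=float)
n = np.array([20, 20, 20, 20, 20, 20, 20])
k = np.array([0, 1, 3, 7, 12, 16, 19])

def neg_log_lik(params):
    d50, slope = params
    p = np.clip(expit(slope * (dose - d50)), 1e-12, 1 - 1e-12)
    return -(k * np.log(p) + (n - k) * np.log(1 - p)).sum()

fit = minimize(neg_log_lik, x0=[50.0, 0.1], method="Nelder-Mead")
nll_min = fit.fun

# Profile likelihood for D50: re-maximise over the slope at each fixed D50 and
# keep the values where the deviance rises by less than chi2_{1, 0.95} = 3.84.
grid = np.linspace(40, 70, 301)
profile = np.array([
    minimize(lambda s, d=d: neg_log_lik([d, s[0]]), x0=[fit.x[1]],
             method="Nelder-Mead").fun
    for d in grid
])
inside = grid[2 * (profile - nll_min) <= 3.84]
print(f"D50 = {fit.x[0]:.1f} Gy, 95% profile CI ~ [{inside.min():.1f}, {inside.max():.1f}] Gy")
```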

  12. Optical Properties of Boreal Region Biomass Burning Aerosols in Central Alaska and Seasonal Variation of Aerosol Optical Depth at an Arctic Coastal Site

    NASA Technical Reports Server (NTRS)

    Eck, T. F.; Holben, B. N.; Reid, J. S.; Sinyuk, A.; Hyer, E. J.; O'Neill, N. T.; Shaw, G. E.; VandeCastle, J. R.; Chapin, F. S.; Dubovik, O.

    2010-01-01

    Long-term monitoring of aerosol optical properties at a boreal forest AERONET site in interior Alaska was performed from 1994 through 2008 (excluding winter). Large interannual variability was observed, with some years showing near background aerosol optical depth (AOD) levels (<0.1 at 500 nm) while 2004 and 2005 had August monthly means similar in magnitude to peak months at major tropical biomass burning regions. Single scattering albedo (omega_0; 440 nm) at the boreal forest site ranged from approximately 0.91 to 0.99 with an average of approximately 0.96 for observations in 2004 and 2005. This suggests a significant amount of smoldering combustion of woody fuels and peat/soil layers that would result in relatively low black carbon mass fractions for smoke particles. The fine mode particle volume median radius during the heavy burning years was quite large, averaging approximately 0.17 micron at AOD(440 nm) = 0.1 and increasing to approximately 0.25 micron at AOD(440 nm) = 3.0. This large particle size for biomass burning aerosols results in a greater relative scattering component of extinction and, therefore, also contributes to higher omega_0. Additionally, monitoring at an Arctic Ocean coastal site (Barrow, Alaska) suggested transport of smoke to the Arctic in summer resulting in individual events with much higher AOD than that occurring during typical spring Arctic haze. However, the springtime mean AOD(500 nm) is higher during late March through late May (approximately 0.150) than during summer months (approximately 0.085) at Barrow partly due to very few days with low background AOD levels in spring compared with many days with clean background conditions in summer.

  13. Large-angle cosmic microwave background anisotropies in an open universe

    NASA Technical Reports Server (NTRS)

    Kamionkowski, Marc; Spergel, David N.

    1994-01-01

    If the universe is open, scales larger than the curvature scale may be probed by observation of large-angle fluctuations in the cosmic microwave background (CMB). We consider primordial adiabatic perturbations and discuss power spectra that are power laws in volume, wavelength, and eigenvalue of the Laplace operator. Such spectra may have arisen if, for example, the universe underwent a period of `frustrated' inflation. The resulting large-angle anisotropies of the CMB are computed. The amplitude generally increases as Omega is decreased but decreases as h is increased. Interestingly enough, for all three Ansaetze, anisotropies on angular scales larger than the curvature scale are suppressed relative to the anisotropies on scales smaller than the curvature scale, but cosmic variance makes discrimination between various models difficult. Models with 0.2 approximately less than Omega h approximately less than 0.3 appear compatible with CMB fluctuations detected by Cosmic Background Explorer Satellite (COBE) and the Tenerife experiment and with the amplitude and spectrum of fluctuations of galaxy counts in the APM, CfA, and 1.2 Jy IRAS surveys. COBE normalization for these models yields sigma(sub 8) approximately = 0.5 - 0.7. Models with smaller values of Omega h when normalized to COBE require bias factors in excess of 2 to be compatible with the observed galaxy counts on the 8/h Mpc scale. Requiring that the age of the universe exceed 10 Gyr implies that Omega is approximately greater than 0.25. Apart from the last-scattering term in the Sachs-Wolfe formula, large-angle anisotropies come primarily from the decay of potential fluctuations at z approximately less than 1/Omega. Thus, if the universe is open, COBE has been detecting temperature fluctuations produced at moderate redshift rather than at z approximately 1300.

  14. The Relationship Between Hope and Adolescent Likelihood to Endorse Substance Use Behaviors in a Sample of Marginalized Youth.

    PubMed

    Brooks, Merrian J; Marshal, Michael P; McCauley, Heather L; Douaihy, Antoine; Miller, Elizabeth

    2016-11-09

    Hopefulness has been associated with increased treatment retention and reduced substance abuse among adults, and may be a promising modifiable factor to leverage in substance abuse treatment settings. Few studies have assessed the relationship between hopefulness and substance use in adolescents, particularly those with high-risk backgrounds. We explored whether high hope is associated with less likelihood for engaging in a variety of substance use behaviors in a sample of marginalized adolescents. Using logistic regression, we assessed results from a cross-sectional anonymous youth behavior survey (n = 256 youth, ages 14 to 19). We recruited from local youth serving agencies (e.g., homeless shelters, group homes, short-term detention). The sample was almost 60% male and two thirds African American. Unadjusted models showed youth with higher hope had a 50-58% (p < .05) decreased odds of endorsing heavy episodic drinking, daily tobacco use, recent or lifetime marijuana use, and sex after using substances. Adjusted models showed a 52% decreased odds of lifetime marijuana use with higher hope, and a trend towards less sex after substance use (AOR 0.481; p = 0.065). No other substance use behaviors remained significantly associated with higher hope scores in adjusted models. Hopefulness may contribute to decreased likelihood of substance use in adolescents. Focusing on hope may be one modifiable target in a comprehensive primary or secondary substance use prevention program.

  15. Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism

    PubMed Central

    Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G

    2006-01-01

    Background: Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. Results: A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Conclusion: Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism. PMID:16472391

  16. On the insufficiency of arbitrarily precise covariance matrices: non-Gaussian weak-lensing likelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heavens, Alan F.

    2018-01-01

    We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for S8 is shifted without broadening, but we find no significant reduction in the tension with S8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.

  17. Bayesian Hierarchical Random Effects Models in Forensic Science.

    PubMed

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  18. Detection methods for stochastic gravitational-wave backgrounds: a unified treatment

    NASA Astrophysics Data System (ADS)

    Romano, Joseph D.; Cornish, Neil J.

    2017-04-01

    We review detection methods that are currently in use or have been proposed to search for a stochastic background of gravitational radiation. We consider both Bayesian and frequentist searches using ground-based and space-based laser interferometers, spacecraft Doppler tracking, and pulsar timing arrays; and we allow for anisotropy, non-Gaussianity, and non-standard polarization states. Our focus is on relevant data analysis issues, and not on the particular astrophysical or early Universe sources that might give rise to such backgrounds. We provide a unified treatment of these searches at the level of detector response functions, detection sensitivity curves, and, more generally, at the level of the likelihood function, since the choice of signal and noise models and prior probability distributions are actually what define the search. Pedagogical examples are given whenever possible to compare and contrast different approaches. We have tried to make the article as self-contained and comprehensive as possible, targeting graduate students and new researchers looking to enter this field.

  19. Background characterization of an ultra-low background liquid scintillation counter

    DOE PAGES

    Erchinger, J. L.; Orrell, John L.; Aalseth, C. E.; ...

    2017-01-26

    The Ultra-Low Background Liquid Scintillation Counter developed by Pacific Northwest National Laboratory will expand the application of liquid scintillation counting by enabling lower detection limits and smaller sample volumes. By reducing the overall count rate of the background environment approximately 2 orders of magnitude below that of commercially available systems, backgrounds on the order of tens of counts per day over an energy range of ~3–3600 keV can be realized. Finally, initial test results of the ULB LSC show promising results for ultra-low background detection with liquid scintillation counting.

  20. Estimate of the cosmological bispectrum from the MAXIMA-1 cosmic microwave background map.

    PubMed

    Santos, M G; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Magueijo, J; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D; Wu, J H P

    2002-06-17

    We use the measurement of the cosmic microwave background taken during the MAXIMA-1 flight to estimate the bispectrum of cosmological perturbations. We propose an estimator for the bispectrum that is appropriate in the flat sky approximation, apply it to the MAXIMA-1 data, and evaluate errors using bootstrap methods. We compare the estimated value with what would be expected if the sky signal were Gaussian and find that it is indeed consistent, with a chi(2) per degree of freedom of approximately unity. This measurement places constraints on models of inflation.

  1. Limits, discovery and cut optimization for a Poisson process with uncertainty in background and signal efficiency: TRolke 2.0

    NASA Astrophysics Data System (ADS)

    Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.

    2010-03-01

    A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian, and Poissonian uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0. Catalogue identifier: AEFT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: MIT license. No. of lines in distributed program, including test data, etc.: 3431. No. of bytes in distributed program, including test data, etc.: 21 789. Distribution format: tar.gz. Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac-OS X 10.5.8). RAM: ~20 MB. Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: Profile likelihood method, analytical. Running time: <10 seconds per extracted limit.
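
    A minimal sketch of the profile-likelihood construction described above, for the simplest of the supported cases: a Poisson-distributed count with a Gaussian uncertainty on the expected background. The observed count, background estimate and confidence level are illustrative assumptions, and this grid scan is not the TRolke implementation itself.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import chi2, norm, poisson

    n_obs, b_est, sigma_b, cl = 7, 3.2, 0.5, 0.90     # illustrative inputs

    def neg_loglik(s, b):
        # -log L for the observed count given signal s and true background b,
        # with the background measurement entering as a Gaussian constraint.
        return -(poisson.logpmf(n_obs, s + b) + norm.logpdf(b_est, loc=b, scale=sigma_b))

    def profile(s):
        # Minimise over the nuisance background for a fixed signal strength s.
        res = minimize_scalar(lambda b: neg_loglik(s, b),
                              bounds=(1e-6, b_est + 10 * sigma_b), method="bounded")
        return res.fun

    s_grid = np.linspace(0.0, 25.0, 501)
    prof = np.array([profile(s) for s in s_grid])

    # One-sided upper limit: allow -lnL to rise by chi2(2*CL-1, 1 dof)/2 above its
    # minimum (the usual conversion between one-sided limits and two-sided intervals).
    cut = chi2.ppf(2 * cl - 1, df=1) / 2.0
    allowed = s_grid[prof - prof.min() <= cut]
    print("90% CL upper limit on the signal:", allowed.max())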

  2. A Measurement of the Top Quark Mass in 1.96 TeV Proton-Antiproton Collisions Using a Novel Matrix Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, John

    A measurement of the top quark mass in $t\bar{t}$ → l + jets candidate events, obtained from $p\bar{p}$ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector, is presented. The measurement approach is that of a matrix element method. For each candidate event, a two dimensional likelihood is calculated in the top pole mass and a constant scale factor, 'JES', where JES multiplies the input particle jet momenta and is designed to account for the systematic uncertainty of the jet momentum reconstruction. As with all matrix element techniques, the method involves an integration using the Standard Model matrix element for $t\bar{t}$ production and decay. However, the technique presented is unique in that the matrix element is modified to compensate for kinematic assumptions which are made to reduce computation time. Background events are dealt with through use of an event observable which distinguishes signal from background, as well as through a cut on the value of an event's maximum likelihood. Results are based on a 955 pb^-1 data sample, using events with a high-p_T lepton and exactly four high-energy jets, at least one of which is tagged as coming from a b quark; 149 events pass all the selection requirements. They find M_meas = 169.8 ± 2.3(stat.) ± 1.4(syst.) GeV/c^2.

  3. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
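
    A minimal sketch of the kind of Gaussian likelihood evaluation involved, with the covariance built as a sum of white and power-law components; the power-law part uses the commonly quoted fractional-differencing (Hosking/Kasdin) filter. The amplitudes, spectral index and stand-in residual series are illustrative assumptions, not the est_noise algorithm itself.

    import numpy as np

    def powerlaw_transform(n_obs, index):
        # Lower-triangular filter whose output is power-law noise with spectrum 1/f^index.
        h = np.zeros(n_obs)
        h[0] = 1.0
        for k in range(1, n_obs):
            h[k] = h[k - 1] * (k - 1.0 + index / 2.0) / k
        T = np.zeros((n_obs, n_obs))
        for i in range(n_obs):
            T[i, :i + 1] = h[i::-1]        # T[i, j] = h[i - j]
        return T

    def neg_loglik(params, residuals):
        sig_w, sig_pl, index = params
        n_obs = len(residuals)
        T = powerlaw_transform(n_obs, index)
        C = sig_w**2 * np.eye(n_obs) + sig_pl**2 * (T @ T.T)   # white + power-law covariance
        L = np.linalg.cholesky(C)
        z = np.linalg.solve(L, residuals)
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        return 0.5 * (n_obs * np.log(2.0 * np.pi) + logdet + z @ z)

    rng = np.random.default_rng(0)
    residuals = rng.normal(size=200)                   # stand-in detrended series
    print(neg_loglik([1.0, 0.5, 1.5], residuals))      # -log L for one trial noise model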

  4. Spin-diffusions and diffusive molecular dynamics

    NASA Astrophysics Data System (ADS)

    Farmer, Brittan; Luskin, Mitchell; Plecháč, Petr; Simpson, Gideon

    2017-12-01

    Metastable configurations in condensed matter typically fluctuate about local energy minima at the femtosecond time scale before transitioning between local minima after nanoseconds or microseconds. This vast scale separation limits the applicability of classical molecular dynamics (MD) methods and has spurred the development of a host of approximate algorithms. One recently proposed method is diffusive MD which aims at integrating a system of ordinary differential equations describing the likelihood of occupancy by one of two species, in the case of a binary alloy, while quasistatically evolving the locations of the atoms. While diffusive MD has shown itself to be efficient and provide agreement with observations, it is fundamentally a model, with unclear connections to classical MD. In this work, we formulate a spin-diffusion stochastic process and show how it can be connected to diffusive MD. The spin-diffusion model couples a classical overdamped Langevin equation to a kinetic Monte Carlo model for exchange amongst the species of a binary alloy. Under suitable assumptions and approximations, spin-diffusion can be shown to lead to diffusive MD type models. The key assumptions and approximations include a well-defined time scale separation, a choice of spin-exchange rates, a low temperature approximation, and a mean field type approximation. We derive several models from different assumptions and show their relationship to diffusive MD. Differences and similarities amongst the models are explored in a simple test problem.

  5. Exact likelihood evaluations and foreground marginalization in low resolution WMAP data

    NASA Astrophysics Data System (ADS)

    Slosar, Anže; Seljak, Uroš; Makarov, Alexey

    2004-06-01

    The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favorite cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK2. On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK2, the probability of having the true value above 1200 μK2 (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving αs=-0.022±0.033 for the combined case. The new analysis prefers about a 1σ lower value of Ωm, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.

  6. Students' Moral Reasoning as Related to Cultural Background and Educational Experience.

    ERIC Educational Resources Information Center

    Bar-Yam, Miriam; And Others

    The relationship between moral development and cultural and educational background is examined. Approximately 120 Israeli youth representing different social classes, sex, religious affiliation, and educational experience were interviewed. The youth interviewed included urban middle and lower class students, Kibbutz-born, Youth Aliyah…

  7. Habitual physical activity and the risk for depressive and anxiety disorders among older men and women.

    PubMed

    Pasco, Julie A; Williams, Lana J; Jacka, Felice N; Henry, Margaret J; Coulson, Carolyn E; Brennan, Sharon L; Leslie, Eva; Nicholson, Geoffrey C; Kotowicz, Mark A; Berk, Michael

    2011-03-01

    Regular physical activity is generally associated with psychological well-being, although there are relatively few prospective studies in older adults. We investigated habitual physical activity as a risk factor for de novo depressive and anxiety disorders in older men and women from the general population. In this nested case-control study, subjects aged 60 years or more were identified from randomly selected cohorts being followed prospectively in the Geelong Osteoporosis Study. Cases were individuals with incident depressive or anxiety disorders, diagnosed using the Structured Clinical Interview for DSM-IV-TR (SCID-I/NP); controls had no history of these disorders. Habitual physical activity, measured using a validated questionnaire, and other exposures were documented at baseline, approximately four years prior to psychiatric interviews. Those with depressive or anxiety disorders that pre-dated baseline were excluded. Of 547 eligible subjects, 14 developed de novo depressive or anxiety disorders and were classified as cases; 533 controls remained free of disease. Physical activity was protective against the likelihood of depressive and anxiety disorders; OR = 0.55 (95% CI 0.32-0.94), p = 0.03; each standard deviation increase in the transformed physical activity score was associated with an approximate halving in the likelihood of developing depressive or anxiety disorders. Leisure-time physical activity contributed substantially to the overall physical activity score. Age, gender, smoking, alcohol consumption, weight and socioeconomic status did not substantially confound the association. This study provides evidence consistent with the notion that higher levels of habitual physical activity are protective against the subsequent risk of development of de novo depressive and anxiety disorders.

  8. Measuring Preferences for a Diabetes Pay-for-Performance for Patient (P4P4P) Program using a Discrete Choice Experiment.

    PubMed

    Chen, Tsung-Tai; Tung, Tao-Hsin; Hsueh, Ya-Seng Arthur; Tsai, Ming-Han; Liang, Hsiu-Mei; Li, Kay-Lun; Chung, Kuo-Piao; Tang, Chao-Hsiun

    2015-07-01

    To elicit a patient's willingness to participate in a diabetes pay-for-performance for patient (P4P4P) program using a discrete choice experiment method. The survey was conducted in March 2013. Our sample was drawn from patients with diabetes at five hospitals in Taiwan (International Classification of Diseases, Ninth Revision, Clinical Modification code 250). The sample size was 838 patients. The discrete choice experiment questionnaire included the attributes monthly cash rewards, exercise time, diet control, and program duration. We estimated a bivariate probit model to derive willingness-to-accept levels after accounting for the characteristics (e.g., severity and comorbidity) of patients with diabetes. The preferred program was a 3-year program involving 30 minutes of exercise per day and flexible diet control. Offering an incentive of approximately US $67 in cash per month appears to increase the likelihood that patients with diabetes will participate in the preferred P4P4P program by approximately 50%. Patients with more disadvantageous characteristics (e.g., elderly, low income, greater comorbidity, and severity) could have less to gain from participating in the program and thus require a higher monetary incentive to compensate for the disutility caused by participating in the program's activities. Our result demonstrates that a modest financial incentive could increase the likelihood of program participation after accounting for the attributes of the P4P4P program and patients' characteristics. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  9. Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.

    2008-04-01

    We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The directions-of-arrival determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very-short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.

  10. Efficiency of the human observer for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds.

    PubMed

    Park, Subok; Gallas, Bradon D; Badano, Aldo; Petrick, Nicholas A; Myers, Kyle J

    2007-04-01

    A previous study [J. Opt. Soc. Am. A22, 3 (2005)] has shown that human efficiency for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds is approximately 4%. This human efficiency is much less than the reported 40% efficiency that has been documented for Gaussian-distributed lumpy backgrounds [J. Opt. Soc. Am. A16, 694 (1999) and J. Opt. Soc. Am. A18, 473 (2001)]. We conducted a psychophysical study with a number of changes, specifically in display-device calibration and data scaling, from the design of the aforementioned study. Human efficiency relative to the ideal observer was found again to be approximately 5%. Our variance analysis indicates that neither scaling nor display made a statistically significant difference in human performance for the task. We conclude that the non-Gaussian distributed lumpy background is a major factor in our low human-efficiency results.

  11. Diagnostic accuracy of history and physical examination in bacterial acute rhinosinusitis.

    PubMed

    Autio, Timo J; Koskenkorva, Timo; Närkiö, Mervi; Leino, Tuomo K; Koivunen, Petri; Alho, Olli-Pekka

    2015-07-01

    To evaluate the diagnostic accuracy of symptoms, the symptom progression pattern, and clinical signs in identifying bacterial acute rhinosinusitis (ARS). We conducted an inception cohort study among 50 military recruits with ARS. We collected symptoms daily from the onset of symptoms to approximately 10 days. At 9 to 10 days, standardized data on symptoms and physical findings were gathered. A positive culture of maxillary sinus aspirate was considered to be the reference standard for bacterial ARS. At 9 to 10 days, the presence or deterioration after 5 days of any of the symptoms could not be used to diagnose bacterial ARS. Toothache had an adequate positive likelihood ratio (LR+ 4.4) but was too rare to be used for screening. In contrast, several physical findings at 9 to 10 days were of more diagnostic use and frequent enough for screening. Moderate or profuse (vs. none/minimal) amount of secretion in the nasal passage seen in anterior rhinoscopy satisfactorily either ruled in, if present (LR+ 3.2), or ruled out, if absent (negative likelihood ratio 0.2), bacterial ARS. If any secretion was seen in the posterior pharynx or middle meatus, the probability of bacterial ARS increased markedly (LR+ 5.3 and LR+ 11.0, respectively). We found symptoms or their change to be of little use in identifying bacterial ARS. In contrast, we observed several clinical findings after 9 to 10 days of symptoms to predict bacterial ARS quite accurately. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  12. Age and aphasia: a review of presence, type, recovery and clinical outcomes.

    PubMed

    Ellis, Charles; Urban, Stephanie

    2016-12-01

    Each year approximately 100,000 stroke survivors are diagnosed with aphasia. Although stroke is associated with age, the relationship between age and aphasia is less clear. To complete a review of the literature to examine the relationship between age and: (a) presence or likelihood of aphasia after stroke, (b) aphasia type, (c) aphasia recovery patterns, and (d) aphasia clinical outcomes. Articles were identified by a comprehensive search of "OneSearch," PubMed, and individual journals: Aphasiology, Stroke and the Journal of Stroke and Cerebrovascular Diseases. Inclusion criteria included: age and incidence of aphasia, likelihood of aphasia, aphasia recovery, and aphasia clinical outcome. Independent searches were completed by the authors. Each author independently assessed the full text of reports meeting inclusion criteria. Differences regarding study eligibility and need to proceed with data extraction were resolved by consensus. 1617 articles were identified during the initial search. Forty studies including 14,795 study participants were included in the review. The review generally demonstrated that: (a) stroke patients with aphasia are typically older than stroke patients without aphasia and (b) aphasia type and age are associated as younger patients with aphasia are more likely to exhibit non-fluent or Broca's type of aphasia. In contrast, studies examining aphasia recovery and aphasia clinical outcomes did not demonstrate a positive relationship between age and recovery or clinical outcomes. Stroke is a condition of the elderly. However, age appears to only influence likelihood of aphasia and aphasia type.

  13. Fast automated analysis of strong gravitational lenses with convolutional neural networks.

    PubMed

    Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  14. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
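
    A minimal sketch of the quantities just described: the approximate beta-Poisson dose-response curve, the validity measure Pr(0 < r < 1 | α̂, β̂) with r taken as gamma distributed with shape α̂ and rate β̂ (the distribution from which the approximation arises), and the rule-of-thumb check. The parameter values are illustrative only.

    import numpy as np
    from scipy.stats import gamma

    def p_infection_approx(dose, alpha, beta):
        # Approximate beta-Poisson model: P_I(d) = 1 - (1 + d/beta)**(-alpha).
        return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

    def validity_measure(alpha, beta):
        # Pr(0 < r < 1) for r ~ Gamma(shape=alpha, rate=beta).
        return gamma.cdf(1.0, a=alpha, scale=1.0 / beta)

    alpha_hat, beta_hat = 0.15, 48.0                   # illustrative MLEs
    doses = np.array([1.0, 10.0, 100.0, 1000.0])
    print("P_I(d):", p_infection_approx(doses, alpha_hat, beta_hat))
    print("validity measure:", validity_measure(alpha_hat, beta_hat))
    print("rule of thumb satisfied:", beta_hat > (22.0 * alpha_hat) ** 0.50)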

  15. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  16. Analysis of an all-digital maximum likelihood carrier phase and clock timing synchronizer for eight phase-shift keying modulation

    NASA Astrophysics Data System (ADS)

    Degaudenzi, Riccardo; Vanghi, Vieri

    1994-02-01

    An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and numerical results checked by means of computer simulations. An approximate expression for steady-state carrier phase and clock timing mean square error has been derived and successfully checked with simulation findings. Synchronizer deviation from the Cramer Rao bound is also discussed. Mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.

  17. Northwest Climate Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mote, P.; Dalton, M. M.; Snover, A. K.

    2012-12-01

    As part of the US National Climate Assessment, the Northwest region undertook a process of climate risk assessment. This process included an expert evaluation of previously identified impacts, their likelihoods, and consequences, and engaged experts from both academia and natural resource management practice (federal, tribal, state, local, private, and non-profit) in a workshop setting. An important input was a list of 11 risks compiled by state agencies in Oregon and similar adaptation efforts in Washington. By considering jointly the likelihoods, consequences, and adaptive capacity, participants arrived at an approximately ranked list of risks which was further assessed and prioritized through a series of risk scoring exercises to arrive at the top three climate risks facing the Northwest: 1) changes in amount and timing of streamflow related to snowmelt, causing far-reaching ecological and socioeconomic consequences; 2) coastal erosion and inundation, and changing ocean acidity, combined with low adaptive capacity in the coastal zone to create large risks; and 3) the combined effects of wildfire, insect outbreaks, and diseases will cause large areas of forest mortality and long-term transformation of forest landscapes.

  18. Automated thematic mapping and change detection of ERTS-A images. [digital interpretation of Arizona imagery

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurements of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with a high accuracy. Then, the spatial features were combined with spectral features and using the maximum likelihood criterion the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month but vary substantially between seasons.

  19. Box-Cox transformation for QTL mapping.

    PubMed

    Yang, Runqing; Yi, Nengjun; Xu, Shizhong

    2006-01-01

    The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
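
    A minimal sketch of treating the transformation factor as an unknown parameter and choosing it by maximum likelihood, here for a plain normal model of the transformed phenotype rather than a full QTL-mapping model; the synthetic phenotype data are illustrative only.

    import numpy as np

    def boxcox(y, lam):
        # Box-Cox transformation with factor lam; lam = 0 reduces to the log transform.
        return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

    def profile_loglik(lam, y):
        # Normal log-likelihood of the transformed data with mean and variance
        # profiled out analytically, plus the Jacobian of the transformation.
        z = boxcox(y, lam)
        sigma2 = np.var(z)
        return -0.5 * len(y) * np.log(sigma2) + (lam - 1.0) * np.sum(np.log(y))

    rng = np.random.default_rng(1)
    y = np.exp(rng.normal(size=500))                   # right-skewed toy phenotype
    grid = np.linspace(-1.0, 1.5, 251)
    ll = np.array([profile_loglik(lam, y) for lam in grid])
    print("estimated transformation factor:", grid[np.argmax(ll)])   # near 0 => log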

  20. Maximum likelihood sequence estimation for optical complex direct modulation.

    PubMed

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  1. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
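
    A minimal sketch of scoring one candidate sampling location by its expected relative entropy (expected information gain), estimated with nested Monte Carlo over prior draws. The one-parameter sine forward model standing in for the contaminant-transport solver (or its sparse-grid surrogate), the Gaussian prior and the noise level are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    N, M, noise = 400, 400, 0.1

    def forward(theta, design):
        # Stand-in for the transport model evaluated at one candidate location.
        return np.sin(design * theta)

    def expected_information_gain(design):
        theta_outer = rng.normal(1.0, 0.5, size=N)     # prior draws
        theta_inner = rng.normal(1.0, 0.5, size=M)
        total = 0.0
        for th in theta_outer:
            y = forward(th, design) + rng.normal(0.0, noise)
            log_like = -0.5 * ((y - forward(th, design)) / noise) ** 2
            like_inner = np.exp(-0.5 * ((y - forward(theta_inner, design)) / noise) ** 2)
            # Gaussian normalisation constants cancel in the difference below.
            total += log_like - np.log(like_inner.mean())
        return total / N

    candidates = np.linspace(0.5, 3.0, 6)              # candidate sampling locations
    scores = [expected_information_gain(d) for d in candidates]
    print("best candidate location:", candidates[int(np.argmax(scores))])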

  2. Memory for faces: the effect of facial appearance and the context in which the face is encountered.

    PubMed

    Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio

    2015-03-01

    We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.

  3. Plate tectonics and biogeographical patterns of the Pseudophoxinus (Pisces: Cypriniformes) species complex of central Anatolia, Turkey.

    PubMed

    Hrbek, Tomas; Stölting, Kai N; Bardakci, Fevzi; Küçük, Fahrettin; Wildekamp, Rudolf H; Meyer, Axel

    2004-07-01

    We investigated the phylogenetic relationships of Pseudophoxinus (Cyprinidae: Leuciscinae) species from central Anatolia, Turkey to test the hypothesis of geographic speciation driven by early Pliocene orogenic events. We analyzed 1141 aligned base pairs of the complete cytochrome b mitochondrial gene. Phylogenetic relationships reconstructed by maximum likelihood, Bayesian likelihood, and maximum parsimony methods are identical, and generally well supported. Species and clades are restricted to geologically well-defined units, and are deeply divergent from each other. The basal diversification of central Anatolian Pseudophoxinus is estimated to have occurred approximately 15 million years ago. Our results are in agreement with a previous study of the Anatolian fish genus Aphanius that also shows a diversification pattern driven by the Pliocene orogenic events. The distribution of clades of Aphanius and Pseudophoxinus overlap, and areas of distribution comprise the same geological units. The geological history of Anatolia is likely to have had a major impact on the diversification history of many taxa occupying central Anatolia; many of these taxa are likely to be still unrecognized as distinct. Copyright 2004 Elsevier Inc.

  4. Modeling, estimation and identification methods for static shape determination of flexible structures. [for large space structure design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.

  5. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
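
    A minimal sketch of the slope method with inverse-variance weighted least squares: fit ln(P R^2) against range R, so the slope estimates -2 times the extinction coefficient and the intercept the zero-range term. The extinction value, noise level and range grid are synthetic and illustrative only.

    import numpy as np

    rng = np.random.default_rng(2)
    R = np.linspace(200.0, 2000.0, 60)                 # ranges [m]
    sigma_true = 1.0e-3                                # extinction coefficient [1/m]
    noise_std = 50.0                                   # additive Gaussian signal noise
    P_clean = 1.0e12 * np.exp(-2.0 * sigma_true * R) / R**2
    P = P_clean + rng.normal(scale=noise_std, size=R.size)

    S = np.log(P * R**2)                               # range-corrected log signal
    w = (P / noise_std) ** 2                           # inverse variance of S, since var(ln P) ~ (noise/P)^2

    # Weighted linear fit S = a + b*R via the weighted normal equations.
    A = np.vstack([np.ones_like(R), R]).T
    W = np.diag(w)
    intercept, slope = np.linalg.solve(A.T @ W @ A, A.T @ W @ S)
    print("retrieved extinction [1/m]:", -slope / 2.0)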

  6. Background characterization of an ultra-low background liquid scintillation counter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erchinger, J. L.; Orrell, John L.; Aalseth, C. E.

    The Ultra-Low Background Liquid Scintillation Counter developed by Pacific Northwest National Laboratory will expand the application of liquid scintillation counting by enabling lower detection limits and smaller sample volumes. By reducing the overall count rate of the background environment approximately 2 orders of magnitude below that of commercially available systems, backgrounds on the order of tens of counts per day over an energy range of ~3–3600 keV can be realized. Finally, initial test results of the ULB LSC show promising results for ultra-low background detection with liquid scintillation counting.

  7. Is knowing believing? The role of event plausibility and background knowledge in planting false beliefs about the personal past.

    PubMed

    Pezdek, Kathy; Blandon-Gitlin, Iris; Lam, Shirley; Hart, Rhiannon Ellis; Schooler, Jonathan W

    2006-12-01

    False memories are more likely to be planted for plausible than for implausible events, but does just knowing about an implausible event make individuals more likely to think that the event happened to them? Two experiments assessed the independent contributions of plausibility and background knowledge to planting false beliefs. In Experiment 1, subjects rated 20 childhood events as to the likelihood of each event having happened to them. The list included the implausible target event "received an enema," a critical target event of Pezdek, Finger, and Hodge (1997). Two weeks later, subjects were presented with (1) information regarding the high prevalence rate of enemas; (2) background information on how to administer an enema; (3) neither type of information; or (4) both. Immediately or 2 weeks later, they rated the 20 childhood events again. Only plausibility significantly increased occurrence ratings. In Experiment 2, the target event was changed from "barium enema administered in a hospital" to "home enema for constipation"; significant effects of both plausibility and background knowledge resulted. The results suggest that providing background knowledge can increase beliefs about personal events, but that its impact is limited by the extent of the individual's familiarity with the context of the suggested target event.

  8. Airborne detection of diffuse carbon dioxide emissions at Mammoth Mountain, California

    USGS Publications Warehouse

    Gerlach, T.M.; Doukas, M.P.; McGee, K.A.; Kessler, R.

    1999-01-01

    We report the first airborne detection of CO2 degassing from diffuse volcanic sources. Airborne measurement of diffuse CO2 degassing offers a rapid alternative for monitoring CO2 emission rates at Mammoth Mountain. CO2 concentrations, temperatures, and barometric pressures were measured at ~2,500 GPS-referenced locations during a one-hour, eleven-orbit survey of air around Mammoth Mountain at ~3 km from the summit and altitudes of 2,895-3,657 m. A volcanic CO2 anomaly 4-5 km across with CO2 levels ~1 ppm above background was revealed downwind of tree-kill areas. It contained a 1-km core with concentrations exceeding background by >3 ppm. Emission rates of ~250 t d-1 are indicated. Orographic winds may play a key role in transporting the diffusely degassed CO2 upslope to elevations where it is lofted into the regional wind system.

  9. Secret loss of unitarity due to the classical background

    NASA Astrophysics Data System (ADS)

    Yang, I.-Sheng

    2017-07-01

    We show that a quantum subsystem can become significantly entangled with a classical background through a process with few or no semiclassical backreactions. We study two quantum harmonic oscillators coupled to each other in a time-independent Hamiltonian. We compare it to its semiclassical approximation in which one of the oscillators is treated as the classical background. In this approximation, the remaining quantum oscillator has an effective Hamiltonian which is time-dependent, and its evolution appears to be unitary. However, in the fully quantum model, the two oscillators can entangle each other. Thus, the unitarity of either individual oscillator is never guaranteed. We derive the critical time scale after which the unitarity of either individual oscillator is irrevocably lost. In particular, we give an example that in the adiabatic limit, unitarity is lost before other relevant questions can be addressed.

  10. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
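
    As a rough illustration of the conditional form of the likelihood described above, the sketch below profiles out the stratum-specific intercepts by conditioning on each stratum's total case count; the log-linear relative rate in a single exposure covariate and all variable names are assumptions introduced for illustration, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def neg_conditional_loglik(beta, exposure, person_time, cases, stratum):
            # Relative rate model RR = exp(beta * exposure); within each background
            # stratum, conditioning on the stratum's total case count removes the
            # stratum intercept, so it never needs to be estimated.
            mu = person_time * np.exp(beta * exposure)
            nll = 0.0
            for s in np.unique(stratum):
                idx = stratum == s
                p = mu[idx] / mu[idx].sum()   # stratum intercept cancels here
                nll -= np.sum(cases[idx] * np.log(p))
            return nll

        # Toy usage with simulated data (true beta = 0.3):
        rng = np.random.default_rng(0)
        exposure = rng.uniform(0, 2, 200)
        person_time = rng.uniform(50, 150, 200)
        stratum = rng.integers(0, 10, 200)
        cases = rng.poisson(person_time * 0.01 * np.exp(0.3 * exposure))
        fit = minimize_scalar(neg_conditional_loglik,
                              args=(exposure, person_time, cases, stratum))
        print("estimated beta:", fit.x)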

  11. Childhood traumatic events and adolescent overgeneral autobiographical memory: Findings in a UK cohort

    PubMed Central

    Crane, Catherine; Heron, Jon; Gunnell, David; Lewis, Glyn; Evans, Jonathan; Williams, J. Mark G.

    2014-01-01

    Background Overgeneral autobiographical memory has repeatedly been identified as a risk factor for adolescent and adult psychopathology but the factors that cause such over-generality remain unclear. This study examined the association between childhood exposure to traumatic events and early adolescent overgeneral autobiographical memory in a large population sample. Methods Thirteen-year-olds, n = 5,792, participating in an ongoing longitudinal cohort study (ALSPAC) completed a written version of the Autobiographical Memory Test. Performance on this task was examined in relation to experience of traumatic events, using data recorded by caregivers close to the time of exposure. Results Results indicated that experiencing a severe event in middle childhood increased the likelihood of an adolescent falling into the lowest quartile for autobiographical memory specificity (retrieving 0 or 1 specific memory) at age 13 by approximately 60%. The association persisted after controlling for a range of potential socio-demographic confounders. Limitations Data on the traumatic event exposures was limited by the relatively restricted range of traumas examined, and the lack of contextual details surrounding both the traumatic event exposures themselves and the severity of children's post-traumatic stress reactions. Conclusions This is the largest study to date of the association between childhood trauma exposure and overgeneral autobiographical memory in adolescence. Findings suggest a modest association between exposure to traumatic events and later overgeneral autobiographical memory, a psychological variable that has been linked to vulnerability to clinical depression. PMID:24657714

  12. Influenza Vaccination Among US Children With Asthma, 2005–2013

    PubMed Central

    Simon, Alan E.; Ahrens, Katherine A.; Akinbami, Lara J.

    2016-01-01

    Background Children with asthma face higher risk of complications from influenza. Trends in influenza vaccination among children with asthma are unknown. Methods We used 2005–2013 National Health Interview Survey data for children 2 to 17 years of age. We assessed, separately for children with and without asthma, any vaccination (received August through May) during each of the 2005–2006 through 2012–2013 influenza seasons and, for the 2010–2011 through 2012–2013 seasons only, early vaccination (received August through October). We used April–July interviews each year (n = 31,668) to assess vaccination during the previous influenza season. Predictive margins from logistic regression with time as the independent and vaccination status as the dependent variable were used to assess time trends. We also estimated the association between several sociodemographic variables and the likelihood of influenza vaccination. Results From 2005 to 2013, among children with asthma, influenza vaccination receipt increased about 3 percentage points per year (P < .001), reaching 55% in 2012–2013. The percentage of all children with asthma vaccinated by October (early vaccination) was slightly above 30% in 2012–2013. In 2010–2013, adolescents, the uninsured, children of parents with some college education, and those living in the Midwest, South, and West were less likely to be vaccinated. Conclusions The percentage of children 2 to 17 years of age with asthma receiving influenza vaccination has increased since 2004–2005, reaching approximately 55% in 2012–2013. PMID:26518382

  13. [INVITED] Luminescent QR codes for smart labelling and sensing

    NASA Astrophysics Data System (ADS)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

    QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correct errors and physical damages, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges encompass the increase of the storage capacity limit, even though they can store approximately 350 times more information than common barcodes, and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating the increase of storage capacity per unit area by a factor of two by using the colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold where a decision level is calculated through a maximum-likelihood criteria to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables the simultaneously QR code colour-multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of applications for QR codes as smart labels for sensing.

  14. Modeling the effects of exercise during 100% oxygen prebreathe on the risk of hypobaric decompression sickness

    NASA Technical Reports Server (NTRS)

    Loftin, K. C.; Conkin, J.; Powell, M. R.

    1997-01-01

    BACKGROUND: Several previous studies indicated that exercise during prebreathe with 100% O2 decreased the incidence of hypobaric decompression sickness (DCS). We report a meta-analysis of these investigations combined with a new study in our laboratory to develop a statistical model as a predictive tool for DCS. HYPOTHESIS: Exercise during prebreathe increases N2 elimination in a theoretical 360-min half-time compartment decreasing the incidence of DCS. METHODS: A dose-response probability tissue ratio (TR) model with 95% confidence limits was created for two groups, prebreathe with exercise (n = 113) and resting prebreathe (n = 113), using nonlinear regression analysis with maximum likelihood optimization. RESULTS: The model predicted that prebreathe exercise would reduce the residual N2 in a 360-min half-time compartment to a level analogous to that in a 180-min compartment. This finding supported the hypothesis. The incidence of DCS for the exercise prebreathe group was significantly decreased (Chi-Square = 17.1, p < 0.0001) from the resting prebreathe group. CONCLUSIONS: The results suggested that exercise during prebreathe increases tissue perfusion and N2 elimination approximately 2-fold and markedly lowers the risk of DCS. Based on the model, the prebreathe duration may be reduced from 240 min to a predicted 91 min for the protocol in our study, but this remains to be verified. The model provides a useful planning tool to develop and test appropriate prebreathe exercise protocols and to predict DCS risks for astronauts.

  15. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009)

    PubMed Central

    2011-01-01

    Background Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for the real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance. PMID:21324153
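
    The record above gives too little detail to reproduce its two-parameter model exactly, but the general flavour of a branching-process forecast with simulation-based uncertainty bounds can be sketched as follows; the Poisson offspring assumption, the single reproduction number R, and all names are simplifications introduced here, not the authors' specification.

        import numpy as np

        def forecast_incidence(past_counts, R, weeks_ahead=4, n_sims=2000, seed=1):
            # Simulate weekly incidence under a Poisson branching approximation:
            # the expected next-week count is R times the current count.
            rng = np.random.default_rng(seed)
            sims = np.empty((n_sims, weeks_ahead), dtype=int)
            for i in range(n_sims):
                current = past_counts[-1]
                for t in range(weeks_ahead):
                    current = rng.poisson(R * current)
                    sims[i, t] = current
            # Central forecast plus 95% uncertainty bounds from the simulations.
            return (np.median(sims, axis=0),
                    np.percentile(sims, 2.5, axis=0),
                    np.percentile(sims, 97.5, axis=0))

        median, lower, upper = forecast_incidence([120, 190, 310], R=1.4)
        print(median, lower, upper)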

  16. Alcohol-induced blackouts: A review of recent clinical research with practical implications and recommendations for future studies

    PubMed Central

    Wetherill, Reagan R.; Fromme, Kim

    2016-01-01

    Background Alcohol-induced blackouts, or memory loss for all or portions of events that occurred during a drinking episode, are reported by approximately 50% of drinkers and are associated with a wide range of negative consequences, including injury and death. As such, identifying the factors that contribute to and result from alcohol-induced blackouts is critical in developing effective prevention programs. Here, we provide an updated review (2010–2015) of clinical research focused on alcohol-induced blackouts, outline practical and clinical implications, and provide recommendations for future research. Methods A comprehensive, systematic literature review was conducted to examine all articles published from January 2010 through August 2015 that examined vulnerabilities, consequences, and possible mechanisms of alcohol-induced blackouts. Results Twenty-six studies reported on alcohol-induced blackouts. Fifteen studies examined prevalence and/or predictors of alcohol-induced blackouts. Six publications described consequences of alcohol-induced blackouts, and five studies explored potential cognitive and neurobiological mechanisms underlying alcohol-induced blackouts. Conclusions Recent research on alcohol-induced blackouts suggests that individual differences, not just alcohol consumption, increase the likelihood of experiencing an alcohol-induced blackout, and the consequences of alcohol-induced blackouts extend beyond the consequences related to the drinking episode to include psychiatric symptoms and neurobiological abnormalities. Prospective studies and a standardized assessment of alcohol-induced blackouts are needed to fully characterize factors associated with alcohol-induced blackouts and to improve prevention strategies. PMID:27060868

  17. Spot the match – wildlife photo-identification using information theory

    PubMed Central

    Speed, Conrad W; Meekan, Mark G; Bradshaw, Corey JA

    2007-01-01

    Background Effective approaches for the management and conservation of wildlife populations require a sound knowledge of population demographics, and this is often only possible through mark-recapture studies. We applied an automated spot-recognition program (I3S) for matching natural markings of wildlife that is based on a novel information-theoretic approach to incorporate matching uncertainty. Using a photo-identification database of whale sharks (Rhincodon typus) as an example case, the information criterion (IC) algorithm we developed resulted in a parsimonious ranking of potential matches of individuals in an image library. Automated matches were compared to manual-matching results to test the performance of the software and algorithm. Results Validation of matched and non-matched images provided a threshold IC weight (approximately 0.2) below which match certainty was not assured. Most images tested were assigned correctly; however, scores for the by-eye comparison were lower than expected, possibly due to the low sample size. The effect of increasing horizontal angle of sharks in images reduced matching likelihood considerably. There was a negative linear relationship between the number of matching spot pairs and matching score, but this relationship disappeared when using the IC algorithm. Conclusion The software and use of easily applied information-theoretic scores of match parsimony provide a reliable and freely available method for individual identification of wildlife, with wide applications and the potential to improve mark-recapture studies without resorting to invasive marking techniques. PMID:17227581

  18. Host-driven diversification of gall-inducing Acacia thrips and the aridification of Australia

    PubMed Central

    McLeish, Michael J; Chapman, Thomas W; Schwarz, Michael P

    2007-01-01

    Background Insects that feed on plants contribute greatly to the generation of biodiversity. Hypotheses explaining rate increases in phytophagous insect diversification and mechanisms driving speciation in such specialists remain vexing despite considerable attention. The proliferation of plant-feeding insects and their hosts are expected to broadly parallel one another where climate change over geological timescales imposes consequences for the diversification of flora and fauna via habitat modification. This work uses a phylogenetic approach to investigate the premise that the aridification of Australia, and subsequent expansion and modification of arid-adapted host flora, has implications for the diversification of insects that specialise on them. Results Likelihood ratio tests indicated the possibility of hard molecular polytomies within two co-radiating gall-inducing species complexes specialising on the same set of host species. Significant tree asymmetry is indicated at a branch adjacent to an inferred transition to a Plurinerves ancestral host species. Lineage by time diversification plots indicate gall-thrips that specialise on Plurinerves hosts differentially experienced an explosive period of speciation contemporaneous with climatic cycling during the Quaternary period. Chronological analyses indicated that the approximate age of origin of gall-inducing thrips on Acacia might be as recent as 10 million years ago during the Miocene, as truly arid landscapes first developed in Australia. Conclusion Host-plant diversification and spatial heterogeneity of hosts have increased the potential for specialisation, resource partitioning, and unoccupied ecological niche availability for gall-thrips on Australian Acacia. PMID:17257412

  19. Application of permanents of square matrices for DNA identification in multiple-fatality cases

    PubMed Central

    2013-01-01

    Background DNA profiling is essential for individual identification. In forensic medicine, the likelihood ratio (LR) is commonly used to identify individuals. The LR is calculated by comparing two hypotheses for the sample DNA: that the sample DNA is identical or related to a reference DNA, and that it is randomly sampled from a population. For multiple-fatality cases, however, identification should be considered as an assignment problem, and a particular sample and reference pair should therefore be compared with other possibilities conditional on the entire dataset. Results We developed a new method to compute the probability via permanents of square matrices of nonnegative entries. As the exact permanent is known as a #P-complete problem, we applied the Huber–Law algorithm to approximate the permanents. We performed a computer simulation to evaluate the performance of our method via receiver operating characteristic curve analysis compared with LR under the assumption of a closed incident. Differences between the two methods were well demonstrated when references provided neither obligate alleles nor impossible alleles. The new method exhibited higher sensitivity (0.188 vs. 0.055) at a threshold value of 0.999, at which specificity was 1, and it exhibited higher area under a receiver operating characteristic curve (0.990 vs. 0.959, P = 9.6E-15). Conclusions Our method therefore offers a solution for a computationally intensive assignment problem and may be a viable alternative to LR-based identification for closed-incident multiple-fatality cases. PMID:23962363
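
    The Huber-Law sampler used above approximates permanents of large matrices; for background, the exact permanent of a small nonnegative matrix can be computed with Ryser's inclusion-exclusion formula, sketched below (a general-purpose illustration, not the paper's approximation algorithm).

        from itertools import combinations
        import numpy as np

        def permanent(a):
            # Exact permanent via Ryser's formula; practical only for small n,
            # since the cost grows roughly as O(2^n * n).
            a = np.asarray(a, dtype=float)
            n = a.shape[0]
            total = 0.0
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    total += (-1) ** k * np.prod(a[:, cols].sum(axis=1))
            return (-1) ** n * total

        print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10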

  20. Addressing Data Analysis Challenges in Gravitational Wave Searches Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Weerathunga, Thilina Shihan

    2017-08-01

    Gravitational waves are a fundamental prediction of Einstein's General Theory of Relativity. The first experimental proof of their existence was provided by the Nobel Prize winning discovery by Taylor and Hulse of orbital decay in a binary pulsar system. The first detection of gravitational waves incident on earth from an astrophysical source was announced in 2016 by the LIGO Scientific Collaboration, launching the new era of gravitational wave (GW) astronomy. The signal detected was from the merger of two black holes, which is an example of sources called Compact Binary Coalescences (CBCs). Data analysis strategies used in the search for CBC signals are derivatives of the Maximum-Likelihood (ML) method. The ML method applied to data from a network of geographically distributed GW detectors--called fully coherent network analysis--is currently the best approach for estimating source location and GW polarization waveforms. However, in the case of CBCs, especially for lower mass systems (of order 1 solar mass) such as double neutron star binaries, fully coherent network analysis is computationally expensive. The ML method requires locating the global maximum of the likelihood function over a nine dimensional parameter space, where the computation of the likelihood at each point requires correlations involving O(10^4) to O(10^6) samples between the data and the corresponding candidate signal waveform template. Approximations, such as semi-coherent coincidence searches, are currently used to circumvent the computational barrier but incur a concomitant loss in sensitivity. We explored the effectiveness of Particle Swarm Optimization (PSO), a well-known algorithm in the field of swarm intelligence, in addressing the fully coherent network analysis problem. As an example, we used a four-detector network consisting of the two LIGO detectors at Hanford and Livingston, Virgo and Kagra, all having initial LIGO noise power spectral densities, and show that PSO can locate the global maximum with less than 240,000 likelihood evaluations for a component mass range of 1.0 to 10.0 solar masses at a realistic coherent network signal to noise ratio of 9.0. Our results show that PSO can successfully deliver a fully-coherent all-sky search with less than 1/10 the number of likelihood evaluations needed for a grid-based search. Used as a follow-up step, the savings in the number of likelihood evaluations may also reduce latency in obtaining ML estimates of source parameters in semi-coherent searches.
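
    A bare-bones PSO loop of the kind described above is sketched below; the toy objective, swarm size, and inertia/acceleration constants are illustrative placeholders rather than the dissertation's tuned settings for the coherent network statistic.

        import numpy as np

        def pso_maximize(f, bounds, n_particles=40, n_iter=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            # Minimal particle swarm maximisation of f over a box given by
            # `bounds`, a (d x 2) array of lower/upper limits per dimension.
            rng = np.random.default_rng(seed)
            bounds = np.asarray(bounds, dtype=float)
            d = bounds.shape[0]
            x = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, d))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[np.argmax(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, d))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, bounds[:, 0], bounds[:, 1])
                vals = np.array([f(p) for p in x])
                better = vals > pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[np.argmax(pbest_val)].copy()
            return gbest, pbest_val.max()

        # Toy "likelihood surface" with a single broad peak at (3, 3, 3, 3):
        best, val = pso_maximize(lambda p: -np.sum((p - 3.0) ** 2),
                                 bounds=[[0, 10]] * 4)
        print(best, val)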

  1. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
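
    For concreteness, a generic stepping-stone estimate of a log marginal likelihood is sketched below; the caller is assumed to supply log-likelihood values of MCMC draws from each power posterior, and the beta schedule and function names are illustrative, not the settings examined in the study. The log Bayes factor is then the difference of two such estimates (or, in the direct approaches assessed above, the path is built between the two competing models themselves).

        import numpy as np

        def stepping_stone_log_ml(log_liks, betas):
            # log_liks[k]: array of log-likelihood values for draws from the
            #              power posterior at temperature betas[k].
            # betas:       increasing, with betas[0] = 0 (prior), betas[-1] = 1.
            log_ml = 0.0
            for k in range(len(betas) - 1):
                d = betas[k + 1] - betas[k]
                scaled = d * np.asarray(log_liks[k])
                m = scaled.max()
                log_ml += m + np.log(np.mean(np.exp(scaled - m)))  # stable log-mean-exp
            return log_ml

        # Illustrative call: log Bayes factor of model 2 over model 1.
        # log_bf = stepping_stone_log_ml(ll_model2, betas) - stepping_stone_log_ml(ll_model1, betas)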

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perea, Philip Michael

    I have performed a search for t-channel single top quark production in p$\bar{p}$ collisions at $\sqrt{s}$ = 1.96 TeV on a 366 pb^-1 dataset collected with the D0 detector from 2002-2005. The analysis is restricted to the leptonic decay of the W boson from the top quark to an electron or muon, tq$\bar{b}$ → lν_l b q$\bar{b}$ (l = e, μ). A powerful b-quark tagging algorithm derived from neural networks is used to identify b jets and significantly reduce background. I further use neural networks to discriminate signal from background, and apply a binned likelihood calculation to the neural network output distributions to derive the final limits. No direct observation of single top quark production has been made, and I report expected/measured 95% confidence level limits of 3.5/8.0 pb.
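
    The binned likelihood mentioned above can be written, in highly simplified form, as a product of Poisson terms over the neural-network output bins; the sketch below scans candidate cross sections and reads off a flat-prior 95% CL upper limit. All templates and numbers are invented placeholders, not the D0 analysis inputs.

        import numpy as np

        def log_likelihood(cross_section, signal_per_pb, background, observed):
            # Poisson log-likelihood over NN-output bins for a hypothesised
            # signal cross section (signal_per_pb: expected counts per 1 pb).
            mu = cross_section * np.asarray(signal_per_pb) + np.asarray(background)
            return np.sum(np.asarray(observed) * np.log(mu) - mu)

        sig = np.array([0.5, 1.2, 2.0, 3.5])      # toy signal template (per pb)
        bkg = np.array([40.0, 25.0, 10.0, 4.0])   # toy background prediction
        obs = np.array([42, 27, 11, 5])           # toy observed counts
        grid = np.linspace(0.0, 30.0, 601)
        logL = np.array([log_likelihood(s, sig, bkg, obs) for s in grid])
        posterior = np.exp(logL - logL.max())     # flat prior in the cross section
        cdf = np.cumsum(posterior) / posterior.sum()
        print("95% CL upper limit (pb):", grid[np.searchsorted(cdf, 0.95)])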

  3. Bounds on isocurvature perturbations from cosmic microwave background and large scale structure data.

    PubMed

    Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain

    2003-10-24

    We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2sigma, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.

  4. GRO/EGRET data analysis software: An integrated system of custom and commercial software using standard interfaces

    NASA Technical Reports Server (NTRS)

    Laubenthal, N. A.; Bertsch, D.; Lal, N.; Etienne, A.; Mcdonald, L.; Mattox, J.; Sreekumar, P.; Nolan, P.; Fierro, J.

    1992-01-01

    The Energetic Gamma Ray Telescope Experiment (EGRET) on the Compton Gamma Ray Observatory has been in orbit for more than a year and is being used to map the full sky for gamma rays in a wide energy range from 30 to 20,000 MeV. Already these measurements have resulted in a wide range of exciting new information on quasars, pulsars, galactic sources, and diffuse gamma ray emission. The central part of the analysis is done with sky maps that typically cover an 80 x 80 degree section of the sky for an exposure time of several days. Specific software developed for this program generates the counts, exposure, and intensity maps. The analysis is done on a network of UNIX based workstations and takes full advantage of a custom-built user interface called X-dialog. The maps that are generated are stored in the FITS format for a collection of energies. These, along with similar diffuse emission background maps generated from a model calculation, serve as input to a maximum likelihood program that produces maps of likelihood with optional contours that are used to evaluate regions for sources. Likelihood also evaluates the background corrected intensity at each location for each energy interval from which spectra can be generated. Being in a standard FITS format permits all of the maps to be easily accessed by the full complement of tools available in several commercial astronomical analysis systems. In the EGRET case, IDL is used to produce graphics plots in two and three dimensions and to quickly implement any special evaluation that might be desired. Other custom-built software, such as the spectral and pulsar analyses, take advantage of the XView toolkit for display and Postscript output for the color hard copy. This poster paper outlines the data flow and provides examples of the user interfaces and output products. It stresses the advantages that are derived from the integration of the specific instrument-unique software and powerful commercial tools for graphics and statistical evaluation. This approach has several proven advantages including flexibility, a minimum of development effort, ease of use, and portability.

  5. Nutritional and developmental status among 6- to 8-month-old children in southwestern Uganda: a cross-sectional study.

    PubMed

    Muhoozi, Grace K M; Atukunda, Prudence; Mwadime, Robert; Iversen, Per Ole; Westerberg, Ane C

    2016-01-01

    Undernutrition continues to pose challenges to Uganda's children, but there is limited knowledge on its association with physical and intellectual development. In this cross-sectional study, we assessed the nutritional status and milestone development of 6- to 8-month-old children and associated factors in two districts of southwestern Uganda. Five hundred and twelve households with mother-infant (6-8 months) pairs were randomly sampled. Data about background variables (e.g. household characteristics, poverty likelihood, and child dietary diversity scores (CDDS)) were collected using questionnaires. Bayley Scales of Infant and Toddler Development (BSID III) and Ages and Stages questionnaires (ASQ) were used to collect data on child development. Anthropometric measures were used to determine z-scores for weight-for-age (WAZ), length-for-age (LAZ), weight-for-length (WLZ), head circumference (HCZ), and mid-upper arm circumference. Chi-square tests, correlation coefficients, and linear regression analyses were used to relate background variables, nutritional status indicators, and infant development. The prevalence of underweight, stunting, and wasting was 12.1, 24.6, and 4.7%, respectively. Household head education, gender, sanitation, household size, maternal age and education, birth order, poverty likelihood, and CDDS were associated (p<0.05) with WAZ, LAZ, and WLZ. Regression analysis showed that gender, sanitation, CDDS, and likelihood to be below the poverty line were predictors (p<0.05) of undernutrition. BSID III indicated development delay of 1.3% in cognitive and language, and 1.6% in motor development. The ASQ indicated delayed development of 24, 9.1, 25.2, 12.2, and 15.1% in communication, fine motor, gross motor, problem solving, and personal social ability, respectively. All nutritional status indicators except HCZ were positively and significantly associated with development domains. WAZ was the main predictor for all development domains. Undernutrition among infants living in impoverished rural Uganda was associated with household sanitation, poverty, and low dietary diversity. Development domains were positively and significantly associated with nutritional status. Nutritional interventions might add value to improvement of child growth and development.

  6. Measuring and partitioning the high-order linkage disequilibrium by multiple order Markov chains.

    PubMed

    Kim, Yunjung; Feng, Sheng; Zeng, Zhao-Bang

    2008-05-01

    A map of the background levels of disequilibrium between nearby markers can be useful for association mapping studies. In order to assess the background levels of linkage disequilibrium (LD), multilocus LD measures are more advantageous than pairwise LD measures because the combined analysis of pairwise LD measures is not adequate to detect simultaneous allele associations among multiple markers. Various multilocus LD measures based on haplotypes have been proposed. However, most of these measures provide a single index of association among multiple markers and does not reveal the complex patterns and different levels of LD structure. In this paper, we employ non-homogeneous, multiple order Markov Chain models as a statistical framework to measure and partition the LD among multiple markers into components due to different orders of marker associations. Using a sliding window of multiple markers on phased haplotype data, we compute corresponding likelihoods for different Markov Chain (MC) orders in each window. The log-likelihood difference between the lowest MC order model (MC0) and the highest MC order model in each window is used as a measure of the total LD or the overall deviation from the gametic equilibrium for the window. Then, we partition the total LD into lower order disequilibria and estimate the effects from two-, three-, and higher order disequilibria. The relationship between different orders of LD and the log-likelihood difference involving two different orders of MC models are explored. By applying our method to the phased haplotype data in the ENCODE regions of the HapMap project, we are able to identify high/low multilocus LD regions. Our results reveal that the most LD in the HapMap data is attributed to the LD between adjacent pairs of markers across the whole region. LD between adjacent pairs of markers appears to be more significant in high multilocus LD regions than in low multilocus LD regions. We also find that as the multilocus total LD increases, the effects of high-order LD tends to get weaker due to the lack of observed multilocus haplotypes. The overall estimates of first, second, third, and fourth order LD across the ENCODE regions are 64, 23, 9, and 3%.
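
    A minimal version of the window likelihood computation described above, for phased haplotypes coded as strings, might look like the sketch below (maximum-likelihood transition probabilities with no smoothing; data and names are illustrative only).

        import numpy as np
        from collections import Counter

        def loglik_markov(haplotypes, order):
            # Log-likelihood of a window of phased haplotypes under a Markov
            # chain of the given order, using maximum-likelihood estimates of
            # the transition probabilities at each site.
            ll = 0.0
            n_sites = len(haplotypes[0])
            for pos in range(n_sites):
                ctx_counts, joint_counts = Counter(), Counter()
                for h in haplotypes:
                    ctx = h[max(0, pos - order):pos]
                    ctx_counts[ctx] += 1
                    joint_counts[(ctx, h[pos])] += 1
                for (ctx, allele), c in joint_counts.items():
                    ll += c * np.log(c / ctx_counts[ctx])
            return ll

        # Total LD in the window: saturated (highest-order) model versus MC0.
        haps = ["0101", "0101", "0110", "1100", "0101", "1110"]
        total_ld = loglik_markov(haps, order=3) - loglik_markov(haps, order=0)
        print("total LD (log-likelihood difference):", total_ld)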

  7. Tagging radon daughters in low-energy scintillation detectors

    NASA Astrophysics Data System (ADS)

    McCarty, Kevin B.

    2011-12-01

    One problematic source of background in scintillator-based low-energy solar neutrino experiments such as Borexino is the presence of radon gas and its daughters. The mean lifetime of the α-emitter 214Po in the radon chain is sufficiently short, 0.24 ms, that its decay, together with that immediately preceding of 214Bi, is easily recognized as a “coincidence event.” This fact, combined with the capability of α/β pulse-shape discrimination, makes it possible to tag decays of 222Rn and its first four daughters via a likelihood-based method.

  8. Statistics and Discoveries at the LHC (1/4)

    ScienceCinema

    Cowan, Glen

    2018-02-09

    The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit setting procedures, treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and treatment of systematic uncertainties.

  9. Statistics and Discoveries at the LHC (3/4)

    ScienceCinema

    Cowan, Glen

    2018-02-19

    The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit setting procedures, treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and treatment of systematic uncertainties.

  10. Statistics and Discoveries at the LHC (4/4)

    ScienceCinema

    Cowan, Glen

    2018-05-22

    The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit setting procedures, treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and treatment of systematic uncertainties.

  11. Remote chance of recontact

    NASA Technical Reports Server (NTRS)

    Elkin, D.; Abeyagunawardene, S.; Defazio, R.

    1988-01-01

    The ejection of appendages with uncertain drag characteristics presents a concern for eventual recontact. Recontact shortly after release can be prevented by avoiding ejection in a plane perpendicular to the velocity. For ejection tangential to the orbit, the likelihood of recontact within a year is high in the absence of drag and oblateness. The optimum direction of ejection of the thermal shield cable and an overestimate of the recontact probability are determined for the Cosmic Background Explorer (COBE) mission when drag, oblateness, and solar/lunar perturbations are present. The probability is small but possibly significant.

  12. Statistics and Discoveries at the LHC (2/4)

    ScienceCinema

    Cowan, Glen

    2018-04-26

    The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit setting procedures, treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and treatment of systematic uncertainties.

  13. Clopper-Pearson bounds from HEP data cuts

    NASA Astrophysics Data System (ADS)

    Berg, B. A.

    2001-08-01

    For the measurement of N_s signals in N events, rigorous confidence bounds on the true signal probability p_exact were established in a classical paper by Clopper and Pearson [Biometrika 26, 404 (1934)]. Here, their bounds are generalized to the HEP situation where cuts on the data tag signals with probability P_s and background data with likelihood P_b
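
    For reference, the classical Clopper-Pearson bounds on a binomial proportion (the starting point for the HEP generalization above) follow directly from beta-distribution quantiles, as in this sketch:

        from scipy.stats import beta

        def clopper_pearson(n_signal, n_events, cl=0.95):
            # Exact (conservative) confidence bounds on the true signal
            # probability given n_signal successes in n_events trials.
            alpha = 1.0 - cl
            lo = 0.0 if n_signal == 0 else beta.ppf(alpha / 2, n_signal, n_events - n_signal + 1)
            hi = 1.0 if n_signal == n_events else beta.ppf(1 - alpha / 2, n_signal + 1, n_events - n_signal)
            return lo, hi

        print(clopper_pearson(4, 20))   # bounds on p for 4 tagged signals in 20 events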

  14. Structural characterization and gas reactions of small metal particles by high resolution in-situ TEM (Transmission Electron Microscopy) and TED (Transmission Electron Diffraction)

    NASA Technical Reports Server (NTRS)

    Heinemann, K.

    1987-01-01

    The detection and size analysis of small metal particles supported on amorphous substrates becomes increasingly difficult when the particle size approaches that of the phase contrast background structures of the support. An approach of digital image analysis, involving Fourier transformation of the original image, filtering, and image reconstruction was studied with respect to the likelihood of unambiguously detecting particles of less than 1 nm diameter on amorphous substrates from a single electron micrograph.
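
    In its simplest form, the Fourier filtering step described above amounts to a radial band-pass in the frequency domain, as in the sketch below; the cut-off frequencies and names are placeholders, and the published procedure was more involved.

        import numpy as np

        def fourier_bandpass(image, low, high):
            # Keep only spatial frequencies between `low` and `high`
            # (cycles/pixel) to suppress the substrate phase-contrast
            # background before looking for small particles.
            f = np.fft.fftshift(np.fft.fft2(image))
            ny, nx = image.shape
            y, x = np.ogrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
            r = np.sqrt((x / nx) ** 2 + (y / ny) ** 2)
            mask = (r >= low) & (r <= high)
            return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

        # Example: keep 0.05-0.3 cycles/pixel on a random 256 x 256 test image.
        img = np.random.default_rng(0).normal(size=(256, 256))
        filtered = fourier_bandpass(img, 0.05, 0.3)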

  15. Beyond the audiogram: application of models of auditory fitness for duty to assess communication in the real world.

    PubMed

    Dubno, Judy R

    2018-05-01

    This manuscript provides a Commentary on a paper published in the current issue of the International Journal of Audiology and the companion paper published in Ear and Hearing by Soli et al. These papers report background, rationale and results of a novel modelling approach to assess "auditory fitness for duty," or an individual's ability to perform hearing-critical tasks related to their job, based on their likelihood of effective speech communication in the listening environment in which the task is performed.

  16. Parafoveal Target Detectability Reversal Predicted by Local Luminance and Contrast Gain Control

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    This project is part of a program to develop image discrimination models for the prediction of the detectability of objects in a range of backgrounds. We wanted to see if the models could predict parafoveal object detection as well as they predict detection in foveal vision. We also wanted to make our simplified models more general by local computation of luminance and contrast gain control. A signal image (0.78 x 0.17 deg) was made by subtracting a simulated airport runway scene background image (2.7 deg square) from the same scene containing an obstructing aircraft. Signal visibility contrast thresholds were measured in a fully crossed factorial design with three factors: eccentricity (0 deg or 4 deg), background (uniform or runway scene background), and fixed-pattern white noise contrast (0%, 5%, or 10%). Three experienced observers responded to three repetitions of 60 2IFC trials in each condition and thresholds were estimated by maximum likelihood probit analysis. In the fovea the average detection contrast threshold was 4 dB lower for the runway background than for the uniform background, but in the parafovea, the average threshold was 6 dB higher for the runway background than for the uniform background. This interaction was similar across the different noise levels and for all three observers. A likely reason for the runway background giving a lower threshold in the fovea is the low luminance near the signal in that scene. In our model, the local luminance computation is controlled by a spatial spread parameter. When this parameter and a corresponding parameter for the spatial spread of contrast gain were increased for the parafoveal predictions, the model predicts the interaction of background with eccentricity.

  17. Risk factors for classical hysterotomy by gestational age.

    PubMed

    Osmundson, Sarah S; Garabedian, Matthew J; Lyell, Deirdre J

    2013-10-01

    To examine the likelihood of classical hysterotomy across preterm gestational ages and to identify factors that increase its occurrence. This is a secondary analysis of a prospective observational cohort collected by the Maternal-Fetal Medicine Network of all women with singleton gestations who underwent a cesarean delivery with a known hysterotomy. Comparisons were made based on gestational age. Factors thought to influence hysterotomy type were studied, including maternal age, body mass index, parity, birth weight, small for gestational age (SGA) status, fetal presentation, labor preceding delivery, and emergent delivery. Approximately 36,000 women were eligible for analysis, of whom 34,454 (95.7%) underwent low transverse hysterotomy and 1,562 (4.3%) underwent classical hysterotomy. The median gestational age of women undergoing a classical hysterotomy was 32 weeks and the incidence peaked between 24 0/7 weeks and 25 6/7 weeks (53.2%), declining with each additional week of gestation thereafter (P for trend <.001). In multivariable regression, the likelihood of classical hysterotomy was increased with SGA (n=258; odds ratio [OR] 2.71; confidence interval [CI] 1.78-4.13), birth weight 1,000 g or less (n=467; OR 1.51; CI 1.03-2.24), and noncephalic presentation (n=783; OR 2.03; CI 1.52-2.72). The likelihood of classical hysterotomy was decreased between 23 0/7 and 27 6/7 weeks of gestation and after 32 weeks of gestation when labor preceded delivery, and increased between 28 0/7 and 31 6/7 weeks of gestation and after 32 weeks of gestation by multiparity and previous cesarean delivery. Emergent delivery did not predict classical hysterotomy. Fifty percent of women at 23-26 weeks of gestation who undergo cesarean delivery have a classical hysterotomy, and the risk declines steadily thereafter. This likelihood is increased by fetal factors, especially SGA and noncephalic presentation. Level of evidence: II.

  18. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
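
    A crude stand-in for the rectified-and-smoothed approximation mentioned above is sketched here: a moving-average variance proxy is fitted with an inverse gamma distribution. The window length, the use of scipy's generic fitting routine, and all names are assumptions of this illustration, not the authors' estimator.

        import numpy as np
        from scipy.stats import invgamma

        def fit_emg_variance_distribution(emg, window=200):
            # Fit an inverse gamma distribution to a moving-average estimate of
            # the local EMG variance (the squared signal smoothed over `window`).
            kernel = np.ones(window) / window
            local_var = np.convolve(np.asarray(emg) ** 2, kernel, mode="valid")
            shape, _, scale = invgamma.fit(local_var, floc=0.0)
            return shape, scale

        # Synthetic check: white noise whose variance is itself inverse-gamma distributed.
        rng = np.random.default_rng(0)
        true_var = invgamma.rvs(a=4.0, scale=3.0, size=500, random_state=rng)
        emg = np.concatenate([rng.normal(0.0, np.sqrt(v), 200) for v in true_var])
        print(fit_emg_variance_distribution(emg))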

  19. Distribution of Model-based Multipoint Heterogeneity Lod Scores

    PubMed Central

    Xing, Chao; Morris, Nathan; Xing, Guan

    2011-01-01

    The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ2 approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution, an equal (50:50) mixture of a χ2 with 0 degrees of freedom (a point mass at zero) and a χ2 with 1 degree of freedom, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. PMID:21104892
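
    The limiting distribution quoted above translates into a one-line p-value rule for the multipoint HLOD; a sketch follows, using the standard conversion of a lod score to a likelihood-ratio statistic (LRT = 2 ln(10) x HLOD).

        from math import log
        from scipy.stats import chi2

        def multipoint_hlod_pvalue(hlod):
            # P-value under a 50:50 mixture of a point mass at zero (chi-square
            # with 0 degrees of freedom) and a chi-square with 1 degree of freedom.
            lrt = 2.0 * log(10.0) * hlod
            if lrt <= 0.0:
                return 1.0
            return 0.5 * chi2.sf(lrt, df=1)

        print(multipoint_hlod_pvalue(3.0))   # the classical lod-3 threshold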

  20. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  1. Titan's Elusive Lakes? Properties and Context of Dark Spots in Cassini TA Radar Data

    NASA Technical Reports Server (NTRS)

    Lorenz, R. D.; Elachi, C.; Stiles, B.; West, R.; Janssen, M.; Lopes, R.; Stofan, E.; Paganelli, F.; Wood, C.; Kirk, R.

    2005-01-01

    Titan's atmospheric methane abundance suggests the likelihood of a surface reservoir of methane and a surface sink for its photochemical products, which might also be predominantly liquid. Although large expanses of obvious hydrocarbon seas have not been unambiguously observed, a number of rather radar-dark spots up to approximately 30 km across are observed in the Synthetic Aperture Radar (SAR) data acquired during the Cassini TA encounter on October 26th 2004. Here we review the properties and setting of these dark spots to explore whether these may be hydrocarbon lakes.

  2. A limit to the X-ray luminosity of nearby normal galaxies

    NASA Technical Reports Server (NTRS)

    Worrall, D. M.; Marshall, F. E.; Boldt, E. A.

    1979-01-01

    Emission is studied at luminosities lower than those for which individual discrete sources can be studied. It is shown that normal galaxies do not appear to provide the numerous low luminosity X-ray sources which could make up the 2-60 keV diffuse background. Indeed, upper limits suggest luminosities comparable with, or a little less than, that of the galaxy. This is consistent with the fact that the average optical luminosity of the sample galaxies within approximately 20 Mpc is slightly lower than that of the galaxy. An upper limit of approximately 1% of the diffuse background from such sources is derived.

  3. Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum.

    PubMed

    Kiefer, Claus; Krämer, Manuel

    2012-01-13

    We derive the primordial power spectrum of density fluctuations in the framework of quantum cosmology. For this purpose we perform a Born-Oppenheimer approximation to the Wheeler-DeWitt equation for an inflationary universe with a scalar field. In this way, we first recover the scale-invariant power spectrum that is found as an approximation in the simplest inflationary models. We then obtain quantum gravitational corrections to this spectrum and discuss whether they lead to measurable signatures in the cosmic microwave background anisotropy spectrum. The nonobservation so far of such corrections translates into an upper bound on the energy scale of inflation.

  4. Cosmic microwave background bispectrum from primordial magnetic fields on large angular scales.

    PubMed

    Seshadri, T R; Subramanian, Kandaswamy

    2009-08-21

    Primordial magnetic fields lead to non-Gaussian signals in the cosmic microwave background (CMB) even at the lowest order, as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. In contrast, CMB non-Gaussianity due to inflationary scalar perturbations arises only as a higher-order effect. We propose a novel probe of stochastic primordial magnetic fields that exploits the characteristic CMB non-Gaussianity that they induce. We compute the CMB bispectrum (b(l1l2l3)) induced by such fields on large angular scales. We find a typical value of l1(l1 + 1)l3(l3 + 1)b(l1l2l3) approximately 10^(-22), for magnetic fields of strength B0 approximately 3 nG and with a nearly scale invariant magnetic spectrum. Observational limits on the bispectrum allow us to set upper limits on B0 approximately 35 nG.

  5. Studies of the extreme ultraviolet/soft x-ray background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, R.A.

    1978-01-01

    The results of an extensive sky survey of the extreme ultraviolet (EUV)/soft x-ray background are reported. The data were obtained with a focusing telescope designed and calibrated at U.C. Berkeley which observed EUV sources and the diffuse background as part of the Apollo-Soyuz mission in July, 1975. With a primary field-of-view of 2.3 ± 0.1 degrees FWHM and four EUV bandpass filters (16 to 25, 20 to 73, 80 to 108, and 80 to 250 eV) the EUV telescope obtained background data included in the final observational sample for 21 discrete sky locations and 11 large angular scans, as well as for a number of shorter observations. Analysis of the data reveals an intense flux above 80 eV energy, with upper limits to the background intensity given for the lower energy filters (ca. 2 x 10^4 and 6 x 10^2 ph cm^-2 sec^-1 ster^-1 eV^-1 at 21 and 45 eV, respectively). The 80 to 108 eV flux agrees within statistical errors with the earlier results of Cash, Malina and Stern (1976): the Apollo-Soyuz average reported intensity is 4.0 ± 1.3 ph cm^-2 sec^-1 ster^-1 eV^-1 at ca. 100 eV, or roughly a factor of ten higher than the corresponding 250 eV intensity. The uniformity of the background flux is uncertain due to limitations in the statistical accuracy of the data; upper limits to the point-to-point standard deviation of the background intensity are ΔI/I approximately less than 0.8 ± 0.4 (80 to 108 eV) and approximately less than 0.4 ± 0.2 (80 to 250 eV). No evidence is found for a correlation between the telescope count rate and earth-based parameters (zenith angle, sun angle, etc.) for E approximately greater than 80 eV (the lower energy bandpasses are significantly affected by scattered solar radiation). Unlike some previous claims for the soft x-ray background, no simple dependence upon galactic latitude is seen.

  6. Hybridization parameters revisited: solutions containing SDS.

    PubMed

    Rose, Ken; Mason, John O; Lathe, Richard

    2002-07-01

    Salt concentration governs nucleic acid hybridization according to the Schildkraut-Lifson equation. High concentrations of SDS are used in some common protocols, but the effects of SDS on hybridization stringency have not been reported. We investigated hybridization parameters in solutions containing SDS. With targets immobilized on nylon membranes and PCR- or transcription-generated probes, we report that the 50% dissociation temperature (Tm*) in the absence of SDS was 15-17 degrees C lower than the calculated Tm. SDS had only modest effects on Tm* [1% (w/v) equating to 8 mM NaCl]. RNA/DNA hybrids were approximately 11 degrees C more stable than DNA/DNA hybrids. Incomplete homology (69%) significantly reduced the Tm* for DNA/DNA hybrids (approximately 14 degrees C; 0.45 degrees C/% non-homology) but far less so for RNA/DNA hybrids (approximately 2.3 degrees C; approximately 0.07 degrees C/% non-homology); incomplete homology also markedly reduced the extent of hybridization. On these nylon filters, SDS had a major effect on nonspecific binding. Buffers lacking SDS, or with low salt concentration, gave high hybridization backgrounds; buffers containing SDS, or high-salt buffers, gave reproducibly low backgrounds.

  7. Race and weight change in US women: the roles of socioeconomic and marital status.

    PubMed Central

    Kahn, H S; Williamson, D F; Stevens, J A

    1991-01-01

    BACKGROUND. The prevalence of overweight among Black women in the US is higher than among White women, but the causes are unknown. METHODS. We examined the weight change for 514 Black and 2,770 White women who entered the first Health and Nutrition Examination Survey (1971-75) at ages 25-44 years and were weighed again a decade later. We used multivariate analyses to estimate the weight-change effects associated with race, family income, education, and marital change. RESULTS. After multiple adjustments, Black race, education below college level, and becoming married during the follow-up interval were each independently associated with an increased mean weight change. Using multivariate logistic analyses, Black race was not independently associated with an increased risk of major weight gain (change greater than or equal to +13 kg), but it was associated with a reduced likelihood of major weight loss (change less than or equal to -7 kg) (odds ratio = 0.64 [95% CI 0.41, 0.97]). Very low family income was independently associated with the likelihood of both major weight gain (OR = 1.71 [95% CI 1.15, 2.55]) and major weight loss (OR = 1.86 [95% CI 1.18, 2.95]). CONCLUSIONS. Among US women, Black race is independently associated with a reduced likelihood of major weight loss, but not with major weight gain. Women at greatest risk of weight gain are those with education below college level, those entering marriage, and those with very low family income. PMID:2036117

  8. Likelihood of Breastfeeding Within the USDA’s Food and Nutrition Service Special Supplemental Nutrition Program for Women, Infants, and Children Population: A Systematic Review of the Literature

    PubMed Central

    Houghtaling, Bailey; Shanks, Carmen Byker; Jenkins, Mica

    2017-01-01

    Background Breastfeeding is an important public health initiative. Low-income women benefiting from the U.S. Department of Agriculture’s Food and Nutrition Service Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) are a prime population for breastfeeding promotion efforts. Research aim This study aims to determine factors associated with increased likelihood of breastfeeding for WIC participants. Methods The Preferred Reporting Items for Systematic Reviews and Meta-Analysis statement guided the systematic review of literature. Database searches occurred in September and October 2014 and included studies limited to the previous 10 years. The following search terms were used: low-income; WIC; women, infants, and children; breastfeeding; breast milk; and maternal and child health. The criterion for inclusion was a study sample of women and children enrolled in the WIC program, thereby excluding non-United States–based research. Results Factors that increased the likelihood of breastfeeding for WIC participants included sociodemographic and health characteristics (n = 17); environmental and media support (n = 4); government policy (n = 2); intention to breastfeed, breastfeeding in hospital, or previous breastfeeding experience (n = 9); attitudes toward and knowledge of breastfeeding benefits (n = 6); health care provider or social support; and time exposure to WIC services (n = 5). Conclusion The complexity of breastfeeding behaviors within this population is clear. Results provide multisectored insight for future research, policies, and practices in support of increasing breastfeeding rates among WIC participants. PMID:28135478

  9. Social Rejection and Alcohol Use in Daily Life

    PubMed Central

    Laws, Holly B.; Ellerbeck, Nicole E.; Rodrigues, Alyne S.; Simmons, Jessica A; Ansell, Emily B.

    2017-01-01

    Background Prior studies have found that social rejection is associated with increases in negative affect, distress, and hostility. Fewer studies, however, have examined the impact of social rejection on alcohol use, and no known studies have tested whether the impact of social rejection by close others differs from social rejection by acquaintances in its association with subsequent drinking. Methods Participants completed event-contingent reports of their social interactions and alcohol use for 14 consecutive days on smartphones. Multilevel negative binomial regression models tested whether experiencing more social rejection than usual was associated with increased drinking, and whether this association was stronger when participants were rejected by close others (e.g., friends, spouses, family members) versus strangers or acquaintances. Results Results showed a significant interaction between social rejection and relationship closeness. On days characterized by rejection by close others, the likelihood of drinking significantly increased. On days characterized by rejection by acquaintances, by contrast, there was no increase in the likelihood of drinking. There was no main effect of rejection on likelihood of drinking. Conclusions These results suggest that relationship type is a key factor in whether social rejection translates to potentially harmful behaviors, such as increased alcohol use. This finding is in contrast to many laboratory paradigms of rejection, which emphasize rejection and ostracism by strangers rather than known others. In the more naturalistic setting of measuring social interactions on smartphones in daily life, however, our findings suggest that only social rejection delivered by close others, and not strangers, led to subsequent drinking. PMID:28253539
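
    As a simplified illustration of the modeling approach described above, the sketch below fits a single-level negative binomial regression with a rejection-by-closeness interaction to simulated person-day data. It omits the multilevel (within-person) random effects used in the study, and all variable names and numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_days = 1400  # hypothetical person-days

# Simulated person-day data (purely illustrative; not the study's data):
# rejection events that day and whether they came from a close other.
rejection = rng.poisson(0.5, n_days)
close_other = rng.integers(0, 2, n_days)
# Build drink counts so that rejection raises drinking only for close others.
rate = np.exp(0.2 + 0.30 * rejection * close_other)
df = pd.DataFrame({"rejection": rejection,
                   "close_other": close_other,
                   "drinks": rng.poisson(rate)})

# Single-level negative binomial regression with the key interaction term.
# The study fits multilevel (within-person) models; this sketch drops the
# random effects and simply illustrates the interaction of interest.
model = smf.negativebinomial("drinks ~ rejection * close_other", data=df).fit(disp=0)
print(model.summary())
```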

  10. Inferring relationships between pairs of individuals from locus heterozygosities

    PubMed Central

    Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E

    2002-01-01

    Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (zi) depend on locus heterozygosity (H), and are scarcely affected by variation of the distribution of allele frequencies. This allows us to obtain empirical curves relating zi's to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than the exact method. Analysis of a large database of STR data proves that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratio of the more common relationships between pairs of individuals may be obtained by looking at tabulated zi values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003
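
    A minimal sketch of the likelihood-ratio calculation the abstract describes: given, for each locus, how many alleles (0, 1, or 2) a pair shares identical by state, the LR for one relationship against another is the product over loci of the corresponding z_i probabilities. The z_i values below are placeholders chosen for illustration; in the paper they are read off empirical curves as a function of locus heterozygosity H.

```python
from math import prod

# Hypothetical IBS-sharing probabilities (z0, z1, z2) for two relationships.
# In the paper these come from empirical curves z_i(H); the numbers below are
# placeholders for illustration only, applied to every locus for simplicity.
Z = {
    "full_sibs": (0.10, 0.45, 0.45),
    "unrelated": (0.25, 0.50, 0.25),
}

def likelihood_ratio(ibs_counts, rel_num="full_sibs", rel_den="unrelated"):
    """LR for rel_num versus rel_den given, per locus, the number of alleles
    (0, 1, or 2) that the two genotypes share identical by state."""
    num = prod(Z[rel_num][k] for k in ibs_counts)
    den = prod(Z[rel_den][k] for k in ibs_counts)
    return num / den

# Example: ten loci at which the pair shares 2,2,1,2,1,2,2,1,2,2 alleles IBS.
print(likelihood_ratio([2, 2, 1, 2, 1, 2, 2, 1, 2, 2]))
```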

  11. Identifying Patterns of Situational Antecedents to Heavy Drinking among College Students

    PubMed Central

    Lau-Barraco, Cathy; Linden-Carmichael, Ashley N.; Braitman, Abby L.; Stamates, Amy L.

    2016-01-01

    Background Emerging adults have the highest prevalence of heavy drinking as compared to all other age groups. Given the negative consequences associated with such drinking, additional research efforts focused on at-risk consumption are warranted. The current study sought to identify patterns of situational antecedents to drinking and to examine their associations with drinking motivations, alcohol involvement, and mental health functioning in a sample of heavy drinking college students. Method Participants were 549 (65.8% women) college student drinkers. Results Latent profile analysis identified three classes based on likelihood of heavy drinking across eight situational precipitants. The “High Situational Endorsement” group reported the greatest likelihood of heavy drinking in most situations assessed. This class experienced the greatest level of alcohol-related harms as compared to the “Low Situational Endorsement” and “Moderate Situational Endorsement” groups. The Low Situational Endorsement class was characterized by the lowest likelihood of heavy drinking across all situational antecedents and they experienced the fewest alcohol-related harms, relative to the other classes. Class membership was related to drinking motivations with the “High Situational Endorsement” class endorsing the highest coping- and conformity-motivated drinking. The “High Situational Endorsement” class also reported experiencing more mental health symptoms than other groups. Conclusions The current study contributed to the larger drinking literature by identifying profiles that may signify a particularly risky drinking style. Findings may help guide intervention work with college heavy drinkers. PMID:28163666
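
    Latent profile analysis of the kind used here is often approximated with a Gaussian mixture model. The sketch below, on simulated data with the same sample size and number of situational indicators, selects the number of classes by BIC and reports class sizes; it is illustrative only and uses scikit-learn's GaussianMixture as a stand-in rather than a dedicated LPA package.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical data: 549 students rating likelihood of heavy drinking across
# eight situational antecedents (values are simulated, not the study's data).
n_situations = 8
X = np.vstack([
    rng.normal(1.5, 0.5, (200, n_situations)),   # low endorsement
    rng.normal(3.0, 0.5, (250, n_situations)),   # moderate endorsement
    rng.normal(4.5, 0.5, (99, n_situations)),    # high endorsement
])

# Latent profile analysis is commonly approximated by a Gaussian mixture with
# diagonal covariances; compare solutions by BIC to choose the class count.
for k in (2, 3, 4):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         random_state=0).fit(X)
    print(k, "classes, BIC =", round(gm.bic(X), 1))

profiles = GaussianMixture(n_components=3, covariance_type="diag",
                           random_state=0).fit(X)
print("class sizes:", np.bincount(profiles.predict(X)))
```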

  12. Measurement of the cosmic microwave background spectrum by the COBE FIRAS instrument

    NASA Technical Reports Server (NTRS)

    Mather, J. C.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Fixsen, D. J.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Meyer, S. S.; Noerdlinger, P. D.

    1994-01-01

    The cosmic microwave background radiation (CMBR) has a blackbody spectrum within 3.4 x 10^-8 erg cm^-2 s^-1 sr^-1 cm over the frequency range from 2 to 20 cm^-1 (5-0.5 mm). These measurements, derived from the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite, imply stringent limits on energy release in the early universe after t approximately 1 year and redshift z approximately 3 x 10^6. The deviations are less than 0.03% of the peak brightness, with an rms value of 0.01%, and the dimensionless cosmological distortion parameters are limited to |y| < 2.5 x 10^-5 and |mu| < 3.3 x 10^-4 (95% confidence level). The temperature of the CMBR is 2.726 +/- 0.010 K (95% confidence level systematic).
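
    The brightness units quoted above are per unit wavenumber. A short sketch, assuming standard CGS constants, evaluates the 2.726 K Planck spectrum over the FIRAS band in those units and expresses the quoted 3.4 x 10^-8 deviation limit as a fraction of the peak brightness.

```python
import numpy as np

# Physical constants in CGS units.
h = 6.62607e-27      # erg s
c = 2.99792458e10    # cm / s
k = 1.380649e-16     # erg / K

def planck_per_wavenumber(sigma_invcm, T=2.726):
    """Blackbody brightness per unit wavenumber, erg s^-1 cm^-2 sr^-1 / cm^-1,
    i.e. the units quoted in the FIRAS abstract."""
    x = h * c * sigma_invcm / (k * T)
    return 2.0 * h * c**2 * sigma_invcm**3 / np.expm1(x)

sigma = np.linspace(2.0, 20.0, 500)          # FIRAS band, 2-20 cm^-1
spectrum = planck_per_wavenumber(sigma)
peak = spectrum.max()
print(f"peak brightness ~ {peak:.2e} erg s^-1 cm^-2 sr^-1 cm "
      f"near {sigma[spectrum.argmax()]:.1f} cm^-1")
# The quoted 3.4e-8 deviation limit, expressed as a fraction of this peak:
print(f"3.4e-8 / peak ~ {3.4e-8 / peak:.1e}  (a few hundredths of a percent)")
```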

  13. Modeling of large amplitude plasma blobs in three-dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angus, Justin R.; Umansky, Maxim V.

    2014-01-15

    Fluctuations in fusion boundary and similar plasmas often have the form of filamentary structures, or blobs, that convectively propagate radially. This may lead to the degradation of plasma facing components as well as plasma confinement. Theoretical analysis of plasma blobs usually takes advantage of the so-called Boussinesq approximation of the potential vorticity equation, which greatly simplifies the treatment analytically and numerically. This approximation is only strictly justified when the blob density amplitude is small with respect to that of the background plasma. However, this is not the case for typical plasma blobs in the far scrape-off layer region, where the background density is small compared to that of the blob, and results obtained based on the Boussinesq approximation are questionable. In this report, the solution of the full vorticity equation, without the usual Boussinesq approximation, is proposed via a novel numerical approach. The method is used to solve for the evolution of 2D and 3D plasma blobs in a regime where the Boussinesq approximation is not valid. The Boussinesq solution underpredicts the cross field transport in 2D. However, in 3D, for parameters typical of current tokamaks, the disparity between the radial cross field transport from the Boussinesq approximation and the full solution is virtually non-existent due to the effects of the drift wave instability.
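
    To make the role of the Boussinesq approximation concrete, the sketch below compares a variable-density vorticity inversion, d/dx(n dphi/dx) = w, with its constant-density Boussinesq counterpart, n0 d2phi/dx2 = w, on a 1D grid with an invented blob-like density profile. This is only a toy elliptic solve under assumed profiles, not the paper's 2D/3D time-dependent calculation.

```python
import numpy as np

# 1D illustration of the Boussinesq approximation in the vorticity inversion:
# full operator  d/dx( n(x) dphi/dx ) = w(x)  versus the Boussinesq form
# n0 d2phi/dx2 = w(x).  The density profile and source w are invented.
N = 201
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
n = 0.1 + 2.0 * np.exp(-((x - 0.5) / 0.1) ** 2)   # blob density >> background
w = np.sin(2 * np.pi * x)                          # prescribed vorticity source

def solve(density):
    """Solve d/dx(density * dphi/dx) = w with phi(0) = phi(1) = 0
    using second-order conservative central differences."""
    A = np.zeros((N, N))
    for i in range(1, N - 1):
        dp = 0.5 * (density[i] + density[i + 1])   # density at i+1/2
        dm = 0.5 * (density[i] + density[i - 1])   # density at i-1/2
        A[i, i - 1], A[i, i], A[i, i + 1] = dm, -(dp + dm), dp
    A /= dx ** 2
    A[0, 0] = A[-1, -1] = 1.0                      # Dirichlet boundaries
    rhs = w.copy()
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

phi_full = solve(n)                                # full, variable-density inversion
phi_bouss = solve(np.full(N, 0.1))                 # Boussinesq: background n0 only
print("max |phi_full - phi_bouss| =", np.abs(phi_full - phi_bouss).max())
```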

  14. Search for neutron-antineutron oscillations at the Sudbury Neutrino Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aharmim, B.; Ahmed, S. N.; Anthony, A. E.

    Tests on B–L symmetry breaking models are important probes to search for new physics. One proposed model with Δ(B–L)=2 involves the oscillations of a neutron to an antineutron. In this paper, a new limit on this process is derived for the data acquired from all three operational phases of the Sudbury Neutrino Observatory experiment. The search concentrated on oscillations occurring within the deuteron, and 23 events were observed against a background expectation of 30.5 events. These translated to a lower limit on the nuclear lifetime of 1.48 × 10^31 yr at 90% C.L. when no restriction was placed on the signal likelihood space (unbounded). Alternatively, a lower limit on the nuclear lifetime was found to be 1.18 × 10^31 yr at 90% C.L. when the signal was forced into a positive likelihood space (bounded). Values for the free oscillation time derived from various models are also provided in this article. Furthermore, this is the first search for neutron-antineutron oscillation with the deuteron as a target.
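
    The bounded and unbounded limits above come from the collaboration's full likelihood construction, which is not reproduced here. The sketch below only illustrates the basic counting-experiment ingredient: a Poisson likelihood for the observed 23 events given signal plus the expected 30.5 background events, and a simple 90% CL upper limit on the signal with the signal constrained to be non-negative (a flat-prior analogue of the bounded case).

```python
import numpy as np
from scipy.stats import poisson

n_obs, b_exp = 23, 30.5   # observed events and expected background (from the abstract)

# Likelihood L(s) = Pois(n_obs; s + b_exp) evaluated on a grid of signal strengths.
s_grid = np.linspace(0.0, 40.0, 4001)
like = poisson.pmf(n_obs, s_grid + b_exp)

# Simple 90% CL upper limit on s: flat prior on s >= 0, integrate the
# normalized likelihood (the paper's bounded/unbounded frequentist
# constructions are more involved than this).
cdf = np.cumsum(like)
cdf /= cdf[-1]
s_up = s_grid[np.searchsorted(cdf, 0.90)]
print(f"90% CL upper limit on signal events: s < {s_up:.1f}")
# A lifetime limit would then follow as tau > (exposure x efficiency) / s_up,
# with exposure and efficiency taken from the full analysis (not given here).
```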

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam Paul

    The authors present a measurement of the mass of the top quark. The event sample is selected from proton-antiproton collisions, at 1.96 TeV center-of-mass energy, observed with the CDF detector at Fermilab's Tevatron. They consider a 318 pb^-1 dataset collected between March 2002 and August 2004. They select events that contain one energetic lepton, large missing transverse energy, exactly four energetic jets, and at least one displaced-vertex b tag. The analysis uses leading-order t$\bar{t}$ and background matrix elements along with parameterized parton showering to construct event-by-event likelihoods as a function of top quark mass. From the 63 events observed in the 318 pb^-1 dataset they extract a top quark mass of 172.0 ± 2.6(stat) ± 3.3(syst) GeV/c^2 from the joint likelihood. The mean expected statistical uncertainty is 3.2 GeV/c^2 for m_t = 178 GeV/c^2 and 3.1 GeV/c^2 for m_t = 172.5 GeV/c^2. The systematic error is dominated by the uncertainty of the jet energy scale.
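
    As a toy version of the event-by-event likelihood approach described above, the sketch below gives each of 63 simulated events a broad Gaussian likelihood in the top-mass hypothesis, multiplies them into a joint likelihood, and reads off the best-fit mass and its statistical uncertainty from the Delta(lnL) = 0.5 points. The per-event curves are invented stand-ins for the matrix-element calculation.

```python
import numpy as np

rng = np.random.default_rng(2)
masses = np.linspace(150.0, 200.0, 501)   # GeV/c^2 hypothesis grid

# Toy stand-in for the matrix-element method: each event contributes a broad
# per-event likelihood in the top-mass hypothesis; here each curve is a
# Gaussian centred on a smeared "event mass" (purely illustrative).
true_mass, n_events, resolution = 172.0, 63, 20.0
event_masses = rng.normal(true_mass, resolution, n_events)

def event_likelihood(m_evt):
    return np.exp(-0.5 * ((masses - m_evt) / resolution) ** 2)

# Joint likelihood: sum the per-event log-likelihoods over all 63 events.
log_joint = sum(np.log(event_likelihood(m) + 1e-300) for m in event_masses)

best = masses[np.argmax(log_joint)]
# Points within Delta(lnL) = 0.5 of the maximum give the 1-sigma interval.
in_1sigma = masses[log_joint >= log_joint.max() - 0.5]
print(f"fitted mass {best:.1f} GeV/c^2, "
      f"stat. uncertainty ~ +/- {(in_1sigma[-1] - in_1sigma[0]) / 2:.1f} GeV/c^2")
```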

  16. Quantifying (dis)agreement between direct detection experiments in a halo-independent way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk

    We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
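
    The paper's test statistic is built from event-level likelihoods across several experiments; the sketch below illustrates the general recipe on a much simpler single-bin counting model: define a likelihood-ratio statistic, generate Monte Carlo pseudo-experiments under the hypothesis being tested, and convert the observed value into a p-value. All rates are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)

def q_stat(n_obs, mu_sig, mu_bkg):
    """Simple likelihood-ratio test statistic for a counting experiment:
    -2 ln [ L(signal + background) / L(best fit) ].  A stand-in for the
    paper's halo-independent statistic, which uses full event-level
    likelihoods across several experiments."""
    mu_hat = max(n_obs, mu_bkg)                 # best-fit mean with signal >= 0
    return -2.0 * (poisson.logpmf(n_obs, mu_sig + mu_bkg)
                   - poisson.logpmf(n_obs, mu_hat))

# Hypothetical setup: 3 expected signal and 1 expected background event.
mu_sig, mu_bkg = 3.0, 1.0
q_obs = q_stat(5, mu_sig, mu_bkg)               # suppose 5 events were seen

# Monte Carlo pseudo-experiments under the signal hypothesis give the
# distribution of the statistic, hence a p-value for the observed value.
q_mc = np.array([q_stat(n, mu_sig, mu_bkg)
                 for n in rng.poisson(mu_sig + mu_bkg, 20_000)])
p_value = np.mean(q_mc >= q_obs)
print(f"q_obs = {q_obs:.2f}, Monte Carlo p-value = {p_value:.3f}")
```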

  17. Search for neutron-antineutron oscillations at the Sudbury Neutrino Observatory

    DOE PAGES

    Aharmim, B.; Ahmed, S. N.; Anthony, A. E.; ...

    2017-11-20

    Tests on B–L symmetry breaking models are important probes to search for new physics. One proposed model with Δ(B–L)=2 involves the oscillations of a neutron to an antineutron. In this paper, a new limit on this process is derived for the data acquired from all three operational phases of the Sudbury Neutrino Observatory experiment. The search concentrated on oscillations occurring within the deuteron, and 23 events were observed against a background expectation of 30.5 events. These translated to a lower limit on the nuclear lifetime of 1.48 × 10^31 yr at 90% C.L. when no restriction was placed on the signal likelihood space (unbounded). Alternatively, a lower limit on the nuclear lifetime was found to be 1.18 × 10^31 yr at 90% C.L. when the signal was forced into a positive likelihood space (bounded). Values for the free oscillation time derived from various models are also provided in this article. Furthermore, this is the first search for neutron-antineutron oscillation with the deuteron as a target.

  18. Search for neutron-antineutron oscillations at the Sudbury Neutrino Observatory

    NASA Astrophysics Data System (ADS)

    Aharmim, B.; Ahmed, S. N.; Anthony, A. E.; Barros, N.; Beier, E. W.; Bellerive, A.; Beltran, B.; Bergevin, M.; Biller, S. D.; Boudjemline, K.; Boulay, M. G.; Cai, B.; Chan, Y. D.; Chauhan, D.; Chen, M.; Cleveland, B. T.; Cox, G. A.; Dai, X.; Deng, H.; Detwiler, J. A.; Doe, P. J.; Doucas, G.; Drouin, P.-L.; Duncan, F. A.; Dunford, M.; Earle, E. D.; Elliott, S. R.; Evans, H. C.; Ewan, G. T.; Farine, J.; Fergani, H.; Fleurot, F.; Ford, R. J.; Formaggio, J. A.; Gagnon, N.; Goon, J. TM.; Graham, K.; Guillian, E.; Habib, S.; Hahn, R. L.; Hallin, A. L.; Hallman, E. D.; Harvey, P. J.; Hazama, R.; Heintzelman, W. J.; Heise, J.; Helmer, R. L.; Hime, A.; Howard, C.; Huang, M.; Jagam, P.; Jamieson, B.; Jelley, N. A.; Jerkins, M.; Keeter, K. J.; Klein, J. R.; Kormos, L. L.; Kos, M.; Krüger, A.; Kraus, C.; Krauss, C. B.; Kutter, T.; Kyba, C. C. M.; Lange, R.; Law, J.; Lawson, I. T.; Lesko, K. T.; Leslie, J. R.; Levine, I.; Loach, J. C.; MacLellan, R.; Majerus, S.; Mak, H. B.; Maneira, J.; Martin, R. D.; McCauley, N.; McDonald, A. B.; McGee, S. R.; Miller, M. L.; Monreal, B.; Monroe, J.; Nickel, B. G.; Noble, A. J.; O'Keeffe, H. M.; Oblath, N. S.; Okada, C. E.; Ollerhead, R. W.; Orebi Gann, G. D.; Oser, S. M.; Ott, R. A.; Peeters, S. J. M.; Poon, A. W. P.; Prior, G.; Reitzner, S. D.; Rielage, K.; Robertson, B. C.; Robertson, R. G. H.; Schwendener, M. H.; Secrest, J. A.; Seibert, S. R.; Simard, O.; Simpson, J. J.; Sinclair, D.; Skensved, P.; Sonley, T. J.; Stonehill, L. C.; Tešić, G.; Tolich, N.; Tsui, T.; Van Berg, R.; VanDevender, B. A.; Virtue, C. J.; Wall, B. L.; Waller, D.; Wan Chan Tseung, H.; Wark, D. L.; Wendland, J.; West, N.; Wilkerson, J. F.; Wilson, J. R.; Wright, A.; Yeh, M.; Zhang, F.; Zuber, K.; SNO Collaboration

    2017-11-01

    Tests on B-L symmetry breaking models are important probes to search for new physics. One proposed model with Δ(B-L)=2 involves the oscillations of a neutron to an antineutron. In this paper, a new limit on this process is derived for the data acquired from all three operational phases of the Sudbury Neutrino Observatory experiment. The search concentrated on oscillations occurring within the deuteron, and 23 events were observed against a background expectation of 30.5 events. These translated to a lower limit on the nuclear lifetime of 1.48 × 10^31 yr at 90% C.L. when no restriction was placed on the signal likelihood space (unbounded). Alternatively, a lower limit on the nuclear lifetime was found to be 1.18 × 10^31 yr at 90% C.L. when the signal was forced into a positive likelihood space (bounded). Values for the free oscillation time derived from various models are also provided in this article. This is the first search for neutron-antineutron oscillation with the deuteron as a target.

  19. Effects of Background Noise on Cortical Encoding of Speech in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Russo, Nicole; Zecker, Steven; Trommer, Barbara; Chen, Julia; Kraus, Nina

    2009-01-01

    This study provides new evidence of deficient auditory cortical processing of speech in noise in autism spectrum disorders (ASD). Speech-evoked responses (approximately 100-300 ms) in quiet and background noise were evaluated in typically-developing (TD) children and children with ASD. ASD responses showed delayed timing (both conditions) and…

  20. Graviton propagator from background-independent quantum gravity.

    PubMed

    Rovelli, Carlo

    2006-10-13

    We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.
