Sample records for joint likelihood function

  1. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
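The maximum-likelihood hypothesis test described above can be sketched in miniature: model each chromosome class as a Gaussian in spectral space and compare the joint likelihood of "one chromosome" against "two chromosomes split at a candidate cut." Everything below (class signatures, noise level, region size, cut location) is an illustrative assumption, not the paper's actual M-FISH model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-channel spectral signatures for two chromosome classes
means = {"class_A": np.array([0.9, 0.1, 0.4]),
         "class_B": np.array([0.2, 0.8, 0.5])}
sigma = 0.05  # assumed isotropic pixel noise

def log_lik(pixels, mu):
    """Gaussian log-likelihood of a pixel block under one class signature."""
    d = pixels - mu
    return (-0.5 * np.sum(d * d) / sigma**2
            - pixels.size * np.log(sigma * np.sqrt(2 * np.pi)))

# A "touching" region: 50 pixels from class A followed by 50 from class B
region = np.vstack([rng.normal(means["class_A"], sigma, (50, 3)),
                    rng.normal(means["class_B"], sigma, (50, 3))])

# Hypothesis 1: the region is a single chromosome (best single class)
h1 = max(log_lik(region, mu) for mu in means.values())
# Hypothesis 2: the region splits at a candidate cut into two chromosomes
cut = 50
h2 = (max(log_lik(region[:cut], mu) for mu in means.values())
      + max(log_lik(region[cut:], mu) for mu in means.values()))

print("split hypothesis preferred:", h2 > h1)
```

Because half the pixels fit a single class badly, the split hypothesis wins by a large likelihood margin, which is the mechanism the paper exploits to decompose touching chromosomes.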

  2. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
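A minimal sketch of such a joint likelihood, assuming a logistic outcome model and a logistic non-ignorable missingness mechanism (all parameter values and sample sizes below are illustrative, not from the paper): the unobserved outcomes are summed out, and the joint log-likelihood is maximized numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)

# Simulated data under assumed (illustrative) parameter values
n = 2000
x = rng.normal(size=n)
b0, b1, a0, a1 = -0.5, 1.0, 1.0, -1.5      # outcome and missingness params
y = rng.binomial(1, expit(b0 + b1 * x))
r = rng.binomial(1, expit(a0 + a1 * y))    # r=1: y observed (non-ignorable)

def neg_joint_loglik(theta):
    """Joint likelihood: outcome model times missing-data mechanism,
    with the unobserved y summed out for records with r=0."""
    b0_, b1_, a0_, a1_ = theta
    py = expit(b0_ + b1_ * x)              # P(y=1 | x)
    pr = lambda yy: expit(a0_ + a1_ * yy)  # P(r=1 | y)
    ll_obs = np.where(y == 1, np.log(py) + np.log(pr(1)),
                      np.log1p(-py) + np.log(pr(0)))
    ll_mis = np.log(py * (1 - pr(1)) + (1 - py) * (1 - pr(0)))
    return -(np.sum(ll_obs[r == 1]) + np.sum(ll_mis[r == 0]))

start = np.array([b0, b1, a0, a1])         # start at truth for the sketch
fit = minimize(neg_joint_loglik, start, method="Nelder-Mead")
print(np.round(fit.x, 2))
```

The paper's reassessment design additionally recovers a portion of the missing records in a second sampling wave, which is what makes the non-ignorable mechanism well identified in practice.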

  3. Joint penalized-likelihood reconstruction of time-activity curves and regions-of-interest from projection data in brain PET

    NASA Astrophysics Data System (ADS)

    Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.

    2008-06-01

    This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.

  4. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    PubMed

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization by formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives perfusion estimation accuracy comparable to that obtained with boundary side information.

  5. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is thus mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
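The path-sampling identity behind thermodynamic integration, log Z = ∫₀¹ E_t[log L] dt, where E_t is an expectation under the power posterior p_t ∝ prior × L^t, can be checked on a conjugate Gaussian toy problem: there p_t is itself Gaussian, so it can be sampled directly instead of by MCMC. The model, data size, and temperature schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (illustrative): y_i ~ N(theta, 1) with prior theta ~ N(0, 1)
y = rng.normal(0.7, 1.0, size=8)
n, s = len(y), y.sum()

def log_lik(theta):
    # Vectorized log-likelihood for an array of theta samples
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum((y[None, :] - theta[:, None])**2, axis=1))

# Power posterior p_t ∝ prior * L^t is Gaussian here, so sample it directly
ts = np.linspace(0.0, 1.0, 51)
e_loglik = []
for t in ts:
    prec = 1.0 + t * n                    # precision of p_t
    mu = t * s / prec                     # mean of p_t
    theta = rng.normal(mu, 1.0 / np.sqrt(prec), size=4000)
    e_loglik.append(log_lik(theta).mean())

# Thermodynamic integration: trapezoid rule over the temperature path
e = np.array(e_loglik)
ti_estimate = float(np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(ts)))

# Analytic log evidence for comparison: y ~ N(0, I + 11^T)
C = np.eye(n) + np.ones((n, n))
log_Z = float(-0.5 * (n * np.log(2 * np.pi)
                      + np.linalg.slogdet(C)[1]
                      + y @ np.linalg.solve(C, y)))
print("TI estimate close to analytic evidence:", abs(ti_estimate - log_Z) < 0.2)
```

In a real application each E_t[log L] would come from an MCMC chain run at power coefficient t, which is exactly the path sampling the abstract describes.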

  6. A black box optimization approach to parameter estimation in a model for long/short term variations dynamics of commodity prices

    NASA Astrophysics Data System (ADS)

    De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano

    2012-11-01

In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state space dynamical model in which the problem unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are only seldom freely available. Both the system joint likelihood function (of the state variables and parameters) and the system marginal likelihood function (with the state variables eliminated) are addressed.

  7. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
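A minimal sketch of the joint Poisson log-likelihood over frames, maximized with a multi-frame Richardson-Lucy (EM) update. The 1-D scene, Gaussian per-frame PSFs, and circular convolution are simplifying assumptions, not the paper's AO imaging model; because the update is an EM step, the joint log-likelihood should increase monotonically.

```python
import numpy as np

rng = np.random.default_rng(3)

def cconv(a, b):
    """Circular convolution; with a normalized kernel, columns of H sum to 1."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

n = 64
truth = np.zeros(n)
truth[20], truth[40] = 300.0, 150.0        # two point sources (illustrative)

# Two hypothetical frame PSFs (symmetric Gaussians, so the adjoint H^T = H)
psfs = []
for width in (2.0, 3.5):
    k = np.exp(-0.5 * (np.arange(n) - n // 2)**2 / width**2)
    psfs.append(np.roll(k / k.sum(), -n // 2))

frames = [rng.poisson(np.clip(cconv(truth, h), 0, None)) for h in psfs]

def joint_loglik(x):
    """Joint Poisson log-likelihood summed over all frames (log y! dropped)."""
    total = 0.0
    for yk, h in zip(frames, psfs):
        lam = cconv(x, h) + 1e-12
        total += float(np.sum(yk * np.log(lam) - lam))
    return total

# Multi-frame Richardson-Lucy (EM) update for the joint likelihood
x = np.full(n, float(np.mean(frames)))
ll = [joint_loglik(x)]
for _ in range(20):
    ratio = sum(cconv(yk / (cconv(x, h) + 1e-12), h)  # symmetric PSF: H^T = H
                for yk, h in zip(frames, psfs))
    x = x * ratio / len(frames)
    ll.append(joint_loglik(x))

print("monotone increase:", all(b >= a - 1e-6 for a, b in zip(ll, ll[1:])))
```

The paper's algorithm additionally estimates the PSFs (blind deconvolution) and selects good frames first; the sketch fixes the PSFs to isolate the joint-likelihood maximization step.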

  8. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  9. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
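The density-estimation idea can be sketched on a toy problem: draw (parameter, summary) pairs from forward simulations, fit their joint density, and read the posterior off as the slice of that density at the observed summary. The Gaussian simulator and the KDE density estimator below are stand-ins for the paper's setup, chosen so the result can be checked against the conjugate analytic posterior.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Toy setup: data are 10 draws of N(theta, 1), compressed to the sample mean
n_sims, n_data = 20000, 10
thetas = rng.normal(0.0, 1.0, size=n_sims)      # prior theta ~ N(0, 1)
summaries = rng.normal(thetas[:, None], 1.0,
                       size=(n_sims, n_data)).mean(axis=1)

# Learn the joint density p(theta, s); the posterior is its slice at s_obs
kde = gaussian_kde(np.vstack([thetas, summaries]))
s_obs = 0.8
grid = np.linspace(-2.0, 3.0, 200)
dg = grid[1] - grid[0]
post = kde(np.vstack([grid, np.full_like(grid, s_obs)]))
post /= post.sum() * dg                          # normalize the slice

post_mean = float(np.sum(grid * post) * dg)
analytic_mean = n_data * s_obs / (n_data + 1.0)  # conjugate Gaussian result
print("posterior mean close to analytic:", abs(post_mean - analytic_mean) < 0.1)
```

DELFI replaces the KDE with a parametrized conditional density model and the sample mean with the score-based compression described above, but the slice-at-the-observed-summary logic is the same.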

  10. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  11. Top-quark mass measurement from dilepton events at CDF II.

    PubMed

    Abulencia, A; Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Ben-Haim, E; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Bourov, S; Boveia, A; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carron, S; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chapman, J; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, P H; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Connolly, A; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Cruz, A; Cuevas, J; Culbertson, R; Cyr, D; DaRonco, S; D'Auria, S; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'Orso, M; Demers, S; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J; Dituro, P; Dörr, C; Dominguez, A; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Ebina, K; Efron, J; Ehlers, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; 
Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallinaro, M; Galyardt, J; Garcia, J E; Garcia Sciverez, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerchtein, E; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Haber, C; Hahn, S R; Hahn, K; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heijboer, A; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Kang, J; Karagoz-Unel, M; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, Y K; Kirby, M; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kobayashi, H; Kondo, K; Kong, D J; Konigsberg, J; Kordas, K; Korytov, A; Kotwal, A V; Kovalev, A; Kraus, J; Kravchenko, I; Kreps, M; Kreymer, A; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; 
Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Maki, T; Maksimovic, P; Manca, G; Margaroli, F; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P; McNamara, P; McNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; von der Mey, M; Miao, T; Miladinovic, N; Miles, J; Miller, R; Miller, J S; Mills, C; Milnik, M; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Mulhearn, M; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nahn, S; Nakano, I; Napier, A; Naumov, D; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Ogawa, T; Oh, S H; Oh, Y D; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Papikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K; Plager, C; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rakitin, A; Rappoccio, S; Ratnikov, F; Reisert, B; Rekovic, V; van Remortel, N; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Rott, C; Ruiz, A; Russ, J; Rusu, V; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Salto, O; Saltzberg, D; Sanchez, C; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, 
Y; Semenov, A; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sill, A; Sinervo, P; Sisakyan, A; Sjolin, J; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Vallecorsa, S; Varganov, A; Vataga, E; Velev, G; Veramendi, G; Veszpremi, V; Vickey, T; Vidal, R; Vila, I; Vilar, R; Vollrath, I; Volobouev, I; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Worm, S; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, Y; Yang, C; Yang, U K; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhang, X; Zhou, J; Zucchelli, S

    2006-04-21

We report a measurement of the top-quark mass using events collected by the CDF II detector from pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron. We calculate a likelihood function for the top-quark mass in events that are consistent with tt̄ → bℓ⁻ν_ℓ b̄ℓ′⁺ν_ℓ′ decays. The likelihood is formed as the convolution of the leading-order matrix element and detector resolution functions. The joint likelihood is the product of likelihoods for each of 33 events collected in 340 pb⁻¹ of integrated luminosity, yielding a top-quark mass M_t = 165.2 ± 6.1 (stat) ± 3.4 (syst) GeV/c². This first application of a matrix-element technique to tt̄ → bℓ⁺ν_ℓ b̄ℓ′⁻ν_ℓ′ decays gives the most precise single measurement of M_t in dilepton events. Combined with other CDF Run II measurements using dilepton events, we measure M_t = 167.9 ± 5.2 (stat) ± 3.7 (syst) GeV/c².
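The joint-likelihood construction, a product of per-event likelihood curves in the top-quark mass, can be sketched with hypothetical Gaussian per-event curves; the 15 GeV per-event resolution and the event count are illustrative stand-ins, not the CDF matrix-element likelihoods.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-event likelihood curves L_i(m): Gaussians centered near a
# true mass with an assumed per-event resolution
m_true, res, n_events = 165.0, 15.0, 33
centers = rng.normal(m_true, res, size=n_events)

grid = np.linspace(140.0, 190.0, 1001)
# Joint likelihood = product over events, i.e. the sum of log-likelihoods
joint_loglik = np.sum(-0.5 * ((grid[None, :] - centers[:, None]) / res)**2,
                      axis=0)
m_hat = grid[np.argmax(joint_loglik)]

# Combining n_events curves narrows the peak by roughly 1/sqrt(n_events)
print(round(float(m_hat), 2))
```

For Gaussian curves the joint maximum sits at the mean of the per-event centers; in the real analysis each curve comes from convolving the leading-order matrix element with detector resolution functions, so the curves are neither Gaussian nor equally wide.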

  12. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-10-18

We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  13. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
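Treating the approximated posterior density as a likelihood permits standard likelihood-ratio tests of nested models, as the abstract notes. A sketch of that final step, with illustrative maximized log-likelihood values (not from the chimpanzee application):

```python
from scipy.stats import chi2

# Illustrative maximized log-likelihoods for two nested models
loglik_null = -1234.6   # e.g. a single population size shared by both groups
loglik_alt = -1230.1    # e.g. separate population sizes (one extra parameter)

lr = 2.0 * (loglik_alt - loglik_null)   # likelihood-ratio statistic
p_value = chi2.sf(lr, df=1)             # df = number of extra parameters
print(round(lr, 1), round(p_value, 4))  # -> 9.0 0.0027
```

Here the null would be rejected at conventional levels; in the Hey and Nielsen framework the two log-likelihood values would each come from the MCMC-integrated joint posterior density maximized under the respective model.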

  14. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
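The effect of length bias on the likelihood can be sketched in the simplest uncensored case: under length-biased sampling the observed density is g(x) = x f(x)/μ, so for an exponential f the length-biased MLE is 2/x̄ rather than the naive 1/x̄. This conveys only the flavor of the idea; the paper's EM algorithms handle right censoring and infinite-dimensional parameters.

```python
import numpy as np

rng = np.random.default_rng(6)

# Under length-biased sampling the observed density is g(x) = x f(x) / mu.
# For exponential f with rate lam, g is Gamma(shape=2, scale=1/lam), and
# maximizing the length-biased likelihood gives lam_hat = 2 / mean(x).
lam = 0.5
x = rng.gamma(shape=2.0, scale=1.0 / lam, size=50000)  # length-biased draws

lam_naive = 1.0 / x.mean()   # ignores the sampling bias, converges to lam/2
lam_mle = 2.0 / x.mean()     # length-biased MLE, converges to lam
print(round(float(lam_naive), 2), round(float(lam_mle), 2))
```

The naive estimator is off by a factor of two because longer durations are over-represented in the sample; writing the likelihood with the correct biased density removes that distortion.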

  15. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

Bivariate multinomial data, such as left- and right-eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Moreover, this latter odds ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  17. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-08

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.

  18. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images

    NASA Astrophysics Data System (ADS)

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-09-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. 
Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.

  19. Joint Stability in Total Knee Arthroplasty: What Is the Target for a Stable Knee?

    PubMed

    Wright, Timothy M

    2017-02-01

Instability remains a common cause of failure in total knee arthroplasty. Although approaches for surgical treatment of instability exist, the target for initial stability remains elusive, increasing the likelihood that failures will persist because adequate stability is not restored when performing the primary arthroplasty. Although the mechanisms that stabilize the knee joint (contact between the articular surfaces, ligamentous constraints, and muscle forces) are well-defined, their relative importance and the interplay among them throughout functions of daily living are poorly understood. The problem is exacerbated by the complex multiplanar motions that occur across the joint and the large variations in these motions across the population, suggesting that stability targets may need to be patient-specific.

  20. Determinants of shoulder and elbow flexion range: results from the San Antonio Longitudinal Study of Aging.

    PubMed

    Escalante, A; Lichtenstein, M J; Hazuda, H P

    1999-08-01

To gain knowledge of factors associated with impaired upper extremity range of motion (ROM) in order to understand pathways that lead to disability. Shoulder and elbow flexion range was measured in a cohort of 695 community-dwelling subjects aged 65 to 74 years. Associations between subjects' shoulder and elbow flexion ranges and their demographic and anthropometric characteristics, as well as the presence of diabetes mellitus or self-reported physician-diagnosed arthritis, were examined using multivariate regression models. The relationship between shoulder or elbow flexion range and subjects' functional reach was examined to explore the functional significance of ROM in these joints. The flexion range for the 4 joints studied was at least 120 degrees in nearly all subjects (≥99% of the subjects for each of the 4 joints). Multivariate models revealed significant associations between male sex, Mexican American ethnic background, the use of oral hypoglycemic drugs or insulin to treat diabetes mellitus, and a lower shoulder flexion range. A lower elbow flexion range was associated with male sex, increasing body mass index, and the use of oral hypoglycemic drugs or insulin. A higher shoulder or elbow flexion range was associated with a lower likelihood of having a short functional reach. The great majority of community-dwelling elderly have a flexion range of shoulder and elbow joints that can be considered functional. Diabetes mellitus and obesity are two potentially treatable factors associated with reduced flexion range of these two functionally important joints.

  1. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

In this paper, we propose a point spread function (PSF) reconstruction method and joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. To begin with, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas of the AO image based on our proposed algorithm, addressing the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have practical value for actual AO image restoration.
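For orientation, the Richardson-Lucy baseline used in the comparison can be sketched in a few lines. This is an illustrative 1-D, known-PSF implementation with circular convolution and a hypothetical boxcar blur, not the multi-frame JMAP method of the record:

```python
import numpy as np

def convolve(x, psf):
    # circular convolution via the FFT keeps the example short
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))

def richardson_lucy(y, psf, n_iter=200):
    # multiplicative RL update: x <- x * K^T(y / Kx); the adjoint K^T of a
    # circular convolution is convolution with the index-reversed PSF
    x = np.full_like(y, y.mean())
    psf_adj = np.roll(psf[::-1], 1)
    for _ in range(n_iter):
        ratio = y / np.maximum(convolve(x, psf), 1e-12)
        x = x * convolve(ratio, psf_adj)
    return x

# toy example: blur a two-spike scene, then deblur it
n = 64
truth = np.zeros(n); truth[20] = 5.0; truth[40] = 3.0
psf = np.zeros(n); psf[:5] = 1.0 / 5.0       # normalized 5-tap boxcar PSF
blurred = convolve(truth, psf)
restored = richardson_lucy(blurred, psf)
```

A useful property visible here is flux conservation: with a normalized PSF, every RL iteration preserves the total intensity of the data.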

  2. A composite likelihood approach for spatially correlated survival data

    PubMed Central

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
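To make the FGM ingredient concrete, the toy sketch below simulates pairs from an FGM copula with a fixed dependence parameter and recovers it by maximizing the sum of pairwise log-copula densities. It assumes uniform margins and a constant θ, rather than the paper's spatial-distance model for θ:

```python
import math
import random

def sample_fgm(theta, n, rng):
    # conditional-inversion sampler: solve C_{2|1}(v | u) = t for v,
    # where C(u, v) = u*v*(1 + theta*(1-u)*(1-v))
    pairs = []
    for _ in range(n):
        u, t = rng.random(), rng.random()
        a = theta * (1 - 2 * u)
        if abs(a) < 1e-12:
            v = t
        else:
            v = ((1 + a) - math.sqrt((1 + a) ** 2 - 4 * a * t)) / (2 * a)
        pairs.append((u, v))
    return pairs

def composite_loglik(theta, pairs):
    # FGM copula density: c(u, v) = 1 + theta*(1-2u)*(1-2v)
    return sum(math.log(1 + theta * (1 - 2 * u) * (1 - 2 * v))
               for u, v in pairs)

rng = random.Random(0)
pairs = sample_fgm(0.8, 4000, rng)
grid = [i / 100 for i in range(-99, 100)]
theta_hat = max(grid, key=lambda th: composite_loglik(th, pairs))
```

The grid maximizer lands close to the true θ = 0.8; in the paper's setting, the same pairwise log-density terms are summed over pairs of spatially correlated subjects instead of independent replicates.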

  3. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.

  4. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    NASA Astrophysics Data System (ADS)

    Fusco, Tilde; Petrella, Angelo; Tanda, Mario

    2009-12-01

The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed-form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed-form solutions, a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for the AWGN channel and by a previously derived joint estimator for OFDM systems.

  5. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, influential observations that are often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation maximization-based algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.

  6. A retrospective likelihood approach for efficient integration of multiple omics factors in case-control association studies.

    PubMed

    Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine

    2015-03-01

Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of the Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.

  7. A study of parameter identification

    NASA Technical Reports Server (NTRS)

    Herget, C. J.; Patterson, R. E., III

    1978-01-01

A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
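The role of the information matrix in identifiability can be seen in a toy sketch: when a model depends on two parameters only through their product, the Fisher information matrix is rank-deficient, which signals non-identifiability. The linear-Gaussian model below is a hypothetical illustration, not taken from the record:

```python
import numpy as np

# mean model mu(x; a, b) = a * b * x: only the product a*b is identifiable
x = np.linspace(0.1, 1.0, 10)
a, b, sigma = 2.0, 3.0, 0.5

# Jacobian of the mean with respect to (a, b); the two columns b*x and a*x
# are proportional, so the information matrix has rank one
J = np.column_stack([b * x, a * x])
fisher = J.T @ J / sigma**2          # Fisher information for Gaussian noise

eigvals = np.linalg.eigvalsh(fisher)
```

The smallest eigenvalue is numerically zero: data generated by this model can never separate a from b, no matter the sample size, and a (near-)singular information matrix is the standard diagnostic for this.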

  8. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.

  9. Efficient Exploration of the Space of Reconciled Gene Trees

    PubMed Central

    Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent

    2013-01-01

Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. 
The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510

  10. Analysis of the Effects of Normal Walking on Ankle Joint Contact Characteristics After Acute Inversion Ankle Sprain.

    PubMed

    Bae, Ji Yong; Park, Kyung Soon; Seon, Jong Keun; Jeon, Insu

    2015-12-01

To show the causal relationship between normal walking after various lateral ankle ligament (LAL) injuries caused by acute inversion ankle sprains and alterations in ankle joint contact characteristics, finite element simulations of normal walking were carried out using an intact ankle joint model and LAL injury models. A walking experiment using a volunteer with a normal ankle joint was performed to obtain the boundary conditions for the simulations and to support the appropriateness of the simulation results. Contact pressure and strain on the talus articular cartilage and anteroposterior and mediolateral translations of the talus were calculated. Ankles with ruptured anterior talofibular ligaments (ATFLs) had a higher likelihood of experiencing increased ankle joint contact pressures, strains and translations than intact ankles. In particular, ankles with ruptured ATFL + calcaneofibular ligaments and ankles with all ligaments ruptured had a likelihood similar to that of the ATFL-ruptured ankles. The push-off stance phase was the most likely situation for increased ankle joint contact pressures, strains and translations in LAL-injured ankles.

  11. Pseudo and conditional score approach to joint analysis of current count and current status data.

    PubMed

    Wen, Chi-Chung; Chen, Yi-Hau

    2018-04-17

We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censoring, which are known in the literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent with rates of square and cubic roots of the sample size, respectively. Asymptotic normality with closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to fracture-osteoporosis survey data to identify risk factors jointly for fracture and osteoporosis in the elderly, while accounting for association between the two events within a subject. © 2018, The International Biometric Society.

  12. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
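The idea can be sketched for the data-aided case, where joint maximum-likelihood estimation of the signal amplitude and noise power reduces to sample moments of the matched-filter outputs. This is a toy sketch with known symbols, not the estimator of the record:

```python
import random

# matched-filter outputs of a biphase signal: y_k = d_k * m + n_k
rng = random.Random(2)
m_true, sigma_true, n = 1.0, 0.5, 20000
d = [rng.choice((-1, 1)) for _ in range(n)]
y = [dk * m_true + rng.gauss(0, sigma_true) for dk in d]

# joint ML estimates, assuming the symbols d_k are known (data-aided case):
# amplitude = correlation with the symbols, noise power = residual variance
m_hat = sum(dk * yk for dk, yk in zip(d, y)) / n
var_hat = sum((yk - dk * m_hat) ** 2 for dk, yk in zip(d, y)) / n
snr_hat = m_hat ** 2 / var_hat        # true SNR here is m^2 / sigma^2 = 4
```

The combiner-weight problem mentioned in the abstract uses the same two quantities: the optimal weight of a channel is proportional to its estimated amplitude divided by its estimated noise power.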

  13. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
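A common way to see the benefit of joint delay estimation is a toy alignment loop: alternately average the re-aligned trials into a template, then re-estimate each trial's delay against that template. This is a simplified correlation-based sketch (synthetic Gaussian "ERP", circular shifts), not the ML schemes proposed in the record:

```python
import numpy as np

rng = np.random.default_rng(3)
n, length = 30, 128
t = np.arange(length)
erp = np.exp(-0.5 * ((t - 40) / 5.0) ** 2)       # synthetic "ERP" peak

true_delays = rng.integers(-10, 11, size=n)      # unknown per-trial delays
trials = np.stack([np.roll(erp, d) for d in true_delays])
trials += 0.2 * rng.standard_normal(trials.shape)

delays = np.zeros(n, dtype=int)
for _ in range(5):
    # (1) average the re-aligned trials into a template estimate
    aligned = np.stack([np.roll(tr, -d) for tr, d in zip(trials, delays)])
    template = aligned.mean(axis=0)
    # (2) re-estimate each delay by maximizing correlation with the template
    for i, tr in enumerate(trials):
        scores = [np.dot(np.roll(tr, -d), template) for d in range(-15, 16)]
        delays[i] = int(np.argmax(scores)) - 15

template = np.stack([np.roll(tr, -d)
                     for tr, d in zip(trials, delays)]).mean(axis=0)
```

After alignment, the averaged template regains a sharp peak; averaging the raw, misaligned trials smears it out, which is exactly the sensitivity to variable delays that the abstract describes.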

  14. Biomechanics of injury prediction for anthropomorphic manikins - preliminary design considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engin, A.E.

    1996-12-31

    The anthropomorphic manikins are used in automobile safety research as well as in aerospace related applications. There is now a strong need to advance the biomechanics knowledge to determine appropriate criteria for injury likelihood prediction as functions of manikin-measured responses. In this paper, three regions of a manikin, namely, the head, knee joint, and lumbar spine are taken as examples to introduce preliminary design considerations for injury prediction by means of responses of theoretical models and strategically placed sensing devices.

  15. Excellent AUC for joint fluid cytology in the detection/exclusion of hip and knee prosthetic joint infection.

    PubMed

    Gallo, Jiri; Juranova, Jarmila; Svoboda, Michal; Zapletalova, Jana

    2017-09-01

The aim of this study was to evaluate the characteristics of the synovial fluid (SF) white cell count (SWCC) and neutrophil/lymphocyte percentage in the diagnosis of prosthetic joint infection (PJI) at particular threshold values. This was a prospective study of 391 patients in whom SF specimens were collected before total joint replacement revisions. SF was aspirated before joint capsule incision. The PJI diagnosis was based only on non-SF data. Receiver operating characteristic plots were constructed for the SWCC and differential counts of leukocytes in aspirated fluid. Binomial logistic regression was used to distinguish infected and non-infected cases in the combined data. PJI was diagnosed in 78 patients, and aseptic revision in 313 patients. The areas under the curve (AUC) for the SWCC and the neutrophil and lymphocyte percentages were 0.974, 0.962, and 0.951, respectively. The optimal cut-offs for PJI were 3,450 cells/μL, 74.6% neutrophils, and 14.6% lymphocytes. Positive likelihood ratios for the SWCC and the neutrophil and lymphocyte percentages were 19.0, 10.4, and 9.5, respectively. Negative likelihood ratios for the SWCC and the neutrophil and lymphocyte percentages were 0.06, 0.076, and 0.092, respectively. Based on the AUC, the present study identified cut-off values for the SWCC and differential leukocyte count for the diagnosis of PJI. The likelihood ratio for a positive/negative SWCC can significantly change the pre-test probability of PJI.
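As a reminder of how such likelihood ratios act on the pre-test probability, the sketch below applies Bayes' theorem in odds form. The sensitivity/specificity values are illustrative placeholders, not figures from the study; only the 78/391 prevalence comes from the abstract:

```python
def likelihood_ratios(sensitivity, specificity):
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
    return (sensitivity / (1 - specificity),
            (1 - sensitivity) / specificity)

def post_test_probability(pre_test_p, lr):
    # convert to odds, multiply by the likelihood ratio, convert back
    odds = pre_test_p / (1 - pre_test_p) * lr
    return odds / (1 + odds)

lr_pos, lr_neg = likelihood_ratios(0.95, 0.95)   # illustrative values only
p_pre = 78 / 391                                 # PJI prevalence in the cohort
p_after_positive = post_test_probability(p_pre, lr_pos)
p_after_negative = post_test_probability(p_pre, lr_neg)
```

With an LR+ near 19, a positive test lifts a ~20% pre-test probability above 80%, while an LR- near 0.05 pushes it to about 1%, which is why the abstract emphasizes the likelihood ratios rather than accuracy alone.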

  16. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    NASA Astrophysics Data System (ADS)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.

  17. Production Functions for Water Delivery Systems: Analysis and Estimation Using Dual Cost Function and Implicit Price Specifications

    NASA Astrophysics Data System (ADS)

    Teeples, Ronald; Glyer, David

    1987-05-01

    Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.

  18. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to easily explore them with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation of current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need of evaluating the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
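The basic likelihood-free machinery underlying this line of work is ABC rejection sampling: simulate from the model and keep the parameter draws whose summaries land close to the observed ones. A minimal sketch with a Bernoulli toy model (not the paper's error-term construction):

```python
import random

def simulate(p, n, rng):
    # forward model: number of successes in n Bernoulli(p) trials
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(4)
n, s_obs, eps = 100, 30, 2        # observed summary: 30 successes out of 100
accepted = []
while len(accepted) < 500:
    p = rng.random()              # draw from the Uniform(0, 1) prior
    if abs(simulate(p, n, rng) - s_obs) <= eps:
        accepted.append(p)        # keep draws whose simulations match

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior without a single likelihood evaluation; the model-criticism extension discussed above additionally inspects the distribution of the discrepancies themselves as a diagnostic.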

  19. Analyzing Planck and low redshift data sets with advanced statistical methods

    NASA Astrophysics Data System (ADS)

    Eifler, Tim

The recent ESA/NASA Planck mission has provided a key data set to constrain cosmology that is most sensitive to physics of the early Universe, such as inflation and primordial non-Gaussianity (Planck 2015 results XIII). In combination with cosmological probes of the Large-Scale Structure (LSS), the Planck data set is a powerful source of information to investigate late-time phenomena (Planck 2015 results XIV), e.g. the accelerated expansion of the Universe, the impact of baryonic physics on the growth of structure, and the alignment of galaxies in their dark matter halos. It is the main objective of this proposal to re-analyze the archival Planck data, 1) with different, more recently developed statistical methods for cosmological parameter inference, and 2) to combine Planck and ground-based observations in an innovative way. We will make the corresponding analysis framework publicly available and believe that it will set a new standard for future CMB-LSS analyses. Advanced statistical methods, such as the Gibbs sampler (Jewell et al 2004, Wandelt et al 2004), have been critical in the analysis of Planck data. More recently, Approximate Bayesian Computation (ABC, see Weyant et al 2012, Akeret et al 2015, Ishida et al 2015, for cosmological applications) has matured into an interesting tool for cosmological likelihood analyses. It circumvents several assumptions that enter the standard Planck (and most LSS) likelihood analyses, most importantly the assumption that the functional form of the likelihood of the CMB observables is a multivariate Gaussian. Beyond applying new statistical methods to Planck data in order to cross-check and validate existing constraints, we plan to combine Planck and DES data in a new and innovative way and run multi-probe likelihood analyses of CMB and LSS observables. The complexity of multi-probe likelihood analyses scales (non-linearly) with the level of correlations amongst the individual probes that are included. 
For the multi-probe analysis proposed here we will use the existing CosmoLike software, a computationally efficient analysis framework that is unique in its integrated ansatz of jointly analyzing probes of the large-scale structure (LSS) of the Universe. We plan to combine CosmoLike with publicly available CMB analysis software (Camb, CLASS) to include modeling capabilities for CMB temperature, polarization, and lensing measurements. The resulting analysis framework will be capable of independently and jointly analyzing data from the CMB and from various probes of the LSS of the Universe. After completion we will utilize this framework to check for consistency amongst the individual probes and subsequently run a joint likelihood analysis of probes that are not in tension. The inclusion of Planck information in a joint likelihood analysis substantially reduces DES uncertainties in cosmological parameters, and allows for unprecedented constraints on parameters that describe astrophysics. In their recent review Observational Probes of Cosmic Acceleration (Weinberg et al 2013) the authors emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. The work we propose follows exactly this idea: 1) cross-checking existing Planck results with alternative methods in the data analysis, 2) checking for consistency of Planck and DES data, and 3) running a joint analysis to constrain cosmology and astrophysics. It is now expedient to develop and refine multi-probe analysis strategies that allow the comparison and inclusion of information from disparate probes to optimally constrain cosmology and astrophysics. Analyzing Planck and DES data poses an ideal opportunity for this purpose, and the corresponding lessons will be of great value for the science preparation of Euclid and WFIRST.

  20. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    PubMed

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

    Joint activity and attenuation reconstruction methods from time-of-flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim to validate the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set are processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which reduces the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in cases of PET and CT misalignment caused by patient and organ motion. Our quantitative analysis shows differences of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA, respectively, compared to MLEM, averaged over all regions of interest. 
Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
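
For readers unfamiliar with the MLEM baseline against which MLRR and MLAA are compared, the following is a minimal sketch of the MLEM multiplicative update on a tiny made-up system. The system matrix, activity distribution, and iteration count are invented for illustration; TOF, attenuation, and scatter terms are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tiny system: 40 detector bins viewing 20 image pixels.
n_pix, n_det = 20, 40
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))  # system (projection) matrix
x_true = np.zeros(n_pix)
x_true[5:10] = 4.0                              # a simple "hot" activity region
y = rng.poisson(A @ x_true)                     # Poisson emission data

# MLEM multiplicative update: x <- x * A^T(y / Ax) / A^T 1
x = np.ones(n_pix)                              # strictly positive start
sens = A.sum(axis=0)                            # sensitivity image, A^T 1
for _ in range(200):
    proj = A @ x                                # forward projection
    x *= (A.T @ (y / proj)) / sens

print(np.round(x[5:10], 1))                     # activity in the hot region
```

A useful property visible in this sketch: each MLEM iteration exactly preserves the total projected counts, i.e. the sum of the forward projection equals the sum of the measured data.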

  1. A Joint Model for Longitudinal Measurements and Survival Data in the Presence of Multiple Failure Types

    PubMed Central

    Elashoff, Robert M.; Li, Gang; Li, Ning

    2009-01-01

    Summary In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and to estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for scleroderma lung disease. PMID:18162112

  2. Impact of Moving From a Widespread to Multisite Pain Definition on Other Fibromyalgia Symptoms.

    PubMed

    Dean, Linda E; Arnold, Lesley; Crofford, Leslie; Bennett, Robert; Goldenberg, Don; Fitzcharles, Mary-Ann; Paiva, Eduardo S; Staud, Roland; Clauw, Dan; Sarzi-Puttini, Piercarlo; Jones, Gareth T; Ayorinde, Abimbola; Flüß, Elisa; Beasley, Marcus; Macfarlane, Gary J

    2017-12-01

    To investigate whether associations between pain and the additional symptoms associated with fibromyalgia differ in persons with chronic widespread pain (CWP) compared to multisite pain (MSP), with or without joint areas. Six studies were used: 1958 British birth cohort, Epidemiology of Functional Disorders, Kid Low Back Pain, Managing Unexplained Symptoms (Chronic Widespread Pain) in Primary Care: Involving Traditional and Accessible New Approaches, Study of Health and its Management, and Women's Health Study (WHEST; females). MSP was defined as the presence of pain in ≥8 body sites in adults (≥10 sites in children) indicated on 4-view body manikins, assessed first including joint areas (positive joints) and second excluding them (negative joints). The relationship between pain and fatigue, sleep disturbance, somatic symptoms, and mood impairment was assessed using logistic regression. Results are presented as odds ratios (ORs) with 95% confidence intervals (95% CIs). There were 34,818 participants across the study populations (adults age range 42-56 years, male 43-51% [excluding WHEST], and CWP prevalence 12-17%). Among those reporting MSP, the proportion reporting CWP ranged between 62% and 76%. Among those reporting the symptoms associated with fibromyalgia, there was an increased likelihood of reporting pain, the magnitude of which was similar regardless of the definition used. For example, within WHEST, reporting moderate/severe fatigue (Chalder fatigue scale 4-11) was associated with a >5-fold increase in the likelihood of reporting pain (CWP OR 5.2 [95% CI 3.9-6.9], MSP-positive joints OR 6.5 [95% CI 5.0-8.6], and MSP-negative joints OR 6.5 [95% CI 4.7-9.0]). This large-scale study demonstrates that regardless of the pain definition used, the magnitude of association between pain and other associated symptoms of fibromyalgia is similar. 
This finding supports the continued collection of both when classifying fibromyalgia, but highlights that pain need not follow the definition outlined within the 1990 American College of Rheumatology criteria. © 2017, American College of Rheumatology.
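
Odds ratios with 95% confidence intervals of the kind reported above can be computed from a 2x2 table with the standard Woolf (log-OR) method. The counts below are made up for illustration and do not come from any of the six studies:

```python
import numpy as np

# Hypothetical 2x2 table of fatigue (exposure) vs pain (outcome):
#                 pain    no pain
# fatigued       a=300      b=100
# not fatigued   c=400      d=700
a, b, c, d = 300, 100, 400, 700

odds_ratio = (a * d) / (b * c)                 # cross-product odds ratio
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)     # Woolf standard error of ln(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```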

  3. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
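
A small sketch of the idea: under conditional independence, the likelihood of a joint outcome factorizes across classifiers, and thresholding the resulting likelihood ratio yields the combined ROC operating points. The two operating points below are hypothetical, and this is a simplified reading of the approach rather than the authors' exact procedure:

```python
from itertools import product

# Hypothetical operating points (sensitivity, specificity) of two classifiers
# that were never run on a common data set.
sens = [0.80, 0.70]
spec = [0.90, 0.85]

def prob(outcome, diseased):
    """P(joint outcome | class), with classifiers conditionally independent."""
    p = 1.0
    for o, se, sp in zip(outcome, sens, spec):
        p *= (se if o else 1 - se) if diseased else ((1 - sp) if o else sp)
    return p

outcomes = list(product([0, 1], repeat=2))
lr = {oc: prob(oc, True) / prob(oc, False) for oc in outcomes}

# Combined ROC: for each threshold tau, call "diseased" when LR(outcome) >= tau.
roc = []
for tau in sorted(set(lr.values()), reverse=True):
    positives = [oc for oc in outcomes if lr[oc] >= tau]
    tpr = sum(prob(oc, True) for oc in positives)
    fpr = sum(prob(oc, False) for oc in positives)
    roc.append((round(fpr, 4), round(tpr, 4)))
print(roc)   # combined operating points, from strictest to most permissive
```

By the Neyman-Pearson lemma, ordering joint outcomes by likelihood ratio in this way gives the best achievable combined curve under the independence assumption.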

  4. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  5. Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements

    DOE PAGES

    Wang, Liming; Huang, Jiaji; Yuan, Xin; ...

    2015-09-17

    The measurement matrix employed in compressive sensing typically cannot be known precisely a priori and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest as well when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop this idea with application to Poisson systems. A collaborative maximum likelihood algorithm and alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced, to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.

  6. Cosmological Parameters and Hyper-Parameters: The Hubble Constant from Boomerang and Maxima

    NASA Astrophysics Data System (ADS)

    Lahav, Ofer

    Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalise this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint likelihood function a set of `Hyper-Parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the Hyper-Parameters, is very simple to implement. We illustrate the method by estimating the Hubble constant H0 from different sets of recent CMB experiments (including Saskatoon, Python V, MSAM1, TOCO, Boomerang and Maxima). The approach can be generalised for a combination of cosmic probes, and for other priors on the Hyper-Parameters. Reference: Lahav, Bridle, Hobson, Lasenby & Sodre, 2000, MNRAS, in press (astro-ph/9912105)
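
A compact numerical sketch of the hyper-parameter idea, using the result quoted in Lahav et al. (2000) that marginalizing the relative weights α_j with a uniform prior on log α_j replaces the standard Σ_j χ²_j by Σ_j N_j ln χ²_j. The two data sets below are invented, with the second deliberately overconfident about its errors:

```python
import numpy as np

# Two illustrative data sets measuring the same quantity (think H0 in km/s/Mpc).
# Set 2 has a real scatter of ~4 but quotes an error of 1: it is overconfident.
d1, s1 = np.array([69.0, 70.0, 71.0]), 1.0
d2, s2 = np.array([61.0, 65.0, 69.0]), 1.0

grid = np.linspace(55.0, 85.0, 601)             # candidate parameter values

def chi2(d, s, mu):
    return (((d[:, None] - mu[None, :]) / s) ** 2).sum(axis=0)

# Standard joint likelihood: -2 ln L = chi2_1 + chi2_2 (both sets fully trusted).
standard = chi2(d1, s1, grid) + chi2(d2, s2, grid)

# Hyper-parameter method: -2 ln P = sum_j N_j ln(chi2_j), so each data set is
# effectively down-weighted by its own goodness of fit.
hyper = len(d1) * np.log(chi2(d1, s1, grid)) + len(d2) * np.log(chi2(d2, s2, grid))

print(grid[standard.argmin()], grid[hyper.argmin()])
```

Here the hyper-parameter estimate moves toward the internally consistent first set, whereas the standard analysis simply splits the difference between the two.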

  7. Application of individually performed acrylic cement spacers containing 5% of antibiotic in two-stage revision of hip and knee prosthesis due to infection.

    PubMed

    Babiak, Ireneusz

    2012-07-03

    Deep infection of a joint endoprosthesis constitutes a threat to the stability of the implant and to joint function. It requires a comprehensive and interdisciplinary approach involving joint revision and removal of the bacterial biofilm from all tissues; often the endoprosthesis must be removed and the bone-stock infection treated. The paper presents the author's experience with the use of acrylic cement spacers, custom-made during surgery, containing a low dose of antibiotic supplemented with 5% of a selected antibiotic targeted at the infection of hip and knee endoprostheses. A total of 33 two-stage revisions of knee and hip joints with the use of a spacer were performed, involving 24 knee joints and 9 hip joints. The infections were mostly caused by staphylococci, MRSA (18) and MSSA (8), and in some cases Enterococci (4), Salmonella (1), Pseudomonas (1) and Acinetobacter (1). The infection was successfully treated in 31 out of 33 cases (93.9%), including 8 patients with hip infection and 23 patients with knee infection. The endoprosthesis was reimplanted in 30 cases: 7 hips and 23 knees; in the 3 remaining cases the endoprosthesis was not reimplanted. Mechanical complications due to the spacer occurred in 4 cases: 3 dislocations and 1 fracture (hip spacer). Patients with hip spacers were ambulatory with partial weight bearing of the operated extremity; those with knee spacers were also ambulatory with partial weight bearing, with the extremity initially protected by an orthosis. The spacer makes it possible to maintain limb function, and preparing it by hand allows the addition of an antibiotic targeted at the specific bacteria, thus increasing the likelihood of effective antibacterial treatment.

  8. A Comparative Study of Co-Channel Interference Suppression Techniques

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas

    1997-01-01

    We describe three methods of combatting co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from "signal switching", which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is also discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.

  9. Improvements in Spectrum's fit to program data tool.

    PubMed

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived either by fitting to seroprevalence surveillance and survey data or by generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections, people living with HIV, or AIDS-related deaths. This article describes the development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV, or AIDS-related deaths. Inputs can be adjusted for the proportion undiagnosed or for misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best-fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries, where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
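
The fitting step can be sketched as follows. The double-logistic parameterization, the parameter values, and the use of a Poisson likelihood on annual case counts are illustrative assumptions, not the exact functional form or likelihood used in Spectrum's FPD tool:

```python
import numpy as np
from scipy.optimize import minimize

def double_logistic(t, a, r1, t1, r2, t2):
    """Rise-and-decline incidence curve as a difference of two logistic ramps
    (one possible parameterization; the FPD tool's exact form may differ)."""
    return a * (1 / (1 + np.exp(-r1 * (t - t1))) - 1 / (1 + np.exp(-r2 * (t - t2))))

years = np.arange(1990, 2016)
rng = np.random.default_rng(3)
true_curve = double_logistic(years, 800, 0.9, 1995, 0.5, 2005)
cases = rng.poisson(np.clip(true_curve, 1e-9, None))   # newly reported cases/year

def nll(p):
    """Poisson negative log-likelihood of the annual case counts."""
    lam = np.clip(double_logistic(years, *p), 1e-9, None)
    return (lam - cases * np.log(lam)).sum()

start = [600.0, 0.5, 1993.0, 0.3, 2003.0]
fit = minimize(nll, x0=start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(np.round(fit.x, 2))   # fitted (a, r1, t1, r2, t2)
```

A minimum chi-squared fit would simply swap `nll` for a weighted sum of squared deviations between `lam` and `cases`.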

  10. Overcoming Barriers to Firewise Actions by Residents. Final Report to Joint Fire Science Program

    Treesearch

    James D. Absher; Jerry J. Vaske; Katie M. Lyon

    2013-01-01

    Encouraging the public to take action (e.g., creating defensible space) that can reduce the likelihood of wildfire damage and decrease the likelihood of injury is a common approach to increasing wildfire safety and damage mitigation. This study was designed to improve our understanding of both individual and community actions that homeowners currently do or might take...

  11. Estimating crustal thickness and Vp/Vs ratio with joint constraints of receiver function and gravity data

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai

    2018-05-01

    The technique of teleseismic receiver function H-κ stacking is popular for estimating the crustal thickness and Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio under joint constraints from receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies using a common likelihood-estimation algorithm to obtain the crustal thickness and Vp/Vs ratio, and then utilize them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrate that the new technique reduces the ambiguity and enhances the accuracy of the estimates. A real-data test at two stations in the NE margin of the Tibetan Plateau illustrates that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
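
A minimal sketch of the underlying H-κ stacking step (without the gravity constraint this abstract adds): stack the receiver-function amplitudes at the predicted Ps, PpPs, and PpSs arrival times over a grid of crustal thickness H and Vp/Vs ratio κ. The synthetic receiver function, the weights, and the assumed Vp and ray parameter are all illustrative values:

```python
import numpy as np

vp, p_ray = 6.3, 0.06        # assumed crustal Vp (km/s) and ray parameter (s/km)
H_true, k_true = 40.0, 1.75  # true crustal thickness (km) and Vp/Vs ratio

def phase_times(H, k):
    """Predicted Ps, PpPs and PpSs delay times for thickness H and Vp/Vs k."""
    qs = np.sqrt((k / vp) ** 2 - p_ray ** 2)   # S vertical slowness (Vs = Vp/k)
    qp = np.sqrt((1 / vp) ** 2 - p_ray ** 2)   # P vertical slowness
    return H * (qs - qp), H * (qs + qp), 2 * H * qs

# Synthetic receiver function: Gaussian pulses at the true arrival times.
t = np.arange(0.0, 30.0, 0.05)
def pulse(t0, amp):
    return amp * np.exp(-((t - t0) ** 2) / (2 * 0.15 ** 2))
t1, t2, t3 = phase_times(H_true, k_true)
rf = pulse(t1, 1.0) + pulse(t2, 0.5) + pulse(t3, -0.3)

# H-kappa stack: s(H, k) = w1 r(t_Ps) + w2 r(t_PpPs) - w3 r(t_PpSs)
w1, w2, w3 = 0.6, 0.3, 0.1
best, best_s = None, -np.inf
for H in np.arange(25.0, 55.0, 0.1):
    for k in np.arange(1.5, 2.0, 0.01):
        ta, tb, tc = phase_times(H, k)
        s = (w1 * np.interp(ta, t, rf) + w2 * np.interp(tb, t, rf)
             - w3 * np.interp(tc, t, rf))
        if s > best_s:
            best, best_s = (H, k), s
print(best)   # grid maximum, near (H_true, k_true)
```

With noisy data or weak multiples the stack surface develops the trade-off ridge between H and κ that the gravity constraint described above is designed to break.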

  12. Do depression and anxiety reduce the likelihood of remission in rheumatoid arthritis and psoriatic arthritis? Data from the prospective multicentre NOR-DMARD study.

    PubMed

    Michelsen, Brigitte; Kristianslund, Eirik Klami; Sexton, Joseph; Hammer, Hilde Berner; Fagerli, Karen Minde; Lie, Elisabeth; Wierød, Ada; Kalstad, Synøve; Rødevand, Erik; Krøll, Frode; Haugeberg, Glenn; Kvien, Tore K

    2017-11-01

    To investigate the predictive value of baseline depression/anxiety on the likelihood of achieving joint remission in rheumatoid arthritis (RA) and psoriatic arthritis (PsA), as well as the associations between baseline depression/anxiety and the components of the remission criteria at follow-up. We included 1326 patients with RA and 728 patients with PsA from the prospective observational NOR-DMARD study starting first-time tumour necrosis factor inhibitors or methotrexate. The predictive value of depression/anxiety on remission was explored in prespecified logistic regression models, and the associations between baseline depression/anxiety and the components of the remission criteria in prespecified multiple linear regression models. Baseline depression/anxiety according to EuroQoL-5D-3L, Short Form-36 (SF-36) Mental Health subscale ≤56 and SF-36 Mental Component Summary ≤38 negatively predicted 28-joint Disease Activity Score <2.6, Simplified Disease Activity Index ≤3.3, Clinical Disease Activity Index ≤2.8, ACR/EULAR Boolean and Disease Activity Index for Psoriatic Arthritis ≤4 remission after 3 and 6 months of treatment in RA (p≤0.008) and partly in PsA (p from 0.001 to 0.73). Baseline depression/anxiety was associated with increased patient's and evaluator's global assessment, tender joint count and joint pain in RA at follow-up, but not with swollen joint count and acute phase reactants. Depression and anxiety may reduce the likelihood of joint remission based on composite scores in RA and PsA and should be taken into account in individual patients when making a shared decision on a treatment target. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull-distribution parameters; (2) Data for contour plots of relative likelihood for the two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
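
Item (1) with type-I censoring can be sketched in a few lines: the log-likelihood sums full density terms for failures and survival-function terms for units suspended at the censoring time. The sample size, true parameters, and censoring time below are arbitrary, and the sketch is in Python rather than the program's Fortran:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
shape_true, scale_true = 2.0, 100.0
life = scale_true * rng.weibull(shape_true, size=200)   # fatigue lives (cycles)
t_susp = 120.0                  # type-I censoring: tests suspended at 120
failed = life < t_susp
obs = np.where(failed, life, t_susp)

def neg_loglik(p):
    """Weibull likelihood: density terms for failures, survival terms
    (the censoring contribution) for suspended units."""
    beta, eta = p
    if beta <= 0 or eta <= 0:
        return np.inf
    z = (obs / eta) ** beta
    ll = (np.log(beta / eta) + (beta - 1) * np.log(obs[failed] / eta)).sum()
    return -(ll - z.sum())

fit = minimize(neg_loglik, x0=[1.0, float(np.median(obs))], method="Nelder-Mead")
beta_hat, eta_hat = fit.x
print(round(beta_hat, 2), round(eta_hat, 1))   # estimates of (shape, scale)
```

Profiling `neg_loglik` over one parameter while re-optimizing the other gives the profile-likelihood curves and likelihood-ratio intervals listed in items (4)-(6).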

  14. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  15. Hip Implant Modified To Increase Probability Of Retention

    NASA Technical Reports Server (NTRS)

    Canabal, Francisco, III

    1995-01-01

    Modification in design of hip implant proposed to increase likelihood of retention of implant in femur after hip-repair surgery. Decreases likelihood of patient distress and expense associated with repetition of surgery after failed implant procedure. Intended to provide more favorable flow of cement used to bind implant in proximal extreme end of femur, reducing structural flaws causing early failure of implant/femur joint.

  16. Specifying the core network supporting episodic simulation and episodic memory by activation likelihood estimation

    PubMed Central

    Benoit, Roland G.; Schacter, Daniel L.

    2015-01-01

    It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of core network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the lateral temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network’s nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions’ specialized contributions and interactions. PMID:26142352

  17. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used for calibration and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), the Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) fall in the formal category. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary between likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of the parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. 
Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable parameter estimates due to violation of the residual-error assumptions. Thus, likelihood function L7 provides a credible posterior distribution of the model parameters and can therefore be employed for further applications.
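
The contrast between an informal and a formal likelihood can be made concrete in a few lines. Below, an NS efficiency (informal, L1-style) and a Gaussian log-likelihood on AR(1)-whitened residuals (formal, L7-style) are evaluated for the same made-up simulation; the data, error model, and moment-based AR(1) fit are all illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(5)
obs = 20.0 + 10.0 * np.sin(np.linspace(0.0, 6.0, 100))     # "observed" discharge
sim = obs + 0.2 * rng.normal(0.0, 1.0, size=100).cumsum()  # simulation with
res = obs - sim                                            # correlated errors

# Informal likelihood (L1-style): Nash-Sutcliffe efficiency of the simulation.
ns = 1.0 - (res ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

# Formal likelihood (L7-style): Gaussian errors with first-order autoregressive
# dependence; rho and sigma are fitted here by simple moment estimates.
rho = np.corrcoef(res[:-1], res[1:])[0, 1]   # lag-1 residual autocorrelation
innov = res[1:] - rho * res[:-1]             # AR(1) innovations
sigma = innov.std(ddof=1)
loglik = (-0.5 * innov.size * np.log(2 * np.pi * sigma ** 2)
          - 0.5 * (innov ** 2).sum() / sigma ** 2)

print(round(ns, 3), round(loglik, 1))
```

Unlike NS, the formal score is an actual log-density, so it can feed directly into a Bayesian acceptance rule; ignoring the AR(1) term risks exactly the residual-assumption violations noted above for L5.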

  18. Results and Analysis from Space Suit Joint Torque Testing

    NASA Technical Reports Server (NTRS)

    Matty, Jennifer E.; Aitchison, Lindsay

    2009-01-01

    A space suit's mobility is critical to an astronaut's ability to perform work efficiently. As mobility increases, the astronaut can perform tasks for longer durations with less fatigue. The term mobility, with respect to space suits, is defined in terms of two key components: joint range of motion and joint torque. Individually these measures describe the path through which a joint travels and the force required to move it through that path. Previous space suit mobility requirements were defined as the collective result of these two measures and verified by the completion of discrete functional tasks. While a valid way to impose mobility requirements, such a method does necessitate a solid understanding of the operational scenarios in which the final suit will be performing. Because the Constellation space suit system requirements are being finalized with a relatively immature concept of operations, the Space Suit Element team elected to define mobility in terms of its constituent parts to increase the likelihood that the future pressure garment will be mobile enough to enable a broad scope of undefined exploration activities. The range of motion requirements were defined by measuring the ranges of motion test subjects achieved while performing a series of joint-maximizing tasks in a variety of flight and prototype space suits. The definition of joint torque requirements has proved more elusive. NASA evaluated several different approaches to the problem before deciding to generate requirements based on unmanned joint torque evaluations of six different space suit configurations articulated through 16 separate joint movements. This paper discusses the experiment design, data analysis and results, and the process used to determine the final values for the Constellation pressure garment joint torque requirements.

  19. A Bayesian Semiparametric Approach for Incorporating Longitudinal Information on Exposure History for Inference in Case-Control Studies

    PubMed Central

    Bhadra, Dhiman; Daniels, Michael J.; Kim, Sungduk; Ghosh, Malay; Mukherjee, Bhramar

    2014-01-01

    In a typical case-control study, exposure information is collected at a single time-point for the cases and controls. However, case-control studies are often embedded in existing cohort studies containing a wealth of longitudinal exposure history on the participants. Recent medical studies have indicated that incorporating past exposure history, or a constructed summary measure of cumulative exposure derived from the past exposure history, when available, may lead to more precise and clinically meaningful estimates of the disease risk. In this paper, we propose a flexible Bayesian semiparametric approach to model the longitudinal exposure profiles of the cases and controls and then use measures of cumulative exposure based on a weighted integral of this trajectory in the final disease risk model. The estimation is done via a joint likelihood. In the construction of the cumulative exposure summary, we introduce an influence function, a smooth function of time to characterize the association pattern of the exposure profile on the disease status, with different time windows potentially having differential influence/weights. This enables us to analyze how the present disease status of a subject is influenced by his/her past exposure history, conditional on the current ones. The joint likelihood formulation allows us to properly account for uncertainties associated with both stages of the estimation process in an integrated manner. Analysis is carried out in a hierarchical Bayesian framework using reversible jump Markov chain Monte Carlo (RJMCMC) algorithms. The proposed methodology is motivated by, and applied to, a case-control study of prostate cancer where longitudinal biomarker information is available for the cases and controls. PMID:22313248
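
The cumulative-exposure construction can be sketched as a weighted integral of the exposure trajectory. The trajectory, the exponential influence function, and the trapezoidal quadrature below are illustrative choices, not the authors' spline-based model:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)      # years of exposure history
x = 2.0 + 0.3 * t                    # hypothetical biomarker trajectory x(t)
w = np.exp(-(10.0 - t) / 5.0)        # influence function: recent exposure
                                     # weighted more than the distant past

# Cumulative exposure as the weighted integral of the trajectory,
# i.e. the integral of w(t) * x(t) dt, here by the trapezoidal rule.
f = w * x
dt = t[1] - t[0]
cumulative = dt * (f[:-1] + f[1:]).sum() / 2.0
print(round(cumulative, 2))
```

In the paper's setup the influence function itself carries unknown parameters, so `cumulative` enters the disease risk model inside the joint likelihood rather than being fixed in advance as here.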

  20. Combining evidence using likelihood ratios in writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory

    2013-01-01

    Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions, and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparison, shows greater flexibility of the proposed method.
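
The motivating problem, that many weak pieces of evidence and one strong piece can produce the same combined LR, is easy to demonstrate, and a resampling interval over the per-item LRs is one simple way (not the authors' discount-function method) to tell the two situations apart. All LR values below are invented:

```python
import numpy as np

# Ten weak likelihood ratios versus a single strong one with the same product:
weak = np.array([0.5, 2.0, 1.5, 0.8, 3.0, 1.1, 0.9, 2.2, 0.7, 1.6])
strong = np.array([weak.prod()])
print(round(weak.prod(), 2), round(strong.prod(), 2))   # identical combined LRs

# A bootstrap interval over the per-item LRs separates the two situations:
# wide for many weak items, degenerate (a point) for the single strong item.
rng = np.random.default_rng(6)
boot = [rng.choice(weak, size=weak.size, replace=True).prod()
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))   # interval straddles LR = 1 here
```

The wide interval for the weak-item case signals that the nominal combined LR is far less settled than the single strong measurement, which is the distinction the proposed discount function formalizes.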

  1. Assessment of articular disc displacement of temporomandibular joint with ultrasound.

    PubMed

    Razek, Ahmed Abdel Khalek Abdel; Al Mahdy Al Belasy, Fouad; Ahmed, Wael Mohamed Said; Haggag, Mai Ahmed

    2015-06-01

    To assess the pattern of articular disc displacement in patients with internal derangement (ID) of the temporomandibular joint (TMJ) with ultrasound. A prospective study was conducted on 40 TMJs of 20 patients (3 male, 17 female; mean age 26.1 years) with ID of the TMJ. They underwent high-resolution ultrasound and MR imaging of the TMJ. The MR images were used as the gold standard for calculating sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), and negative likelihood ratio (NLR) of ultrasound for diagnosis of anterior or sideways displacement of the disc. Anterior disc displacement was seen in 26 joints at MR and 22 joints at ultrasound. For anterior displacement, ultrasound had a sensitivity of 79.3%, specificity of 72.7%, accuracy of 77.5%, PPV of 88.5%, NPV of 57.1%, PLR of 2.9, and NLR of 0.34. Sideways displacement of the disc was seen in four joints at MR and three joints at ultrasound. For sideways displacement, ultrasound had a sensitivity of 75%, specificity of 63.6%, accuracy of 66.7%, PPV of 42.8%, NPV of 87.5%, PLR of 2.06, and NLR of 0.39. We concluded that ultrasound is a non-invasive imaging modality for assessment of anterior and sideways displacement of the articular disc in patients with ID of the TMJ.
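All of the reported statistics follow from a 2x2 table against the MR reference standard. The sketch below uses hypothetical counts, chosen only to roughly mimic the anterior-displacement figures; they are not the study's raw data.

```python
# Standard diagnostic-test definitions behind the reported metrics.
# The counts tp/fp/fn/tn below are hypothetical, not the study's raw data.

def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "plr": sens / (1 - spec),   # positive likelihood ratio
        "nlr": (1 - sens) / spec,   # negative likelihood ratio
    }

m = diagnostic_metrics(tp=23, fp=3, fn=6, tn=8)
print(round(m["sensitivity"], 3), round(m["plr"], 2))  # 0.793 2.91
```

With these invented counts the sensitivity, accuracy, and PLR land close to the published anterior-displacement values, which shows how the PLR and NLR are derived from sensitivity and specificity rather than measured separately.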

  2. A joint frailty-copula model between tumour progression and death for meta-analysis.

    PubMed

    Emura, Takeshi; Nakatochi, Masahiro; Murotani, Kenta; Rondeau, Virginie

    2017-12-01

    Dependent censoring often arises in biomedical studies when time to tumour progression (e.g., relapse of cancer) is censored by an informative terminal event (e.g., death). For meta-analysis combining existing studies, a joint survival model between tumour progression and death has been considered under semicompeting risks, which induces dependence through the study-specific frailty. Our paper utilizes copulas to generalize the joint frailty model by introducing an additional source of dependence arising from intra-subject association between tumour progression and death. The practical value of the new model is particularly evident for meta-analyses in which only a few covariates are consistently measured across studies and hence residual dependence exists. The covariate effects are formulated through the Cox proportional hazards model, and the baseline hazards are modeled nonparametrically on a spline basis. The estimator is then obtained by maximizing a penalized log-likelihood function. We also show that the present methodologies are easily modified for competing risks or recurrent event data, and are generalized to accommodate left-truncation. Simulations are performed to examine the performance of the proposed estimator. The method is applied to a meta-analysis assessing the recently suggested biomarker CXCL12 for survival in ovarian cancer patients. We implement our proposed methods in the R package joint.Cox.

  3. Idealized models of the joint probability distribution of wind speeds

    NASA Astrophysics Data System (ADS)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
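The two constructions named above can be illustrated by simulation (our own parameterization, not the paper's fitted models): the speed of an isotropic, mean-zero bivariate Gaussian wind vector is Rayleigh distributed, and a power-law transform of such a speed gives a Weibull variable.

```python
import math
import random

# Simulation sketch of the distributional constructions described above.
# sigma and power are our illustrative parameter names.

def simulate_speeds(n, sigma=1.0, power=1.0, seed=0):
    rng = random.Random(seed)
    speeds = []
    for _ in range(n):
        u = rng.gauss(0.0, sigma)  # one horizontal wind component
        v = rng.gauss(0.0, sigma)  # the other, independent and isotropic
        speeds.append(math.hypot(u, v) ** power)  # speed, power-transformed
    return speeds

# With power=1 the sample mean should sit near the Rayleigh mean,
# sigma * sqrt(pi / 2), about 1.2533 for sigma = 1.
speeds = simulate_speeds(10000)
print(round(sum(speeds) / len(speeds), 3))
```

The bivariate versions discussed in the paper arise when the two locations' component vectors are correlated Gaussians; this univariate sketch only shows where the Rice/Rayleigh and Weibull marginals come from.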

  4. Cramer-Rao Lower Bound Evaluation for Linear Frequency Modulation Based Active Radar Networks Operating in a Rice Fading Environment.

    PubMed

    Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang

    2016-12-06

    This paper investigates the joint target parameter (delay and Doppler) estimation performance of linear frequency modulation (LFM)-based radar networks in a Rice fading environment. The active radar networks are composed of multiple radar transmitters and multichannel receivers placed on moving platforms. First, the log-likelihood function of the received signal for a Rician target is derived, where the received signal scattered off the target comprises a dominant scatterer (DS) component and weak isotropic scatterers (WIS) components. Then, closed-form analytical expressions of the Cramer-Rao lower bounds (CRLBs) on the Cartesian coordinates of target position and velocity are calculated, which can be adopted as a performance metric to assess the target parameter estimation accuracy for LFM-based radar network systems in a Rice fading environment. It is found that the cumulative Fisher information matrix (FIM) is a linear combination of the DS and WIS components, and it is also demonstrated that the joint CRLB is a function of signal-to-noise ratio (SNR), the target's radar cross section (RCS) and transmitted waveform parameters, as well as the relative geometry between the target and the radar network architecture. Finally, numerical results are provided to indicate that the joint target parameter estimation performance of active radar networks can be significantly improved with the exploitation of the DS component.
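The CRLB-from-FIM relationship used here is standard: the variance bound for each parameter is the corresponding diagonal entry of the inverse Fisher information matrix. A toy 2x2 example (the information matrix is made up, not the paper's LFM waveform FIM):

```python
# Each parameter's variance bound is the corresponding diagonal entry of the
# inverse FIM. The 2x2 matrix below is invented for illustration.

def crlb_2x2(fim):
    (a, b), (c, d) = fim
    det = a * d - b * c
    return [d / det, a / det]  # diagonal of the 2x2 matrix inverse

bounds = crlb_2x2([[4.0, 1.0], [1.0, 2.0]])
print([round(v, 4) for v in bounds])  # [0.2857, 0.5714]

# Scaling the FIM up (e.g. more information at higher SNR) tightens the bounds:
print([round(v, 4) for v in crlb_2x2([[8.0, 2.0], [2.0, 4.0]])])  # [0.1429, 0.2857]
```

The paper's finding that the cumulative FIM is a linear combination of DS and WIS contributions means exactly this kind of scaling: a stronger dominant-scatterer term adds information and shrinks the diagonal of the inverse.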

  6. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series, whereas Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71), whereas Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small but sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pre-test probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
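The series/parallel combination rules the review applies can be stated directly. The sensitivities and specificities in this sketch are illustrative inputs, not the review's pooled values.

```python
# Standard rules for combining two independent diagnostic tests.
# The numeric inputs below are illustrative, not the review's pooled values.

def combine_series(se1, sp1, se2, sp2):
    """Positive only if BOTH tests are positive: specificity rises, sensitivity falls."""
    return se1 * se2, sp1 + sp2 - sp1 * sp2

def combine_parallel(se1, sp1, se2, sp2):
    """Positive if EITHER test is positive: sensitivity rises, specificity falls."""
    return se1 + se2 - se1 * se2, sp1 * sp2

se_s, sp_s = combine_series(0.8, 0.7, 0.75, 0.8)
se_p, sp_p = combine_parallel(0.8, 0.7, 0.75, 0.8)
print(round(se_s, 2), round(sp_s, 2))  # 0.6 0.94
print(round(se_p, 2), round(sp_p, 2))  # 0.95 0.56
```

This trade-off is why the review pairs a series combination (to confirm, via high specificity) with a parallel combination (to screen, via high sensitivity).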

  7. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index ( d' ) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength ( β ) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
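The maxi-min objective can be stated compactly: score each candidate design by its worst-case detectability across sample locations, and prefer the design with the best worst case. The designs and d' values below are invented for illustration; the paper optimizes FFM basis coefficients and regularization with CMA-ES rather than enumerating a fixed set.

```python
# Maxi-min selection sketch: each candidate design maps to detectability (d')
# values at several sample locations; the chosen design maximizes the minimum.
# Design names and numbers are invented.

def maximin_score(detectabilities):
    return min(detectabilities)

designs = {
    "uniform-fluence": [1.0, 0.4, 0.9],  # high average, poor worst case
    "shaped-fluence":  [0.8, 0.7, 0.8],  # homogenized detectability
}
best = max(designs, key=lambda name: maximin_score(designs[name]))
print(best)  # shaped-fluence
```

Note how the maxi-min criterion rejects the design with the higher average d' in favour of the one with a homogeneous d' map, which mirrors the homogenization effect reported in the abstract.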

  8. Under-reported data analysis with INAR-hidden Markov chains.

    PubMed

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, together with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
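A minimal simulation of this model class (our own sketch, with invented parameter names): an INAR(1) count series evolves by binomial thinning of the previous count plus Poisson innovations, and a hidden state decides whether each true count is fully reported or thinned again by a reporting probability before being observed.

```python
import math
import random

# Our own simulation sketch of an INAR(1) series with an under-reporting layer.
# alpha: thinning/survival probability; lam: Poisson innovation mean;
# q: reporting probability in the under-reported state; p_under: probability
# that an observation falls in the under-reported state. Names are invented.

def simulate_inar1(n, alpha, lam, q, p_under, seed=0):
    rng = random.Random(seed)

    def poisson(mean):
        # Knuth's method; adequate for small means.
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    x, observed = 0, []
    for _ in range(n):
        thinned = sum(1 for _ in range(x) if rng.random() < alpha)  # binomial thinning
        x = thinned + poisson(lam)                                  # true count
        if rng.random() < p_under:                                  # hidden state
            y = sum(1 for _ in range(x) if rng.random() < q)        # under-reported
        else:
            y = x                                                   # fully reported
        observed.append(y)
    return observed

series = simulate_inar1(200, alpha=0.4, lam=2.0, q=0.5, p_under=0.3)
print(len(series))  # 200
```

Fitting such a model by maximum likelihood requires the forward algorithm over the hidden reporting states, which is the part the paper revises; this sketch only generates data from the model.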

  9. Planning and conducting medical support to joint operations.

    PubMed

    Hughes, A S

    2000-01-01

    Operations are core business for all of us and the PJHQ medical cell is at the heart of this process. With the likelihood of a continuing UK presence in the Balkans for some time to come, the challenge of meeting this and any other new operational commitments will continue to demand a flexible and innovative approach from all concerned. These challenges together with the Joint and multinational aspects of the job make the PJHQ medical cell a demanding but rewarding place to work and provide a valuable Joint staff training opportunity for the RNMS.

  10. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
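The role of the two Huber-penalty parameters described above can be sketched directly. The names `delta` (the quadratic/linear crossover) and `beta` (the penalty strength) are ours; the paper's data-driven procedure selects its two parameters by cross validation.

```python
# Edge-preserving Huber penalty sketch: quadratic for small neighbour
# differences (smooths noise), linear for large ones (preserves jumps).
# Parameter names delta and beta are our illustrative choices.

def huber(t, delta):
    if abs(t) <= delta:
        return 0.5 * t * t
    return delta * (abs(t) - 0.5 * delta)

def huber_penalty(row, beta, delta):
    # Sum the penalty over neighbouring-pixel differences in a 1D row.
    return beta * sum(huber(b - a, delta) for a, b in zip(row, row[1:]))

print(round(huber(0.1, 1.0), 3), huber(10.0, 1.0))     # 0.005 9.5
print(huber_penalty([0.0, 0.0, 5.0, 5.0], 1.0, 1.0))   # 4.5
```

A purely quadratic penalty would charge the size-10 jump a cost of 50 rather than 9.5, which is why the quadratic form over-smooths edges and the Huber form preserves them.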

  11. Interpreting DNA mixtures with the presence of relatives.

    PubMed

    Hu, Yue-Qing; Fung, Wing K

    2003-02-01

    The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.

  12. Evaluation of traction stirrup distraction technique to increase the joint space of the shoulder joint in the dog: A cadaveric study.

    PubMed

    Devesa, V; Rovesti, G L; Urrutia, P G; Sanroman, F; Rodriguez-Quiros, J

    2015-06-01

    The objective of this study was to evaluate the technical feasibility and efficacy of a joint distraction technique by traction stirrup to facilitate shoulder arthroscopy and to assess potential soft tissue damage. Twenty shoulders were evaluated radiographically before distraction. Distraction was applied with loads from 40 N up to 200 N, in 40 N increments, and the joint space was recorded at each step by radiographic images. The effects of joint flexion and intra-articular air injection at maximum load were evaluated. Radiographic evaluation was performed after distraction to evaluate ensuing joint laxity. Joint distraction by the traction stirrup technique produces a significant increase in the joint space; an increase in joint laxity could not be inferred from standard and stress radiographs. However, further clinical studies are required to evaluate potential neurovascular complications. A wider joint space may be useful to facilitate arthroscopy, reducing the likelihood of iatrogenic damage to intra-articular structures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Specifying the core network supporting episodic simulation and episodic memory by activation likelihood estimation.

    PubMed

    Benoit, Roland G; Schacter, Daniel L

    2015-08-01

    It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of expected core-network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network's nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions' specialized contributions and interactions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Deep Learning for Population Genetic Inference.

    PubMed

    Sheehan, Sara; Song, Yun S

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.

  16. Proximal interphalangeal joint ankylosis in an early medieval horse from Wrocław Cathedral Island, Poland.

    PubMed

    Janeczek, Maciej; Chrószcz, Aleksander; Onar, Vedat; Henklewski, Radomir; Skalec, Aleksandra

    2017-06-01

    Animal remains that are unearthed during archaeological excavations often provide useful information about socio-cultural context, including human habits, beliefs, and ancestral relationships. In this report, we present pathologically altered equine first and second phalanges from an 11th century specimen that was excavated at Wrocław Cathedral Island, Poland. The results of gross examination, radiography, and computed tomography indicate osteoarthritis of the proximal interphalangeal joint, with partial ankylosis. Based on comparison with living modern horses undergoing lameness examination, as well as with recent literature, we conclude that the horse likely was lame for at least several months prior to death. The ability of this horse to work probably was reduced, but the degree of compromise during life cannot be stated precisely. Present-day medical knowledge indicates that there was little likelihood of successful treatment for this condition during the Middle Ages. However, modern horses with similar pathology can function reasonably well with appropriate treatment and management, particularly following joint ankylosis. Thus, we approach the cultural question of why such an individual would have been maintained, despite its limitations, for a probably significant period of time. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. The composition of engineered cartilage at the time of implantation determines the likelihood of regenerating tissue with a normal collagen architecture.

    PubMed

    Nagel, Thomas; Kelly, Daniel J

    2013-04-01

    The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state, indicating a possible predisposition for osteoarthritic changes. These negative alterations were alleviated by the implantation of tissue-engineered cartilage, where a mature implant was predicted to result in the formation of a more native-like collagen architecture than immature implants. The results of this study highlight the importance of cartilage graft functionality to maintain and/or re-establish joint function and suggest that engineering a tissue with a native depth-dependent composition may facilitate the establishment of a normal Benninghoff collagen architecture after implantation into load-bearing defects.

  18. Joint effect of MCP-1 genotype GG and MMP-1 genotype 2G/2G increases the likelihood of developing pulmonary tuberculosis in BCG-vaccinated individuals.

    PubMed

    Ganachari, Malathesha; Ruiz-Morales, Jorge A; Gomez de la Torre Pretell, Juan C; Dinh, Jeffrey; Granados, Julio; Flores-Villanueva, Pedro O

    2010-01-25

    We previously reported that the -2518 MCP-1 genotype GG increases the likelihood of developing tuberculosis (TB) in non-BCG-vaccinated Mexicans and Koreans. Here, we tested the hypothesis that this genotype, alone or together with the -1607 MMP-1 functional polymorphism, increases the likelihood of developing TB in BCG-vaccinated individuals. We conducted population-based case-control studies of BCG-vaccinated individuals in Mexico and Peru that included 193 TB cases and 243 healthy tuberculin-positive controls from Mexico and 701 TB cases and 796 controls from Peru. We also performed immunohistochemistry (IHC) analysis of lymph nodes from carriers of relevant two-locus genotypes and in vitro studies to determine how these variants may operate to increase the risk of developing active disease. We report that a joint effect between the -2518 MCP-1 genotype GG and the -1607 MMP-1 genotype 2G/2G consistently increases the odds of developing TB 3.59-fold in Mexicans and 3.9-fold in Peruvians. IHC analysis of lymph nodes indicated that carriers of the two-locus genotype MCP-1 GG MMP-1 2G/2G express the highest levels of both MCP-1 and MMP-1. Carriers of these susceptibility genotypes might be at increased risk of developing TB because they produce high levels of MCP-1, which enhances the induction of MMP-1 production by M. tuberculosis-sonicate antigens to higher levels than in carriers of the other two-locus MCP-1 MMP-1 genotypes studied. This notion was supported by in vitro experiments and a luciferase-based promoter activity assay. MMP-1 may destabilize granuloma formation and promote tissue damage and disease progression early in the infection. Our findings may foster the development of new and personalized therapeutic approaches targeting MCP-1 and/or MMP-1.
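The reported fold-increases in odds come from a standard 2x2 case-control comparison. In this sketch only the 193-case / 243-control totals follow the Mexican arm of the study; the carrier split is invented, not the published genotype data.

```python
# Hypothetical odds-ratio sketch for a case-control 2x2 table. The 60/30
# carrier counts are invented; only the group totals (193 cases, 243
# controls) echo the study.

def odds_ratio(carrier_cases, carrier_controls, other_cases, other_controls):
    return (carrier_cases * other_controls) / (carrier_controls * other_cases)

print(round(odds_ratio(60, 30, 133, 213), 2))  # 3.2
```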

  19. A likelihood framework for joint estimation of salmon abundance and migratory timing using telemetric mark-recapture

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Gates, Kenneth S.; Palmer, Douglas E.

    2010-01-01

    Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding substantially more information than the total abundance estimate provided by the conventional approach. The utility of the model is illustrated with data for coho salmon O. kisutch from the Kasilof River in south-central Alaska.
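The simplest pooled (unstratified) form of this mark-recapture logic is the Lincoln-Petersen estimate in Chapman's bias-corrected form. The fish counts below are invented; the paper's stratified joint likelihood generalizes this by letting capture probability and migratory timing vary across temporal strata.

```python
# Chapman's bias-corrected Lincoln-Petersen abundance estimator (pooled
# sketch of the idea the paper's stratified likelihood generalizes).
# All counts are invented for illustration.

def chapman_estimate(marked, inspected, recaptured):
    """Abundance from fish marked downriver, fish inspected upriver, and marks seen."""
    return (marked + 1) * (inspected + 1) / (recaptured + 1) - 1

# 500 fish marked in the lower river; 40 of 400 later inspected upriver carry marks.
print(round(chapman_estimate(500, 400, 40)))  # 4899
```

A low recapture count relative to marks released implies a large population; the paper's point is that when recapture data come from several populations with unequal mark rates, pooling as above discards information that a stratified likelihood can exploit.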

  20. Joint Effect of MCP-1 Genotype GG and MMP-1 Genotype 2G/2G Increases the Likelihood of Developing Pulmonary Tuberculosis in BCG-Vaccinated Individuals

    PubMed Central

    Ganachari, Malathesha; Ruiz-Morales, Jorge A.; Gomez de la Torre Pretell, Juan C.; Dinh, Jeffrey; Granados, Julio; Flores-Villanueva, Pedro O.

    2010-01-01

    We previously reported that the −2518 MCP-1 genotype GG increases the likelihood of developing tuberculosis (TB) in non-BCG-vaccinated Mexicans and Koreans. Here, we tested the hypothesis that this genotype, alone or together with the −1607 MMP-1 functional polymorphism, increases the likelihood of developing TB in BCG-vaccinated individuals. We conducted population-based case-control studies of BCG-vaccinated individuals in Mexico and Peru that included 193 TB cases and 243 healthy tuberculin-positive controls from Mexico and 701 TB cases and 796 controls from Peru. We also performed immunohistochemistry (IHC) analysis of lymph nodes from carriers of relevant two-locus genotypes and in vitro studies to determine how these variants may operate to increase the risk of developing active disease. We report that a joint effect between the −2518 MCP-1 genotype GG and the −1607 MMP-1 genotype 2G/2G consistently increases the odds of developing TB 3.59-fold in Mexicans and 3.9-fold in Peruvians. IHC analysis of lymph nodes indicated that carriers of the two-locus genotype MCP-1 GG MMP-1 2G/2G express the highest levels of both MCP-1 and MMP-1. Carriers of these susceptibility genotypes might be at increased risk of developing TB because they produce high levels of MCP-1, which enhances the induction of MMP-1 production by M. tuberculosis-sonicate antigens to higher levels than in carriers of the other two-locus MCP-1 MMP-1 genotypes studied. This notion was supported by in vitro experiments and luciferase-based promoter activity assays. MMP-1 may destabilize granuloma formation and promote tissue damage and disease progression early in the infection. Our findings may foster the development of new and personalized therapeutic approaches targeting MCP-1 and/or MMP-1. PMID:20111728

  1. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  2. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
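The exponential cancellation described above can be illustrated with a reconstruction of the marginal Fisher log-likelihood (our sketch; the authors' exact implementation may differ). Writing log I0(a) = a + log i0e(a) and log(2 sinh k) = k + log1p(-exp(-2k)) leaves only the bounded exponent k(cos(I - I0) - 1), so the log-likelihood stays finite for arbitrarily large concentration parameters.

```python
# Numerically stable marginal-Fisher log-likelihood for inclination-only
# data (a reconstruction for illustration, not the authors' code).
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize

def loglik(params, inc):
    """Log-likelihood of inclinations (radians); exponential factors are
    cancelled analytically so the value is finite even for huge kappa."""
    i0_mean, log_kappa = params
    k = np.exp(log_kappa)
    a = k * np.cos(inc) * np.cos(i0_mean)        # Bessel argument (>= 0 here)
    # log I0(a) = a + log(i0e(a));  log(2 sinh k) = k + log1p(-exp(-2k))
    per_obs = (np.log(k) + np.log(np.cos(inc))
               + k * (np.cos(inc - i0_mean) - 1.0)   # bounded exponent
               - np.log1p(-np.exp(-2.0 * k))
               + np.log(i0e(a)))
    return per_obs.sum()

# Synthetic Fisher-distributed directions: mean inclination 75 deg, kappa 30.
rng = np.random.default_rng(1)
kappa, i0_true, npts = 30.0, np.radians(75.0), 200
psi = np.arccos(1.0 + np.log1p(rng.random(npts) * (np.exp(-2 * kappa) - 1.0)) / kappa)
az = rng.uniform(0, 2 * np.pi, npts)
inc = np.arcsin(np.clip(np.sin(i0_true) * np.cos(psi)
                        + np.cos(i0_true) * np.sin(psi) * np.cos(az), -1, 1))

fit = minimize(lambda p: -loglik(p, inc), x0=[np.radians(60.0), np.log(10.0)],
               method="Nelder-Mead")
inc_hat = np.degrees(fit.x[0])                   # estimated mean inclination
```

A naive implementation evaluating sinh(kappa) and I0 directly overflows for concentrations of a few hundred, whereas this form can be evaluated anywhere in the parameter space, which is the property the abstract emphasizes.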

  3. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
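In Knuth's derivation the relative log-posterior for M equal-width bins has a closed form in terms of the bin counts; a minimal sketch (constant terms that do not affect the argmax are dropped):

```python
# Relative log-posterior for the number of equal-width bins, following
# Knuth's multinomial-likelihood derivation (constants dropped).
import numpy as np
from scipy.special import gammaln

def log_posterior(data, m):
    """Relative log-posterior of an m-bin equal-width histogram."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2.0) - m * gammaln(0.5)
            - gammaln(n + m / 2.0)
            + gammaln(counts + 0.5).sum())

rng = np.random.default_rng(2)
data = rng.normal(size=1000)
candidates = np.arange(1, 101)
logp = np.array([log_posterior(data, m) for m in candidates])
m_opt = candidates[np.argmax(logp)]              # optimal bin count
```

The n*log(m) term is the likelihood's reward for finer bins, while the gamma-function terms penalize the extra model parameters; their balance picks out the Occam-optimal M.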

  4. Education and black-white interracial marriage.

    PubMed

    Gullickson, Aaron

    2006-11-01

    This article examines competing theoretical claims regarding how an individual's education will affect his or her likelihood of interracial marriage. I demonstrate that prior models of interracial marriage have failed to adequately distinguish the joint and marginal effects of education on interracial marriage and present a model capable of distinguishing these effects. I test this model on black-white interracial marriages using 1980, 1990, and 2000 U.S. census data. The results reveal partial support for status exchange theory within black male-white female unions and strong isolation of lower-class blacks from the interracial marriage market. Structural assimilation theory is not supported because the educational attainment of whites is not related in any consistent fashion to the likelihood of interracial marriage. The strong isolation of lower-class blacks from the interracial marriage market has gone unnoticed in prior research because of the failure of prior methods to distinguish joint and marginal effects.

  5. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  6. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam Paul

    The authors present a measurement of the mass of the top quark. The event sample is selected from proton-antiproton collisions, at 1.96 TeV center-of-mass energy, observed with the CDF detector at Fermilab's Tevatron. They consider a 318 pb^-1 dataset collected between March 2002 and August 2004. They select events that contain one energetic lepton, large missing transverse energy, exactly four energetic jets, and at least one displaced vertex b tag. The analysis uses leading-order t$\bar{t}$ and background matrix elements along with parameterized parton showering to construct event-by-event likelihoods as a function of top quark mass. From the 63 events observed in the 318 pb^-1 dataset they extract a top quark mass of 172.0 ± 2.6(stat) ± 3.3(syst) GeV/c^2 from the joint likelihood. The mean expected statistical uncertainty is 3.2 GeV/c^2 for m_t = 178 GeV/c^2 and 3.1 GeV/c^2 for m_t = 172.5 GeV/c^2. The systematic error is dominated by the uncertainty of the jet energy scale.
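The final mass extraction can be illustrated with a toy version of the procedure: per-event likelihood curves are multiplied (log-likelihoods summed) across events, and the mass and its statistical uncertainty are read off a parabolic fit near the minimum of the joint negative log-likelihood. The Gaussian per-event curves below are an invented stand-in for the matrix-element likelihoods, kept only to show the combination step.

```python
# Toy joint-likelihood mass extraction (illustrative, not CDF code):
# sum per-event negative log-likelihood curves over mass hypotheses,
# then fit a parabola near the minimum.
import numpy as np

rng = np.random.default_rng(3)
masses = np.linspace(150.0, 195.0, 91)        # mass hypotheses (GeV/c^2)
m_true, resolution, n_events = 172.0, 12.0, 63

# Toy per-event curves: each event prefers a smeared mass value.
event_peaks = rng.normal(m_true, resolution, n_events)
per_event_nll = 0.5 * ((masses[None, :] - event_peaks[:, None]) / resolution) ** 2

joint_nll = per_event_nll.sum(axis=0)         # product of event likelihoods
i = np.argmin(joint_nll)
# Parabola through the three grid points around the minimum.
a, b, c = np.polyfit(masses[i - 1:i + 2], joint_nll[i - 1:i + 2], 2)
m_hat = -b / (2 * a)                          # vertex of the parabola
sigma_stat = 1.0 / np.sqrt(2 * a)             # where the NLL rises by 1/2
```

With 63 events and a 12 GeV/c^2 per-event width, the toy statistical uncertainty is resolution/sqrt(63), of the same order as the ~3 GeV/c^2 quoted in the abstract.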

  8. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  9. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  10. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive Metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance.
The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
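The practical difference between the likelihood choices can be seen in a toy example: residuals generated with heteroscedastic, AR(1)-correlated errors score far better under a Gaussian likelihood whose covariance matches that structure than under the iid Gaussian likelihood that least squares implicitly assumes. The data below are synthetic, and the "generalized" form here is simply a correlated Gaussian; the actual formal generalized likelihood of Schoups and Vrugt carries additional shape parameters.

```python
# Toy comparison of an iid Gaussian likelihood vs. a likelihood whose
# covariance captures heteroscedastic, AR(1)-correlated residuals.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(300)
phi = 0.8                                   # AR(1) coefficient
scale = 0.5 + 0.01 * t                      # error scale grows with time

# Simulate heteroscedastic AR(1) residuals.
eps = np.empty(len(t))
eps[0] = rng.normal() / np.sqrt(1 - phi**2)  # start near stationarity
for k in range(1, len(t)):
    eps[k] = phi * eps[k - 1] + rng.normal()
resid = scale * eps

def gauss_loglik(r, C):
    """Exact multivariate Gaussian log-likelihood with covariance C."""
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (len(r) * np.log(2 * np.pi) + logdet
                   + r @ np.linalg.solve(C, r))

# iid Gaussian covariance (what least squares implicitly assumes) ...
C_iid = resid.var() * np.eye(len(t))
# ... versus a covariance matching the true error structure.
lags = np.abs(np.subtract.outer(t, t))
C_gen = np.outer(scale, scale) * phi**lags / (1 - phi**2)

ll_iid = gauss_loglik(resid, C_iid)
ll_gen = gauss_loglik(resid, C_gen)
```

The large log-likelihood gap is the formal counterpart of the abstract's finding that a generalized likelihood describes such residuals far better than the iid Gaussian one.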

  11. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  12. A Systematic Bayesian Integration of Epidemiological and Genetic Data

    PubMed Central

    Lau, Max S. Y.; Marion, Glenn; Streftaris, George; Gibson, Gavin

    2015-01-01

    Genetic sequence data on pathogens have great potential to inform inference of their transmission dynamics, ultimately leading to better disease control. Where genetic change and disease transmission occur on comparable timescales, additional information can be inferred via the joint analysis of such genetic sequence data and epidemiological observations based on clinical symptoms and diagnostic tests. Although recently introduced approaches represent substantial progress, for computational reasons they approximate genuine joint inference of disease dynamics and genetic change in the pathogen population, only partially capturing the joint epidemiological-evolutionary dynamics. Improved methods are needed to fully integrate such genetic data with epidemiological observations, for achieving a more robust inference of the transmission tree and other key epidemiological parameters such as latent periods. Here, building on current literature, a novel Bayesian framework is proposed that infers simultaneously and explicitly the transmission tree and unobserved transmitted pathogen sequences. Our framework facilitates the use of realistic likelihood functions and enables systematic and genuine joint inference of the epidemiological-evolutionary process from partially observed outbreaks. Using simulated data, it is shown that this approach is able to accurately infer joint epidemiological-evolutionary dynamics, even when pathogen sequences and epidemiological data are incomplete, and when sequences are available for only a fraction of exposures. These results also characterise and quantify the value of incomplete and partial sequence data, which has important implications for sampling design, and demonstrate the abilities of the introduced method to identify multiple clusters within an outbreak. The framework is used to analyse an outbreak of foot-and-mouth disease in the UK, enhancing current understanding of its transmission dynamics and evolutionary process. PMID:26599399

  13. Polarimetric image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Valenzuela, John R.

    In the field of imaging polarimetry, Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework, we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge-preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross-channel regularization term further lowers the RMS error for both methods, especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization.
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
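The reduced-dimension search described above can be sketched with a generic linear model (illustrative, not the dissertation's optical model): for each value of the aberration parameter the quadratically penalized WLS object estimate has a closed form, so the outer optimization runs over the aberration parameter alone.

```python
# Schematic reduced-parameter joint estimation: the object is profiled
# out in closed form, leaving a 1-D search over the aberration parameter.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n, p, beta = 80, 40, 0.1                 # data size, object size, penalty
x_true, alpha_true = rng.normal(size=p), 0.7
A0, A1 = rng.normal(size=(n, p)), rng.normal(size=(n, p))

def A(alpha):                            # system response depends on alpha
    return A0 + alpha * A1

y = A(alpha_true) @ x_true + 0.05 * rng.normal(size=n)
W = np.eye(n)                            # weights (identity for simplicity)

def x_hat(alpha):                        # closed-form penalized WLS object
    Aa = A(alpha)
    return np.linalg.solve(Aa.T @ W @ Aa + beta * np.eye(p), Aa.T @ W @ y)

def profile_cost(alpha):                 # cost with the object profiled out
    r = y - A(alpha) @ x_hat(alpha)
    return r @ W @ r + beta * x_hat(alpha) @ x_hat(alpha)

res = minimize_scalar(profile_cost, bounds=(0.2, 1.2), method="bounded")
alpha_hat = res.x
```

The search space collapses from p + 1 dimensions to one, which is the dimensionality reduction the abstract credits with making regularization "tuning" unnecessary in the quadratic case.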

  14. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in PSNR for various reconstructed images.
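The SNR-dependent reweighting of LLRs at positions flagged by the JPEG2000 error-resilience checks might look like the following sketch; the linear weighting function and its endpoint values are assumed placeholders, not the paper's actual design.

```python
# Sketch of SNR-dependent LLR reweighting for source-controlled channel
# decoding (the weighting function here is an invented placeholder).
import numpy as np

def reweight_llrs(llr, known_correct, known_error, snr_db,
                  w_lo=3.0, w_hi=1.5, snr_lo=0.0, snr_hi=6.0):
    """Scale LLRs at bit positions the JPEG2000 decoder has flagged.
    Lower channel SNR gets a larger weight, matching the paper's rule
    that poorer channels warrant stronger source-side corrections."""
    t = np.clip((snr_db - snr_lo) / (snr_hi - snr_lo), 0.0, 1.0)
    w = w_lo + t * (w_hi - w_lo)          # interpolate weight vs. SNR
    out = llr.copy()
    out[known_correct] *= w               # reinforce bits known correct
    out[known_error] *= -w                # flip and reinforce known errors
    return out
```

The reweighted LLRs then seed the next sum-product iteration, which is how the source decoder's feedback accelerates convergence.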

  15. Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps

    NASA Astrophysics Data System (ADS)

    Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.

    2013-06-01

    Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and the performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters evaluated by the full WMAP likelihood public package. The differences are not only due to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The central credible values for the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index n_S.

  16. I Do…Do You? Dependence and Biological Sex Moderate Daters’ Cortisol Responses When Accommodating a Partner’s Thoughts about Marriage

    PubMed Central

    Schoenfeld, Elizabeth A.; Loving, Timothy J.

    2012-01-01

    We examined how daters’ levels of relationship dependence interact with men’s and women’s degree of accommodation during a likelihood of marriage discussion to predict cortisol levels at the conclusion of the discussion. Upon arriving at the laboratory, couple members were separated and asked to graph their perceived likelihood of one day marrying each other. Couples were reunited and instructed to create a joint graph depicting their agreed-upon chance of marriage. For the majority of couples, negotiating their likelihood of marriage required one or both partners to accommodate each other’s presumed likelihood of marriage. Multilevel analyses revealed a significant Dependence × Accommodation × Sex interaction. For women who increased their likelihood of marriage, feelings of dependence predicted heightened levels of cortisol relative to baseline; we suggest such a response is indicative of eustress. Among men, those who accommodated by decreasing their likelihood of marriage experienced significantly lower levels of cortisol to the extent they were less dependent on their partners. Discussion focuses on why men and women show different physiological reactions in response to seemingly favorable outcomes from a relationship discussion. PMID:22801249

  17. I do…do you? Dependence and biological sex moderate daters' cortisol responses when accommodating a partner's thoughts about marriage.

    PubMed

    Schoenfeld, Elizabeth A; Loving, Timothy J

    2013-06-01

    We examined how daters' levels of relationship dependence interact with men's and women's degree of accommodation during a likelihood of marriage discussion to predict cortisol levels at the conclusion of the discussion. Upon arriving at the laboratory, couple members were separated and asked to graph their perceived likelihood of one day marrying each other. Couples were reunited and instructed to create a joint graph depicting their agreed-upon chance of marriage. For the majority of couples, negotiating their likelihood of marriage required one or both partners to accommodate each other's presumed likelihood of marriage. Multilevel analyses revealed a significant Dependence×Accommodation×Sex interaction. For women who increased their likelihood of marriage, feelings of dependence predicted heightened levels of cortisol relative to baseline; we suggest such a response is indicative of eustress. Among men, those who accommodated by decreasing their likelihood of marriage experienced significantly lower levels of cortisol to the extent that they were less dependent on their partners. Discussion focuses on why men and women show different physiological reactions in response to seemingly favorable outcomes from a relationship discussion. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. A probabilistic assessment of the likelihood of vegetation drought under varying climate conditions across China.

    PubMed

    Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi

    2016-10-07

    Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that in fact there is no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine the vegetation-related drought risk over China from a perspective based on joint probability. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide.

  19. A probabilistic assessment of the likelihood of vegetation drought under varying climate conditions across China

    PubMed Central

    Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi

    2016-01-01

    Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that in fact there is no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine the vegetation-related drought risk over China from a perspective based on joint probability. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide. PMID:27713530
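The joint-probability idea can be illustrated with a minimal empirical version on synthetic data (the paper fits a parametric joint distribution; the series and thresholds below are invented): conditioning on a dry growing season raises the probability of a low-vegetation state well above its marginal value.

```python
# Empirical sketch of conditioning vegetation-drought probability on a
# precipitation scenario (synthetic data, 20th-percentile thresholds).
import numpy as np

rng = np.random.default_rng(7)
n = 5000
precip = rng.normal(size=n)
ndvi = 0.6 * precip + 0.8 * rng.normal(size=n)   # vegetation tracks rainfall

veg_drought = ndvi < np.quantile(ndvi, 0.2)      # low-vegetation state
dry = precip < np.quantile(precip, 0.2)          # low-precipitation state

p_marginal = veg_drought.mean()                  # ~0.2 by construction
p_joint = (veg_drought & dry).mean()             # P(veg drought AND dry)
p_conditional = p_joint / dry.mean()             # P(veg drought | dry)
```

The gap between the conditional and marginal probabilities is the drought-risk signal the joint probability model formalizes across different precipitation/temperature scenarios.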

  20. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible owing to the intense computation and high memory requirements needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood-type estimating function that is optimal within a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  1. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

    This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
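
    A toy sketch of chaotic analog coding with ML decoding, assuming a one-dimensional stretch of the baker's map as the encoder and brute-force nearest-codeword search as the ML decoder under Gaussian noise; this is not the paper's mirrored or single-input construction:

```python
import numpy as np

def bakers_encode(x, n_iters=4):
    """Encode a scalar x in [0, 1) into n analog samples by iterating the
    (one-dimensional stretch of the) baker's map s -> 2s mod 1."""
    s, out = x, []
    for _ in range(n_iters):
        out.append(s - 0.5)       # center samples around zero for transmission
        s = (2.0 * s) % 1.0
    return np.array(out)

def ml_decode(y, n_iters=4, grid=4096):
    """Maximum-likelihood decoding under AWGN: with Gaussian noise, ML reduces
    to nearest-codeword search, done here by brute force over a fine grid."""
    cand = np.linspace(0.0, 1.0, grid, endpoint=False)
    codewords = np.stack([bakers_encode(c, n_iters) for c in cand])
    idx = np.argmin(np.sum((codewords - y) ** 2, axis=1))
    return cand[idx]

rng = np.random.default_rng(1)
x = 0.3141
y = bakers_encode(x) + rng.normal(0.0, 0.02, size=4)  # noisy channel output
x_hat = ml_decode(y)
print(abs(x - x_hat))
```

    Because each iteration doubles the map, later samples carry progressively finer information about x, which is why the decoding error shrinks with bandwidth expansion; a noise sample that pushes the received vector past a fold of the map produces the large "fold error" the paper's codes are designed to balance against.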

  2. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced in the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus suppresses noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. They also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy than a state-of-the-art algorithm.
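
    The joint registration-fusion idea can be sketched in one dimension: a single cost couples alignment (the ML residual) with fusion quality (a gradient-strength penalty). The signals, the averaging fusion rule and the `cost` function below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two 1-D "images": g2 is a shifted, noisy copy of g1 (shift to be recovered).
n = 200
t = np.arange(n)
scene = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)
g1 = scene + rng.normal(0.0, 0.01, n)
true_shift = 3
g2 = np.roll(scene, true_shift) + rng.normal(0.0, 0.01, n)

def cost(shift, lam=0.1, gs_target=None):
    """ML-style registration/fusion cost with an optional gradient-strength
    (GS) regularizer: residual between the aligned inputs plus a penalty
    pulling the fused signal's mean gradient magnitude toward gs_target."""
    aligned = np.roll(g2, -shift)
    fused = 0.5 * (g1 + aligned)            # fusion by averaging, given the shift
    resid = np.sum((g1 - aligned) ** 2)
    if gs_target is None:
        return resid
    gs = np.mean(np.abs(np.diff(fused)))    # gradient strength of the fused signal
    return resid + lam * (gs - gs_target) ** 2

shifts = range(-10, 11)
best = min(shifts, key=cost)
print(best)
```

    Setting `gs_target` (with weight `lam`) activates the regularizer that pulls the fused signal's gradient strength toward a chosen value, mirroring how the paper trades sharpness against noise suppression.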

  3. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model longitudinal proportional measurements, which are confined to a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.

  4. Localized cervical facet joint kinematics under physiological and whiplash loading.

    PubMed

    Stemper, Brian D; Yoganandan, Narayan; Gennarelli, Thomas A; Pintar, Frank A

    2005-12-01

    Although facet joints have been implicated in the whiplash injury mechanism, no investigators have determined the degree to which joint motions in whiplash are nonphysiological. The purpose of this investigation was to quantify the correlation between facet joint and segmental motions under physiological and whiplash loading. Human cadaveric cervical spine specimens were tested under physiological extension loading, and intact human head-neck complexes were tested under whiplash loading, to correlate the localized component motions of the C4-5 facet joint with segmental extension. Facet joint shear and distraction kinematics demonstrated a linear correlation with segmental extension under both loading modes. Facet joints responded differently to whiplash and physiological loading, with significantly increased kinematics for the same segmental angulation. The limitations of this study include removal of the superficial musculature and the limited sample size for physiological testing. The presence of increased facet joint motions indicated that synovial joint soft-tissue components (that is, the synovial membrane and capsular ligament) sustain increased distortion that may subject these tissues to a greater likelihood of injury. This finding is supported by clinical investigations in which lower cervical facet joint injury resulted in pain patterns similar to the most commonly reported whiplash symptoms.

  5. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as the deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, the density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data with a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, provides great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
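
    The two-step structure (inner maximization for a fixed antimode, then bisection over the antimode) can be sketched as follows; the quadratic surrogate used for the inner step is an illustrative stand-in for the support reduction algorithm, not the paper's estimator:

```python
import numpy as np

def profile_loglik(antimode, data):
    # Toy stand-in for the inner maximization: for a fixed antimode m, score
    # the best-fitting member of the toy family.  In the paper's setting this
    # step runs the support reduction algorithm over all convex hazards with
    # minimum at m; here a smooth quasi-concave surrogate is used instead.
    return -np.mean((data - antimode) ** 2)

def bisect_maximize(f, lo, hi, tol=1e-8):
    """Maximize a quasi-concave f on [lo, hi] by bisection on the sign of a
    numerical derivative, as in the outer step of the profile algorithm."""
    h = 1e-6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid + h) > f(mid - h):   # increasing -> maximum lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=500)
m_hat = bisect_maximize(lambda m: profile_loglik(m, data), 0.0, 10.0)
print(m_hat)
```

    Quasi-concavity of the profile is what licenses the bisection: the numerical derivative changes sign exactly once, so the bracket always contains the global maximizer.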

  7. Exact likelihood evaluations and foreground marginalization in low resolution WMAP data

    NASA Astrophysics Data System (ADS)

    Slosar, Anže; Seljak, Uroš; Makarov, Alexey

    2004-06-01

    The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favorite cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK². On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK², the probability of having the true value above 1200 μK² (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving αs = -0.022 ± 0.033 for the combined case. 
The new analysis prefers about a 1σ lower value of Ωm, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.

  8. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  9. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice, yet most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  10. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  11. Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas

    NASA Astrophysics Data System (ADS)

    Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam

    2017-08-01

    Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis using copulas is well established, the effects of different parameter estimation methods on the obtained results have not yet been investigated. This research conducts a comparative analysis between the parametric maximum likelihood and the non-parametric Kendall τ methods for copula parameter estimation. The methods were employed to study joint severity-duration probabilities and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDC) for these three stations. The Q_{75} index extracted from the FDC was set as the threshold level to extract drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probability distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for the copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov and chi-square tests. The Archimedean Clayton and Frank and the extreme-value Gumbel copulas were employed to construct joint cumulative distribution functions (JCDF) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at Huleilan and Polchehr were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood and root mean square error (RMSE) values. 
Based on the RMSE values, the nonparametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods for the parametric ML method than for the nonparametric Kendall τ estimation method. The results also showed that the stations located on tributaries (Huleilan and Polchehr) have similar return periods, while the station on the main river (Tang Sazbon) has smaller return periods for drought events with identical duration and severity.
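
    As a sketch of the copula machinery involved: with a Gumbel copula C(u, v) and marginal CDF values u, v of drought duration and severity, the "and" joint return period follows from the joint exceedance probability. The parameter values below are illustrative, not the fitted Karkheh values:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel (extreme-value) copula C(u, v); theta >= 1, with theta = 1
    giving independence."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_and_return_period(u, v, theta, mu=1.0):
    """Return period of {duration > d AND severity > s}, where u, v are the
    marginal CDF values of d and s and mu is the mean drought interarrival
    time (years).  P(D > d, S > s) = 1 - u - v + C(u, v)."""
    p_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed

# Hypothetical event: duration and severity both at their 90th marginal
# percentiles, with moderately strong upper-tail dependence (theta = 2).
T_and = joint_and_return_period(0.9, 0.9, theta=2.0)
print(T_and)
```

    Stronger tail dependence (larger theta) raises C(u, v), increases the joint exceedance probability, and therefore shortens the "and" return period relative to the independent case, which is why the choice of copula and of the parameter estimation method (ML versus Kendall τ) changes the reported return periods.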

  12. Bayesian Analysis of a Simple Measurement Model Distinguishing between Types of Information

    NASA Astrophysics Data System (ADS)

    Lira, Ignacio; Grientschnig, Dieter

    2015-12-01

    Let a quantity of interest, Y, be modeled in terms of a quantity X and a set of other quantities Z. Suppose that for Z there is type B information, by which we mean that it leads directly to a joint state-of-knowledge probability density function (PDF) for that set, without reference to likelihoods. Suppose also that for X there is type A information, which signifies that a likelihood is available. The posterior for X is then obtained by updating its prior with said likelihood by means of Bayes' rule, where the prior encodes whatever type B information there may be available for X. If there is no such information, an appropriate non-informative prior should be used. Once the PDFs for X and Z have been constructed, they can be propagated through the measurement model to obtain the PDF for Y, either analytically or numerically. But suppose that, at the same time, there is also information of type A, type B or both types together for the quantity Y. By processing such information in the manner described above we obtain another PDF for Y. Which one is right? Should both PDFs be merged somehow? Is there another way of applying Bayes' rule such that a single PDF for Y is obtained that encodes all existing information? In this paper we examine what we believe should be the proper ways of dealing with such a (not uncommon) situation.
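
    A minimal Monte Carlo sketch of the two information types: Type B knowledge enters as a directly stated PDF, Type A as a likelihood updating a non-informative prior, and both propagate through the measurement model. The data, the additive model Y = X + Z, and the treatment of the sample standard deviation as known are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Type B information on Z: stated directly as a PDF (here uniform +/- 0.1),
# with no likelihood involved.
z = rng.uniform(-0.1, 0.1, size=n)

# Type A information on X: repeated indications; with a Gaussian likelihood
# and a flat (non-informative) prior, the posterior for X is normal around
# the sample mean.  (Treating s as known is a common simplification; the
# rigorous flat-prior posterior is a scaled Student-t.)
indications = np.array([10.02, 9.98, 10.05, 9.97, 10.03])
xbar, s = indications.mean(), indications.std(ddof=1)
x = rng.normal(xbar, s / np.sqrt(len(indications)), size=n)

# Propagate both PDFs through a simple measurement model Y = X + Z.
y = x + z
print(y.mean(), y.std())
```

    The question the paper raises is what to do when, in addition to this propagated PDF for Y, direct Type A or Type B information about Y itself is also available, since naive processing yields two competing PDFs for the same quantity.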

  13. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the log-likelihood function increase exponentially as the precision parameter increases, leading to numerical instabilities. In this study we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information needed to separate the mean inclination from the precision parameter is lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with a relatively well defined dispersion. 
Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag

  14. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which the conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs describe one's incomplete knowledge of correction factors, which are called nuisance parameters. We use the extended likelihood function to make point and interval estimates of the parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained with our procedure to those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
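
    A sketch of the extended-likelihood idea for a linear-regression case like the paper's example: a common Type B offset enters as a Gaussian nuisance PDF multiplying the likelihood and is then profiled out. The data, uncertainty values, and closed-form profiling step below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical calibration data: y = b*x + common offset, plus random noise.
x = np.linspace(1.0, 10.0, 20)
sigma = 0.05                       # Type A (statistical) standard uncertainty
u_b = 0.1                          # Type B standard uncertainty of a common offset
y = 2.0 * x + 0.08 + rng.normal(0.0, sigma, size=x.size)

def extended_neg_loglik(b, delta):
    """-log of the extended likelihood: the usual Gaussian least-squares term
    times a Gaussian PDF for the nuisance offset delta (the Type B part)."""
    r = y - b * x - delta
    return np.sum(r ** 2) / (2 * sigma ** 2) + delta ** 2 / (2 * u_b ** 2)

def profile_neg_loglik(b):
    # For fixed b the optimal nuisance offset is available in closed form,
    # so the profile likelihood eliminates delta from the final result.
    delta_hat = u_b ** 2 * np.sum(y - b * x) / (x.size * u_b ** 2 + sigma ** 2)
    return extended_neg_loglik(b, delta_hat)

grid = np.linspace(1.9, 2.1, 20001)
b_hat = grid[np.argmin([profile_neg_loglik(b) for b in grid])]
print(b_hat)
```

    The nuisance offset absorbs most of the common bias, so the profiled estimate of the slope stays close to the true value even though the fit has no intercept term.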

  15. Position of the prosthesis and the incidence of dislocation following total hip replacement.

    PubMed

    He, Rong-xin; Yan, Shi-gui; Wu, Li-dong; Wang, Xiang-hua; Dai, Xue-song

    2007-07-05

    Dislocation is the second most common complication of hip replacement surgery, and impingement of the prosthesis is believed to be the fundamental cause. The present study employed Solidworks 2003 and MSC-Nastran software to analyze the three-dimensional variables in order to investigate how to prevent dislocation following hip replacement surgery. Computed tomography (CT) imaging was used to collect femoral outline data, and Solidworks 2003 software was used to construct cup models with varying parameters. Nastran software was used to evaluate dislocation at different prosthesis positions and with different geometrical shapes. Three-dimensional movement and results from the finite element method were analyzed, and the values of the dislocation resistance index (DRI), range of motion to impingement (ROM-I), range of motion to dislocation (ROM-D) and peak resisting moment (PRM) were determined. Computer simulation was used to evaluate the range of motion of the hip joint at different prosthesis positions. Finite element analysis showed: (1) Increasing the head/neck ratio increased the ROM-I values and moderately increased the ROM-D and PRM values. Increasing the head size significantly increased PRM and, to some extent, ROM-I and ROM-D values, suggesting a lower likelihood of dislocation. (2) Increasing the anteversion angle increased the ROM-I, ROM-D, PRM, energy required for dislocation (Energy-D) and DRI values, which would increase the stability of the joint. (3) As the chamber angle was increased, ROM-I, ROM-D, PRM, Energy-D and DRI values increased, resulting in improved joint stability. However, for chamber angles exceeding 55 degrees, ROM-I and ROM-D values increased but PRM, Energy-D, and DRI values decreased, which, in turn, increased the likelihood of dislocation. (4) The posteriorly reduced cup decreased ROM-I values (2.1 -- 5.3 degrees) and increased the DRI value (0.073). 
This suggested that the posterior high side had the effect of a 10 degree anteversion angle. In summary, increasing the head/neck ratio increases joint stability. The posterior high side reduced the range of motion of the joint but increased joint stability. Increasing the anteversion angle increases DRI values and thus improves joint stability. Increasing the chamber angle also increases DRI values and improves joint stability; however, at angles exceeding 55 degrees, further increases in the chamber angle result in decreased DRI values and reduce the stability of the joint.

  16. Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation

    NASA Astrophysics Data System (ADS)

    Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee

    2018-05-01

    In this article, a framework is presented for the joint optimization of the analog transmit and receive filter with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter $B$ to the rate of the analog-to-digital converter $f_s$ to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm $B \leq f_s$. To this end, at the receiver, we allow for a higher pre-filter bandwidth $B > f_s$ and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filter are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte-Carlo simulations of a likelihood-based estimator.

  17. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industrial settings the product comes from more than one production line, which calls for comparative life tests. Such tests require sampling from the different production lines, which gives rise to the joint censoring scheme. In this article we consider the Pareto lifetime distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
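
    As a simplified baseline (complete samples from a single line, no censoring), the Pareto maximum likelihood estimators have the closed forms sketched below; the paper's jointly type-II censored setting adds survival-function terms for the censored units of each production line:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate Pareto(alpha, x_m) lifetimes by inverse-CDF sampling:
# F(t) = 1 - (x_m / t)**alpha  =>  t = x_m * (1 - U)**(-1/alpha).
alpha_true, x_m = 3.0, 1.0
t = x_m * (1.0 - rng.uniform(size=5000)) ** (-1.0 / alpha_true)

x_m_hat = t.min()                                   # MLE of the scale
alpha_hat = t.size / np.sum(np.log(t / x_m_hat))    # MLE of the shape
print(alpha_hat)
```

    The scale MLE is the sample minimum and the shape MLE is the reciprocal of the mean log-ratio; under joint type-II censoring only the smallest r order statistics of the pooled sample are observed, so the log-likelihood gains censoring terms and the estimators lose these simple closed forms.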

  18. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, defined as the probability of the observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes behind the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is often suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. (2) Based on the results of a didactical rainfall-runoff example, we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the autocorrelation and heteroscedasticity of the errors are captured well by the copula, eliminating the need for transformations. In summary, our findings suggest that copulas are an interesting departure from the use of fully parametric distributions as likelihood functions: they could help us better capture the statistical properties of errors and make more reliable predictions.
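    A minimal sketch of the copula idea above, assuming a bivariate Gaussian copula with a known correlation rho: after the probability-integral transform u = F(e) of the errors, the copula log-density depends only on the Gaussian quantiles of u and v, so any (semi)parametric marginal can be plugged in. Only the Python standard library is used; this is a generic sketch, not the authors' implementation.

```python
import math
from statistics import NormalDist

def gaussian_copula_loglik(u, v, rho):
    """Sum of bivariate Gaussian copula log-densities over observation pairs
    (u_i, v_i) in (0, 1)^2; the marginals enter only through u = F(e)."""
    nd = NormalDist()
    ll = 0.0
    for ui, vi in zip(u, v):
        z1, z2 = nd.inv_cdf(ui), nd.inv_cdf(vi)
        ll += (-0.5 * math.log(1.0 - rho**2)
               - (rho**2 * (z1**2 + z2**2) - 2.0 * rho * z1 * z2)
               / (2.0 * (1.0 - rho**2)))
    return ll

# At rho = 0 the copula density is 1 everywhere, so the log-likelihood is 0;
# positively associated points score higher under a positive rho.
print(gaussian_copula_loglik([0.2, 0.7], [0.4, 0.9], 0.0))
print(gaussian_copula_loglik([0.2, 0.7], [0.4, 0.9], 0.5))
```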

  19. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model, called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  20. Effects of Lewis number on the statistics of the invariants of the velocity gradient tensor and local flow topologies in turbulent premixed flames

    NASA Astrophysics Data System (ADS)

    Wacks, Daniel; Konstantinou, Ilias; Chakraborty, Nilanjan

    2018-04-01

    The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar-turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable.
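    The three invariants discussed above come from the characteristic polynomial of the velocity gradient tensor A, written as lambda^3 + P lambda^2 + Q lambda + R = 0, with P = -tr(A), Q = (P^2 - tr(A^2))/2 and R = -det(A). The sketch below evaluates them for a synthetic tensor (not DNS data); for incompressible flow (P = 0), Q > 0 marks rotation-dominated regions and Q < 0 strain-dominated ones.

```python
def trace(M):
    return M[0][0] + M[1][1] + M[2][2]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def invariants(A):
    """First, second and third invariants of a 3x3 velocity gradient tensor."""
    P = -trace(A)
    Q = 0.5 * (P * P - trace(matmul(A, A)))
    R = -det(A)
    return P, Q, R

# Planar solid-body rotation: trace-free (incompressible), Q > 0.
A = [[0.0, 1.0, 0.0],
     [-1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
P, Q, R = invariants(A)
print(P, Q, R)
```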

  1. Effects of Lewis number on the statistics of the invariants of the velocity gradient tensor and local flow topologies in turbulent premixed flames

    PubMed Central

    Konstantinou, Ilias; Chakraborty, Nilanjan

    2018-01-01

    The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar--turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable. PMID:29740257

  2. Performance Analysis for Joint Target Parameter Estimation in UMTS-Based Passive Multistatic Radar with Antenna Arrays Using Modified Cramér-Rao Lower Bounds.

    PubMed

    Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-10-18

    In this study, the modified Cramér-Rao lower bounds (MCRLBs) on the joint estimation of target position and velocity are investigated for a universal mobile telecommunication system (UMTS)-based passive multistatic radar system with antenna arrays. First, we analyze the log-likelihood function of the received signal for a complex Gaussian extended target. Then, due to the non-deterministic transmitted data symbols, analytically closed-form expressions of the MCRLBs on the Cartesian coordinates of target position and velocity are derived for a multistatic radar system with N_t UMTS-based transmit stations of L_t antenna elements and N_r receive stations of L_r antenna elements. With the aid of numerical simulations, it is shown that increasing the number of receiving elements in each receive station can reduce the estimation errors. In addition, it is demonstrated that the MCRLB is a function not only of the signal-to-noise ratio (SNR), the number of receiving antenna elements and the properties of the transmitted UMTS signals, but also of the relative geometric configuration between the target and the multistatic radar system. The analytical expressions for the MCRLB open up a new dimension for passive multistatic radar systems by aiding the optimal placement of receive stations to improve target parameter estimation performance.

  3. Performance Analysis for Joint Target Parameter Estimation in UMTS-Based Passive Multistatic Radar with Antenna Arrays Using Modified Cramér-Rao Lower Bounds

    PubMed Central

    Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-01-01

    In this study, the modified Cramér-Rao lower bounds (MCRLBs) on the joint estimation of target position and velocity are investigated for a universal mobile telecommunication system (UMTS)-based passive multistatic radar system with antenna arrays. First, we analyze the log-likelihood function of the received signal for a complex Gaussian extended target. Then, due to the non-deterministic transmitted data symbols, analytically closed-form expressions of the MCRLBs on the Cartesian coordinates of target position and velocity are derived for a multistatic radar system with N_t UMTS-based transmit stations of L_t antenna elements and N_r receive stations of L_r antenna elements. With the aid of numerical simulations, it is shown that increasing the number of receiving elements in each receive station can reduce the estimation errors. In addition, it is demonstrated that the MCRLB is a function not only of the signal-to-noise ratio (SNR), the number of receiving antenna elements and the properties of the transmitted UMTS signals, but also of the relative geometric configuration between the target and the multistatic radar system. The analytical expressions for the MCRLB open up a new dimension for passive multistatic radar systems by aiding the optimal placement of receive stations to improve target parameter estimation performance. PMID:29057805

  4. Joint estimation of preferential attachment and node fitness in growing complex networks

    NASA Astrophysics Data System (ADS)

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-09-01

    Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or the fitness function, or both. We introduce a Bayesian statistical method called PAFit to estimate preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit.

  5. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
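    As background for the record above, the basic building block of differentially private release is the Laplace mechanism: adding Laplace(sensitivity/epsilon) noise to a statistic yields epsilon-differential privacy. The sketch below is this generic mechanism only (all numbers hypothetical), not the DPCopula parameter estimators themselves.

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release a statistic with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise, drawn via inverse-CDF sampling."""
    b = sensitivity / epsilon
    u = rng.random() - 0.5            # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return value - b * sign * math.log(1.0 - 2.0 * abs(u))

rng = random.Random(3)
# Repeatedly privatize a statistic whose true value is 10.0.
noisy = [laplace_mechanism(10.0, 1.0, 0.5, rng) for _ in range(5000)]
mean_noisy = sum(noisy) / len(noisy)
print(mean_noisy)  # the mechanism is unbiased: stays near the true value 10
```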

  6. Joint estimation of preferential attachment and node fitness in growing complex networks

    PubMed Central

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-01-01

    Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or the fitness function, or both. We introduce a Bayesian statistical method called PAFit to estimate preferential attachment and node fitness without imposing such functional constraints; it works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. Surprisingly, however, the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit. PMID:27601314

  7. COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.

    We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of these correlations on cosmological parameters in the transition region is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
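    The key identity behind the abstract above, for a first-order (bandwidth-1) dependence structure, is p(x1, ..., xn) = prod_i p(x_{i-1}, x_i) / prod_i p(x_i), i.e. the joint is built entirely from uni- and bivariate marginals. A toy discrete check, assuming a three-state Markov chain (synthetic numbers, unrelated to any CMB data):

```python
import itertools

# Three variables forming a Markov chain: each depends only on its
# immediate predecessor (the banded assumption with bandwidth 1).
p1 = [0.5, 0.3, 0.2]                  # univariate marginal of x1
T = [[0.7, 0.2, 0.1],                 # T[i][j] = p(x_{k+1} = j | x_k = i)
     [0.3, 0.4, 0.3],
     [0.2, 0.2, 0.6]]

def joint(a, b, c):
    """Full joint via the chain rule."""
    return p1[a] * T[a][b] * T[b][c]

def joint_from_marginals(a, b, c):
    """The same joint written only with uni- and bivariate marginals:
    p(a, b, c) = p(a, b) * p(b, c) / p(b)."""
    p_b = sum(p1[i] * T[i][b] for i in range(3))
    p_ab = p1[a] * T[a][b]
    p_bc = p_b * T[b][c]
    return p_ab * p_bc / p_b

for s in itertools.product(range(3), repeat=3):
    assert abs(joint(*s) - joint_from_marginals(*s)) < 1e-12
print("identity holds")  # -> identity holds
```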

  8. Development of the Average Likelihood Function for Code Division Multiple Access (CDMA) Using BPSK and QPSK Symbols

    DTIC Science & Technology

    2015-01-01

    This research has the purpose of establishing a foundation for new classification and estimation of CDMA signals. Keywords: DS/CDMA signals, BPSK, QPSK. (Report dated January 2015, covering October 2013 to October 2014.)

  9. Developmental association of prosocial behaviour with aggression, anxiety and depression from infancy to preadolescence.

    PubMed

    Nantel-Vivier, Amélie; Pihl, Robert O; Côté, Sylvana; Tremblay, Richard E

    2014-10-01

    Research on associations between children's prosocial behaviour and mental health has provided mixed evidence. The present study sought to describe and predict the joint development of prosocial behaviour with externalizing and internalizing problems (physical aggression, anxiety and depression) from 2 to 11 years of age. Data were drawn from the National Longitudinal Survey of Children and Youth (NLSCY). Biennial prosocial behaviour, physical aggression, anxiety and depression maternal ratings were sought for 10,700 children aged 0 to 9 years at the first assessment point. While a negative association was observed between prosociality and physical aggression, more complex associations emerged with internalizing problems. Being a boy decreased the likelihood of membership in the high prosocial trajectory. Maternal depression increased the likelihood of moderate aggression, but also of joint high prosociality/low aggression. Low family income predicted the joint development of high prosociality with high physical aggression and high depression. Individual differences exist in the association of prosocial behaviour with mental health. While high prosociality tends to co-occur with low levels of mental health problems, high prosociality and internalizing/externalizing problems can co-occur in subgroups of children. Child, mother and family characteristics are predictive of individual differences in prosocial behaviour and mental health development. Mechanisms underlying these associations warrant future investigations. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.

  10. Women's Inheritance Rights and Intergenerational Transmission of Resources in India

    ERIC Educational Resources Information Center

    Deininger, Klaus; Goyal, Aparajita; Nagarajan, Hari

    2013-01-01

    We use inheritance patterns over three generations of individuals to assess the impact of changes in the Hindu Succession Act that grant daughters equal coparcenary birth rights in joint family property that were denied to daughters in the past. We show that the amendment significantly increased daughters' likelihood to inherit land, but that…

  11. Further Iterations on Using the Problem-Analysis Framework

    ERIC Educational Resources Information Center

    Annan, Michael; Chua, Jocelyn; Cole, Rachel; Kennedy, Emma; James, Robert; Markusdottir, Ingibjorg; Monsen, Jeremy; Robertson, Lucy; Shah, Sonia

    2013-01-01

    A core component of applied educational and child psychology practice is the skilfulness with which practitioners are able to rigorously structure and conceptualise complex real world human problems. This is done in such a way that when they (with others) jointly work on them, there is an increased likelihood of positive outcomes being achieved…

  12. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  13. Indirect detection constraints on s- and t-channel simplified models of dark matter

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim

    2016-09-01

    Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for single annihilation final states, such as b b̄ or τ+τ-. In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.
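    A schematic of the stacking step only: because the dwarfs are independent targets, their log-likelihoods simply add, and the shared cross-section parameter is profiled over the combined sum. The counts, backgrounds and J-factor weights below are made-up numbers, and real analyses use full spectral and spatial models rather than scalar bins.

```python
import math

def poisson_loglik(counts, expected):
    """Poisson log-likelihood, dropping the data-only log(k!) term."""
    return sum(k * math.log(mu) - mu for k, mu in zip(counts, expected))

def stacked_loglik(sigma_v, dwarfs):
    """Joint likelihood over dwarfs: independent targets add in log space.
    Each dwarf contributes expected counts b + sigma_v * J per energy bin."""
    total = 0.0
    for counts, background, J in dwarfs:
        expected = [b + sigma_v * J for b in background]
        total += poisson_loglik(counts, expected)
    return total

# Hypothetical counts, backgrounds and J-factor weights for two dwarfs.
dwarfs = [([5, 7], [4.0, 6.0], 2.0),
          ([3, 2], [2.5, 2.0], 1.5)]
# Profile the shared cross-section parameter over a grid.
grid = [i * 0.01 for i in range(101)]
best = max(grid, key=lambda sv: stacked_loglik(sv, dwarfs))
print(best)  # the jointly preferred cross-section parameter
```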

  14. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems where sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problems. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is well suited to fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
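    A minimal sketch of the fusion step described above, assuming the per-sensor observations are conditionally independent given the target location: the joint spatial log-likelihood map is then just the cell-wise sum of the per-sensor maps. The 2x2 maps and modality names below are hypothetical.

```python
def fuse_log_likelihood_maps(maps):
    """Fuse per-sensor spatial log-likelihood maps: under conditional
    independence given the target location, the joint map is the
    cell-wise sum of the per-sensor maps."""
    rows, cols = len(maps[0]), len(maps[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for m in maps:
        for i in range(rows):
            for j in range(cols):
                fused[i][j] += m[i][j]
    return fused

# Hypothetical 2x2 log-likelihood maps from two sensor modalities.
acoustic = [[-2.0, -1.0], [-0.5, -3.0]]
seismic = [[-1.5, -0.8], [-0.2, -2.5]]
fused = fuse_log_likelihood_maps([acoustic, seismic])
cells = [(i, j) for i in range(2) for j in range(2)]
best = max(cells, key=lambda ij: fused[ij[0]][ij[1]])
print(best)  # -> (1, 0)
```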

  15. RELATIONSHIP OF PRESEASON MOVEMENT SCREENS WITH OVERUSE SYMPTOMS IN COLLEGIATE BASEBALL PLAYERS

    PubMed Central

    Clifton, Daniel R.; Onate, James A.; Ramsey, Vincent K.; Cromartie, Fred

    2017-01-01

    Background: The shoulder mobility screen of the Functional Movement Screen™ (FMS™) and the upper extremity patterns of the Selective Functional Movement Assessment (SFMA) assess global, multi-joint movement capabilities in the upper extremities. Identifying which assessment can most accurately determine whether baseball players are at an increased risk of experiencing overuse symptoms in the shoulder or elbow throughout a competitive season may reduce throwing-related injuries requiring medical attention. Purpose: The purpose of this study was to determine if preseason FMS™ or SFMA scores were related to overuse severity scores in the shoulder or elbow during the preseason and competitive season. Study design: Cohort study. Methods: Sixty healthy, male, Division III collegiate baseball players (mean age = 20.1 ± 2.0 years) underwent preseason testing using the FMS™ shoulder mobility screen and the SFMA upper extremity patterns. Their scores were dichotomized into good and bad movement scores and were compared to weekly questionnaires registering overuse symptoms and pain severity in the shoulder or elbow during the season. Results: Poor FMS™ performance was associated with an increased likelihood of experiencing at least one overuse symptom during the preseason, independent of grade and position (adjusted odds ratio [OR] = 5.14, p = 0.03). Poor SFMA performance was associated with an increased likelihood of experiencing at least one overuse symptom during the preseason (adjusted OR = 6.10, p = 0.03) and during the competitive season (adjusted OR = 17.07, p = 0.03), independent of grade and position. Conclusion: FMS™ shoulder mobility and SFMA upper extremity pattern performance were related to the likelihood of experiencing overuse symptoms during a baseball season. Participants with poor FMS™ performance may be more likely to experience at least one overuse symptom in the shoulder or elbow during the preseason. Additionally, individuals with poor SFMA performance may be more likely to report overuse symptoms during the preseason or competitive season. Level of evidence: Level 3 PMID:29158957

  16. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution and determine which method is best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by their bias and MSE values, computed using an R program, and display the results in tables to facilitate the comparison.
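    For the maximum likelihood side of this comparison, the Rayleigh scale has a closed-form MLE, sigma_hat = sqrt(sum(x_i^2) / (2n)). The sketch below checks it on synthetic data; the Bayes estimators under the various loss functions compared in the paper are not reproduced here.

```python
import math
import random

def rayleigh_sigma_mle(sample):
    """Closed-form MLE of the Rayleigh scale: sigma_hat = sqrt(sum(x^2) / (2n))."""
    return math.sqrt(sum(x * x for x in sample) / (2 * len(sample)))

random.seed(1)
sigma = 2.0
# Inverse-CDF sampling: X = sigma * sqrt(-2 ln U) with U uniform on (0, 1].
data = [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
        for _ in range(50000)]
print(rayleigh_sigma_mle(data))  # close to the true scale sigma = 2
```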

  17. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
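    A simplified version of the per-pixel binary hypothesis test described above, assuming a known signal amplitude in additive white Gaussian noise (shown in the spatial domain for brevity; the paper applies the test to Fourier-transformed images). The log-likelihood ratio reduces to a threshold on a linear statistic, and the detection rate should far exceed the false-alarm rate.

```python
import random

def lrt_detect(pixel, signal, noise_sigma, threshold):
    """Pixel-wise likelihood ratio test for a known signal level s in additive
    Gaussian noise: log L1/L0 = (s*x - s^2/2) / sigma^2 > threshold."""
    log_lr = (signal * pixel - 0.5 * signal**2) / noise_sigma**2
    return log_lr > threshold

random.seed(2)
sigma, s, thresh = 1.0, 3.0, 1.0
noise_only = [random.gauss(0.0, sigma) for _ in range(1000)]
with_signal = [s + random.gauss(0.0, sigma) for _ in range(1000)]
pfa = sum(lrt_detect(x, s, sigma, thresh) for x in noise_only) / 1000
pd = sum(lrt_detect(x, s, sigma, thresh) for x in with_signal) / 1000
print(pfa, pd)  # the detection probability far exceeds the false-alarm rate
```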

  18. Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers

    PubMed Central

    Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.

    2012-01-01

    Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, and comparable estimation in the presence of low capture heterogeneity with a large sample size, when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559
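    The simplest relative of the multi-survey joint-likelihood MLE described above is the classic two-survey Lincoln-Petersen estimate, N_hat = n1 * n2 / m, where m counts the individuals recaptured in both surveys. A toy sketch with hypothetical animal IDs recovered from DNA profiles:

```python
def lincoln_petersen(sample1, sample2):
    """Two-survey capture-recapture abundance estimate: N_hat = n1 * n2 / m,
    with m the number of individuals observed in both surveys."""
    m = len(set(sample1) & set(sample2))
    return len(sample1) * len(sample2) / m

# Hypothetical animal IDs identified by DNA profiling in two batches.
s1 = ["a1", "a2", "a3", "a4", "a5", "a6"]
s2 = ["a4", "a5", "a6", "a7", "a8", "a9"]
print(lincoln_petersen(s1, s2))  # -> 12.0
```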

  19. Association between Clean Indoor Air Laws and Voluntary Smokefree Rules in Homes and Cars

    PubMed Central

    Cheng, Kai-Wen; Okechukwu, Cassandra A.; McMillen, Robert; Glantz, Stanton A.

    2013-01-01

    Objectives This study examines the influence of smokefree workplace, restaurant, and bar laws on the adoption of smokefree rules in homes and cars, and whether the adoptions of home and car smokefree rules are associated. Methods Bivariate probit models were used to jointly estimate the likelihood of living in a smokefree home and having a smokefree car as a function of law coverage and other variables. Household data are from the nationally representative Social Climate Survey of Tobacco Control 2001, 2002, and 2004–2009; clean indoor air law data come from the American Nonsmokers’ Rights Foundation Tobacco Control Laws Database. Results Both “full coverage” and “partial coverage” smokefree legislation are associated with an increased likelihood of having voluntary home and car smokefree rules compared with “no coverage”. The association between “full coverage” and smokefree rules in homes and cars is 5% and 4%, respectively, and the association between “partial coverage” and smokefree rules in homes and cars is 3% and 4%, respectively. There is a positive association between the adoption of home and car smokefree rules. Conclusions Clean indoor air laws provide the additional benefit of encouraging voluntary adoption of smokefree rules in homes and cars. PMID:24114562

  20. PET image reconstruction using multi-parametric anato-functional priors

    NASA Astrophysics Data System (ADS)

    Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.

    2017-08-01

    In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for the reconstruction of low-count PET data is also demonstrated, compared with standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors, including the Kaipio prior and the non-local Tikhonov prior with Bowsher and Gaussian similarity kernels, are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For the simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET-unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edges and PET-unique features and also an improved bias-variance performance.
In agreement with the simulation results, the clinical results also showed that the Gaussian prior with voxel-based feature vectors, the Bowsher and the joint Burg entropy priors were the best performing priors. However, for the FDG dataset with simulated tumours, the TV and proposed priors were capable of preserving the PET-unique tumours. Finally, an important outcome was the demonstration that the MAP reconstruction of a low-count FDG PET dataset using the proposed joint entropy prior can lead to comparable image quality to a conventional ML reconstruction with up to 5 times more counts. In conclusion, multi-parametric anato-functional priors provide a solution to address the pitfalls of the conventional priors and are therefore likely to increase the diagnostic confidence in MR-guided PET image reconstructions.
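
    The MAP reconstruction machinery can be illustrated with a generic one-step-late MAP-EM update on a toy 1D problem; the quadratic smoothness penalty below is only a stand-in for the anatomical and joint-entropy priors of the paper, and all matrices and activity values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D emission problem: random system matrix A, piecewise-constant
# activity x_true, noise-free Poisson means y = A @ x_true.
n_det, n_pix = 16, 8
A = rng.uniform(0.1, 1.0, size=(n_det, n_pix))
x_true = np.array([1.0, 1.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0])
y = A @ x_true

beta = 0.01                  # prior strength (small: likelihood dominates)
x = np.ones(n_pix)
sens = A.sum(axis=0)         # A^T 1, the sensitivity image
for _ in range(2000):
    ratio = y / (A @ x)
    # Gradient of a quadratic smoothness penalty, evaluated at the current
    # estimate ("one-step-late"); zero padding at the edges.
    neighbor_avg = np.convolve(x, [0.5, 0.0, 0.5], mode="same")
    x = x * (A.T @ ratio) / (sens + beta * (x - neighbor_avg))
```

    With beta = 0 this reduces to the plain ML-EM update; the prior enters only through the extra term in the denominator, which is where an anatomical or joint-entropy penalty gradient would be substituted.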

  1. Copula-based assessment of the relationship between flood peaks and flood volumes using information on historical floods by Bayesian Monte Carlo Markov Chain simulations

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Bacigál, Tomáš; Kohnová, Silvia

    2010-05-01

    Copula-based estimation methods for hydro-climatological extremes have been gaining increasing attention from researchers and practitioners in recent years. Unlike traditional estimation methods, which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible statistical tool that allows for modelling dependencies between two or more variables, such as flood peaks and flood volumes, without making strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for the estimation of the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. Therefore, it is advisable to use statistical methods that improve any part of the process of copula construction and result in more reliable design values of hydrological variables. The scarcity of the data sample, mostly in the extreme tail of the joint CDF, can be bypassed, e.g., by using a considerably larger amount of simulated data from rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In this paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Monte Carlo Markov Chain (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g. flood data from systematic measurements and historical flood records) in terms of a product of the corresponding likelihood functions. The MCMC algorithm, in turn, is a numerical approach for sampling from the likelihood distributions. Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in parameters and quantiles of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood protection of Prague, the capital city of the Czech Republic. The record length of the available flood data is 126 years from the period 1877-2002, while the flood event observed in 2002, which caused extensive damage and numerous casualties, is treated as a historical one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness-of-fit is evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists on the Vltava River.
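
    The role of the copula in joint peak-volume statements can be sketched with a Gumbel copula, a common choice for upper-tail-dependent flood variables; the parameter value below is illustrative, not fitted to the Vltava data.

```python
import math

def gumbel_copula(u, v, theta):
    # Gumbel copula CDF C(u, v); theta >= 1, and theta = 1 gives independence.
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_exceedance(u, v, theta):
    # P(U > u, V > v): peak and volume both exceed their marginal quantiles.
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# Marginal 100-year quantiles (u = v = 0.99) with moderate tail dependence.
p_and = joint_exceedance(0.99, 0.99, theta=2.0)
print(1.0 / p_and)   # joint "AND" return period, about 170 years here
```

    Under independence the joint "AND" return period would be 10 000 years; tail dependence makes simultaneous extremes far more frequent, which is exactly what the fitted copula quantifies.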

  2. Additivity and maximum likelihood estimation of nonlinear component biomass models

    Treesearch

    David L.R. Affleck

    2015-01-01

    Since Parresol's (2001) seminal paper on the subject, it has become common practice to develop nonlinear tree biomass equations so as to ensure compatibility among total and component predictions and to fit equations jointly using multi-step least squares (MSLS) methods. In particular, many researchers have specified total tree biomass models by aggregating the...

  3. Novel joint cupping clinical maneuver for ultrasonographic detection of knee joint effusions.

    PubMed

    Uryasev, Oleg; Joseph, Oliver C; McNamara, John P; Dallas, Apostolos P

    2013-11-01

    Knee effusions occur due to traumatic and atraumatic causes. Clinical diagnosis currently relies on several provocative techniques to demonstrate knee joint effusions. Portable bedside ultrasonography (US) is becoming an adjunct to the diagnosis of effusions. We hypothesized that a US approach with a clinical joint cupping maneuver increases sensitivity in identifying effusions compared with US alone. Using unembalmed cadaver knees, we injected fluid to create effusions of up to 10 mL. Each effusion volume was measured at a lateral transverse location with respect to the patella. For each effusion we applied a joint cupping maneuver from an inferior approach and re-measured the effusion. With increased volume of saline infusion, the mean depth of effusion on ultrasound imaging increased as well. Using a 2-mm cutoff, we visualized an effusion without the joint cupping maneuver at 2.5 mL and with the joint cupping technique at 1 mL. Mean effusion diameter increased on average 0.26 cm with the joint cupping maneuver compared to without it. The effusion depth was statistically different at 2.5 and 7.5 mL (P < .05). A joint cupping technique in combination with US is a valuable tool for assessing knee effusions, especially those at subclinical levels. Effusion measurements are complicated by uneven distribution of effusion fluid. A clinical joint cupping maneuver concentrates the fluid in one recess of the joint, increasing the likelihood of fluid detection using US. © 2013 Elsevier Inc. All rights reserved.

  4. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
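
    The open-loop counterpart of these structures is easy to sketch: for known data symbols, the likelihood is maximized by the angle of the correlation between the received samples and the symbols. The simulated BPSK example below is a generic illustration, not the article's loop structures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known BPSK symbols observed through a carrier with an unknown phase offset.
theta_true = 0.7                                   # radians (invented)
s = rng.choice([-1.0, 1.0], size=1000)
noise = 0.5 * (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
r = s * np.exp(1j * theta_true) + noise

# For known symbols, the likelihood is maximized by the angle of the
# correlation between received samples and symbols (s is real, so conj(s) = s).
theta_hat = np.angle(np.sum(r * s))
```

    A closed-loop structure replaces this batch estimate with a feedback loop whose error signal is proportional to the gradient of the same likelihood function.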

  5. The increased risk of joint venture promotes social cooperation.

    PubMed

    Wu, Te; Fu, Feng; Zhang, Yanling; Wang, Long

    2013-01-01

    The joint venture of many members is common in both the animal world and human society. In these public enterprises, highly cooperative groups are more likely to succeed, while groups with low cooperation may still succeed, though this is improbable. The existing literature mostly focuses on the traditional public goods game, in which cooperators create public wealth unconditionally and benefit all group members without bias. Here we introduce a model that addresses this public goods dilemma by incorporating the risk that public resource foraging fails. Risk-averse individuals tend to lead an autarkic life, while risk-preferring ones tend to participate in the risky public goods game. For participants, the group's success relies on its cooperativeness, with increasing contributions leading to an increasing likelihood of success. We introduce a function with one tunable parameter to describe the risk removal pattern and study three representative classes in detail. Analytical results show that the widely replicated population dynamics of cyclical dominance of loner, cooperator and defector disappear; most of the time loners act as saviors, though eventually they also disappear. Depending on how the group's success relies on its cooperativeness, either cooperators pervade the entire population or they coexist with defectors. Even in the latter case, cooperators still hold a salient numerical superiority, as some defectors survive by parasitizing. The harder it is for the joint venture to succeed, the higher the level of cooperation once cooperators win the evolutionary race. Our work may enrich the literature concerning risky public goods games.

  6. The Increased Risk of Joint Venture Promotes Social Cooperation

    PubMed Central

    Wu, Te; Fu, Feng; Zhang, Yanling; Wang, Long

    2013-01-01

    The joint venture of many members is common in both the animal world and human society. In these public enterprises, highly cooperative groups are more likely to succeed, while groups with low cooperation may still succeed, though this is improbable. The existing literature mostly focuses on the traditional public goods game, in which cooperators create public wealth unconditionally and benefit all group members without bias. Here we introduce a model that addresses this public goods dilemma by incorporating the risk that public resource foraging fails. Risk-averse individuals tend to lead an autarkic life, while risk-preferring ones tend to participate in the risky public goods game. For participants, the group's success relies on its cooperativeness, with increasing contributions leading to an increasing likelihood of success. We introduce a function with one tunable parameter to describe the risk removal pattern and study three representative classes in detail. Analytical results show that the widely replicated population dynamics of cyclical dominance of loner, cooperator and defector disappear; most of the time loners act as saviors, though eventually they also disappear. Depending on how the group's success relies on its cooperativeness, either cooperators pervade the entire population or they coexist with defectors. Even in the latter case, cooperators still hold a salient numerical superiority, as some defectors survive by parasitizing. The harder it is for the joint venture to succeed, the higher the level of cooperation once cooperators win the evolutionary race. Our work may enrich the literature concerning risky public goods games. PMID:23750204

  7. JRmGRN: Joint reconstruction of multiple gene regulatory networks with common hub genes using data from multiple tissues or conditions.

    PubMed

    Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong

    2018-04-30

    Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new graphical Gaussian model for joint reconstruction of multiple gene regulatory networks (JRmGRN), which highlights hub genes, using gene expression data from several tissues/conditions. Under the framework of the Gaussian graphical model, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated this as a convex optimization problem and solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data, and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. Proof of theorem, derivation of algorithm and supplementary data are available at Bioinformatics online.
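
    The single-network version of this penalized log-likelihood problem can be sketched with a plain graphical-lasso ADMM solver. JRmGRN's joint multi-network penalty with shared hubs is not reproduced here; the chain-structured example data are invented.

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, n_iter=200):
    # Maximizes  log det(Theta) - tr(S @ Theta) - lam * ||Theta||_1 (off-diag)
    # by ADMM; returns the sparse iterate Z.
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Theta-update has an analytic solution via eigendecomposition.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        Theta = Q @ np.diag((w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)) @ Q.T
        # Z-update: soft-threshold the off-diagonal entries.
        M = Theta + U
        Z = np.sign(M) * np.maximum(np.abs(M) - lam / rho, 0.0)
        np.fill_diagonal(Z, np.diag(M))
        U += Theta - Z
    return Z

# Invented example: data drawn from a chain-structured precision matrix,
# so only adjacent variables are conditionally dependent.
rng = np.random.default_rng(3)
P = np.eye(4) + np.diag([0.4] * 3, 1) + np.diag([0.4] * 3, -1)
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(P), size=2000)
Z = graphical_lasso_admm(np.cov(X.T), lam=0.1)
```

    The L1 penalty drives the conditionally independent pairs to exactly zero in Z; JRmGRN adds cross-network penalties on top of this per-network objective.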

  8. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas; 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities; 3. weighted average scenario analysis based on an expected event; and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.

  9. The presence of Waddell signs depends on age and gender, not diagnosis.

    PubMed

    Yoo, J U; McIver, T C; Hiratzka, J; Carlson, H; Carlson, N; Radoslovich, S S; Gernhart, T; Boshears, E; Kane, M S

    2018-02-01

    The aim of this study was to determine if positive Waddell signs were related to patients' demographics or to the perception of their quality of life. This prospective cross-sectional study included 479 adult patients with back pain from a university spine centre. Each completed SF-12 and Oswestry Disability Index (ODI) questionnaires and underwent standard spinal examinations to elicit Waddell signs. The relationship between Waddell signs and age, gender, ODI, Mental Component Score (MCS), and Physical Component Score (PCS) was determined. Of the 479 patients, 128 (27%) had at least one positive Waddell sign. There were significantly more women with two or more Waddell signs than men. The proportion of patients with at least one positive Waddell sign increased with age until 55 years, and then declined rapidly; none had a positive sign over the age of 75 years. Functional outcome scores were significantly worse in those with a single Waddell sign (p < 0.01). With one or more Waddell signs, patients' PCS and ODI scores indicated a perception of severe disability; with three or more Waddell signs, patients' MCS scores indicated severe disability. With five Waddell signs, ODI scores indicated that patients perceived themselves as crippled. Positive Waddell signs, a potential indicator of central sensitization, indicated a likelihood of functional limitations and an impaired quality of life, particularly in young women. Cite this article: Bone Joint J 2018;100-B:219-25. ©2018 The British Editorial Society of Bone & Joint Surgery.

  10. Functional mapping of quantitative trait loci associated with rice tillering.

    PubMed

    Liu, G F; Li, M; Wen, J; Du, Y; Zhang, Y-M

    2010-10-01

    Several biologically significant parameters that are related to rice tillering are closely associated with rice grain yield. Although identification of the genes that control rice tillering and therefore influence crop yield would be valuable for rice production management and genetic improvement, these genes remain largely unidentified. In this study, we carried out functional mapping of quantitative trait loci (QTLs) for rice tillering in 129 doubled haploid lines, which were derived from a cross between IR64 and Azucena. We measured the average number of tillers in each plot at seven developmental stages and fit the growth trajectory of rice tillering with the Wang-Lan-Ding mathematical model. Four biologically meaningful parameters in this model--the potential maximum for tiller number (K), the optimum tiller time (t(0)), and the increased rate (r), or the reduced rate (c) at the time of deviation from t(0)--were our defined variables for multi-marker joint analysis under the framework of penalized maximum likelihood, as well as composite interval mapping. We detected a total of 27 QTLs that accounted for 2.49-8.54% of the total phenotypic variance. Nine common QTLs across multi-marker joint analysis and composite interval mapping showed high stability, while one QTL was environment-specific and three were epistatic. We also identified several genomic segments that are associated with multiple traits. Our results describe the genetic basis of rice tiller development, enable further marker-assisted selection in rice cultivar development, and provide useful information for rice production management.
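
    The functional-mapping step of fitting a parametric growth curve to tiller counts can be sketched with an ordinary logistic curve as a simplified stand-in for the Wang-Lan-Ding model; the data points are invented, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative tiller counts at seven developmental stages (invented data).
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([1.2, 3.1, 7.0, 12.5, 16.8, 18.9, 19.6])

def logistic(t, K, r, t0):
    # K: potential maximum tiller number; r: growth rate; t0: inflection time.
    return K / (1.0 + np.exp(-r * (t - t0)))

(K_hat, r_hat, t0_hat), _ = curve_fit(logistic, t, y, p0=[20.0, 1.0, 4.0])
```

    In functional mapping, curve parameters such as K, r, and t0 (rather than raw counts at each stage) become the traits on which QTL effects are estimated.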

  11. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression for the likelihood function, and imposing no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
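
    A one-margin sketch of the idea: if study-specific sensitivities are Beta-distributed, the observed true-positive counts are marginally beta-binomial and can be fitted by maximum likelihood without specifying any joint distribution. The counts below are simulated, and `scipy.stats.betabinom` (SciPy >= 1.4) is assumed available.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

rng = np.random.default_rng(4)

# Simulated meta-analysis: study i reports k_i true positives out of n_i
# diseased subjects; study-specific sensitivities follow Beta(a, b), so the
# k_i are marginally beta-binomial (all values invented).
a_true, b_true = 6.0, 3.0            # marginal mean sensitivity = 6 / 9
n_i = rng.integers(30, 80, size=100)
k_i = betabinom.rvs(n_i, a_true, b_true, random_state=5)

def neg_marginal_loglik(theta):
    a, b = np.exp(theta)             # log-parameterization keeps a, b > 0
    return -np.sum(betabinom.logpmf(k_i, n_i, a, b))

fit = minimize(neg_marginal_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
mean_sens = a_hat / (a_hat + b_hat)
```

    The composite likelihood of the paper combines two such marginal likelihoods (sensitivity and specificity) instead of positing a joint model for the pair.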

  12. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  13. Development Risk Methodology for Whole Systems Trade Analysis

    DTIC Science & Technology

    2016-08-01

    ... JCIDS - Joint Capabilities Integration and Development System; L - Likelihood; MS - Milestone; O&S - Operations and Sustainment; P.95 - 95th ... and their consequences. These dimensions are: performance, unit cost, operations & sustainment (O&S) cost, development risk, and growth potential.

  14. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.

  15. Consideration of Collision "Consequence" in Satellite Conjunction Assessment and Risk Analysis

    NASA Technical Reports Server (NTRS)

    Hejduk, M.; Laporte, F.; Moury, M.; Newman, L.; Shepperd, R.

    2017-01-01

    Classic risk management theory requires the assessment of both likelihood and consequence of deleterious events. Satellite conjunction risk assessment has produced a highly-developed theory for assessing collision likelihood but holds a completely static solution for collision consequence, treating all potential collisions as essentially equally worrisome. This may be true for the survival of the protected asset, but the amount of debris produced by the potential collision, and therefore the degree to which the orbital corridor may be compromised, can vary greatly among satellite conjunctions. This study leverages present work on satellite collision modeling to develop a method by which it can be estimated, to a particular confidence level, whether a particular collision is likely to produce a relatively large or relatively small amount of resultant debris and how this datum might alter conjunction remediation decisions. The more general question of orbital corridor protection is also addressed, and a preliminary framework presented by which both collision likelihood and consequence can be jointly considered in the risk assessment process.

  16. Clinical Evaluation and Physical Exam Findings in Patients with Anterior Shoulder Instability.

    PubMed

    Lizzio, Vincent A; Meta, Fabien; Fidai, Mohsin; Makhni, Eric C

    2017-12-01

    The goal of this paper is to provide an overview of the evaluation of the patient with suspected or known anteroinferior glenohumeral instability. There is a high rate of recurrent subluxation or dislocation in young patients with a history of anterior shoulder dislocation, and recurrent instability increases the likelihood of further damage to the glenohumeral joint. Proper identification and treatment of anterior shoulder instability can dramatically reduce the rate of recurrent dislocation and prevent subsequent complications. Overall, the anterior release or surprise test demonstrates the best sensitivity and specificity for clinically diagnosing anterior shoulder instability, although other tests also have favorable sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and inter-rater reliabilities. Anterior shoulder instability is a relatively common injury in the young and athletic population. The combination of history-taking and the apprehension, relocation, release or surprise, anterior load, and anterior drawer exam maneuvers will optimize sensitivity and specificity for accurately diagnosing anterior shoulder instability in clinical practice.

  17. Audio Tracking in Noisy Environments by Acoustic Map and Spectral Signature.

    PubMed

    Crocco, Marco; Martelli, Samuele; Trucco, Andrea; Zunino, Andrea; Murino, Vittorio

    2018-05-01

    A novel method is proposed for generic target tracking by audio measurements from a microphone array. To cope with noisy environments characterized by persistent and high energy interfering sources, a classification map (CM) based on spectral signatures is calculated by means of a machine learning algorithm. Next, the CM is combined with the acoustic map, describing the spatial distribution of sound energy, in order to obtain a cleaned joint map in which contributions from the disturbing sources are removed. A likelihood function is derived from this map and fed to a particle filter yielding the target location estimation on the acoustic image. The method is tested on two real environments, addressing both speaker and vehicle tracking. The comparison with a couple of trackers, relying on the acoustic map only, shows a sharp improvement in performance, paving the way to the application of audio tracking in real challenging environments.
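
    The final estimation stage can be sketched with a generic 1-D bootstrap particle filter in which a simple Gaussian observation likelihood stands in for the paper's map-derived likelihood; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

n_steps, n_particles = 30, 1000
q_std, r_std = 0.3, 0.5              # process / observation noise std

# Ground truth: target drifts +1 per step; observations are noisy positions.
truth = np.cumsum(np.full(n_steps, 1.0) + rng.normal(0.0, q_std, n_steps))
obs = truth + rng.normal(0.0, r_std, n_steps)

particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for z in obs:
    particles = particles + 1.0 + rng.normal(0.0, q_std, n_particles)  # predict
    w = np.exp(-0.5 * ((z - particles) / r_std) ** 2)   # observation likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))             # posterior-mean estimate
    particles = particles[rng.choice(n_particles, n_particles, p=w)]   # resample
```

    In the paper's setting, the weighting step would evaluate each particle's location against the cleaned joint acoustic/classification map rather than against a Gaussian.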

  18. Joint Inference of Population Assignment and Demographic History

    PubMed Central

    Choi, Sang Chul; Hey, Jody

    2011-01-01

    A new approach to assigning individuals to populations using genetic data is described. Most existing methods work by maximizing Hardy–Weinberg and linkage equilibrium within populations, neither of which will apply for many demographic histories. By including a demographic model, within a likelihood framework based on coalescent theory, we can jointly study demographic history and population assignment. Genealogies and population assignments are sampled from a posterior distribution using a general isolation-with-migration model for multiple populations. A measure of partition distance between assignments facilitates not only the summary of a posterior sample of assignments, but also the estimation of the posterior density for the demographic history. It is shown that joint estimates of assignment and demographic history are possible, including estimation of population phylogeny for samples from three populations. The new method is compared to results of a widely used assignment method, using simulated and published empirical data sets. PMID:21775468

  19. Two‐phase designs for joint quantitative‐trait‐dependent and genotype‐dependent sampling in post‐GWAS regional sequencing

    PubMed Central

    Espin‐Garcia, Osvaldo; Craiu, Radu V.

    2017-01-01

    ABSTRACT We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS-SNP (where SNP is single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify the efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulation. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared to marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. PMID:29239496

  20. 49 CFR Appendix A to Part 211 - Statement of Agency Policy Concerning Waivers Related to Shared Use of Trackage or Rights-of-Way...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... not designed to be used in situations where there is a reasonable likelihood of a collision with much... rail crossing at grade, a shared method of train control, or shared highway-rail grade crossings. 4.... You should explain the nature of such simultaneous joint use, the system of train control, the...

  1. Bayesian modelling of the emission spectrum of the Joint European Torus Lithium Beam Emission Spectroscopy system.

    PubMed

    Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C

    2016-02-01

    A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of Li line without doing a separate background subtraction through modulation of the Li beam.
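    Because the model above is linear with a Gaussian prior and a Gaussian likelihood, the posterior is available in closed form. A minimal numpy sketch of such a Bayesian linear inversion; the five-channel line shape, prior width, and noise level are invented stand-ins for the real instrumental function and photon statistics:

```python
import numpy as np

# Hypothetical forward model: each spectral channel sees a weighted Li-line
# contribution plus a flat background offset (numbers are illustrative).
line_shape = np.array([0.05, 0.4, 1.0, 0.4, 0.05])   # assumed instrumental function
A = np.column_stack([line_shape, np.ones(5)])        # columns: [line intensity, offset]

rng = np.random.default_rng(0)
x_true = np.array([30.0, 5.0])                       # true [line, offset] amplitudes
sigma_noise = 0.5
y = A @ x_true + rng.normal(0.0, sigma_noise, size=5)

# Gaussian prior and Gaussian likelihood => analytic Gaussian posterior.
m0 = np.zeros(2)
S0inv = np.eye(2) / 100.0**2                         # broad prior precision
Seinv = np.eye(5) / sigma_noise**2                   # noise precision

S_post = np.linalg.inv(S0inv + A.T @ Seinv @ A)      # posterior covariance
m_post = S_post @ (S0inv @ m0 + A.T @ Seinv @ y)     # posterior mean
line_intensity = float(m_post[0])
line_sigma = float(np.sqrt(S_post[0, 0]))            # uncertainty on the line
```

    The posterior mean recovers the line intensity and the posterior covariance gives its uncertainty in one linear-algebra step, which is the appeal of the analytic inversion over iterative sampling.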

  2. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  3. Association between clean indoor air laws and voluntary smokefree rules in homes and cars.

    PubMed

    Cheng, Kai-Wen; Okechukwu, Cassandra A; McMillen, Robert; Glantz, Stanton A

    2015-03-01

    This study examines the influence that smokefree workplaces, restaurants and bars have on the adoption of smokefree rules in homes and cars, and whether adopting smokefree rules in homes is associated with adopting them in cars. Bivariate probit models were used to jointly estimate the likelihood of living in a smokefree home and having a smokefree car as a function of law coverage and other variables. Household data were obtained from the nationally representative Social Climate Survey of Tobacco Control 2001, 2002 and 2004-2009; clean indoor air law data were from the American Nonsmokers' Rights Foundation Tobacco Control Laws Database. 'Full coverage' and 'partial coverage' smokefree legislation is associated with an increased likelihood of having voluntary home and car smokefree rules compared with 'no coverage'. The association between 'full coverage' and smokefree rules in homes and cars is 5% and 4%, respectively, and the association between 'partial coverage' and smokefree rules in homes and cars is 3% and 4%, respectively. There is a positive association between the adoption of smokefree rules in homes and cars. Clean indoor air laws provide the additional benefit of encouraging voluntary adoption of smokefree rules in homes and cars. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
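    The joint estimation described above rests on evaluating bivariate normal rectangle probabilities. A rough pure-Python sketch of a bivariate probit log-likelihood, with a toy data set and illustrative coefficients (not the survey data or estimates from the study):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def bvn_cdf(a, b, rho, steps=4000):
    """P(X <= a, Y <= b) for a standard bivariate normal, by 1-D quadrature:
    integrate phi(x) * Phi((b - rho*x) / sqrt(1 - rho^2)) over x in (-inf, a]."""
    lo = -8.0
    if a <= lo:
        return 0.0
    h = (min(a, 8.0) - lo) / steps
    denom = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0          # trapezoid weights
        total += w * phi(x) * Phi((b - rho * x) / denom)
    return total * h

def cell_prob(y1, y2, xb1, xb2, rho):
    """P(Y1=y1, Y2=y2) when Yk = 1{xb_k + eps_k > 0}, eps ~ bivariate normal."""
    p11 = bvn_cdf(xb1, xb2, rho)
    if y1 == 1 and y2 == 1:
        return p11
    if y1 == 1:
        return Phi(xb1) - p11
    if y2 == 1:
        return Phi(xb2) - p11
    return 1.0 - Phi(xb1) - Phi(xb2) + p11

# Joint log-likelihood on toy rows of (home rule, car rule, law coverage).
data = [(1, 1, 0.4), (0, 1, -0.2), (1, 0, 0.9), (0, 0, -1.1)]
b_home, b_car, rho = (0.5, 0.5), (0.2, 0.6), 0.4     # (intercept, coverage effect)
loglik = 0.0
for y_home, y_car, law in data:
    loglik += math.log(cell_prob(y_home, y_car,
                                 b_home[0] + b_home[1] * law,
                                 b_car[0] + b_car[1] * law, rho))
```

    Maximizing `loglik` over the coefficients and the error correlation `rho` is what "jointly estimate" means here; a positive fitted `rho` is the positive home-car association the abstract reports.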

  4. Working on a Standard Joint Unit: A pilot test.

    PubMed

    Casajuana, Cristina; López-Pelayo, Hugo; Mercedes Balcells, María; Miquel, Laia; Teixidó, Lídia; Colom, Joan; Gual, Antoni

    2017-09-29

    Assessing cannabis consumption remains difficult because no reliable registration systems exist. We tested the feasibility of establishing a Standard Joint Unit (SJU) that accounts for the main cannabinoids with health implications, using a naturalistic approach. Methodology: pilot study with current cannabis users from four areas of Barcelona: universities, nightclubs, an out-patient mental health service, and cannabis associations. We designed and administered a questionnaire on cannabis use patterns and determined participants' willingness to donate a joint for analysis. Descriptive statistics were used to analyze the data. Forty volunteers answered the questionnaire (response rate 95%); most were men (72.5%) and young adults (median age 24.5 years; IQR 8.75 years) who consume daily or nearly daily (70%). Most participants consume marijuana (85%) and roll their joints with a median of 0.25 g of marijuana. Two out of three (67.5%) stated they were willing to donate a joint. Obtaining an SJU with the planned methodology proved feasible. Pre-testing led to improvements in the questionnaire and to remuneration to incentivize donations. Establishing an SJU is essential to improve our knowledge of cannabis-related outcomes.

  5. Compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation.

    PubMed

    Christensen, Ole F

    2012-12-03

    Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. 
Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.
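    As a toy illustration of the first ingredient above, a marker-based relationship matrix built with all allele frequencies fixed at 0.5; the genotype matrix, its dimensions, and the 0/1/2 allele-count coding are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy genotypes: 5 animals x 200 markers, coded as 0/1/2 copies of an allele.
M = rng.binomial(2, 0.5, size=(5, 200)).astype(float)

# Marker-based relationship matrix with every allele frequency set to 0.5:
# center the 0/1/2 codes at 1 and scale by m * 2 * p * (1 - p) = m / 2.
Z = M - 1.0
m = M.shape[1]
G = (Z @ Z.T) / (m / 2.0)   # symmetric, diagonal near 1 for unrelated animals
```

    In the method above this fixed-frequency G is left alone, and the gamma parameter and a scaling factor, estimated from the joint likelihood of phenotypes and genotypes, absorb the compatibility adjustment on the pedigree side.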

  6. BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.

    PubMed

    Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong

    2018-04-19

    ChIP-seq experiments aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling, and currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data, that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of the false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. Contact: pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.

  7. Copula Regression Analysis of Simultaneously Recorded Frontal Eye Field and Inferotemporal Spiking Activity during Object-Based Working Memory

    PubMed Central

    Hu, Meng; Clark, Kelsey L.; Gong, Xiajing; Noudoost, Behrad; Li, Mingyao; Moore, Tirin

    2015-01-01

    Inferotemporal (IT) neurons are known to exhibit persistent, stimulus-selective activity during the delay period of object-based working memory tasks. Frontal eye field (FEF) neurons show robust, spatially selective delay period activity during memory-guided saccade tasks. We present a copula regression paradigm to examine neural interaction of these two types of signals between areas IT and FEF of the monkey during a working memory task. This paradigm is based on copula models that can account for both marginal distribution over spiking activity of individual neurons within each area and joint distribution over ensemble activity of neurons between areas. Considering the popular GLMs as marginal models, we developed a general and flexible likelihood framework that uses the copula to integrate separate GLMs into a joint regression analysis. Such joint analysis essentially leads to a multivariate analog of the marginal GLM theory and hence efficient model estimation. In addition, we show that Granger causality between spike trains can be readily assessed via the likelihood ratio statistic. The performance of this method is validated by extensive simulations, and compared favorably to the widely used GLMs. When applied to spiking activity of simultaneously recorded FEF and IT neurons during working memory task, we observed significant Granger causality influence from FEF to IT, but not in the opposite direction, suggesting the role of the FEF in the selection and retention of visual information during working memory. The copula model has the potential to provide unique neurophysiological insights about network properties of the brain. PMID:26063909

  8. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. 
The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
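    The POS metric itself reduces to the probability mass of the joint criterion distribution inside the target region, which is straightforward to estimate by Monte Carlo. A sketch with two invented, negatively correlated criteria (all numbers are made up, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative joint distribution over two criteria of a design concept:
# cruise range [nmi] and unit cost [$M]; negatively correlated by assumption.
mean = np.array([3400.0, 95.0])
cov = np.array([[200.0**2, -0.5 * 200.0 * 8.0],
                [-0.5 * 200.0 * 8.0, 8.0**2]])
samples = rng.multivariate_normal(mean, cov, size=100_000)

# Probability of Success: chance that ALL criteria land in their target ranges
# (range of at least 3200 nmi AND cost of at most $100M, here).
ok = (samples[:, 0] >= 3200.0) & (samples[:, 1] <= 100.0)
pos = float(ok.mean())
```

    Because `pos` is a single scalar in [0, 1], any standard single-objective optimizer can drive the controllable design variables to maximize it, which is the simplification the paragraph above describes.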

  9. [Range of Hip Joint Motion and Weight of Lower Limb Function under 3D Dynamic Marker].

    PubMed

    Xia, Q; Zhang, M; Gao, D; Xia, W T

    2017-12-01

    To explore the range of the reasonable weight coefficient of the hip joint in lower limb function. With the hip joints of healthy volunteers under normal conditions or fixed at three different positions (functional, flexed, and extended), the movements of the lower limbs were recorded by a LUKOtronic motion capture and analysis system. The degree of lower limb function loss was calculated using the Fugl-Meyer lower limb function assessment form when the hip joints were fixed at the aforementioned positions. One-way analysis of variance and Tamhane's T2 method were used to perform the statistical analysis and calculate the range of the reasonable weight coefficient of the hip joint. There were significant differences in the degree of lower limb function loss between the hip joints fixed at the flexed and extended positions and at the functional position, whereas the differences between the flexed and extended positions were not statistically significant. In the 95% confidence interval, the reasonable weight coefficient of the hip joint in lower limb function was between 61.05% and 73.34%. Beyond confirming the reasonable weight coefficient, the effects of functional and non-functional positions on the degree of lower limb function loss should also be considered when assessing hip joint function loss. Copyright© by the Editorial Department of Journal of Forensic Medicine

  10. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
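    A drastically simplified stand-in for such a formal likelihood, using a lag-1 autocorrelated, heteroscedastic Gaussian error model in place of the SEP density; the function name, parameters, and data are all illustrative, not from the paper:

```python
import math

def formal_loglik(obs, sim, phi, sigma0, sigma1):
    """Simplified formal log-likelihood: residuals are decorrelated with an
    AR(1) filter (coefficient phi) and scored under a Gaussian whose standard
    deviation grows linearly with the simulated value (heteroscedasticity)."""
    resid = [o - s for o, s in zip(obs, sim)]
    ll = 0.0
    prev = 0.0
    for t, (e, s) in enumerate(zip(resid, sim)):
        a = e - (phi * prev if t > 0 else 0.0)   # innovation after AR(1) filter
        sd = sigma0 + sigma1 * abs(s)            # error sd scales with prediction
        ll += -0.5 * math.log(2 * math.pi * sd * sd) - 0.5 * (a / sd) ** 2
        prev = e
    return ll

# Toy observed vs. simulated nitrate concentrations (made-up numbers).
obs = [2.1, 2.5, 3.0, 2.8, 3.4]
sim = [2.0, 2.4, 2.9, 3.0, 3.2]
ll = formal_loglik(obs, sim, phi=0.5, sigma0=0.1, sigma1=0.05)
```

    In the full approach this density (with the SEP replacing the Gaussian) is what the MCMC sampler evaluates at each proposed combination of parameters and rainfall-input multipliers.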

  11. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
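    A compact sketch of the idea for a small (7,4) Hamming code: bit positions are visited in order of reliability, the heuristic is the best correlation still attainable from the remaining positions (admissible, so the first completed codeword popped is maximally likely), and branches matching no codeword prefix are pruned. The generator matrix and received vector are illustrative, not the implementation described in the article:

```python
import heapq
import itertools

# All 16 codewords of a (7,4) Hamming code from a generator matrix (mod 2).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codewords = [tuple(sum(m * G[k][j] for k, m in enumerate(msg)) % 2 for j in range(7))
             for msg in itertools.product([0, 1], repeat=4)]

def correlation(cw, r):
    # BPSK correlation metric: maximizing it is ML decoding on an AWGN channel.
    return sum((1 - 2 * b) * ri for b, ri in zip(cw, r))

def astar_ml_decode(r):
    """A* over a binary tree whose levels are bit positions sorted by
    reliability |r_i|; heuristic = best possible gain from remaining bits."""
    n = len(r)
    order = sorted(range(n), key=lambda i: -abs(r[i]))        # most reliable first
    perm_cws = [tuple(cw[i] for i in order) for cw in codewords]
    suffix_best = [0.0] * (n + 1)                             # admissible heuristic
    for d in range(n - 1, -1, -1):
        suffix_best[d] = suffix_best[d + 1] + abs(r[order[d]])
    heap = [(-suffix_best[0], (), 0.0)]                       # (-(g+h), prefix, g)
    while heap:
        neg_f, prefix, g = heapq.heappop(heap)
        d = len(prefix)
        if d == n:                                            # first goal popped is ML
            cw = [0] * n
            for k, i in enumerate(order):                     # undo the permutation
                cw[i] = prefix[k]
            return tuple(cw)
        for bit in (0, 1):
            new = prefix + (bit,)
            if not any(cw[:d + 1] == new for cw in perm_cws):
                continue                                      # no codeword has this prefix
            g2 = g + (1 - 2 * bit) * r[order[d]]
            heapq.heappush(heap, (-(g2 + suffix_best[d + 1]), new, g2))

received = [0.93, -1.17, 0.28, 1.02, -0.81, 1.24, -0.46]      # soft BPSK observations
ml_codeword = astar_ml_decode(received)
```

    At this toy scale the prefix check is a linear scan over 16 codewords; for the 64-bit codes mentioned above, the payoff of A* is that the reliability ordering and heuristic let it expand only a tiny fraction of the 2^k codewords that exhaustive search would score.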

  12. How much to trust the senses: Likelihood learning

    PubMed Central

    Sato, Yoshiyuki; Kording, Konrad P.

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, Newton's method, a method of scoring, and modifications of these procedures for locating a local maximum of the log-likelihood function are discussed.
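    The familiar EM special case of such an iterative procedure, for a two-component mixture of univariate normals; the synthetic data and starting values are arbitrary:

```python
import math
import random

random.seed(3)
# Synthetic sample from a two-component normal mixture (parameters made up).
data = ([random.gauss(0.0, 1.0) for _ in range(300)] +
        [random.gauss(4.0, 1.0) for _ in range(200)])

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Each iteration increases the log-likelihood toward a local maximum.
w, mu1, mu2, sd1, sd2 = 0.5, -1.0, 5.0, 1.5, 1.5
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each point.
    resp = [w * norm_pdf(x, mu1, sd1) /
            (w * norm_pdf(x, mu1, sd1) + (1 - w) * norm_pdf(x, mu2, sd2))
            for x in data]
    # M-step: responsibility-weighted maximum likelihood updates.
    n1 = sum(resp)
    n2 = len(data) - n1
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
    sd1 = math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1)
    sd2 = math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2)
    w = n1 / len(data)
```

    Like the procedures discussed in the paper, this converges only to a local maximum of the log-likelihood, so the starting values matter.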

  14. Statistical inference for noisy nonlinear ecological dynamic systems.

    PubMed

    Wood, Simon N

    2010-08-26

    Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
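    The synthetic-likelihood recipe can be sketched in a few lines: simulate summary statistics at a candidate parameter, fit a Gaussian to them, and score the observed summaries under that Gaussian. Here the "simulator" is a trivial stand-in (normal draws, nothing like a blowfly model) with the sample mean and standard deviation as the two summary statistics:

```python
import math
import random

random.seed(7)

def simulate(theta, n=100):
    # Toy stochastic model that can only be run forward: observations N(theta, 1).
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summaries(y):
    # Phase-insensitive summary statistics: sample mean and sd.
    m = sum(y) / len(y)
    v = sum((x - m) ** 2 for x in y) / (len(y) - 1)
    return (m, math.sqrt(v))

def synthetic_loglik(theta, s_obs, n_sim=200):
    """Fit a 2-D Gaussian to simulated summaries and evaluate the observed
    summaries under it (the 'synthetic likelihood' of the method above)."""
    sims = [summaries(simulate(theta)) for _ in range(n_sim)]
    mu = [sum(s[i] for s in sims) / n_sim for i in (0, 1)]
    c = [[sum((s[i] - mu[i]) * (s[j] - mu[j]) for s in sims) / (n_sim - 1)
          for j in (0, 1)] for i in (0, 1)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det], [-c[1][0] / det, c[0][0] / det]]
    d = [s_obs[0] - mu[0], s_obs[1] - mu[1]]
    quad = sum(d[i] * inv[i][j] * d[j] for i in (0, 1) for j in (0, 1))
    return -0.5 * (quad + math.log(det) + 2 * math.log(2 * math.pi))

s_obs = summaries(simulate(2.0))       # "observed" data generated at theta = 2
ll_true = synthetic_loglik(2.0, s_obs)
ll_far = synthetic_loglik(5.0, s_obs)  # a poor parameter value scores far lower
```

    In the full method this function is evaluated inside an MCMC sampler over theta; the key property, visible even in this toy, is that it requires only forward simulation, never the intractable joint density of the data.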

  15. Exact Calculation of the Joint Allele Frequency Spectrum for Isolation with Migration Models.

    PubMed

    Kern, Andrew D; Hey, Jody

    2017-09-01

    Population genomic datasets collected over the past decade have spurred interest in developing methods that can utilize massive numbers of loci for inference of demographic and selective histories of populations. The allele frequency spectrum (AFS) provides a convenient statistic for such analysis, and, accordingly, much attention has been paid to predicting theoretical expectations of the AFS under a number of different models. However, to date, exact solutions for the joint AFS of two or more populations under models of migration and divergence have not been found. Here, we present a novel Markov chain representation of the coalescent on the state space of the joint AFS that allows for rapid, exact calculation of the joint AFS under isolation with migration (IM) models. In turn, we show how our Markov chain method, in the context of composite likelihood estimation, can be used for accurate inference of parameters of the IM model using SNP data. Lastly, we apply our method to recent whole-genome datasets from African Drosophila melanogaster. Copyright © 2017 Kern and Hey.
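    Once an expected joint AFS is in hand, the composite likelihood of SNP data is a multinomial-style sum over joint-frequency classes. A sketch with two invented candidate spectra for two populations of two sampled chromosomes each (the probabilities are made up for illustration, not computed from the Markov chain method of the paper):

```python
import math

def normalize(m):
    t = sum(sum(row) for row in m)
    return [[x / t for x in row] for row in m]

# Hypothetical expected joint AFS: entry [i][j] = probability that a SNP has
# derived-allele count i in population 1 and j in population 2. Monomorphic
# corners (0,0) and (2,2) are excluded (probability 0).
afs_high_migration = normalize([[0.0, 10, 4], [10, 8, 10], [4, 10, 0.0]])
afs_low_migration  = normalize([[0.0, 14, 1], [14, 2, 14], [1, 14, 0.0]])

# Observed SNP counts in each joint-frequency class (also invented).
counts = [[0, 52, 19], [48, 41, 50], [22, 47, 0]]

def composite_loglik(counts, afs):
    # SNPs are treated as independent, so each cell contributes n_ij * log p_ij.
    return sum(n * math.log(p)
               for crow, prow in zip(counts, afs)
               for n, p in zip(crow, prow) if n > 0)

ll_high = composite_loglik(counts, afs_high_migration)
ll_low = composite_loglik(counts, afs_low_migration)
```

    Comparing such composite log-likelihoods across candidate IM parameter values (each implying a different expected joint AFS) is how the parameter inference in the paper proceeds; here the counts were drawn to resemble the high-migration spectrum, so it scores higher.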

  16. Measurement of the top-quark mass in all-hadronic decays in pp collisions at CDF II.

    PubMed

    Aaltonen, T; Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Budroni, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carillo, S; Carlsmith, D; Carosi, R; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Cyr, D; Daronco, S; Datta, M; D'Auria, S; Davies, T; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; Dituro, P; Dörr, C; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Foland, A; Forrester, S; 
Foster, G W; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Grundler, U; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kovalev, A; Kraan, A C; Kraus, J; Kravchenko, I; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; 
Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Manca, G; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moore, R; Morello, M; Fernandez, P Movilla; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nagano, A; Naganoma, J; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ranjan, N; Rappoccio, S; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Saltzberg, D; Sánchez, C; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shapiro, M D; Shears, T; Shepard, P F; Sherman, 
D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Sjolin, J; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Veramendi, G; Veszpremi, V; Vidal, R; Vila, I; Vilar, R; Vine, T; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, R; Wang, S M; Warburton, A; Waschke, S; Waters, D; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S

    2007-04-06

    We present a measurement of the top-quark mass Mtop in the all-hadronic decay channel tt̄ → W⁺bW⁻b̄ → q₁q̄₂bq₃q̄₄b̄. The analysis is performed using 310 pb⁻¹ of √s = 1.96 TeV pp̄ collisions collected with the CDF II detector using a multijet trigger. The mass measurement is based on an event-by-event likelihood which depends on both the sample purity and the value of the top-quark mass, using the 90 possible jet-to-parton assignments in the six-jet final state. The joint likelihood of 290 selected events yields a value of Mtop = 177.1 ± 4.9(stat) ± 4.7(syst) GeV/c².
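
    The paper's per-event likelihood folds in sample purity and all 90 jet-to-parton assignments; the core joint-likelihood idea, however, is just that the sample likelihood is the product of per-event likelihoods. A minimal sketch, assuming a hypothetical Gaussian per-event mass resolution rather than the paper's actual templates:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def event_log_likelihood(m_top, m_reco, resolution=12.0):
    # Hypothetical per-event likelihood: the reconstructed mass is smeared
    # around the true top mass by a Gaussian detector resolution.
    return -0.5 * ((m_reco - m_top) / resolution) ** 2

def joint_neg_log_likelihood(m_top, reco_masses):
    # The joint likelihood of the sample is the product of per-event
    # likelihoods, i.e. the sum of per-event log-likelihoods.
    return -np.sum([event_log_likelihood(m_top, m) for m in reco_masses])

rng = np.random.default_rng(0)
reco_masses = rng.normal(177.1, 12.0, size=290)  # 290 events, as in the paper

fit = minimize_scalar(joint_neg_log_likelihood, bounds=(150.0, 200.0),
                      args=(reco_masses,), method="bounded")
print(f"fitted Mtop = {fit.x:.1f} GeV/c^2")
```

    For a Gaussian likelihood with fixed width, the maximum-likelihood mass is exactly the sample mean; the real analysis replaces the Gaussian with purity- and assignment-dependent templates.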

  17. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  18. Latent class joint model of ovarian function suppression and DFS for premenopausal breast cancer patients.

    PubMed

    Zhang, Jenny J; Wang, Molin

    2010-09-30

    Breast cancer is the leading cancer in women of reproductive age; more than a quarter of women diagnosed with breast cancer in the US are premenopausal. A common adjuvant treatment for this patient population is chemotherapy, which has been shown to cause premature menopause and infertility with serious consequences to quality of life. Luteinizing-hormone-releasing hormone (LHRH) agonists, which induce temporary ovarian function suppression (OFS), have been shown to be a useful alternative to chemotherapy in the adjuvant setting for estrogen-receptor-positive breast cancer patients. LHRH agonists have the potential to preserve fertility after treatment, thus reducing the negative effects on a patient's reproductive health. However, little is known about the association between a patient's underlying degree of OFS and disease-free survival (DFS) after receiving LHRH agonists. Specifically, we are interested in whether patients with lower underlying degrees of OFS (i.e. higher estrogen production) after taking LHRH agonists are at a higher risk for late breast cancer events. In this paper, we propose a latent class joint model (LCJM) to analyze a data set from the International Breast Cancer Study Group (IBCSG) Trial VIII to investigate the association between OFS and DFS. Analysis of this data set is challenging due to the fact that the main outcome of interest, OFS, is unobservable and the available surrogates for this latent variable involve masked event and cured proportions. We employ a likelihood approach and the EM algorithm to obtain parameter estimates and present results from the IBCSG data analysis.

  19. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
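
    The decoding step described here is Bayes' rule applied to a filter population whose on- and off-edge likelihoods factorize into per-filter terms. A minimal sketch with hypothetical Gaussian per-filter likelihoods (the paper uses a customized parametric model):

```python
import numpy as np
from scipy.stats import norm

def edge_probability(filter_values, on_pdfs, off_pdfs, prior_edge=0.05):
    """Posterior edge probability from a population of filters whose
    responses are assumed independent, so the joint on- and off-edge
    likelihoods factorize into per-filter terms."""
    log_on = sum(np.log(p(v)) for p, v in zip(on_pdfs, filter_values))
    log_off = sum(np.log(p(v)) for p, v in zip(off_pdfs, filter_values))
    # Bayes' rule in log-odds form; the sigmoid converts back to probability.
    log_odds = np.log(prior_edge / (1 - prior_edge)) + log_on - log_off
    return 1.0 / (1.0 + np.exp(-log_odds))

# Two toy filters: responses tend to be high on an edge, near zero off it.
on_pdfs = [norm(1.0, 0.5).pdf, norm(0.8, 0.5).pdf]
off_pdfs = [norm(0.0, 0.5).pdf, norm(0.0, 0.5).pdf]
p = edge_probability([1.1, 0.9], on_pdfs, off_pdfs)
print(f"edge probability = {p:.3f}")
```

    Because the log-odds is a sum of per-filter contributions, each filter's influence is additive, matching the abstract's description of edge probability as a sum of surrounding filter influences.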

  20. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise because both the within-study correlation and the between-study heterogeneity must be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
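
    The closed-form likelihood mentioned here comes from integrating the study-specific probability analytically against a Beta prior, giving the beta-binomial marginal. A minimal sketch of that marginal log-likelihood on toy meta-analysis counts (illustrative data, not from the paper):

```python
import numpy as np
from scipy.special import betaln, gammaln

def beta_binomial_loglik(y, n, alpha, beta):
    # Closed-form marginal beta-binomial log-likelihood: the study-specific
    # event probability is integrated out against a Beta(alpha, beta),
    # so no transformation or link function is needed.
    return (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
            + betaln(y + alpha, n - y + beta) - betaln(alpha, beta))

# Toy data: event counts y out of n subjects in four studies.
y = np.array([12, 30, 7, 22])
n = np.array([40, 85, 25, 60])
ll = beta_binomial_loglik(y, n, alpha=2.0, beta=4.0).sum()
print(f"log-likelihood = {ll:.3f}")
```

    In the paper's composite-likelihood approach, each of the two outcomes contributes such a marginal term, sidestepping the joint distribution of the bivariate study-specific probabilities.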

  1. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum indeed depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimations. In this work, the uncertainties are characterized adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then we explore the joint a-posteriori probability density function associated with the cost function around this minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost-function. The numerical integration of the pdf finally provides mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
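
    The global-exploration step described above (deterministic local minimization combined with random jumps) is available as scipy's basin-hopping routine. A minimal sketch fitting a Brune-type spectrum to synthetic data; the spectral form, noise level, and starting point are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import basinhopping

def brune_spectrum(f, omega0, fc, gamma):
    # Generalized Brune (1970) source spectrum: flat level omega0 at low
    # frequency, decaying as f**-gamma above the corner frequency fc.
    return omega0 / (1.0 + (f / fc) ** gamma)

f = np.logspace(-1, 2, 200)
observed = brune_spectrum(f, omega0=3.0, fc=5.0, gamma=2.0)
observed *= np.exp(0.05 * np.random.default_rng(1).normal(size=f.size))

def misfit(theta):
    # L2-norm misfit in log amplitude, as an illustrative cost function.
    omega0, fc, gamma = theta
    if omega0 <= 0 or fc <= 0 or gamma <= 0:
        return 1e9  # keep the search in the physical region
    model = brune_spectrum(f, omega0, fc, gamma)
    return np.sum((np.log(observed) - np.log(model)) ** 2)

# Basin-hopping: repeated local minimizations, connected by random jumps,
# to locate the absolute minimum of the cost function.
x0 = [1.0, 1.0, 1.5]
result = basinhopping(misfit, x0, niter=50)
print("best-fit (omega0, fc, gamma):", np.round(result.x, 2))
```

    The a-posteriori pdf and correlation matrix would then be evaluated on a grid centered at `result.x`, as the abstract describes.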

  2. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  3. Multiple robustness in factorized likelihood models.

    PubMed

    Molina, J; Rotnitzky, A; Sued, M; Robins, J M

    2017-09-01

    We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.

  4. What the Milky Way's dwarfs tell us about the Galactic Center extended gamma-ray excess

    NASA Astrophysics Data System (ADS)

    Keeley, Ryan E.; Abazajian, Kevork N.; Kwa, Anna; Rodd, Nicholas L.; Safdi, Benjamin R.

    2018-05-01

    The Milky Way's Galactic Center harbors a gamma-ray excess that is a candidate signal of annihilating dark matter. Dwarf galaxies remain predominantly dark in their expected commensurate emission. In this work we quantify the degree of consistency between these two observations through a joint likelihood analysis. In doing so we incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties in dark matter annihilation models for the Galactic Center extended gamma-ray excess (GCE) detected by the Fermi Gamma-Ray Space Telescope. The preferred range of annihilation rates and masses expands when including these unknowns. Even so, using two recent determinations of the Milky Way halo's local density leaves the GCE preferred region of single-channel dark matter annihilation models to be in strong tension with annihilation searches in combined dwarf galaxy analyses. A third, higher Milky Way density determination, alleviates this tension. Our joint likelihood analysis allows us to quantify this inconsistency. We provide a set of tools for testing dark matter annihilation models' consistency within this combined data set. As an example, we test a representative inverse Compton sourced self-interacting dark matter model, which is consistent with both the GCE and dwarfs.

  5. Association between contraceptive use and socio-demographic factors of young fecund women in Bangladesh.

    PubMed

    Islam, Ahmed Zohirul; Rahman, Mosiur; Mostofa, Md Golam

    2017-10-01

    This study aimed to explore the association between socio-demographic factors and contraceptive use among fecund women under 25 years old. This study utilized cross-sectional data (n=3744) extracted from the Bangladesh Demographic and Health Survey 2011. Differences in the use of contraceptives by socio-demographic characteristics were assessed by χ 2 analyses. Binary logistic regression was used to identify the determinants of contraceptive use among young women. This study observed that 71% of fecund women aged below 25 years used contraceptives. Getting family planning (FP) methods from FP workers increases the likelihood of using contraceptives among young women because outreach activities by FP workers and accessibility of FP-related information pave the way for using contraceptives. Husband-wife joint participation in decision making on health care increases the likelihood of using contraceptives. Participation of women in decision making on health care could be achieved by promoting higher education and gainful employment for women. Reproductive and sex education should be introduced in schools to prepare the young for healthy and responsible living. Moreover, policy makers should focus on developing negotiation skills in young women by creating educational and employment opportunities since husband-wife joint participation in decision making increases contraceptive use. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. The role of implementation intention formation in promoting hepatitis B vaccination uptake among men who have sex with men.

    PubMed

    Vet, Raymond; de Wit, John B F; Das, Enny

    2014-02-01

    This study assessed the separate and joint effects of having a goal intention and the completeness of implementation intention formation on the likelihood of attending an appointment to obtain vaccination against the hepatitis B virus among men who have sex with men (MSM) in the Netherlands. Extending previous research, it was hypothesized that to be effective in promoting vaccination, implementation intention formation not only requires a strong goal intention, but also complete details specifying when, where and how to make an appointment to obtain hepatitis B virus vaccination among MSM. MSM at risk for hepatitis B virus (N = 616), with strong or weak intentions to obtain hepatitis B virus vaccination, were randomly assigned to form an implementation intention or not. Completeness of implementation intentions was rated and hepatitis B virus uptake was assessed through data linkage with the joint vaccination registry of the collaborating Public Health Services. Having a strong goal intention to obtain hepatitis B virus vaccination and forming an implementation intention, each significantly and independently increased the likelihood of MSM obtaining hepatitis B virus vaccination. In addition, MSM who formed complete implementation intentions were more successful in obtaining vaccination (p < 0.01). The formation of complete implementation intentions was promoted by strong goal intentions (p < 0.01).

  7. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
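
    cosmoabc implements a Population Monte Carlo refinement of the basic ABC idea: simulate mock data from prior draws and keep draws whose mocks are close to the observation. A minimal rejection-ABC sketch on a toy problem (inferring a mean while pretending the likelihood is unavailable); the data, distance, and tolerance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed "catalog": draws whose generating mean we want to infer,
# standing in for the cluster number counts used with cosmoabc.
observed = rng.normal(2.0, 1.0, size=500)

def simulate(mu):
    # Forward simulation of a mock catalog for parameter mu.
    return rng.normal(mu, 1.0, size=500)

def distance(a, b):
    # Compare catalogs through a summary statistic (here, the mean).
    return abs(a.mean() - b.mean())

# ABC rejection step: keep prior draws whose mock data fall within
# epsilon of the observation; the accepted draws approximate the posterior.
prior_draws = rng.uniform(-5.0, 5.0, size=20000)
accepted = [mu for mu in prior_draws
            if distance(simulate(mu), observed) < 0.1]
posterior_mean = np.mean(accepted)
print(f"ABC posterior mean = {posterior_mean:.2f} from {len(accepted)} samples")
```

    Population Monte Carlo, as used in cosmoabc, improves on this by adaptively shrinking the tolerance and importance-reweighting the accepted particles across generations.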

  8. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
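
    The asymptotic power formulas for likelihood ratio and Wald tests typically reduce to evaluating a noncentral chi-square tail probability, with the noncentrality parameter growing linearly in sample size. A minimal sketch of that calculation; the per-examinee noncentrality value is a hypothetical placeholder, not a figure from the report:

```python
from scipy.stats import chi2, ncx2

def lr_test_power(noncentrality, df=1, alpha=0.05):
    """Asymptotic power of a likelihood ratio test: under the alternative,
    the LR statistic is approximately noncentral chi-square, so power is
    the probability it exceeds the central chi-square critical value."""
    crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, noncentrality)

# Noncentrality grows linearly with sample size for a fixed DIF effect,
# which is how a power formula becomes a sample-size calculation.
per_examinee_ncp = 0.02  # hypothetical effect-size contribution
for n in (100, 200, 400):
    print(n, round(lr_test_power(n * per_examinee_ncp), 3))
```

    Inverting the same relation (finding the n whose noncentrality yields, say, 80% power) gives the sample-size side of the calculation.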

  9. Multisite EPR oximetry from multiple quadrature harmonics.

    PubMed

    Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C

    2012-01-01

    Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. A joint modeling and estimation method for multivariate longitudinal data with mixed types of responses to analyze physical activity data generated by accelerometers.

    PubMed

    Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E

    2017-11-10

    A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. 
This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.

  12. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. 
This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157

  13. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  14. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
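
    The workflow MXLKID automates (compute a likelihood function from noisy measurements of a dynamic system, then maximize it over the parameters) can be sketched compactly in modern tools. A toy version, assuming a hypothetical first-order decay system and Gaussian measurement noise:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy dynamic system: x(t) = x0 * exp(-k t) with unknown decay rate k,
# observed at discrete times with Gaussian measurement noise.
t = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(4)
sigma = 0.05
data = np.exp(-0.7 * t) + rng.normal(0.0, sigma, t.size)

def neg_log_likelihood(k):
    # Gaussian likelihood function of the residuals; maximizing the LF
    # is equivalent to minimizing this negative log-likelihood.
    residuals = data - np.exp(-k * t)
    return 0.5 * np.sum(residuals ** 2) / sigma ** 2

fit = minimize_scalar(neg_log_likelihood, bounds=(0.01, 5.0), method="bounded")
print(f"identified k = {fit.x:.3f}")
```

    With Gaussian noise of known variance, maximum likelihood reduces to nonlinear least squares, which is the structure MXLKID exploits for general nonlinear systems.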

  15. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
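
    The distinction between fixed and adaptive smoothing is that adaptive smoothing ties each event's kernel bandwidth to the local catalog density, commonly via the distance to an event's k-th nearest neighbor. A minimal sketch on a synthetic catalog; the Gaussian kernel form and the choice k=5 are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy catalog: epicenters clustered around two sources.
quakes = np.vstack([rng.normal([0.0, 0.0], 0.3, (60, 2)),
                    rng.normal([3.0, 2.0], 0.3, (40, 2))])

def adaptive_smoothed_rate(grid, catalog, k=5):
    """Adaptive smoothed seismicity: each event is spread with a Gaussian
    kernel whose bandwidth is its distance to the k-th nearest neighbor,
    so dense clusters get sharp kernels and sparse regions broad ones."""
    rates = np.zeros(len(grid))
    for e in catalog:
        d = np.linalg.norm(catalog - e, axis=1)
        h = max(np.sort(d)[k], 1e-3)  # adaptive smoothing distance
        r2 = np.sum((grid - e) ** 2, axis=1)
        rates += np.exp(-0.5 * r2 / h ** 2) / (2 * np.pi * h ** 2)
    return rates / len(catalog)

xs, ys = np.meshgrid(np.linspace(-1, 4, 40), np.linspace(-1, 3, 40))
grid = np.column_stack([xs.ravel(), ys.ravel()])
rate = adaptive_smoothed_rate(grid, quakes)
print("peak rate cell:", grid[rate.argmax()])
```

    The L-test then scores how well the gridded rates forecast the locations of held-out earthquakes, which is the basis for preferring adaptive over fixed smoothing here.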

  16. Dietary patterns as compared with physical activity in relation to metabolic syndrome among Chinese adults.

    PubMed

    He, Y; Li, Y; Lai, J; Wang, D; Zhang, J; Fu, P; Yang, X; Qi, L

    2013-10-01

    To examine the nationally-representative dietary patterns and their joint effects with physical activity on the likelihood of metabolic syndrome (MS) among 20,827 Chinese adults. CNNHS was a nationally representative cross-sectional observational study. Metabolic syndrome was defined according to the Joint Interim Statement definition. The "Green Water" dietary pattern, characterized by high intakes of rice and vegetables and moderate intakes of animal foods, was related to the lowest prevalence of MS (15.9%). Compared to the "Green Water" dietary pattern, the "Yellow Earth" dietary pattern, characterized by high intakes of refined cereal products, tubers, cooking salt and salted vegetables, was associated with significantly elevated odds of MS (odds ratio 1.66, 95%CI: 1.40-1.96), after adjustment for age, sex, socioeconomic status and lifestyle factors. The "Western/new affluence" dietary pattern, characterized by higher consumption of beef/lamb, fruit, eggs, poultry and seafood, was also significantly associated with MS (odds ratio: 1.37, 95%CI: 1.13-1.67). Physical activity showed significant interactions with the dietary patterns in relation to MS risk (P for interaction = 0.008). In the joint analysis, participants who combined sedentary activity with the "Yellow Earth" dietary pattern or the "Western/new affluence" dietary pattern had more than three times (95%CI: 2.8-6.1) higher odds of MS than those with active physical activity and the "Green Water" dietary pattern. Our findings from the large Chinese national representative data indicate that dietary patterns affect the likelihood of MS. Combining a healthy dietary pattern with an active lifestyle may provide greater benefit in the prevention of MS. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability with multiple modes, flat modes, or both. These conditions, often associated with guessing on multiple-choice (MC) questions, can introduce uncertainty and bias into ability estimation by maximum likelihood…

  18. Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David

    2015-04-01

The χ² (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ² (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
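
    The cited algorithm is the affine-invariant ensemble sampler implemented in the emcee package. The sketch below uses a plain random-walk Metropolis sampler on an invented two-parameter χ² surface — not the ensemble method itself, and not a real oscillation fit — purely to illustrate sampling the surface instead of scanning a grid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy chi-square surface in two "mixing parameters" (values are invented;
# a real analysis would evaluate oscillation probabilities against data).
def chi2(theta):
    t12, dm2 = theta
    return ((t12 - 0.31) / 0.02) ** 2 + ((dm2 - 7.5) / 0.2) ** 2

def metropolis(logpost, x0, steps, scale):
    """Random-walk Metropolis: a basic MCMC alternative to grid scans."""
    x, lp = np.array(x0, float), logpost(x0)
    chain = np.empty((steps, len(x0)))
    for i in range(steps):
        prop = x + rng.normal(0, scale, size=x.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Sample exp(-chi2/2), discard burn-in, then read off parameter estimates.
chain = metropolis(lambda t: -0.5 * chi2(t), [0.3, 7.0], 20000, [0.02, 0.2])
burned = chain[5000:]
print(burned.mean(axis=0))   # posterior means near (0.31, 7.5)
```

    Marginalisation is then free: histogramming one column of the chain marginalises over the other parameter, which is the step that is expensive on a grid.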

  19. An Evaluation of Statistical Strategies for Making Equating Function Selections. Research Report. ETS RR-08-60

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…

  20. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including resorting either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
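
    A minimal instance of the quasi-likelihood idea for the first of the paper's examples, the pure birth (Yule) process: the conditional mean E[Z_{t+Δ} | Z_t] = Z_t·e^{λΔ} yields a conditional-least-squares estimator built from linear combinations of the data. All rates, sample sizes, and observation schemes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_yule(lam, z0, dt, n_obs):
    """Exact (Gillespie) simulation of a pure birth process, recorded at
    the discrete observation times 0, dt, 2*dt, ..."""
    t, z = 0.0, z0
    obs, next_t = [z0], dt
    while len(obs) <= n_obs:
        t += rng.exponential(1.0 / (lam * z))   # waiting time to next birth
        while len(obs) <= n_obs and t > next_t:
            obs.append(z)                       # state just before next birth
            next_t += dt
        z += 1
    return np.array(obs[:n_obs + 1])

def cls_estimate(obs, dt):
    """Conditional-least-squares (quasi-likelihood) estimator based on the
    conditional mean E[Z_{t+dt} | Z_t] = Z_t * exp(lam * dt)."""
    m_hat = np.sum(obs[:-1] * obs[1:]) / np.sum(obs[:-1] ** 2)
    return np.log(m_hat) / dt

lam, dt = 0.5, 0.2
paths = [simulate_yule(lam, 5, dt, 40) for _ in range(200)]
est = np.mean([cls_estimate(p, dt) for p in paths])
print(round(est, 3))   # close to the true birth rate 0.5
```

    Because the estimating equation uses only the conditional first moment, it stays valid without the full transition likelihood — the robustness property the abstract refers to.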

  1. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-07-01

    We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
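
    The skewness mechanism is easy to reproduce numerically: a two-point function estimated from a handful of Gaussian modes follows a scaled chi-square distribution, so the typical (median) realization lies below the truth even though the estimator is unbiased. A toy demonstration (the mode count and sample sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# A measured two-point function is an average of squared Gaussian modes, so
# it follows a (skewed) scaled chi-square distribution, not a Gaussian.
C_true, n_modes, n_real = 1.0, 5, 200000
a = rng.normal(0.0, np.sqrt(C_true), (n_real, n_modes))
C_hat = (a ** 2).mean(axis=1)            # per-realization power estimate

print(C_hat.mean())      # close to 1.0: the estimator is unbiased ...
print(np.median(C_hat))  # around 0.87: ... yet a typical draw sits low
```

    This is exactly the sense in which "sound data are naturally biased low": more than half of all realizations fall below the true value, and a Gaussian likelihood centred on the data inherits that offset.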

  2. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound.

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-04-01

    We derive the essentials of the skewed weak lensing likelihood via a simple Hierarchical Forward Model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of ΛCDM. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from CMB analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30% of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.

  3. Functional joint regeneration is achieved using reintegration mechanism in Xenopus laevis

    PubMed Central

    Yamada, Shigehito

    2016-01-01

    Abstract A functional joint requires integration of multiple tissues: the apposing skeletal elements should form an interlocking structure, and muscles should insert into skeletal tissues via tendons across the joint. Whereas newts can regenerate functional joints after amputation, Xenopus laevis regenerates a cartilaginous rod without joints, a “spike.” Previously we reported that the reintegration mechanism between the remaining and regenerated tissues has a significant effect on regenerating joint morphogenesis during elbow joint regeneration in newt. Based on this insight into the importance of reintegration, we amputated frogs’ limbs at the elbow joint and found that frogs could regenerate a functional elbow joint between the remaining tissues and regenerated spike. During regeneration, the regenerating cartilage was partially connected to the remaining articular cartilage to reform the interlocking structure of the elbow joint at the proximal end of the spike. Furthermore, the muscles of the remaining part inserted into the regenerated spike cartilage via tendons. This study might open up an avenue for analyzing molecular and cellular mechanisms of joint regeneration using Xenopus. PMID:27499877

  4. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ² distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
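
    In the Gaussian autoregressive case the equivalence is concrete: the per-sample log-likelihood ratio between regressions of y on its own past with and without the past of x estimates the transfer entropy from x to y. A sketch under an assumed bivariate AR(1) coupling (coefficients and sample size invented):

```python
import numpy as np

rng = np.random.default_rng(4)

def ols_rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def transfer_entropy_gaussian(x, y):
    """Granger-style TE estimate: half the per-sample log-likelihood ratio
    between AR models of y with and without the past of x."""
    Xr = np.column_stack([np.ones(len(y) - 1), y[:-1]])   # restricted model
    Xf = np.column_stack([Xr, x[:-1]])                    # full model
    rss_r, rss_f = ols_rss(Xr, y[1:]), ols_rss(Xf, y[1:])
    return 0.5 * np.log(rss_r / rss_f)

# Simulate a unidirectional coupling x -> y.
n = 20000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

te_xy = transfer_entropy_gaussian(x, y)   # clearly positive
te_yx = transfer_entropy_gaussian(y, x)   # near zero (no feedback)
```

    Multiplying `te_xy` by 2n gives the log-likelihood ratio statistic, which is χ² with 1 degree of freedom under the null of zero transfer entropy — the asymptotic distribution the paper establishes in general.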

  5. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ² distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  6. Robust joint score tests in the application of DNA methylation data analysis.

    PubMed

    Li, Xuan; Fu, Yuejiao; Wang, Xiaogang; Qiu, Weiliang

    2018-05-18

Recently, differential variability has been shown to be valuable in evaluating the association of DNA methylation with the risks of complex human diseases. Statistical tests based on both differential methylation level and differential variability can be more powerful than those based only on differential methylation level. Anh and Wang (2013) proposed a joint score test (AW) to simultaneously detect differential methylation and differential variability. However, the AW method appears quite conservative and has not been fully compared with existing joint tests. We proposed three improved joint score tests, namely iAW.Lev, iAW.BF, and iAW.TM, and made extensive comparisons with the joint likelihood ratio test (jointLRT), the Kolmogorov-Smirnov (KS) test, and the AW test. Systematic simulation studies showed that: 1) the three improved tests performed better (i.e., larger power while keeping nominal Type I error rates) than the other three tests for data with outliers and with different variances between cases and controls; and 2) for data from normal distributions, the three improved tests had slightly lower power than jointLRT and AW. Analyses of two Illumina HumanMethylation27 data sets (GSE37020 and GSE20080) and one Illumina Infinium MethylationEPIC data set (GSE107080) demonstrated that the three improved tests had higher true validation rates than jointLRT, KS, and AW. The three proposed joint score tests are robust against violations of the normality assumption and the presence of outlying observations compared with the three existing tests. Among the three proposed tests, iAW.BF appears to be the most robust and effective across all simulated scenarios and in the real data analyses.
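
    A minimal joint test in the same spirit (not the authors' score tests): a 2-degree-of-freedom likelihood ratio test that two normal samples share both mean and variance, so it responds to differential level and differential variability at once. Group sizes and effect sizes below are invented:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

def joint_lrt(a, b):
    """LRT that two normal samples share both mean and variance (2 df),
    sensitive to differential mean AND differential variability."""
    n1, n2 = len(a), len(b)
    pooled = np.concatenate([a, b])
    s0 = pooled.var()               # MLE variance under H0 (common model)
    s1, s2 = a.var(), b.var()       # MLE variances under H1 (separate models)
    stat = (n1 + n2) * np.log(s0) - n1 * np.log(s1) - n2 * np.log(s2)
    return stat, chi2.sf(stat, df=2)

cases = rng.normal(0.0, 1.0, 200)
controls = rng.normal(0.3, 1.5, 200)   # shifted mean AND larger spread
stat, p = joint_lrt(cases, controls)
```

    Like jointLRT in the comparison above, this normal-theory statistic is sensitive to outliers; the paper's iAW variants replace the plain variance terms with robust (Levene/Brown-Forsythe/trimmed) analogues to address exactly that.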

  7. Correlates of physical activity participation in community-dwelling older adults.

    PubMed

    Haley, Christy; Andel, Ross

    2010-10-01

    The authors examined factors related to participation in walking, gardening or yard work, and sports or exercise in 686 community-dwelling adults 60-95 years of age from Wave IV of the population-based Americans' Changing Lives Study. Logistic regression revealed that male gender, being married, and better functional health were associated with greater likelihood of participating in gardening or yard work (p < .05). Male gender, better functional health, and lower body-mass index were independently associated with greater likelihood of walking (p < .05). Increasing age, male gender, higher education, and better functional health were associated with greater likelihood of participating in sports or exercise (p < .05). Subsequent analyses yielded an interaction of functional health by gender in sport or exercise participation (p = .06), suggesting a greater association between functional health and participation in men. Gender and functional health appear to be particularly important for physical activity participation, which may be useful in guiding future research. Attention to different subgroups may be needed to promote participation in specific activities.

  8. Two-phase designs for joint quantitative-trait-dependent and genotype-dependent sampling in post-GWAS regional sequencing.

    PubMed

    Espin-Garcia, Osvaldo; Craiu, Radu V; Bull, Shelley B

    2018-02-01

We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS-SNP (where SNP is single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulations. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared to marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. © 2017 The Authors. Genetic Epidemiology Published by Wiley Periodicals, Inc.

  9. A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity

    PubMed Central

    Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.

    2014-01-01

Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRTMV) or either effect alone (LRTM or LRTV) in the presence of covariates. Using extensive simulations of our method and others, we found that all parametric tests were sensitive to non-normality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D′ and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions and also how vQTL are related to relationship loci (rQTL) and how both can generate prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837
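
    The parametric-bootstrap calibration mentioned above can be sketched generically: refit the null model, simulate reference datasets from it, and compare the observed statistic to the simulated distribution rather than an asymptotic one. The statistic (a log variance ratio) and all sample sizes here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

def var_stat(a, b):
    # Test statistic for differential variability (absolute log variance ratio).
    return abs(np.log(a.var(ddof=1) / b.var(ddof=1)))

def parametric_bootstrap_p(a, b, stat_fn, n_boot=2000):
    """Calibrate a statistic by simulating from the fitted null model
    (pooled normal, common mean and variance) instead of relying on an
    asymptotic reference distribution."""
    obs = stat_fn(a, b)
    pooled = np.concatenate([a, b])
    mu, sd = pooled.mean(), pooled.std(ddof=1)
    hits = 0
    for _ in range(n_boot):
        a_s = rng.normal(mu, sd, len(a))
        b_s = rng.normal(mu, sd, len(b))
        hits += stat_fn(a_s, b_s) >= obs
    return (1 + hits) / (1 + n_boot)     # add-one rule avoids p = 0

a = rng.normal(0, 1.0, 100)
b = rng.normal(0, 2.0, 100)              # doubled spread: small p expected
p = parametric_bootstrap_p(a, b, var_stat)
```

    The same machinery works for the joint mean-variance statistic: only `stat_fn` changes, while the null simulation step stays identical.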

  10. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  11. Blind Compensation of I/Q Impairments in Wireless Transceivers

    PubMed Central

    Aziz, Mohsin; Ghannouchi, Fadhel M.; Helaoui, Mohamed

    2017-01-01

    The majority of techniques that deal with the mitigation of in-phase and quadrature-phase (I/Q) imbalance at the transmitter (pre-compensation) require long training sequences, reducing the throughput of the system. These techniques also require a feedback path, which adds more complexity and cost to the transmitter architecture. Blind estimation techniques are attractive for avoiding the use of long training sequences. In this paper, we propose a blind frequency-independent I/Q imbalance compensation method based on the maximum likelihood (ML) estimation of the imbalance parameters of a transceiver. A closed-form joint probability density function (PDF) for the imbalanced I and Q signals is derived and validated. ML estimation is then used to estimate the imbalance parameters using the derived joint PDF of the output I and Q signals. Various figures of merit have been used to evaluate the efficacy of the proposed approach using extensive computer simulations and measurements. Additionally, the bit error rate curves show the effectiveness of the proposed method in the presence of the wireless channel and Additive White Gaussian Noise. Real-world experimental results show an image rejection of greater than 30 dB as compared to the uncompensated system. This method has also been found to be robust in the presence of practical system impairments, such as time and phase delay mismatches. PMID:29257081
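
    A common second-order-statistics take on blind, frequency-independent I/Q compensation — a simpler alternative to the paper's ML estimator, shown only to illustrate the imbalance model and the image-rejection metric; the gain/phase values and signal model are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ideal circular QPSK-like signal (properness: E[s^2] = 0).
n = 200000
s = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

# Frequency-independent gain/phase imbalance, z = a*s + b*conj(s);
# b is the unwanted image term (b = 0 for a perfect front end).
g, phi = 1.1, np.deg2rad(5.0)
a = (1 + g * np.exp(-1j * phi)) / 2
b = (1 - g * np.exp(1j * phi)) / 2
z = a * s + b * np.conj(s)

def image_rejection_db(x, ref):
    # IRR: power projected on s over power projected on conj(s).
    c1 = np.vdot(ref, x) / np.vdot(ref, ref)
    c2 = np.vdot(np.conj(ref), x) / np.vdot(ref, ref)
    return 10 * np.log10(abs(c1) ** 2 / abs(c2) ** 2)

# Blind compensation from second-order statistics of I and Q:
# de-correlate Q from I (phase fix), then equalise powers (gain fix).
I, Q = z.real, z.imag
Q1 = Q - I * np.mean(I * Q) / np.mean(I ** 2)
Q1 *= np.sqrt(np.mean(I ** 2) / np.mean(Q1 ** 2))
z_c = I + 1j * Q1

irr_before = image_rejection_db(z, s)
irr_after = image_rejection_db(z_c, s)
print(round(irr_before, 1), round(irr_after, 1))  # rejection improves by tens of dB
```

    The estimator is blind in the same sense as the paper's: it uses only the statistics of the received I and Q samples, with no training sequence or feedback path.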

  12. Intra-articular pressures and joint mechanics: should we pay attention to effusion in knee osteoarthritis?

    PubMed

    Rutherford, Derek James

    2014-09-01

    What factors play a role to ensure a knee joint does what it should given the demands of moving through the physical environment? This paper aims to probe the hypothesis that intra-articular joint pressures, once a topic of interest, have been left aside in contemporary frameworks in which we now view knee joint function. The focus on ligamentous deficiencies and the chondrocentric view of osteoarthritis, while important, have left little attention to the consideration of other factors that can impair joint function across the lifespan. Dynamic knee stability is required during every step we take. While there is much known about the role that passive structures and muscular activation play in maintaining a healthy knee joint, this framework does not account for the role that intra-articular joint pressures may have in providing joint stability during motion and how these factors interact. Joint injuries invariably result in some form of intra-articular fluid accumulation. Ultimately, it may be how the knee mechanically responds to this fluid, of which pressure plays a significant role that provides the mechanisms for continued function. Do joint pressures provide an important foundation for maintaining knee function? This hypothesis is unique and argues that we are missing an important piece of the puzzle when attempting to understand implications that joint injury and disease have for joint function. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Childhood adversity, attachment and personality styles as predictors of anxiety among elderly caregivers.

    PubMed

    Prigerson, H G; Shear, M K; Bierhals, A J; Zonarich, D L; Reynolds, C F

    1996-01-01

    The purpose of this study was to examine the ways in which childhood adversity, attachment and personality styles influenced the likelihood of having an anxiety disorder among aged caregivers for terminally ill spouses. We also sought to determine how childhood adversity and attachment/personality styles jointly influenced the likelihood of developing an anxiety disorder among aged caregivers. Data were derived from semistructured interviews with 50 spouses (aged 60 and above) of terminally ill patients. The Childhood Experience of Care and Abuse (CECA) record provided retrospective, behaviorally based information on childhood adversity. Measures of attachment and personality styles were obtained from self-report questionnaires, and the Structured Clinical Interview for the DSM-III-R (SCID) was used to determine diagnoses for anxiety disorders. Logistic regression models estimated the effects of childhood adversity, attachment/personality disturbances, and the interaction between the two on the likelihood of having an anxiety disorder. Results indicated that childhood adversity and paranoid, histrionic and self-defeating styles all directly increase the odds of having an anxiety disorder as an elderly spousal caregiver. In addition, childhood adversity in conjunction with borderline, antisocial and excessively dependent styles increased the likelihood of having an anxiety disorder. The results indicate the need to investigate further the interaction between childhood experiences and current attachment/personality styles in their effects on the development of anxiety disorders.

  14. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  15. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

In this study, likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian, independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to more effectively eliminate the heteroscedasticity of model residuals; and (3) three likelihood functions, namely NSE, the Generalized Error Distribution with BC (BC-GED), and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the Gaussian error assumption, under which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
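
    Point (1) can be checked numerically: with Gaussian iid residuals and the error variance at its maximum-likelihood value, the log-likelihood is a monotone function of the sum of squared errors, so ranking candidate simulations by NSE and by likelihood gives the same order. A sketch with invented observations and noise levels:

```python
import numpy as np

rng = np.random.default_rng(8)

obs = rng.normal(10.0, 3.0, 500)   # synthetic "observed discharge" series

def nse(sim, obs):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(sim, obs):
    # Gaussian iid residuals with variance at its MLE, sigma^2 = SSE / n.
    n = len(obs)
    s2 = np.mean((obs - sim) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

# Rank candidate simulations both ways: the orderings coincide, which is
# the NSE/likelihood equivalence of step (1).
sims = [obs + rng.normal(0, s, len(obs)) for s in (0.5, 1.0, 2.0, 4.0)]
rank_nse = np.argsort([nse(s, obs) for s in sims])
rank_ll = np.argsort([gauss_loglik(s, obs) for s in sims])
print(np.array_equal(rank_nse, rank_ll))   # True
```

    The equivalence also explains the reported behaviour: since NSE corresponds to an untransformed Gaussian error model, it weights flood-peak errors heavily and baseflow errors lightly, which the Box-Cox transformation in BC-GED/BC-SGED rebalances.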

  16. [Correlation of medial compartmental joint line elevation with femorotibial angle correction and clinical function after unicompartmental arthroplasty].

    PubMed

    Zhang, Zhan-Feng; Wang, Dan; Min, Ji-Kang

    2017-04-25

To study the correlation of postoperative femorotibial angle with medial compartmental joint line elevation after unicompartmental arthroplasty (UKA), as well as the correlation of joint line elevation with clinical function, by measuring the radiological joint line. A retrospective study of 56 patients from July 2012 to August 2015 was performed. The mean body mass index (BMI) was 23.5 (range, 18.3 to 30.1). The standing anteroposterior radiographs of these patients were assessed both pre- and post-operatively, and knee function was evaluated according to HSS grading. The correlation between postoperative femorotibial angle (FTA) and joint line elevation was analyzed, as well as the correlation between joint line elevation and clinical function. The mean medial joint line elevation was (2.2±2.0) mm (range, -3.3 to 7.0 mm), and the mean FTA correction was (2.3±3.0)° (range, -4.5° to 9.6°). The mean follow-up period was 12.2 months. There was a significant correlation between joint line elevation and FTA correction (P<0.05), while there was no significant correlation between joint line elevation and clinical function (P>0.05). There was a significant correlation between medial compartmental joint line elevation and FTA correction after UKA, and the proximal tibial osteotomy was critical during the procedure. There was no significant correlation between joint line elevation and clinical function, which may be related to the design of the UKA prosthesis.

  17. Neural Networks Involved in Adolescent Reward Processing: An Activation Likelihood Estimation Meta-Analysis of Functional Neuroimaging Studies

    PubMed Central

    Silverman, Merav H.; Jedd, Kelly; Luciana, Monica

    2015-01-01

    Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587

  18. Unified Theory of Inference for Text Understanding

    DTIC Science & Technology

    1986-11-25

Technical Report. S. L. Graham, Principal Investigator. "The views and conclusions contained in this document...obtain X? Function - Infer P will use X for its normal purpose, if it has one. Intervention - How could C keep P from obtaining X? Knowledge Propagation...likelihood is low; otherwise, likelihood is moderate; otherwise, does X have a normal function? If so, does P do actions like this function? If so

  19. Noninvasively measuring oxygen saturation of human finger-joint vessels by multi-transducer functional photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deng, Zijian; Li, Changhui

    2016-06-01

Imaging small blood vessels and measuring their functional information in the finger joint are still challenging for clinical imaging modalities. In this study, we developed a multi-transducer functional photoacoustic tomography (PAT) system and successfully imaged human finger-joint vessels from ~1 mm down to <0.2 mm in diameter. In addition, the oxygen saturation (SO2) values of these vessels were also measured. Our results demonstrate that PAT can provide both anatomical and functional information of individual finger-joint vessels with different sizes, which might help the study of finger-joint diseases, such as rheumatoid arthritis.

  20. Spatiotemporal modelling of groundwater extraction in semi-arid central Queensland, Australia

    NASA Astrophysics Data System (ADS)

    Keir, Greg; Bulovic, Nevenka; McIntyre, Neil

    2016-04-01

    The semi-arid Surat Basin in central Queensland, Australia, forms part of the Great Artesian Basin, a groundwater resource of national significance. While this area relies heavily on groundwater supply bores to sustain agricultural industries and rural life in general, measurement of groundwater extraction rates is very limited. Consequently, regional groundwater extraction rates are not well known, which may have implications for regional numerical groundwater modelling. However, flows from a small number of bores are metered, and less precise anecdotal estimates of extraction are increasingly available. There is also an increasing number of other spatiotemporal datasets which may help predict extraction rates (e.g. rainfall, temperature, soils, stocking rates etc.). These can be used to construct spatial multivariate regression models to estimate extraction. The data exhibit complicated statistical features, such as zero-valued observations, non-Gaussianity, and non-stationarity, which limit the use of many classical estimation techniques, such as kriging. As well, water extraction histories may exhibit temporal autocorrelation. To account for these features, we employ a separable space-time model to predict bore extraction rates using the R-INLA package for computationally efficient Bayesian inference. A joint approach is used to model both the probability (using a binomial likelihood) and magnitude (using a gamma likelihood) of extraction. The correlation between extraction rates in space and time is modelled using a Gaussian Markov Random Field (GMRF) with a Matérn spatial covariance function which can evolve over time according to an autoregressive model. To reduce computational burden, we allow the GMRF to be evaluated at a relatively coarse temporal resolution, while still allowing predictions to be made at arbitrarily small time scales. 
We describe the process of model selection and inference using an information criterion approach, and present some preliminary results from the study area. We conclude by discussing issues related with upscaling of the modelling approach to the entire basin, including merging of extraction rate observations with different precision, temporal resolution, and even potentially different likelihoods.
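The binomial-gamma joint likelihood described above can be sketched in miniature. This is not the authors' R-INLA space-time model, just a toy two-part ("hurdle") fit on simulated bore data; all values and names are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical bore-months: many zeros, gamma-distributed positive extraction.
n = 500
active = rng.random(n) < 0.4                     # ~40% of bore-months pump water
volume = np.where(active, rng.gamma(2.0, 50.0, n), 0.0)

# Two-part joint likelihood: a Bernoulli part for P(extraction > 0) and a
# gamma part for the magnitude, conditional on extraction having occurred.
p_hat = active.mean()
pos = volume[volume > 0]
shape_hat, _, scale_hat = stats.gamma.fit(pos, floc=0)

loglik = (np.log(p_hat) * active.sum()
          + np.log1p(-p_hat) * (~active).sum()
          + stats.gamma.logpdf(pos, shape_hat, scale=scale_hat).sum())
print(p_hat, shape_hat, loglik)
```

The paper additionally links the two parts through a shared spatiotemporal GMRF, which this sketch omits.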

  1. On the testing of Hardy-Weinberg proportions and equality of allele frequencies in males and females at biallelic genetic markers.

    PubMed

    Graffelman, Jan; Weir, Bruce S

    2018-02-01

    Standard statistical tests for equality of allele frequencies in males and females and tests for Hardy-Weinberg equilibrium are tightly linked by their assumptions. Tests for equality of allele frequencies assume Hardy-Weinberg equilibrium, whereas the usual chi-square or exact test for Hardy-Weinberg equilibrium assumes equality of allele frequencies in the sexes. In this paper, we break this interdependence in assumptions by proposing an omnibus exact test that can test both hypotheses jointly, as well as a likelihood ratio approach that permits these phenomena to be tested both jointly and separately. The tests are illustrated with data from the 1000 Genomes project. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
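A minimal sketch of the likelihood-ratio idea for testing equal allele frequencies in the sexes (not the authors' omnibus exact test; the genotype counts below are invented):

```python
import numpy as np
from scipy import stats

# Hypothetical autosomal biallelic genotype counts (AA, AB, BB) by sex.
males   = np.array([40, 45, 15])
females = np.array([38, 50, 12])

def freq(counts):
    """MLE of the A-allele frequency from genotype counts."""
    return (2 * counts[0] + counts[1]) / (2 * counts.sum())

def allele_loglik(counts, p):
    """Binomial log-likelihood of the A-allele count at frequency p."""
    n_a = 2 * counts[0] + counts[1]
    n = 2 * counts.sum()
    return n_a * np.log(p) + (n - n_a) * np.log(1 - p)

# Likelihood ratio test: sex-specific frequencies vs. one pooled frequency.
p_m, p_f = freq(males), freq(females)
p_pool = freq(males + females)
lrt = 2 * (allele_loglik(males, p_m) + allele_loglik(females, p_f)
           - allele_loglik(males, p_pool) - allele_loglik(females, p_pool))
p_value = stats.chi2.sf(lrt, df=1)
print(lrt, p_value)
```

The paper's likelihood ratio framework additionally folds the Hardy-Weinberg hypothesis into the same machinery, which this binomial-only sketch does not attempt.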

  2. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms that are usually used to suppress multiple access interference struggle to balance detection performance against algorithmic complexity. To address this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
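The PSO half of the hybrid can be illustrated generically. This is a textbook particle swarm minimising a sphere function, not the paper's AC-PSO detector; the tuning constants are conventional defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain PSO: each particle is pulled toward its personal best position
    and the swarm's global best, with inertia weight w."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso_minimize(lambda z: np.sum(z ** 2), dim=4)
print(best_x, best_val)
```

In a MUD setting the objective would instead be the (negative) likelihood of a candidate symbol vector, and the ACO component would seed or refine the discrete search.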

  3. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
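A minimal MSE vector quantizer of the kind the comparison starts from can be sketched with Lloyd's algorithm (toy 3-band data, not Landsat imagery; the paper's classification-constrained two-stage design is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

def train_vq(data, k, iters=20):
    """Lloyd's algorithm: alternate nearest-codeword assignment and
    centroid update to build a minimal-MSE codebook."""
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if np.any(assign == j):
                codebook[j] = data[assign == j].mean(0)
    return codebook

# Hypothetical 3-band "pixels" drawn from two spectral classes.
data = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(5, 1, (200, 3))])
codebook = train_vq(data, k=2)
assign = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
mse = ((data - codebook[assign]) ** 2).sum(1).mean()
print(codebook, mse)
```

The joint design in the paper would add a classification-performance constraint to this purely MSE-driven training loop.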

  4. A multi-valued neutrosophic qualitative flexible approach based on likelihood for multi-criteria decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.

    2017-01-01

    In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore an example is provided to illustrate the application of the proposed method, together with a comparison analysis.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao

    Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low-resolution isotope identifiers should be improvable by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
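The KDE-as-likelihood step can be sketched on a one-dimensional toy problem (invented peak energies and isotope labels; the actual system builds joint distributions of peak areas and uncertainties from MCNP6 simulations):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical training feature (a peak energy, keV) for two isotopes.
feats = {"Cs137": rng.normal(662, 8, 300), "Co60": rng.normal(1173, 10, 300)}

# Kernel density estimates serve as empirical likelihood functions.
kdes = {iso: gaussian_kde(x) for iso, x in feats.items()}
prior = {"Cs137": 0.5, "Co60": 0.5}

def classify(peak):
    """Posterior over isotopes for one observed peak, via Bayes' rule."""
    post = {iso: prior[iso] * kdes[iso](peak)[0] for iso in kdes}
    z = sum(post.values())
    return {iso: p / z for iso, p in post.items()}

posterior = classify(660.0)
print(posterior)
```

A real identifier would evaluate the KDE likelihood jointly over all extracted peaks and their uncertainties, not a single scalar feature.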

  6. Influence of restricted vision and knee joint range of motion on gait properties during level walking and stair ascent and descent.

    PubMed

    Demura, Tomohiro; Demura, Shin-ich

    2011-01-01

    Because elderly individuals experience marked declines in various physical functions (e.g., vision, joint function) simultaneously, it is difficult to clarify the individual effects of these functional declines on walking. However, by imposing vision and joint function restrictions on young men, the effects of these functional declines on walking can be clarified. The authors aimed to determine the effect of restricted vision and range of motion (ROM) of the knee joint on gait properties while walking and ascending or descending stairs. Fifteen healthy young adults performed level walking and stair ascent and descent during control, vision restriction, and knee joint ROM restriction conditions. During level walking, walking speed and step width decreased, and double support time increased significantly with vision and knee joint ROM restrictions. Stance time, step width, and walking angle increased only with knee joint ROM restriction. Stance time, swing time, and double support time were significantly longer in level walking, stair descent, and stair ascent, in that order. The effects of vision and knee joint ROM restrictions were significantly larger than the control conditions. In conclusion, vision and knee joint ROM restrictions affect gait during level walking and stair ascent and descent. This effect is marked in stair ascent with knee joint ROM restriction.

  7. Effects of the application of ankle functional rehabilitation exercise on the ankle joint functional movement screen and isokinetic muscular function in patients with chronic ankle sprain.

    PubMed

    Ju, Sung-Bum; Park, Gi Duck

    2017-02-01

    [Purpose] This study was conducted to investigate the effects of ankle functional rehabilitation exercise on ankle joint functional movement screen results and isokinetic muscular function in patients with chronic ankle sprain. [Subjects and Methods] In this study, 16 patients with chronic ankle sprain were randomized to an ankle functional rehabilitation exercise group (n=8) and a control group (n=8). The ankle functional rehabilitation exercise centered on a proprioceptive sense exercise program, which was applied 12 times for 2 weeks. To verify changes after the application, ankle joint functional movement screen scores and isokinetic muscular function were measured and analyzed. [Results] The ankle functional rehabilitation exercise group showed significant improvements in all items of the ankle joint functional movement screen and in isokinetic muscular function after the exercise, whereas the control group showed no difference after the application. [Conclusion] The ankle functional rehabilitation exercise program can be effectively applied in patients with chronic ankle sprain for the improvement of ankle joint functional movement screen score and isokinetic muscular function.

  8. Effects of the application of ankle functional rehabilitation exercise on the ankle joint functional movement screen and isokinetic muscular function in patients with chronic ankle sprain

    PubMed Central

    Ju, Sung-Bum; Park, Gi Duck

    2017-01-01

    [Purpose] This study was conducted to investigate the effects of ankle functional rehabilitation exercise on ankle joint functional movement screen results and isokinetic muscular function in patients with chronic ankle sprain. [Subjects and Methods] In this study, 16 patients with chronic ankle sprain were randomized to an ankle functional rehabilitation exercise group (n=8) and a control group (n=8). The ankle functional rehabilitation exercise centered on a proprioceptive sense exercise program, which was applied 12 times for 2 weeks. To verify changes after the application, ankle joint functional movement screen scores and isokinetic muscular function were measured and analyzed. [Results] The ankle functional rehabilitation exercise group showed significant improvements in all items of the ankle joint functional movement screen and in isokinetic muscular function after the exercise, whereas the control group showed no difference after the application. [Conclusion] The ankle functional rehabilitation exercise program can be effectively applied in patients with chronic ankle sprain for the improvement of ankle joint functional movement screen score and isokinetic muscular function. PMID:28265157

  9. Joint Force Pre-Deployment Training: An Initial Analysis and Product Definition (Strategic Mobility 21: IT Planning Document for APS Demonstration Document (Task 3.7)

    DTIC Science & Technology

    2010-04-13

    Office of Naval Research. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. a. This statement may be used only on...documents resulting from contracted fundamental research efforts will normally be assigned Distribution Statement A, except for those rare and exceptional...circumstances where there is a high likelihood of disclosing performance characteristics of military systems, or of manufacturing technologies that

  10. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
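The core Gaussianization step can be illustrated with a plain one-dimensional Box-Cox transform (the paper generalises this to multivariate, parametrised maps; the skewed "posterior sample" here is synthetic gamma draws):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# A skewed, non-Gaussian "posterior sample" (e.g. thinned MCMC draws).
sample = rng.gamma(2.0, 1.0, 5000)

# Box-Cox: a power transform whose parameter lambda is chosen by maximum
# likelihood so that the transformed data are as Gaussian as possible.
transformed, lam = stats.boxcox(sample)

skew_before = stats.skew(sample)
skew_after = stats.skew(transformed)
print(lam, skew_before, skew_after)
```

In the multivariate setting of the paper, the quality of the fitted map is judged by how well it reproduces the posterior's credible regions, not by skewness alone.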

  11. A Systems Biology Approach to Synovial Joint Lubrication in Health, Injury, and Disease

    PubMed Central

    Hui, Alexander Y.; McCarty, William J.; Masuda, Koichi; Firestein, Gary S.; Sah, Robert L.

    2013-01-01

    The synovial joint contains synovial fluid (SF) within a cavity bounded by articular cartilage and synovium. SF is a viscous fluid that has lubrication, metabolic, and regulatory functions within synovial joints. SF contains lubricant molecules, including proteoglycan-4 and hyaluronan. SF is an ultrafiltrate of plasma with secreted contributions from cell populations lining and within the synovial joint space, including chondrocytes and synoviocytes. Maintenance of normal SF lubricant composition and function are important for joint homeostasis. In osteoarthritis, rheumatoid arthritis, and joint injury, changes in lubricant composition and function accompany alterations in the cytokine and growth factor environment and increased fluid and molecular transport through joint tissues. Thus, understanding the synovial joint lubrication system requires a multi-faceted study of the various parts of the synovial joint and their interactions. Systems biology approaches at multiple scales are being used to describe the molecular, cellular, and tissue components and their interactions that comprise the functioning synovial joint. Analyses of the transcriptome and proteome of SF, cartilage, and synovium suggest that particular molecules and pathways play important roles in joint homeostasis and disease. Such information may be integrated with physicochemical tissue descriptions to construct integrative models of the synovial joint that ultimately may explain maintenance of health, recovery from injury, or development and progression of arthritis. PMID:21826801

  12. Expressed Likelihood as Motivator: Creating Value through Engaging What’s Real

    PubMed Central

    Higgins, E. Tory; Franks, Becca; Pavarini, Dana; Sehnert, Steen; Manley, Katie

    2012-01-01

    Our research tested two predictions regarding how likelihood can have motivational effects as a function of how a probability is expressed. We predicted that describing the probability of a future event that could be either A or B using the language of high likelihood (“80% A”) rather than low likelihood (“20% B”), i.e., high rather than low expressed likelihood, would make a present activity more real and engaging, as long as the future event had properties relevant to the present activity. We also predicted that strengthening engagement from the high (vs. low) expressed likelihood of a future event would intensify the value of present positive and negative objects (in opposite directions). Both predictions were supported. There was also evidence that this intensification effect from expressed likelihood was independent of the actual probability or valence of the future event. What mattered was whether high versus low likelihood language was used to describe the future event. PMID:23940411

  13. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  14. Semiparametric time-to-event modeling in the presence of a latent progression event.

    PubMed

    Rice, John D; Tsodikov, Alex

    2017-06-01

    In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. © 2016, The International Biometric Society.

  15. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
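The payoff of a mixed-Weibull likelihood over a single Weibull can be sketched on simulated failure data (a two-subpopulation toy with invented parameters, not the paper's EM estimator or its censoring handling):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated times-to-failure from a 2-subpopulation mixed-Weibull population:
# 60% early-failure mode (shape < 1), 40% wear-out mode (shape > 1).
n = 2000
early = rng.random(n) < 0.6
t = np.where(early,
             stats.weibull_min.rvs(0.8, scale=100.0, size=n, random_state=rng),
             stats.weibull_min.rvs(3.0, scale=500.0, size=n, random_state=rng))

def mix_loglik(t, w, c1, s1, c2, s2):
    """Log-likelihood under a two-component mixed-Weibull density."""
    pdf = (w * stats.weibull_min.pdf(t, c1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(t, c2, scale=s2))
    return np.log(pdf).sum()

ll_mix = mix_loglik(t, 0.6, 0.8, 100.0, 3.0, 500.0)       # true mixture
c_hat, _, s_hat = stats.weibull_min.fit(t, floc=0)        # single-Weibull MLE
ll_single = stats.weibull_min.logpdf(t, c_hat, scale=s_hat).sum()
print(ll_mix, ll_single)
```

An EM fit would alternate posterior membership weights (E-step) with per-component Weibull maximisations (M-step), and, as the abstract notes, should be restarted from several initial guesses because of local maxima.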

  16. Functional added value of microprocessor-controlled knee joints in daily life performance of Medicare Functional Classification Level-2 amputees.

    PubMed

    Theeven, Patrick; Hemmen, Bea; Rings, Frans; Meys, Guido; Brink, Peter; Smeets, Rob; Seelen, Henk

    2011-10-01

    To assess the effects of using a microprocessor-controlled prosthetic knee joint on the functional performance of activities of daily living in persons with an above-knee leg amputation. Randomised cross-over trial. Forty-one persons with unilateral above-knee or knee disarticulation limb loss, classified as Medicare Functional Classification Level-2 (MFCL-2). Participants were measured in 3 conditions, i.e. using a mechanically controlled knee joint and two types of microprocessor-controlled prosthetic knee joints. Functional performance level was assessed using a test in which participants performed 17 simulated activities of daily living (Assessment of Daily Activity Performance in Transfemoral amputees test). Performance time was measured and self-perceived level of difficulty was scored on a visual analogue scale for each activity. High levels of within-group variability in functional performance obscured detection of any effects of using a microprocessor-controlled prosthetic knee joint. Data analysis after stratification of the participants into 3 subgroups, i.e. participants with a "low", "intermediate" and "high" functional mobility level, showed that the two higher functional subgroups performed significantly faster using microprocessor-controlled prosthetic knee joints. MFCL-2 amputees constitute a heterogeneous patient group with large variation in functional performance levels. A substantial part of this group seems to benefit from using a microprocessor-controlled prosthetic knee joint when performing activities of daily living.

  17. Periodontal Ligament Entheses and their Adaptive Role in the Context of Dentoalveolar Joint Function

    PubMed Central

    Lin, Jeremy D.; Jang, Andrew T.; Kurylo, Michael P.; Hurng, Jonathan; Yang, Feifei; Yang, Lynn; Pal, Arvin; Chen, Ling; Ho, Sunita P.

    2017-01-01

    Objectives The dynamic bone-periodontal ligament (PDL)-tooth fibrous joint consists of two adaptive functionally graded interfaces (FGI), the PDL-bone and PDL-cementum, that respond to mechanical strain transmitted during mastication. In general, from a materials and mechanics perspective, FGI prevent catastrophic failure during prolonged cyclic loading. This review is a discourse of results gathered from the literature to illustrate the dynamic adaptive nature of the fibrous joint in response to physiologic and pathologic simulated functions, and experimental tooth movement. Methods Historically, studies have investigated soft to hard tissue transitions through analytical techniques that provided insights into structural, biochemical, and mechanical characterization methods. Experimental approaches included two-dimensional to three-dimensional advanced in situ imaging and analytical techniques. These techniques allowed mapping and correlation of deformations to physicochemical and mechanobiological changes within volumes of the complex subjected to concentric and eccentric loading regimes, respectively. Results Tooth movement is facilitated by mechanobiological activity at the interfaces of the fibrous joint and generates elastic discontinuities at these interfaces in response to eccentric loading. Both concentric and eccentric loads mediated cellular responses to strains, and prompted self-regulating mineral forming and resorbing zones that in turn altered the functional space of the joint. Significance A multiscale biomechanics and mechanobiology approach is important for correlating joint function to tissue-level strain-adaptive properties with overall effects on joint form as related to physiologic and pathologic functions.
Elucidating the shift in localization of biomolecules specifically at interfaces during development, function, and therapeutic loading of the joint is critical for developing “functional regeneration and adaptation” strategies with an emphasis on restoring physiologic joint function. PMID:28476202

  18. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.

  19. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
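The conditional-probability split the method relies on can be sketched for one continuous and one ordinal indicator (a bivariate-normal toy with known thresholds and invented parameters, not OpenMx's implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Bivariate normal latent data: a continuous indicator x, and a latent y
# observed only as an ordinal category via fixed thresholds.
rho, thresholds = 0.5, np.array([-0.5, 0.7])
x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], 1000).T
ordinal = np.digitize(y, thresholds)                # categories 0, 1, 2

def joint_loglik(x, ordinal, rho, thresholds):
    """P(x, cat) = P(x) * P(cat | x): the conditional-probability split.
    Given x, latent y is normal with mean rho*x and sd sqrt(1 - rho^2)."""
    cuts = np.concatenate([[-np.inf], thresholds, [np.inf]])
    mu, sd = rho * x, np.sqrt(1 - rho ** 2)
    p_cat = (stats.norm.cdf(cuts[ordinal + 1], mu, sd)
             - stats.norm.cdf(cuts[ordinal], mu, sd))
    return stats.norm.logpdf(x).sum() + np.log(p_cat).sum()

ll_true = joint_loglik(x, ordinal, 0.5, thresholds)   # at the true correlation
ll_zero = joint_loglik(x, ordinal, 0.0, thresholds)   # independence model
print(ll_true, ll_zero)
```

With many ordinal indicators, the conditional term becomes a multivariate normal orthant probability, which is the integration step whose cost the paper works to contain.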

  20. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
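The shrink-toward-the-regional-effect idea can be sketched with a simple empirical-Bayes estimator (a method-of-moments stand-in for MEA's hierarchical Bayesian model; the data are simulated null effects):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical MLE effect estimates for 50 markers in one region, each with
# standard error 1; the true effects are all null.
m, se = 50, 1.0
mle = rng.normal(0.0, se, m)

# Empirical-Bayes shrinkage: treat effects as N(mu, tau^2) and shrink each
# MLE toward the regional mean by the usual variance-ratio factor.
mu = mle.mean()
tau2 = max(mle.var() - se ** 2, 0.0)      # method-of-moments estimate of tau^2
shrunk = mu + tau2 / (tau2 + se ** 2) * (mle - mu)

mse_mle = np.mean(mle ** 2)               # true effects are 0
mse_shrunk = np.mean(shrunk ** 2)
print(mse_mle, mse_shrunk)
```

Because the shrinkage factor is strictly below one, the shrunken estimates always have smaller spread about the regional mean, which is where the MSE gain on null effects comes from.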

  1. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
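The generate-consistent-data step can be caricatured with plain rejection ABC on a two-parameter Arrhenius toy model (the paper's surrogates, MCMC, and posterior pooling are omitted; all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy Arrhenius-style forward model k(T) = A * exp(-E/T), observed at three
# temperatures; A_true, E_true and the tolerance are purely illustrative.
T_obs = np.array([1000.0, 1500.0, 2000.0])
A_true, E_true = 2.0, 3000.0
k_obs = A_true * np.exp(-E_true / T_obs)

# Rejection ABC: draw parameters from the prior, simulate the observables,
# and keep draws whose simulation falls within a tolerance of the data.
n_draws = 100_000
A_prior = rng.uniform(0.5, 5.0, n_draws)
E_prior = rng.uniform(1000.0, 6000.0, n_draws)
sims = A_prior[:, None] * np.exp(-E_prior[:, None] / T_obs[None, :])
dist = np.abs(sims - k_obs).max(axis=1)
keep = dist < 0.01
posterior_A, posterior_E = A_prior[keep], E_prior[keep]
print(keep.sum(), posterior_A.mean(), posterior_E.mean())
```

The accepted cloud is strongly correlated in (A, E), which echoes the paper's point about propagating correlated Arrhenius parameters rather than independent error bars.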

  2. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
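
The closed-form likelihood that the Gumbel assumption buys is just a softmax over systematic utilities. A minimal sketch with simulated data (a hypothetical stand-in for a travel-choice data set, not the paper's models):

```python
import numpy as np

def mnl_log_likelihood(beta, X, y):
    """Closed-form multinomial logit log-likelihood.

    Gumbel (type-I extreme value) errors give choice probabilities
    P(j) = exp(V_j) / sum_m exp(V_m), i.e. a softmax over utilities.
    X: (n, J, k) alternative-specific attributes; y: index of chosen alternative.
    """
    V = X @ beta                               # systematic utilities, shape (n, J)
    V = V - V.max(axis=1, keepdims=True)       # stabilize the exponentials
    logp = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return logp[np.arange(len(y)), y].sum()

# Simulated choices: utility maximization with Gumbel noise, exactly as MNL assumes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3, 2))               # 100 choice situations, 3 alternatives
beta_true = np.array([1.0, -0.5])
U = X @ beta_true + rng.gumbel(size=(100, 3))
y = U.argmax(axis=1)
ll = mnl_log_likelihood(beta_true, X, y)       # beats the uniform-choice baseline
```

When the error terms are not Gumbel, this closed form no longer describes the data-generating process, which is exactly the violation the proposed semi-nonparametric test is designed to detect.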

  5. Etanercept Injection

    MedlinePlus

    ... own joints, causing pain, swelling, and loss of function) in adults, psoriatic arthritis (condition that causes joint ... its own joints, causing pain, swelling, loss of function, and delays in growth and development) in children ...

  6. Atypical prefrontal cortical responses to joint/non-joint attention in children with autism spectrum disorder (ASD): A functional near-infrared spectroscopy study

    PubMed Central

    Zhu, Huilin; Li, Jun; Fan, Yuebo; Li, Xinge; Huang, Dan; He, Sailing

    2015-01-01

    Autism spectrum disorder (ASD) is a neuro-developmental disorder, characterized by impairments in one’s capacity for joint attention. In this study, functional near-infrared spectroscopy (fNIRS) was applied to study the differences in activation and functional connectivity in the prefrontal cortex between children with autism spectrum disorder (ASD) and typically developing (TD) children. 21 ASD and 20 TD children were recruited to perform joint and non-joint attention tasks. Compared with TD children, children with ASD showed reduced activation and atypical functional connectivity pattern in the prefrontal cortex during joint attention. The atypical development of left prefrontal cortex might play an important role in social cognition defects of children with ASD. PMID:25798296

  7. Joint laxity and the relationship between muscle strength and functional ability in patients with osteoarthritis of the knee.

    PubMed

    van der Esch, M; Steultjens, M; Knol, D L; Dinant, H; Dekker, J

    2006-12-15

    To establish the impact of knee joint laxity on the relationship between muscle strength and functional ability in osteoarthritis (OA) of the knee. A cross-sectional study of 86 patients with OA of the knee was conducted. Tests were performed to determine varus-valgus laxity, muscle strength, and functional ability. Laxity was assessed using a device that measures the angular deviation of the knee in the frontal plane. Muscle strength was measured using a computer-driven isokinetic dynamometer. Functional ability was assessed by observation (100-meter walking test) and self-report (Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC]). Regression analyses were performed to assess the impact of joint laxity on the relationship between muscle strength and functional ability. In regression analyses, the interaction between muscle strength and joint laxity contributed to the variance in both walking time (P = 0.002) and WOMAC score (P = 0.080). The slope of the regression lines indicated that the relationship between muscle strength and functional ability (walking time, WOMAC) was stronger in patients with high knee joint laxity. Patients with knee OA and high knee joint laxity show a stronger relationship between muscle strength and functional ability than patients with OA and low knee joint laxity. Patients with OA, high knee joint laxity, and low muscle strength are most at risk of being disabled.
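
As a rough illustration of the kind of moderation analysis described (not the study's data or software), an interaction term captures how laxity changes the strength-function slope; all numbers below are simulated:

```python
import numpy as np

# Hypothetical moderation analysis in the spirit of the study: does joint
# laxity modify the strength-function relationship?  All numbers simulated.
rng = np.random.default_rng(6)
n = 86                                         # same sample size as the study
strength = rng.normal(100.0, 20.0, n)          # isokinetic strength (arbitrary units)
laxity = rng.normal(5.0, 2.0, n)               # varus-valgus laxity (degrees)
walk = 80.0 - 0.2 * strength - 0.05 * strength * laxity + rng.normal(0.0, 3.0, n)

# Regression with a strength x laxity interaction term; a nonzero interaction
# coefficient means the strength-function slope depends on laxity.
X = np.column_stack([np.ones(n), strength, laxity, strength * laxity])
coef, *_ = np.linalg.lstsq(X, walk, rcond=None)
interaction_hat = coef[3]
```

A significant negative interaction would mirror the study's finding: the higher the laxity, the more steeply function varies with strength.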

  8. Ball and Socket Ankle: Mechanism and Computational Evidence of Concept.

    PubMed

    Jastifer, James R; Gustafson, Peter A; Labomascus, Aaron; Snoap, Tyler

    The ball and socket ankle joint is a morphologically abnormal joint characterized by rounding of the articular surface of the talus. Other than anecdotal observation, little evidence has been presented to describe the development of this deformity. The purpose of the present study was to review ankle and subtalar joint mechanics and to kinematically examine the functional combination of these joints as a mechanism of the ball and socket ankle deformity. We reviewed functional representations of the ankle joint, subtalar joint, and ball and socket ankle deformity. A computational study of joint kinematics was then performed using a 3-dimensional model derived from a computed tomography scan of a ball and socket deformity. The joint kinematics were captured by creating a "virtual map" of the combined kinematics of the ankle and subtalar joints in the respective models. The ball and socket ankle deformity produces functionally similar kinematics to a combination of the ankle and subtalar joints. The findings of the present study support the notion that a possible cause of the ball and socket deformity is bony adaptation that compensates for a functional deficit of the ankle and subtalar joints. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Multiscale biomechanical responses of adapted bone-periodontal ligament-tooth fibrous joints

    PubMed Central

    Jang, Andrew T.; Merkle, Arno; Fahey, Kevin; Gansky, Stuart A.; Ho, Sunita P.

    2015-01-01

    Reduced functional loads cause adaptations in organs. In this study, temporal adaptations of bone-ligament-tooth fibrous joints to reduced functional loads were mapped using a holistic approach. Systematic studies were performed to evaluate organ-level and tissue-level adaptations in specimens harvested periodically from rats given powder food for 6 months (N = 60 over 8,12,16,20, and 24 weeks). Bone-periodontal ligament (PDL)-tooth fibrous joint adaptation was evaluated by comparing changes in joint stiffness with changes in functional space between the tooth and alveolar bony socket. Adaptations in tissues included mapping changes in the PDL and bone architecture as observed from collagen birefringence, bone hardness and volume fraction in rats fed soft foods (soft diet, SD) compared to those fed hard pellets as a routine diet (hard diet, HD). In situ biomechanical testing on harvested fibrous joints revealed increased stiffness in SD groups (SD:239-605 N/mm) (p<0.05) at 8 and 12 weeks. Increased joint stiffness in early development phase was due to decreased functional space (at 8wks change in functional space was −33 µm, at 12wks change in functional space was −30 µm) and shifts in tissue quality as highlighted by birefringence, architecture and hardness. These physical changes were not observed in joints that were well into function, that is, in rodents older than 12 weeks of age. Significant adaptations in older groups were highlighted by shifts in bone growth (bone volume fraction 24wks: Δ-0.06) and bone hardness (8wks: Δ−0.04 GPa, 16 wks: Δ−0.07 GPa, 24wks: Δ−0.06 GPa). The response rate (N/s) of joints to mechanical loads decreased in SD groups. Results from the study showed that joint adaptation depended on age. The initial form-related adaptation (observed change in functional space) can challenge strain-adaptive nature of tissues to meet functional demands with increasing age into adulthood. 
The coupled effect between functional space in the bone-PDL-tooth complex and strain-adaptive nature of tissues is necessary to accommodate functional demands, and is temporally sensitive despite joint malfunction. From an applied science perspective, we propose that adaptations are registered as functional history in tissues and joints. PMID:26151121

  10. A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins

    PubMed Central

    Knudsen, Bjarne; Miyamoto, Michael M.

    2001-01-01

    Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
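
The likelihood ratio machinery is generic even though the paper's statistic is tailored to rate shifts at protein sites. A minimal sketch with hypothetical Poisson substitution counts for two clades:

```python
import numpy as np
from scipy.stats import chi2, poisson

# Hypothetical substitution counts at one site in two clades.
x1, n1 = 18, 100      # 18 changes over 100 branch units in clade 1
x2, n2 = 35, 100      # 35 changes over 100 branch units in clade 2

# Null: a single shared Poisson rate; alternative: clade-specific rates.
rate0 = (x1 + x2) / (n1 + n2)
ll0 = poisson.logpmf(x1, n1 * rate0) + poisson.logpmf(x2, n2 * rate0)
ll1 = poisson.logpmf(x1, x1) + poisson.logpmf(x2, x2)   # MLE: mean_i = x_i

# Twice the log-likelihood ratio is asymptotically chi-square with df equal to
# the number of extra free parameters (here 1).
lrt = 2.0 * (ll1 - ll0)
p_value = chi2.sf(lrt, df=1)
```

The significance value attached to the rate difference is what distinguishes this approach from purely descriptive comparisons, as the abstract emphasizes.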

  11. Estimating a Logistic Discrimination Function When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
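
A minimal sketch of this idea under one simple contamination assumption, namely that each true case is mislabeled as a control with probability λ, so P(labeled case | x) = (1 − λ)·logistic(a + bx). The data are simulated; this is not the authors' code or their examples:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Simulated cohort: true case status follows a logistic model, and a fraction
# lam_true of the cases is mislabeled as controls (assumed contamination scheme).
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
p = expit(-0.5 + 1.2 * x)                      # true case probability
true_case = rng.random(n) < p
lam_true = 0.2
z = true_case & (rng.random(n) > lam_true)     # observed label: case unless mislabeled

def neg_log_lik(theta):
    """Mislabeling model: P(labeled case | x) = (1 - lam) * logistic(a + b*x)."""
    a, b, logit_lam = theta
    lam = expit(logit_lam)                     # keep the mislabeling rate in (0, 1)
    q = (1.0 - lam) * expit(a + b * x)
    return -np.sum(np.where(z, np.log(q), np.log1p(-q)))

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000})
a_hat, b_hat, lam_hat = fit.x[0], fit.x[1], expit(fit.x[2])
```

The fitted λ plays the role of the "defect": the observed-label curve saturates at 1 − λ rather than 1, which is what makes the mislabeling rate estimable from the data alone.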

  12. Characterization and Quantification of Uncertainty in the NARCCAP Regional Climate Model Ensemble and Application to Impacts on Water Systems

    NASA Astrophysics Data System (ADS)

    Mearns, L. O.; Sain, S. R.; McGinnis, S. A.; Steinschneider, S.; Brown, C. M.

    2015-12-01

    In this talk we present the development of a joint Bayesian Probabilistic Model for the climate change results of the North American Regional Climate Change Assessment Program (NARCCAP) that uses a unique prior in the model formulation. We use the climate change results (joint distribution of seasonal temperature and precipitation changes (future vs. current)) from the global climate models (GCMs) that provided boundary conditions for the six different regional climate models used in the program as informative priors for the bivariate Bayesian Model. The two variables involved are seasonal temperature and precipitation over sub-regions (i.e., Bukovsky Regions) of the full NARCCAP domain. The basic approach to the joint Bayesian hierarchical model follows the approach of Tebaldi and Sansó (2009). We compare model results using informative (i.e., GCM information) as well as uninformative priors. We apply these results to the Water Evaluation and Planning System (WEAP) model for the Colorado Springs Utility in Colorado. We investigate the layout of the joint pdfs in the context of the water model sensitivities to ranges of temperature and precipitation results to determine the likelihoods of future climate conditions that cannot be accommodated by possible adaptation options. Comparisons may also be made with joint pdfs formed from the CMIP5 collection of global climate models and empirically downscaled to the region of interest.

  13. A stochastic Iwan-type model for joint behavior variability modeling

    NASA Astrophysics Data System (ADS)

    Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.

    2015-08-01

    This paper focuses on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data are available and exhibit a large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction differing from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e. stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts and their corresponding parameters, i.e. the hyperparameters of the model, are identified using a maximum likelihood strategy. Proceeding with a Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fit well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions, which supports the adequacy of the modeling effort.

  14. Economic growth and carbon emission control

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenyu

    The question about whether environmental improvement is compatible with continued economic growth remains unclear and requires further study in a specific context. This study intends to provide insight on the potential for carbon emissions control in the absence of international agreement, and connect the empirical analysis with theoretical framework. The Chinese electricity generation sector is used as a case study to demonstrate the problem. Both social planner and private problems are examined to derive the conditions that define the optimal level of production and pollution. The private problem will be demonstrated under the emission regulation using an emission tax, an input tax and an abatement subsidy respectively. The social optimal emission flow is imposed into the private problem. To provide tractable analytical results, a Cobb-Douglas type production function is used to describe the joint production process of the desired output and undesired output (i.e., electricity and emissions). A modified Hamiltonian approach is employed to solve the system and the steady state solutions are examined for policy implications. The theoretical analysis suggests that the ratio of emissions to desired output (refer to 'emission factor'), is a function of productive capital and other parameters. The finding of non-constant emission factor shows that reducing emissions without further cutting back the production of desired outputs is feasible under some circumstances. Rather than an ad hoc specification, the optimal conditions derived from our theoretical framework are used to examine the relationship between desired output and emission level. Data comes from the China Statistical Yearbook and China Electric Power Yearbook and provincial information of electricity generation for the year of 1993-2003 are used to estimate the Cobb-Douglas type joint production by the full information maximum likelihood (FIML) method. 
The empirical analysis sheds light on the optimal policies of emissions control required for achieving the social goal in a private context. The results suggest that the efficiency of abatement technology is crucial for the timing of executing the emission tax, and that an emission tax is preferred to an input tax as long as the detection of emissions is not costly and abatement technology is efficient. Keywords: Economic growth, Carbon emission, Power generation, Joint production, China
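
The study estimates the joint production system by FIML; as a much simpler sketch of the Cobb-Douglas ingredient only, the elasticities of a single-output Cobb-Douglas function can be recovered by log-linear least squares on simulated data (all numbers hypothetical):

```python
import numpy as np

# Log-linear least-squares sketch of a Cobb-Douglas production function
# Y = A * K^alpha * L^beta, i.e. log Y = log A + alpha*log K + beta*log L.
rng = np.random.default_rng(5)
n = 300
logK = rng.normal(3.0, 0.5, n)                 # simulated log capital
logL = rng.normal(2.0, 0.4, n)                 # simulated log labor
logY = 0.7 + 0.6 * logK + 0.3 * logL + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), logK, logL])
coef, *_ = np.linalg.lstsq(X, logY, rcond=None)
logA_hat, alpha_hat, beta_hat = coef           # recover the elasticities
```

FIML differs from this sketch in estimating the desired-output and emission equations jointly with cross-equation restrictions, but the log-linearization step is the same.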

  15. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ~3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ~3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s^-1 Mpc^-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s^-1 Mpc^-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ~10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ~60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  16. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
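
The computational core of the proposal, an otherwise high-dimensional integral rewritten as a multivariate normal CDF, can be illustrated with an orthant probability whose value is known in closed form (a toy integral, not the paper's two-part likelihood):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Equicorrelated 4-dimensional normal: the kind of integral left over once
# latent Gaussian components are folded into a multivariate normal CDF.
d, rho = 4, 0.5
cov = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
upper = np.zeros(d)

# Direct evaluation of the orthant probability P(X_1 < 0, ..., X_4 < 0).
# For correlation 1/2 this has the known value 1/(d+1) = 0.2.
p_cdf = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(upper)

# Crude Monte Carlo check of the same integral.
rng = np.random.default_rng(2)
draws = rng.multivariate_normal(np.zeros(d), cov, size=200_000)
p_mc = np.mean(np.all(draws < upper, axis=1))
```

The CDF call evaluates the integral far more cheaply and accurately than brute-force simulation, which is the efficiency gain the abstract claims for the transformed marginal likelihood.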

  17. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.

  18. Remote detection of riverine traffic using an ad hoc wireless sensor network

    NASA Astrophysics Data System (ADS)

    Athan, Stephan P.

    2005-05-01

    Trafficking of illegal drugs on riverine and inland waterways continues to proliferate in South America. While there has been a successful joint effort to cut off overland and air trafficking routes, there exists a vast river network and Amazon region consisting of over 13,000 water miles that remains difficult to adequately monitor, increasing the likelihood of narcotics moving along this extensive river system. Hence, an effort is underway to provide remote unattended riverine detection in lieu of manned or attended detection measures.

  19. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
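
A minimal sketch of the thermodynamic (power-posterior) estimator on a conjugate toy model where the marginal likelihood is known exactly; exact Gaussian draws stand in for the per-temperature MCMC runs:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Conjugate toy model with a known marginal likelihood:
#   y_i ~ N(theta, 1), i = 1..n, with prior theta ~ N(0, 1).
rng = np.random.default_rng(3)
n = 10
y = rng.normal(loc=0.8, size=n)

# Power posterior p_t(theta) ~ L(theta)^t * p(theta) is Gaussian here, so exact
# draws stand in for the MCMC run at each heating coefficient t.
temps = np.linspace(0.0, 1.0, 21)
E_loglik = np.empty_like(temps)
for i, t in enumerate(temps):
    prec = t * n + 1.0                          # posterior precision at temperature t
    mean = t * y.sum() / prec
    theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=20_000)
    loglik = -0.5 * n * np.log(2 * np.pi) - 0.5 * ((y[:, None] - theta) ** 2).sum(axis=0)
    E_loglik[i] = loglik.mean()

# Thermodynamic identity: log Z = integral over t in [0, 1] of E_{p_t}[log L],
# approximated here by the trapezoidal rule over the temperature ladder.
log_Z_ti = float(np.sum(0.5 * (E_loglik[1:] + E_loglik[:-1]) * np.diff(temps)))

# Exact answer: marginally y ~ N(0, I + 11^T).
log_Z_exact = multivariate_normal(mean=np.zeros(n), cov=np.eye(n) + 1.0).logpdf(y)
```

The ladder of runs between prior (t = 0) and posterior (t = 1) is exactly the structure the abstract describes; the trapezoidal sum recovers the analytic log marginal likelihood closely.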

  20. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    PubMed

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. • Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. • Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. • CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
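
A toy version of the joint-likelihood idea with hypothetical Gaussian tolerance curves for five coexisting species (the real method characterizes tolerances from global specimen data, not assumed Gaussians):

```python
import numpy as np

# Hypothetical Gaussian climate tolerances (mean annual temperature, °C) for
# five coexisting species; the CRACLE-style estimate is the climate value that
# maximizes the joint likelihood of observing all of them together.
mu = np.array([12.0, 15.0, 14.0, 16.0, 13.0])   # tolerance optima
sd = np.array([4.0, 3.0, 5.0, 2.5, 3.5])        # tolerance breadths

grid = np.linspace(0.0, 30.0, 3001)
# Joint log-likelihood of coexistence = sum of per-species log tolerances.
joint_ll = sum(-0.5 * ((grid - m) / s) ** 2 - np.log(s) for m, s in zip(mu, sd))
t_hat = grid[np.argmax(joint_ll)]

# For Gaussian tolerances the maximizer is the precision-weighted mean,
# which gives a closed-form check on the grid search.
w = 1.0 / sd**2
t_closed = np.sum(w * mu) / np.sum(w)
```

Species with narrow tolerances pull the estimate hardest, which matches the intuition that climate specialists are the most informative members of a community.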

  1. Application of the two-stage clonal expansion model in characterizing the joint effect of exposure to two carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zielinski, J.M.; Krewski, D.

    1992-12-31

    In this paper, we describe application of the two-stage clonal expansion model to characterize the joint effect of exposure to two carcinogens. This biologically based model of carcinogenesis provides a useful framework for the quantitative description of carcinogenic risks and for defining agents that act as initiators, promoters, and completers. Depending on the mechanism of action, the agent-specific relative risk following exposure to two carcinogens can be additive, multiplicative, or supramultiplicative, with supra-additive relative risk indicating a synergistic effect between the two agents. Maximum-likelihood methods for fitting the two-stage clonal expansion model with intermittent exposure to two carcinogens are described and illustrated, using data on lung-cancer mortality among Colorado uranium miners exposed to both radon and tobacco smoke.

  2. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

    Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies; and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
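
The gap between component-wise and joint coverage that motivates the multivariate approach can be seen in a few lines; the correlated "estimates" below are drawn directly from a bivariate normal rather than obtained from a fitted CTMC:

```python
import numpy as np
from scipy.stats import norm

# Two correlated parameter estimates per simulated data set, each with a
# nominal 95% confidence interval (true values are zero by construction).
rng = np.random.default_rng(4)
n_sim, rho = 20_000, 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
est = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_sim)

z = norm.ppf(0.975)
inside = np.abs(est) <= z                   # CI covers the truth, per component
comp_cov = inside.mean(axis=0)              # component-wise coverage, ~0.95 each
joint_cov = inside.all(axis=1).mean()       # joint coverage, strictly below 0.95
```

Each interval covers ~95% of the time, yet both cover simultaneously noticeably less often, and how much less depends on the dependency between the estimates, which is exactly the information a component-wise assessment discards.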

  3. Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.

    PubMed

    Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien

    2017-01-01

    Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.

  4. Joint awareness after total knee arthroplasty is affected by pain and quadriceps strength.

    PubMed

    Hiyama, Y; Wada, O; Nakakita, S; Mizuno, K

    2016-06-01

    There is a growing interest in the use of patient-reported outcomes to provide a more patient-centered view on treatment. Forgetting the artificial joint can be regarded as the goal in joint arthroplasty. The goals of the study were to describe changes in awareness of the artificial joint after total knee arthroplasty (TKA), and to determine which factors among pain, knee range of motion (ROM), quadriceps strength, and functional ability affect joint awareness after TKA. The hypothesis was that patients undergoing TKA demonstrate changes in joint awareness and that joint awareness is associated with pain, knee ROM, quadriceps strength, and functional ability. This prospective cohort study comprised 63 individuals undergoing TKA, evaluated at 1, 6, and 12 months postoperatively. Outcomes included joint awareness assessed using the Forgotten Joint Score (FJS), pain score, knee ROM, quadriceps strength, and functional ability. Fifty-eight individuals completed all postoperative assessments. All measures except knee extension ROM improved from 1 to 6 months. However, there were no differences in any measures from 6 to 12 months. The FJS was most strongly affected by pain at 1 month and by quadriceps strength at 6 and 12 months. Patients following TKA demonstrate improvements in joint awareness and function within 6 months after surgery, but reach a plateau from 6 to 12 months. Quadriceps strength could contribute to this plateau of joint awareness. Prospective cohort study, IV. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  5. Effects of proprioceptive circuit exercise on knee joint pain and muscle function in patients with knee osteoarthritis.

    PubMed

    Ju, Sung-Bum; Park, Gi Duck; Kim, Sang-Soo

    2015-08-01

    [Purpose] This study applied proprioceptive circuit exercise to patients with degenerative knee osteoarthritis and examined its effects on knee joint muscle function and the level of pain. [Subjects] In this study, 14 patients with knee osteoarthritis in two groups, a proprioceptive circuit exercise group (n = 7) and control group (n = 7), were examined. [Methods] IsoMed 2000 (D&R Ferstl GmbH, Hemau, Germany) was used to assess knee joint muscle function, and a Visual Analog Scale was used to measure pain level. [Results] In the proprioceptive circuit exercise group, knee joint muscle function and pain levels improved significantly, whereas in the control group, no significant improvement was observed. [Conclusion] A proprioceptive circuit exercise may be an effective way to strengthen knee joint muscle function and reduce pain in patients with knee osteoarthritis.

  6. Brucella melitensis prosthetic joint infection in a traveller returning to the UK from Thailand: Case report and review of the literature.

    PubMed

    Lewis, Joseph M; Folb, Jonathan; Kalra, Sanjay; Squire, S Bertel; Taegtmeyer, Miriam; Beeching, Nick J

    Brucella spp. prosthetic joint infections are infrequently reported in the literature, particularly in returning travellers, and optimal treatment is unknown. We describe a prosthetic joint infection (PJI) caused by Brucella melitensis in a traveller returning to the UK from Thailand, which we believe to be the first detailed report of brucellosis in a traveller returning from this area. The 23 patients with Brucella-related PJI reported in the literature are summarised, together with our case. The diagnosis of Brucella-related PJI is difficult to make; only 30% of blood cultures and 75% of joint aspiration cultures were positive in the reported cases. Culture of intraoperative samples provides the best diagnostic yield. In the absence of radiological evidence of joint loosening, combination antimicrobial therapy alone may be appropriate treatment in the first instance; this was successful in 6/7 [86%] of patients, though small numbers of patients and the likelihood of reporting bias warrant caution in drawing any firm conclusions about optimal treatment. Aerosolisation of synovial fluid during joint aspiration procedures and nosocomial infection have been described. Brucella-related PJI should be considered in the differential diagnosis for travellers returning from endemic areas with PJI, including Thailand. Personal protective equipment including a fit-tested filtering face piece-3 (FFP3) mask or equivalent is recommended for personnel carrying out joint aspiration when brucellosis is suspected. Travellers can reduce the risk of brucellosis by avoiding unpasteurised dairy products and animal contact (particularly on farms and in abattoirs) in endemic areas and should be counselled regarding these risks as part of their pre-travel assessment. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. Mobile Phone-Based Joint Angle Measurement for Functional Assessment and Rehabilitation of Proprioception

    PubMed Central

    Mourcou, Quentin; Fleury, Anthony; Diot, Bruno; Franco, Céline; Vuillerme, Nicolas

    2015-01-01

    Assessment of joint functional and proprioceptive abilities is essential for balance, posture, and motor control rehabilitation. Joint functional ability refers to the capacity of movement of the joint. It may be evaluated by measuring the joint range of motion (ROM). Proprioception can be defined as the perception of the position and of the movement of various body parts in space. Its role is essential in sensorimotor control for movement acuity, joint stability, coordination, and balance. Its clinical evaluation is commonly based on the assessment of the joint position sense (JPS). Both ROM and JPS measurements require estimating angles through a goniometer, scoliometer, laser-pointer, and bubble or digital inclinometer. With the arrival of Smartphones, these costly clinical tools are increasingly being replaced. Beyond evaluation, maintaining and/or improving joint functional and proprioceptive abilities by training with physical therapy is important for long-term management. This review aims to report Smartphone applications used for measuring and improving functional and proprioceptive abilities. It identifies that Smartphone applications are reliable for clinical measurements and are mainly used to assess ROM and JPS. However, there is a lack of studies on Smartphone applications that can be used autonomously to provide physical therapy exercises at home. PMID:26583101

  8. Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures.

    PubMed

    Stamatakis, Alexandros; Ott, Michael

    2008-12-27

    The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations, which typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
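
    The 'gappy' bookkeeping can be sketched with a toy example in which each partition contributes a log-likelihood term only for the taxa actually sampled in it, rather than padding absent taxa with gap columns. The alignment and the simple base-frequency likelihood below are hypothetical stand-ins for a real phylogenetic model.

```python
import math

# Toy "gappy" multi-gene alignment: each gene (partition) has sequences only
# for a subset of taxa. Data and frequencies are hypothetical.
alignment = {
    "gene1": {"taxonA": "ACGT", "taxonB": "ACGA"},   # taxonC missing here
    "gene2": {"taxonA": "GGCC", "taxonC": "GGCA"},   # taxonB missing here
}
freqs = {"A": 0.3, "C": 0.2, "G": 0.3, "T": 0.2}

def gappy_log_likelihood(alignment, freqs):
    """Sum per-partition log-likelihoods over the sequences actually present,
    skipping absent taxa entirely instead of storing gap columns for them."""
    total = 0.0
    for seqs in alignment.values():      # one block of terms per partition
        for seq in seqs.values():        # only the taxa sampled for this gene
            total += sum(math.log(freqs[base]) for base in seq)
    return total

print(round(gappy_log_likelihood(alignment, freqs), 3))
```

    The data structure only ever touches (gene, taxon) pairs that exist, which is the memory- and cache-saving idea behind the per-partition layout.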

  9. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
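
    A minimal sketch of likelihood-based fitting of spike-train parameters, under a much simpler assumption than the history-dependent model above: inter-spike intervals (ISIs) are exponential with a single unknown rate, so the negative log-likelihood is convex and a one-dimensional search recovers the closed-form MLE.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 5.0                                      # spikes/s (assumed)
isis = rng.exponential(1.0 / true_rate, size=2000)   # simulated ISIs

def nll(rate, isis):
    # Negative log-likelihood of exponential ISIs: -n*log(rate) + rate*sum(isis)
    return -len(isis) * np.log(rate) + rate * isis.sum()

rates = np.linspace(0.5, 20.0, 4000)
mle = rates[np.argmin([nll(r, isis) for r in rates])]
print(mle, 1.0 / isis.mean())    # grid minimum vs closed-form MLE n/sum(ISIs)
```

    The grid minimum agrees with the analytic MLE 1/mean(ISI), illustrating the convex, well-behaved case; the paper's point is that history dependence breaks this convexity guarantee while the minimum can still be a good estimate.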

  10. Multi-subject Manifold Alignment of Functional Network Structures via Joint Diagonalization.

    PubMed

    Nenning, Karl-Heinz; Kollndorfer, Kathrin; Schöpf, Veronika; Prayer, Daniela; Langs, Georg

    2015-01-01

    Functional magnetic resonance imaging group studies rely on the ability to establish correspondence across individuals. This enables location specific comparison of functional brain characteristics. Registration is often based on morphology and does not take variability of functional localization into account. This can lead to a loss of specificity, or confounds when studying diseases. In this paper we propose multi-subject functional registration by manifold alignment via coupled joint diagonalization. The functional network structure of each subject is encoded in a diffusion map, where functional relationships are decoupled from spatial position. Two-step manifold alignment estimates initial correspondences between functionally equivalent regions. Then, coupled joint diagonalization establishes common eigenbases across all individuals, and refines the functional correspondences. We evaluate our approach on fMRI data acquired during a language paradigm. Experiments demonstrate the benefits in matching accuracy achieved by coupled joint diagonalization compared to previously proposed functional alignment approaches, or alignment based on structural correspondences.

  11. Mental health, fatigue and function are associated with increased risk of disease flare following TNF inhibitor tapering in patients with rheumatoid arthritis: an exploratory analysis of data from the Optimizing TNF Tapering in RA (OPTTIRA) trial.

    PubMed

    Bechman, Katie; Sin, Fang En; Ibrahim, Fowzia; Norton, Sam; Matcham, Faith; Scott, David Lloyd; Cope, Andrew; Galloway, James

    2018-01-01

    Tapering of anti-tumour necrosis factor (TNF) therapy appears feasible, safe and effective in selected patients with rheumatoid arthritis (RA). Depression is highly prevalent in RA and may impact on flare incidence through various mechanisms. This study aims to investigate whether psychological states predict flare in patients tapering their anti-TNF therapy. This study is a post-hoc analysis of the Optimizing TNF Tapering in RA trial, a multicentre, randomised, open-label study investigating anti-TNF tapering in RA patients with sustained low disease activity. Patient-reported outcomes (Health Assessment Questionnaire, EuroQol 5-dimension scale, Functional Assessment of Chronic Illness Therapy fatigue scale (FACIT-F), 36-Item Short Form Survey (SF-36)) were collected at baseline. The primary outcome was flare, defined as an increase in 28-joint count Disease Activity Score (DAS28) ≥0.6 and ≥1 swollen joint. Discrete-time survival models were used to identify patient-reported outcomes that predict flare. Ninety-seven patients were randomised to taper their anti-TNF dose by either 33% or 66%. Forty-one patients flared. Higher baseline DAS28 score was associated with flare (adjusted HR 1.96 (95% CI 1.18 to 3.24), p=0.01). Disability (SF-36 physical component score), fatigue (FACIT-F) and mental health (SF-36 mental health subscale (MH)) predicted flare in unadjusted models. In multivariate analyses, only SF-36 MH remained a statistically significant predictor of flare (adjusted HR per 10 units 0.74 (95% CI 0.60 to 0.93), p=0.01). Baseline DAS28 and mental health status are independently associated with flare in patients who taper their anti-TNF therapy. Fatigue and function also associate with flare but the effect disappears when adjusting for confounders. Given these findings, mental health and functional status should be considered in anti-TNF tapering decisions in order to optimise the likelihood of success.
EudraCT Number: 2010-020738-24; ISRCTN: 28955701; Post-results.

  12. Mental health, fatigue and function are associated with increased risk of disease flare following TNF inhibitor tapering in patients with rheumatoid arthritis: an exploratory analysis of data from the Optimizing TNF Tapering in RA (OPTTIRA) trial

    PubMed Central

    Bechman, Katie; Sin, Fang En; Ibrahim, Fowzia; Norton, Sam; Matcham, Faith; Scott, David Lloyd; Cope, Andrew; Galloway, James

    2018-01-01

    Background Tapering of anti-tumour necrosis factor (TNF) therapy appears feasible, safe and effective in selected patients with rheumatoid arthritis (RA). Depression is highly prevalent in RA and may impact on flare incidence through various mechanisms. This study aims to investigate whether psychological states predict flare in patients tapering their anti-TNF therapy. Methods This study is a post-hoc analysis of the Optimizing TNF Tapering in RA trial, a multicentre, randomised, open-label study investigating anti-TNF tapering in RA patients with sustained low disease activity. Patient-reported outcomes (Health Assessment Questionnaire, EuroQol 5-dimension scale, Functional Assessment of Chronic Illness Therapy fatigue scale (FACIT-F), 36-Item Short Form Survey (SF-36)) were collected at baseline. The primary outcome was flare, defined as an increase in 28-joint count Disease Activity Score (DAS28) ≥0.6 and ≥1 swollen joint. Discrete-time survival models were used to identify patient-reported outcomes that predict flare. Results Ninety-seven patients were randomised to taper their anti-TNF dose by either 33% or 66%. Forty-one patients flared. Higher baseline DAS28 score was associated with flare (adjusted HR 1.96 (95% CI 1.18 to 3.24), p=0.01). Disability (SF-36 physical component score), fatigue (FACIT-F) and mental health (SF-36 mental health subscale (MH)) predicted flare in unadjusted models. In multivariate analyses, only SF-36 MH remained a statistically significant predictor of flare (adjusted HR per 10 units 0.74 (95% CI 0.60 to 0.93), p=0.01). Conclusions Baseline DAS28 and mental health status are independently associated with flare in patients who taper their anti-TNF therapy. Fatigue and function also associate with flare but the effect disappears when adjusting for confounders. Given these findings, mental health and functional status should be considered in anti-TNF tapering decisions in order to optimise the likelihood of success.
Trial registration numbers EudraCT Number: 2010-020738-24; ISRCTN: 28955701; Post-results. PMID:29862047

  13. [Temporo-mandibular joint. Morpho-functional considerations].

    PubMed

    Scutariu, M D; Indrei, Anca

    2004-01-01

    The temporo-mandibular joint is distinguished from most other synovial joints of the body by two features: 1. the two jointed components carry teeth, whose position and occlusion strongly influence the movements of the temporo-mandibular joint, and 2. its articular surfaces are not covered by hyaline cartilage, but by a dense, fibrous tissue. This paper describes the parts of the temporo-mandibular joint: the articular surfaces (the condylar process of the mandible and the glenoid part of the temporal bone), the fibrocartilaginous disc interposed between the mandibular and temporal surfaces, the fibrous capsule of the temporo-mandibular joint and the ligaments of this joint. All these parts are strongly adapted to the important functions of the temporo-mandibular joint.

  14. Lower Limb Osteoarthritis and the Risk of Falls in a Community-Based Longitudinal Study of Adults with and without Osteoarthritis

    PubMed Central

    Doré, Adam L.; Golightly, Yvonne M.; Mercer, Vicki S.; Shi, Xiaoyan A.; Renner, Jordan B.; Jordan, Joanne M.; Nelson, Amanda E.

    2014-01-01

    Objective Knee and hip osteoarthritis (OA) are known risk factors for falls, but whether together they additionally contribute to falls risk is unknown. This study utilizes a biracial cohort of men and women to examine the influence of lower limb OA burden on the risk for future falls. Methods A longitudinal analysis was performed using data from 2 time points of a large cohort. The outcome of interest was falls at follow-up. Covariates included age, sex, race, body mass index, a history of prior falls, symptomatic OA of the hip and/or knee, a history of neurologic or pulmonary diseases, and current use of narcotic medications. Symptomatic OA was defined as patient-reported symptoms and radiographic evidence of OA in the same joint. Logistic regression analyses were used to determine associations between covariates and falls at follow-up. Results The odds of falling increased with an increasing number of lower limb symptomatic OA joints: those with 1 joint had 53% higher odds, those with 2 joints had 74% higher odds, and those with 3–4 OA joints had 85% higher odds. When controlling for covariates, patients who had symptomatic knee or hip OA had an increased likelihood of falling (aOR 1.39, 95% CI 1.02 to 1.88; aOR 1.60, 95% CI 1.14 to 2.24, respectively). Conclusions This study reveals that the risk for falls increases with additional symptomatic OA lower limb joints and confirms that symptomatic hip and knee OA are important risk factors for falls. PMID:25331686

  15. MRI of the sacroiliac joints in spondyloarthritis: the added value of intra-articular signal changes for a 'positive MRI'.

    PubMed

    Laloo, Frederiek; Herregods, N; Jaremko, J L; Verstraete, K; Jans, L

    2018-05-01

    To determine whether intra-articular signal changes at the sacroiliac joint space on MRI have added diagnostic value for spondyloarthritis, when compared to bone marrow edema (BME). A retrospective study was performed on the MRIs of sacroiliac joints of 363 patients, aged 16-45 years, clinically suspected of sacroiliitis. BME of the sacroiliac joints was correlated to intra-articular sacroiliac joint MR signal changes: high T1 signal, fluid signal, ankylosis and vacuum phenomenon (VP). These MRI findings were correlated with final clinical diagnosis. Sensitivity (SN), specificity (SP), likelihood ratios (LR), predictive values and post-test probabilities were calculated. BME had SN of 68.9%, SP of 74.0% and LR+ of 2.6 for diagnosis of spondyloarthritis. BME in the absence of intra-articular signal changes had a lower SN and LR+ for spondyloarthritis (SN = 20.5%, LR+ = 1.4). Concomitant BME and high T1 signal (SP = 97.2%, LR+ = 10.5), BME and fluid signal (SP = 98.6%, LR+ = 10.3) or BME and ankylosis (SP = 100%) had higher SP and LR+ for spondyloarthritis. Concomitant BME and VP had a low LR+ for spondyloarthritis (SP = 91%, LR+ = 0.9). When BME was absent, intra-articular signal changes were less prevalent, but remained highly specific for spondyloarthritis. Our results suggest that both periarticular and intra-articular MR signal of the sacroiliac joint should be examined to determine whether an MRI is 'positive' or 'not positive' for sacroiliitis associated with spondyloarthritis.
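
    The diagnostic-accuracy arithmetic above (SN, SP, LR+, post-test probability) follows from a 2x2 table. The counts below are illustrative, chosen only to roughly reproduce the quoted BME figures; the post-test probability uses Bayes' rule on the odds scale.

```python
# Hypothetical 2x2 table: rows = MRI finding present/absent,
# columns = final clinical diagnosis of spondyloarthritis yes/no.
tp, fn, fp, tn = 62, 28, 71, 202

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)      # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity      # negative likelihood ratio

pretest = (tp + fn) / (tp + fn + fp + tn)     # prevalence as pre-test probability
pre_odds = pretest / (1 - pretest)
post_odds = pre_odds * lr_pos                 # Bayes' rule on the odds scale
post_prob = post_odds / (1 + post_odds)
print(sensitivity, specificity, lr_pos, post_prob)
```

    A likelihood ratio above 10 (as for BME plus high T1 or fluid signal) moves the post-test probability far more than the LR+ of 2.6 for BME alone.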

  16. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
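
    A one-dimensional sketch of the spectral idea, under simplifying assumptions not taken from the paper: a uniform prior on [-1, 1], a hypothetical Gaussian likelihood, and the standard Legendre basis. Least squares replaces sampling, and the evidence and posterior mean drop out of the first two expansion coefficients.

```python
import numpy as np
from numpy.polynomial import legendre

def likelihood(theta):
    # Hypothetical Gaussian likelihood over a scalar parameter on [-1, 1]
    return np.exp(-0.5 * ((theta - 0.3) / 0.4) ** 2)

theta = np.linspace(-1.0, 1.0, 400)
coef = legendre.legfit(theta, likelihood(theta), deg=12)  # spectral expansion

# Uniform reference density p = 1/2 plus Legendre orthogonality give
# evidence Z = c0 and posterior mean = c1 / (3 * c0), since theta = P_1(theta).
Z_spec = coef[0]
mean_spec = coef[1] / (3.0 * coef[0])

# Brute-force trapezoidal quadrature for comparison
w = 0.5 * likelihood(theta)
dx = theta[1] - theta[0]
Z_num = dx * (w.sum() - 0.5 * (w[0] + w[-1]))
g = theta * w
mean_num = dx * (g.sum() - 0.5 * (g[0] + g[-1])) / Z_num
print(Z_spec, Z_num, mean_spec, mean_num)
```

    Both quantities come out of the coefficients semi-analytically, with no Markov chain Monte Carlo involved, which is the appeal of the spectral formulation.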

  17. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data and the proposed identifiability analysis are widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds, in contrast to local approximation methods.
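
    A generic profile-likelihood sketch, using an i.i.d. normal model rather than the paper's PDE setting: for each fixed value of the parameter of interest, the nuisance parameter is maximized out, and the 95% interval collects all values whose profile negative log-likelihood lies within chi-square(1, 0.95)/2 = 1.92 units of the minimum.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(2.0, 1.0, size=200)    # assumed i.i.d. Normal(mu, sigma)

def profile_nll(mu, x):
    # Profiling step: for fixed mu, the MLE of sigma^2 is mean((x - mu)^2);
    # substituting it back gives the profile negative log-likelihood in mu.
    s2 = np.mean((x - mu) ** 2)
    return 0.5 * len(x) * (np.log(2 * np.pi * s2) + 1.0)

mus = np.linspace(1.5, 2.5, 1001)
prof = np.array([profile_nll(m, data) for m in mus])
mu_hat = mus[np.argmin(prof)]
inside = mus[prof <= prof.min() + 1.92]  # 95% profile-likelihood interval
print(mu_hat, inside.min(), inside.max())
```

    Unlike a local (Hessian-based) interval, the profile interval follows the actual shape of the likelihood, which is why it gives more rigorous bounds for poorly identified parameters.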

  18. The Analysis of Adhesively Bonded Advanced Composite Joints Using Joint Finite Elements

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.

    2012-01-01

    The design and sizing of adhesively bonded joints has always been a major bottleneck in the design of composite vehicles. Dense finite element (FE) meshes are required to capture the full behavior of a joint numerically, but these dense meshes are impractical in vehicle-scale models where a coarse mesh is more desirable to make quick assessments and comparisons of different joint geometries. Analytical models are often helpful in sizing, but difficulties arise in coupling these models with full-vehicle FE models. Therefore, a joint FE was created which can be used within structural FE models to make quick assessments of bonded composite joints. The shape functions of the joint FE were found by solving the governing equations for a structural model for a joint. By analytically determining the shape functions of the joint FE, the complex joint behavior can be captured with very few elements. This joint FE was modified and used to consider adhesives with functionally graded material properties to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives. These include increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impact the effectiveness of the grading. An analytical study is conducted to address these three concerns. Furthermore, proof-of-concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint. Furthermore, the capability to model non-linear adhesive constitutive behavior with large rotations was developed, and progressive failure of the adhesive was modeled by re-meshing the joint as the adhesive fails. 
Results predicted using the joint FE were compared with experimental results for various joint configurations, including double cantilever beam and single lap joints.

  19. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. These rainfall estimates all simulated streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. 
The range and variance of rainfall time series that simulate streamflow superior to that of a traditional calibration approach demonstrate equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as an input data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
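
    The input-reduction idea can be sketched with a hand-rolled single-level Haar transform, a toy stand-in for the study's wavelet machinery: estimating only the approximation coefficients halves the number of rainfall unknowns while still yielding a reconstructable (smoothed) series.

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar transform: scaled pairwise sums and differences
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    # Exact inverse of haar_dwt
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

rain = np.array([0.0, 0.0, 3.2, 5.1, 1.0, 0.4, 0.0, 0.0])  # hypothetical mm/h
a, d = haar_dwt(rain)
smoothed = haar_idwt(a, np.zeros_like(d))  # keep only approximation coefficients
print(len(a), smoothed)                    # 4 unknowns instead of 8
```

    In the inversion, the sampler would propose the approximation coefficients (here `a`) rather than every time step, and the inverse transform turns each proposal back into a rainfall series to drive the hydrologic model.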

  20. Reported outcomes in major cardiovascular clinical trials funded by for-profit and not-for-profit organizations: 2000-2005.

    PubMed

    Ridker, Paul M; Torres, Jose

    2006-05-17

    In surveys based on data available prior to 2000, clinical trials funded by for-profit organizations appeared more likely to report positive findings than those funded by not-for-profit organizations. Whether this situation has changed over the past 5 years or whether similar effects are present among jointly funded trials is unknown. To determine in contemporary randomized cardiovascular trials the association between funding source and the likelihood of reporting positive findings. We reviewed 324 consecutive superiority trials of cardiovascular medicine published between January 1, 2000, and July 30, 2005, in JAMA, The Lancet, and the New England Journal of Medicine. The proportion of trials favoring newer treatments over the standard of care was evaluated by funding source. Of the 324 superiority trials, 21 cited no funding source. Of the 104 trials funded solely by not-for-profit organizations, 51 (49%) reported evidence significantly favoring newer treatments over the standard of care, whereas 53 (51%) did not (P = .80). By contrast, 92 (67.2%) of 137 trials funded solely by for-profit organizations favored newer treatments over standard of care (P<.001). Among 62 jointly funded trials, 35 (56.5%), an intermediate proportion, favored newer treatments. For 205 randomized trials evaluating drugs, the proportions favoring newer treatments were 39.5%, not-for-profit; 54.4%, jointly funded; and 65.5%, for-profit trials (P for trend across groups = .002). For the 39 randomized trials evaluating cardiovascular devices, the proportions favoring newer treatments were 50.0%, not-for-profit; 69.2%, jointly funded; and 82.4%, for-profit trials (P for trend across groups = .07). Regardless of funding source, trials using surrogate end points, such as quantitative angiography, intravascular ultrasound, plasma biomarkers, and functional measures, were more likely to report positive findings (67%) than trials using clinical end points (54.1%; P = .02). 
Recent cardiovascular trials funded by for-profit organizations are more likely to report positive findings than trials funded by not-for-profit organizations, as are trials using surrogate rather than clinical end points. Trials jointly funded by not-for-profit and for-profit organizations appear to report positive findings at a rate approximately midway between rates observed in trials supported solely by one or the other of these entities.

  1. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
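
    The non-iid structure of the partial-likelihood summands is easy to see in a small sketch (one covariate, no ties, made-up data): each event's term involves the risk set of all subjects still under observation, so the summands share data across subjects.

```python
import numpy as np

def cox_neg_log_partial_lik(beta, time, event, x):
    # Negative log partial likelihood for a single covariate, no ties.
    # Each event's term depends on the whole risk set, so the summands are
    # not independent across subjects.
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    eta = beta * x
    nll = 0.0
    for i in range(len(time)):
        if event[i]:
            nll -= eta[i] - np.log(np.sum(np.exp(eta[i:])))  # risk set: j >= i
    return nll

# Made-up survival data: times, event indicators (0 = censored), covariate
time = np.array([2.0, 5.0, 1.0, 8.0, 3.0])
event = np.array([1, 1, 0, 1, 1])
x = np.array([0.5, -1.0, 0.2, 1.5, 0.0])

betas = np.linspace(-3.0, 3.0, 601)
beta_hat = betas[np.argmin([cox_neg_log_partial_lik(b, time, event, x)
                            for b in betas])]
print(beta_hat)
```

    Because each log term couples subjects through its risk set, the summands are neither iid nor Lipschitz in general, which is the technical obstacle the paper's iid approximation addresses.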

  2. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses. PMID:24516328

  3. Exposure enriched outcome dependent designs for longitudinal studies of gene-environment interaction.

    PubMed

    Sun, Zhichao; Mukherjee, Bhramar; Estes, Jason P; Vokonas, Pantel S; Park, Sung Kyun

    2017-08-15

    Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for gene-environment (G × E) interactions. To date, literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data. In this work, we propose stratified sampling based on summaries of individual exposures and outcome trajectories and develop a full conditional likelihood approach for estimation that adjusts for the biased sample. We compare the performance of our proposed design and analysis with combinations of different sampling designs and estimation approaches via simulation. We observe that the full conditional likelihood provides improved estimates for the G × E interaction and joint exposure effects over uncorrected complete-case analysis, and the exposure enriched outcome trajectory dependent design outperforms other designs in terms of estimation efficiency and power for detection of the G × E interaction. We also illustrate our design and analysis using data from the Normative Aging Study, an ongoing longitudinal cohort study initiated by the Veterans Administration in 1963. Copyright © 2017 John Wiley & Sons, Ltd.
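    The exposure enriched, outcome-dependent sampling idea can be sketched in a few lines: stratify on tertiles of each per-subject summary and oversample the extreme strata. The variable names and sampling probabilities below are illustrative, not the study's design parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
exposure = rng.normal(size=n)               # per-subject exposure summary
slope = rng.normal(0.2 * exposure, 1.0)     # outcome-trajectory summary (e.g. slope)

def tertile(v):
    """0 = low, 1 = middle, 2 = high tertile."""
    return np.digitize(v, np.quantile(v, [1 / 3, 2 / 3]))

# oversample the informative extreme strata on both summaries
extreme = (tertile(exposure) != 1) & (tertile(slope) != 1)
sample_prob = np.where(extreme, 0.6, 0.1)
sampled = rng.random(n) < sample_prob       # subsample chosen for genotyping
```

Because the subsample is deliberately biased, a downstream conditional-likelihood analysis must condition on the known sample_prob values to obtain unbiased estimates.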

  4. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
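    The simulated-annealing step generalizes beyond wavefront estimation: any likelihood surface with many local extrema can be attacked the same way. A toy 1-D sketch using scipy's dual_annealing — the model below is invented for illustration, not the authors' wavefront model:

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(1)
theta_true = 2.0
x = np.linspace(0.0, 4.0, 200)

def model(theta):
    # toy signal whose least-squares surface has many local optima in theta
    return np.sin(3.0 * theta * x) + theta * x

y = model(theta_true) + 0.1 * rng.normal(size=x.size)

def neg_log_lik(p):
    # Gaussian noise => maximum likelihood reduces to least squares,
    # but the surface is highly multimodal, so local optimizers get stuck
    return np.sum((y - model(p[0])) ** 2)

result = dual_annealing(neg_log_lik, bounds=[(0.0, 4.0)])
theta_hat = result.x[0]
```

A gradient-based optimizer started far from theta_true would typically converge to a local minimum; annealing escapes them at the cost of more function evaluations.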

  5. Radiographic hand osteoarthritis: patterns and associations with hand pain and function in a community-dwelling sample.

    PubMed

    Marshall, M; van der Windt, D; Nicholls, E; Myers, H; Hay, E; Dziedzic, K

    2009-11-01

    Patterns of radiographic osteoarthritis (ROA) of the hand are often examined by row, with the four joints of the thumb studied inconsistently. The objectives of this study were to determine relationships of ROA at different hand joints, use the findings to define radiographic sub-groups and investigate their associations with pain and function. Sixteen joints in each hand were scored for the presence of ROA in a community-dwelling cohort of adults, 50-years-and-over, with self-reported hand pain or problems. Principal components analysis (PCA) with varimax rotation was used to study patterns of ROA in the hand joints and identify distinct sub-groups. Differences in pain and function between these sub-groups were assessed using Australian/Canadian Osteoarthritis Index (AUSCAN), Grip Ability Test (GAT) and grip and pinch strength. PCA was undertaken on data from 592 participants and identified four components: distal interphalangeal joints (DIPs), proximal interphalangeal joints (PIPs), metacarpophalangeal joints (MCPs), thumb joints. However, the left thumb interphalangeal (IP) joint cross-loaded with the PIP and thumb groups. On this basis, participants were categorised into four radiographic sub-groups: no osteoarthritis (OA), finger only OA, thumb only OA and combined thumb and finger OA. Statistically significant differences were found between the sub-groups for AUSCAN function, and in women alone for grip and pinch strength. Participants with combined thumb and finger OA had the worst scores. Individual thumb joints can be clustered together as a joint group in ROA. Four radiographic sub-groups of hand OA can be distinguished. Pain and functional difficulties were highest in participants with both thumb and finger OA.
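    As a sketch of the pattern-finding step, sklearn's FactorAnalysis with varimax rotation (standing in here for PCA with varimax) recovers block structure from synthetic joint scores. The data below are simulated, not the study's:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
# toy ROA-style scores: 8 "joints" driven by two latent joint groups
latent = rng.normal(size=(300, 2))
loadings_true = np.zeros((2, 8))
loadings_true[0, :4] = 1.2          # joints 0-3 driven by factor 0
loadings_true[1, 4:] = 1.2          # joints 4-7 driven by factor 1
scores = latent @ loadings_true + rng.normal(size=(300, 8))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(scores)
rotated = fa.components_            # (2, 8): which joints load on which factor
```

Varimax rotation pushes each variable's loading toward a single factor, which is what makes the resulting joint groups interpretable as sub-groups.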

  6. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, affected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself aren't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matérn form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, and simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", with the end goal of assigning likely formation histories for the patches under consideration.
Our results should also be relevant for anyone who needs to perform spatial interpolation or out-of-sample extension (e.g. kriging), machine learning, or feature detection on geological data. We present procedural details but focus on high-level results that have real-world implications for the study of Venus, Earth, other planets, and moons.
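    The Whittle procedure maximizes an approximate likelihood built from the periodogram and a parametric spectral density. A minimal 1-D sketch fitting an AR(1) spectrum; the Matérn case follows the same pattern with its own spectral density:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, a_true, s2_true = 4096, 0.7, 1.0
e = rng.normal(scale=np.sqrt(s2_true), size=n)
x = np.zeros(n)
for t in range(1, n):                  # simulate an AR(1) process
    x[t] = a_true * x[t - 1] + e[t]

# periodogram at interior positive Fourier frequencies
f = np.fft.rfftfreq(n)[1:-1]
pgram = (np.abs(np.fft.rfft(x)) ** 2 / n)[1:-1]

def spec(f, a, s2):
    """AR(1) spectral density s2 / |1 - a e^{-i 2 pi f}|^2."""
    return s2 / (1 - 2 * a * np.cos(2 * np.pi * f) + a ** 2)

def neg_whittle(params):
    a, s2 = params
    S = spec(f, a, s2)
    # Whittle negative log likelihood: sum of log S(f_k) + I(f_k) / S(f_k)
    return np.sum(np.log(S) + pgram / S)

res = minimize(neg_whittle, x0=[0.5, 2.0],
               bounds=[(-0.99, 0.99), (1e-6, None)])
a_hat, s2_hat = res.x
```

Swapping spec for a Matérn spectral density (with variance, range, and differentiability parameters) gives the estimator described above, at the cost of handling the Matérn normalizing constants.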

  7. [Dislocation of the PIP-Joint - Treatment of a common (ball)sports injury].

    PubMed

    Müller-Seubert, Wibke; Bührer, Gregor; Horch, Raymund E

    2017-09-01

    Background  Fractures or fracture dislocations of the proximal interphalangeal joint often occur during sports or accidents. Dislocations of the PIP-joint are the most common ligamentous injuries of the hand. As this kind of injury is so frequent, hand surgeons and other physicians should be aware of the correct treatment. Objectives  This paper summarises the most common injury patterns and the correct treatment of PIP-joint dislocations. Materials and Methods  This paper reviews the current literature and describes the standardised treatment of PIP-joint dislocations. Results  Most important is an anatomically correct reduction, which should be confirmed by X-ray examination. Depending on the instability and possible combination with other injuries (e. g. injury to the palmar plate), early functional physiotherapy of the joint or a short immobilisation period is indicated. Conclusions  Early functional treatment of the injured PIP-joint, initially using buddy taping, is important to restore PIP-joint movement and function. Depending on the injury, joint immobilisation using a K-wire may be indicated. Detailed informed consent is necessary to explain to the patient the severity of the injury and possible complications, such as chronic functional disorders or development of arthrosis. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
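    The empirical likelihood ratio model is essentially histogram-based likelihood ratios multiplied under conditional independence. A toy sketch on synthetic grids — the predictors, coefficients, and counts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
slope = rng.uniform(0.0, 40.0, n)            # slope angle per grid cell (toy)
litho = rng.integers(0, 3, n)                # coded bedrock lithology (toy)
p_slide = 1.0 / (1.0 + np.exp(-(0.15 * slope - 4.0 + 1.0 * (litho == 2))))
slide = rng.random(n) < p_slide              # synthetic landslide inventory

def categorical_lr(codes, labels, k):
    """Empirical likelihood ratio P(code | slide) / P(code | stable),
    with add-one smoothing to avoid empty categories."""
    f1 = np.bincount(codes[labels], minlength=k) + 1.0
    f0 = np.bincount(codes[~labels], minlength=k) + 1.0
    return (f1 / f1.sum()) / (f0 / f0.sum())

# discretize slope into bins, then treat each bin as a category
bins = np.linspace(0.0, 40.0, 9)
n_bins = len(bins) - 1
slope_bin = np.clip(np.digitize(slope, bins) - 1, 0, n_bins - 1)
lr_slope = categorical_lr(slope_bin, slide, n_bins)
lr_litho = categorical_lr(litho, slide, 3)

# conditional independence: per-cell relative hazard is the product of ratios
hazard = lr_slope[slope_bin] * lr_litho[litho]
```

The logistic discriminant alternative would instead fit a logistic regression to the same predictors; the abstract's point is that both routes give nearly identical hazard maps.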

  9. Clinical classification in low back pain: best-evidence diagnostic rules based on systematic reviews.

    PubMed

    Petersen, Tom; Laslett, Mark; Juhl, Carsten

    2017-05-12

    Clinical examination findings are used in primary care to give an initial diagnosis to patients with low back pain and related leg symptoms. The purpose of this study was to develop best evidence Clinical Diagnostic Rules (CDR) for the identification of the most common patho-anatomical disorders in the lumbar spine; i.e. intervertebral discs, sacroiliac joints, facet joints, bone, muscles, nerve roots, peripheral nerve tissue, and central nervous system sensitization. A sensitive electronic search strategy using MEDLINE, EMBASE and CINAHL databases was combined with hand searching and citation tracking to identify eligible studies. Criteria for inclusion were: persons with low back pain with or without related leg symptoms, history or physical examination findings suitable for use in primary care, comparison with acceptable reference standards, and statistical reporting permitting calculation of diagnostic value. Quality assessments were made independently by two reviewers using the Quality Assessment of Diagnostic Accuracy Studies tool. Clinical examination findings that were investigated by at least two studies were included and results that met our predefined threshold of positive likelihood ratio ≥ 2 or negative likelihood ratio ≤ 0.5 were considered for the CDR. Sixty-four studies satisfied our eligibility criteria. We were able to construct promising CDRs for symptomatic intervertebral disc, sacroiliac joint, spondylolisthesis, disc herniation with nerve root involvement, and spinal stenosis. Single clinical tests appear not to be as useful as clusters of tests, which are more closely in line with clinical decision making. This is the first comprehensive systematic review of diagnostic accuracy studies that evaluate clinical examination findings for their ability to identify the most common patho-anatomical disorders in the lumbar spine. In some diagnostic categories we have sufficient evidence to recommend a CDR.
In others, we have only preliminary evidence that needs testing in future studies. Most findings were tested in secondary or tertiary care. Thus, the accuracy of the findings in a primary care setting has yet to be confirmed.
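    The inclusion threshold above (positive likelihood ratio ≥ 2 or negative likelihood ratio ≤ 0.5) follows directly from a test's sensitivity and specificity; a minimal check with illustrative numbers, not values from any specific study in the review:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return (sensitivity / (1.0 - specificity),
            (1.0 - sensitivity) / specificity)

# illustrative values only
lr_pos, lr_neg = likelihood_ratios(0.80, 0.70)
meets_cdr_threshold = (lr_pos >= 2.0) or (lr_neg <= 0.5)
```

A finding qualifies for the rule if either ratio clears its threshold, i.e. if a positive result at least doubles the odds of disease or a negative result at least halves them.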

  10. The Diagnostic Value of the Clarke Sign in Assessing Chondromalacia Patella

    PubMed Central

    Doberstein, Scott T; Romeyn, Richard L; Reineke, David M

    2008-01-01

    Context: Various techniques have been described for assessing conditions that cause pain at the patellofemoral (PF) joint. The Clarke sign is one such test, but the diagnostic value of this test in assessing chondromalacia patella is unknown. Objective: To (1) investigate the diagnostic value of the Clarke sign in assessing the presence of chondromalacia patella using arthroscopic examination of the PF joint as the “gold standard,” and (2) provide a historical perspective of the Clarke sign as a clinical diagnostic test. Design: Validation study. Setting: All patients of one of the investigators who had knee pain or injuries unrelated to the patellofemoral joint and were scheduled for arthroscopic surgery were recruited for this study. Patients or Other Participants: A total of 106 otherwise healthy individuals with no history of patellofemoral pain or dysfunction volunteered. Main Outcome Measure(s): The Clarke sign was performed on the surgical knee by a single investigator in the clinic before surgery. A positive test was indicated by the presence of pain sufficient to prevent the patient from maintaining a quadriceps muscle contraction against manual resistance for longer than 2 seconds. The preoperative result was compared with visual evidence of chondromalacia patella during arthroscopy. Results: Sensitivity was 0.39, specificity was 0.67, likelihood ratio for a positive test was 1.18, likelihood ratio for a negative test was 0.91, positive predictive value was 0.25, and negative predictive value was 0.80. Conclusions: Diagnostic validity values for the use of the Clarke sign in assessing chondromalacia patella were unsatisfactory, supporting suggestions that it has poor diagnostic value as a clinical examination technique. Additionally, an extensive search of the available literature for the Clarke sign reveals multiple problems with the test, causing significant confusion for clinicians. 
Therefore, the use of the Clarke sign as a routine part of a knee examination is not beneficial, and its use should be discontinued. PMID:18345345
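    The reported ratios follow from the sensitivity and specificity above; the predictive values additionally depend on disease prevalence, which the abstract does not state — the 0.22 below is an assumed value, chosen because it reproduces the reported figures:

```python
sens, spec = 0.39, 0.67      # values reported for the Clarke sign
prev = 0.22                  # assumed prevalence, for illustration only

lr_pos = sens / (1 - spec)                 # likelihood ratio, positive test
lr_neg = (1 - sens) / spec                 # likelihood ratio, negative test

# predictive values via Bayes' theorem
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
```

With these inputs, lr_pos and lr_neg round to the reported 1.18 and 0.91, and ppv and npv land at the reported 0.25 and 0.80, showing the abstract's five metrics are internally consistent.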

  11. The diagnostic value of the Clarke sign in assessing chondromalacia patella.

    PubMed

    Doberstein, Scott T; Romeyn, Richard L; Reineke, David M

    2008-01-01

    Various techniques have been described for assessing conditions that cause pain at the patellofemoral (PF) joint. The Clarke sign is one such test, but the diagnostic value of this test in assessing chondromalacia patella is unknown. To (1) investigate the diagnostic value of the Clarke sign in assessing the presence of chondromalacia patella using arthroscopic examination of the PF joint as the "gold standard," and (2) provide a historical perspective of the Clarke sign as a clinical diagnostic test. Validation study. All patients of one of the investigators who had knee pain or injuries unrelated to the patellofemoral joint and were scheduled for arthroscopic surgery were recruited for this study. A total of 106 otherwise healthy individuals with no history of patellofemoral pain or dysfunction volunteered. The Clarke sign was performed on the surgical knee by a single investigator in the clinic before surgery. A positive test was indicated by the presence of pain sufficient to prevent the patient from maintaining a quadriceps muscle contraction against manual resistance for longer than 2 seconds. The preoperative result was compared with visual evidence of chondromalacia patella during arthroscopy. Sensitivity was 0.39, specificity was 0.67, likelihood ratio for a positive test was 1.18, likelihood ratio for a negative test was 0.91, positive predictive value was 0.25, and negative predictive value was 0.80. Diagnostic validity values for the use of the Clarke sign in assessing chondromalacia patella were unsatisfactory, supporting suggestions that it has poor diagnostic value as a clinical examination technique. Additionally, an extensive search of the available literature for the Clarke sign reveals multiple problems with the test, causing significant confusion for clinicians. Therefore, the use of the Clarke sign as a routine part of a knee examination is not beneficial, and its use should be discontinued.

  12. Clinical Paresthesia Atlas Illustrates Likelihood of Coverage Based on Spinal Cord Stimulator Electrode Location.

    PubMed

    Taghva, Alexander; Karst, Edward; Underwood, Paul

    2017-08-01

    Concordant paresthesia coverage is an independent predictor of pain relief following spinal cord stimulation (SCS). Using aggregate data, our objective is to produce a map of paresthesia coverage as a function of electrode location in SCS. This retrospective analysis used x-rays, SCS programming data, and paresthesia coverage maps from the EMPOWER registry of SCS implants for chronic neuropathic pain. Spinal level of dorsal column stimulation was determined by x-ray adjudication and active cathodes in patient programs. Likelihood of paresthesia coverage was determined as a function of stimulating electrode location. Segments of paresthesia coverage were grouped anatomically. Fisher's exact test was used to identify significant differences in likelihood of paresthesia coverage as a function of spinal stimulation level. In the 178 patients analyzed, the most prevalent areas of paresthesia coverage were buttocks, anterior and posterior thigh (each 98%), and low back (94%). Unwanted paresthesia at the ribs occurred in 8% of patients. There were significant differences in the likelihood of achieving paresthesia, with higher thoracic levels (T5, T6, and T7) more likely to achieve low back coverage but also more likely to introduce paresthesia felt at the ribs. Higher levels in the thoracic spine were associated with greater coverage of the buttocks, back, and thigh, and with lesser coverage of the leg and foot. This paresthesia atlas uses real-world, aggregate data to determine likelihood of paresthesia coverage as a function of stimulating electrode location. It represents an application of "big data" techniques, and a step toward achieving personalized SCS therapy tailored to the individual's chronic pain. © 2017 International Neuromodulation Society.
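    A Fisher's exact comparison of coverage rates between spinal levels can be sketched as follows; the counts are hypothetical, not registry data:

```python
from scipy.stats import fisher_exact

# hypothetical counts: patients achieving low-back paresthesia coverage
# with the active cathode at T5-T7 versus T8-T10 (illustrative only)
table = [[40, 10],    # T5-T7: covered, not covered
         [25, 25]]    # T8-T10: covered, not covered
odds_ratio, p_value = fisher_exact(table)
```

Fisher's exact test is preferred here over a chi-squared test because some level-by-region cells in such registries can have small counts.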

  13. On the Power Functions of Test Statistics in Order Restricted Inference.

    DTIC Science & Technology

    1984-10-01

    California-Davis, Actuarial Science, Davis, California 95616; The University of Iowa, Iowa City, Iowa 52242; F. T. Wright, Department of Mathematics and... SUMMARY -- We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection... samples from normal populations, Bartholomew (1959 a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 - H0, assuming in one case that

  14. Proceedings of the Antenna Applications Symposium (32nd) Held in Monticello, Illinois on 16-18 September 2008. Volume 1

    DTIC Science & Technology

    2008-12-20

    Equation 6 for the sample likelihood function gives a “concentrated likelihood function,” which depends on correlation parameters θh and ph. This...step one and estimates correlation parameters using the new data set including all previous sample points and the new data point x. The algorithm...

  15. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD, and a distance metric based on galaxy number density, the two-point correlation function, and the galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
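    The core of ABC rejection sampling fits in a few lines. A toy example inferring a Gaussian location parameter — not the HOD analysis, and using plain rejection rather than population Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 3.0
obs = rng.normal(theta_true, 1.0, size=100)     # "observed" data
obs_summary = obs.mean()                        # summary statistic

def abc_rejection(n_draws=20000, eps=0.1):
    """ABC rejection: draw from the prior, simulate from the forward
    model, keep draws whose simulated summary lies within eps of the
    observed summary. No likelihood is ever evaluated."""
    theta = rng.uniform(-10.0, 10.0, n_draws)   # flat prior
    sims = rng.normal(theta, 1.0, size=(100, n_draws)).mean(axis=0)
    return theta[np.abs(sims - obs_summary) < eps]

posterior = abc_rejection()
```

Shrinking eps (and adding more informative summaries and distance metrics, as in the paper) tightens the approximate posterior at the cost of a lower acceptance rate; importance-sampling refinements such as population Monte Carlo recover that efficiency.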

  16. Functionally Graded Adhesives for Composite Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    Adhesives with functionally graded material properties are being considered for use in adhesively bonded joints to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives. These include increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impact the effectiveness of the grading. An analytical study is conducted to address these three concerns. An enhanced joint finite element, which uses an analytical formulation to obtain exact shape functions, is used to model the joint. Furthermore, proof of concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint.

  17. Functional disorders of the temporomandibular joints: Internal derangement of the temporomandibular joint.

    PubMed

    Chang, Chih-Ling; Wang, Ding-Han; Yang, Mu-Chen; Hsu, Wun-Eng; Hsu, Ming-Lun

    2018-04-01

    The temporomandibular joint (TMJ) is one of the most complex joints of the human body. Because of its unique movement, a combination of rotation and translation, the disc of the joint plays an important role in maintaining normal function. To sustain normal TMJ function, the disc must be kept in proper position and maintain its normal shape in all circumstances. Once the disc is no longer in its normal position during joint function, disturbance of the joint can occur, leading to subsequent distortion of the disc. The shape of the disc can be influenced by many factors, e.g. abnormal function or the composition of the disc itself. The etiology of internal derangement of the disc remains controversial; a multifactorial theory has been postulated in most previous reports. The disc is composed mainly of extracellular matrix. An abnormal proportion of collagen types I and III may also lead to joint hypermobility, which may be a predisposing factor for this disorder; it can thus be recognized as a local manifestation of a systemic disorder. Different treatment modalities, from conservative treatment to surgical intervention, have been reported with distinct success rates. Recently, treatment with extracellular matrix injection has become increasingly popular as a means of strengthening the joint itself. Since the disorder is multifactorial in character, treatment should aim to resolve the possible etiologies from different aspects, and teamwork may be needed to reach satisfactory results. Copyright © 2018. Published by Elsevier Taiwan.

  18. [Injury to the Scapholunate Ligament in Distal Radius Fractures: Peri-Operative Diagnosis and Treatment Results].

    PubMed

    Gajdoš, R; Pilný, J; Pokorná, A

    2016-01-01

    PURPOSE OF THE STUDY Injury to the scapholunate ligament is frequently associated with a fracture of the distal radius. At present neither a unified concept of treatment nor a standard method of diagnosis in these concomitant injuries is available. The aim of the study was to evaluate a group of surgically treated patients with distal radius fractures in order to assess a contribution of combined conventional X-ray and intra-operative fluoroscopic examinations to the diagnosis of associated lesions and to compare short-term functional outcomes of surgically treated patients with those of patients treated conservatively. MATERIAL AND METHODS A group of patients undergoing surgery for distal radius fractures using plate osteosynthesis was evaluated retrospectively. The peri-operative diagnosis of associated injury to the scapholunate ligament was based on pre-operative standard X-ray views and intra-operative fluoroscopy. The latter consisted of images of maximum radial and ulnar deviation as well as an image of the forearm in traction exerted manually along the long axis. All views were in postero-anterior projection. Results were read directly on the monitor of a fluoroscopic device after its calibration or were obtained by comparing the thickness of an attached Kirschner wire with the distance to be measured. Subsequently, pixels were converted to millimetres. When a scapholunate ligament injury was found and confirmed by examination of the contralateral wrist, the finding was verified by open reduction or arthroscopy. Both static and dynamic instabilities were treated together with the distal radius fracture at one-stage surgery. After surgery, the patients without ligament injury had the wrist immobilised for 4 weeks, then rehabilitation followed. In the patients with a damaged ligament, immobilisation in a short brace lasted until transarticular wires were removed. All patients were followed up for a year at least.
At follow-up, the injured wrist was examined for signs of clinical instability of the scapholunate joint, functional outcome was assessed using the Mayo Wrist Score (MWS) and pain intensity was evaluated on the Visual Analogue Scale (VAS). Restriction in daily activities was rated by the Quick Disabilities of the Arm, Shoulder and Hand (QDASH) score and plain X-ray was done. If any of the results was not satisfactory, MRI examination was indicated. RESULTS Of a total of 265 patients, 35 had injury to the scapholunate joint, 16 had static instability diagnosed by a standard fluoroscopic examination and nine patients with an acute phase of injury remained undiagnosed. For detection of associated scapholunate injuries, a standard X-ray examination had sensitivity of 46%, specificity of 99%, accuracy of 92%, positive predictive value of 84%, negative predictive value of 92%, positive likelihood ratio = 35.05 and negative likelihood ratio = 0.55. Dynamic fluoroscopic examination showed sensitivity of 53%, specificity of 99%, accuracy of 95%, positive predictive value of 77%, negative predictive value of 96%, positive likelihood ratio = 36.49 and negative likelihood ratio = 0.48. Using the MWS system, no differences in the outcome of scapholunate instability treatment were found between the patients undergoing surgery and those treated conservatively (p=0.35). Statistically significant differences were detected in the evaluation of subjective parameters - both VAS and QDASH scores were better in the treated than non-treated patients (p=0.02 and p=0.04, respectively). DISCUSSION The high negative predictive values of both standard X-ray and intra-operative fluoroscopy showed that combined use of the two methods is more relevant for excluding than for confirming an injury to the scapholunate ligament concomitant with distal radius fracture. Similarly, the low negative likelihood ratio showed that a negative result decreases the pre-test probability of concomitant injury.
CONCLUSIONS Negative findings of scapholunate ligament injury on standard X-ray views and intra-operative fluoroscopic images make it unnecessary to perform any further intra-operative examination to detect injury to the scapholunate ligament. Positive findings require verification of the degree of injury by another intra-operative modality, ideally by arthroscopy. Patients with untreated instability associated with distal radius fracture have, at short-term follow-up, no statistically significant differences in functioning of the injured extremity in comparison with treated patients. Subjectively, however, they feel more pain and more restriction in performing daily activities. Therefore, the treatment of an injured scapholunate ligament together with distal radius fracture at one-stage surgery seems to be a good alternative for the patient. Key words: distal radius fracture, scapholunate ligament, radiographic diagnosis, outcome.

  19. An analysis of crash likelihood : age versus driving experience

    DOT National Transportation Integrated Search

    1995-05-01

    The study was designed to determine the crash likelihood of drivers in Michigan as a function of two independent variables: driver age and driving experience. The age variable had eight levels (18, 19, 20, 21, 22, 23, 24, and 25 years old) and the ex...

  20. How Do Executive Functions Fit with the Cattell-Horn-Carroll Model? Some Evidence from a Joint Factor Analysis of the Delis-Kaplan Executive Function System and the Woodcock-Johnson III Tests of Cognitive Abilities

    ERIC Educational Resources Information Center

    Floyd, Randy G.; Bergeron, Renee; Hamilton, Gloria; Parra, Gilbert R.

    2010-01-01

    This study investigated the relations among executive functions and cognitive abilities through a joint exploratory factor analysis and joint confirmatory factor analysis of 25 test scores from the Delis-Kaplan Executive Function System and the Woodcock-Johnson III Tests of Cognitive Abilities. Participants were 100 children and adolescents…

  1. Individual differences in boldness influence patterns of social interactions and the transmission of cuticular bacteria among group-mates

    PubMed Central

    Keiser, Carl N.; Pinter-Wollman, Noa; Augustine, David A.; Ziemba, Michael J.; Hao, Lingran; Lawrence, Jeffrey G.; Pruitt, Jonathan N.

    2016-01-01

    Despite the importance of host attributes for the likelihood of associated microbial transmission, individual variation is seldom considered in studies of wildlife disease. Here, we test the influence of host phenotypes on social network structure and the likelihood of cuticular bacterial transmission from exposed individuals to susceptible group-mates using female social spiders (Stegodyphus dumicola). Based on the interactions of resting individuals of known behavioural types, we assessed whether individuals assorted according to their behavioural traits. We found that individuals preferentially interacted with individuals of unlike behavioural phenotypes. We next applied a green fluorescent protein-transformed cuticular bacterium, Pantoea sp., to individuals and allowed them to interact with an unexposed colony-mate for 24 h. We found evidence for transmission of bacteria in 55% of cases. The likelihood of transmission was influenced jointly by the behavioural phenotypes of both the exposed and susceptible individuals: transmission was more likely when exposed spiders exhibited higher ‘boldness’ relative to their colony-mate, and when unexposed individuals were in better body condition. Indirect transmission via shared silk took place in only 15% of cases. Thus, bodily contact appears key to transmission in this system. These data represent a fundamental step towards understanding how individual traits influence larger-scale social and epidemiological dynamics. PMID:27097926

  2. Individual differences in boldness influence patterns of social interactions and the transmission of cuticular bacteria among group-mates.

    PubMed

    Keiser, Carl N; Pinter-Wollman, Noa; Augustine, David A; Ziemba, Michael J; Hao, Lingran; Lawrence, Jeffrey G; Pruitt, Jonathan N

    2016-04-27

    Despite the importance of host attributes for the likelihood of associated microbial transmission, individual variation is seldom considered in studies of wildlife disease. Here, we test the influence of host phenotypes on social network structure and the likelihood of cuticular bacterial transmission from exposed individuals to susceptible group-mates using female social spiders (Stegodyphus dumicola). Based on the interactions of resting individuals of known behavioural types, we assessed whether individuals assorted according to their behavioural traits. We found that individuals preferentially interacted with individuals of unlike behavioural phenotypes. We next applied a green fluorescent protein-transformed cuticular bacterium, Pantoea sp., to individuals and allowed them to interact with an unexposed colony-mate for 24 h. We found evidence for transmission of bacteria in 55% of cases. The likelihood of transmission was influenced jointly by the behavioural phenotypes of both the exposed and susceptible individuals: transmission was more likely when exposed spiders exhibited higher 'boldness' relative to their colony-mate, and when unexposed individuals were in better body condition. Indirect transmission via shared silk took place in only 15% of cases. Thus, bodily contact appears key to transmission in this system. These data represent a fundamental step towards understanding how individual traits influence larger-scale social and epidemiological dynamics. © 2016 The Author(s).

  3. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

    Joint modeling approaches are often used for different outcome types, such as competing-risk time-to-event and count outcomes, in biomedical and epidemiological studies where cluster effects are present. Hospital length of stay (LOS) is a widely used outcome in studies of hospital utilization because it accommodates multiple terminating events during hospitalization, such as discharge, transfer, death, and censoring (patients who have not completed the event of interest by the end of follow-up). Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple possible events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different LOS outcomes and the platelet counts of dengue patients, with a district-level cluster effect. Two key approaches were used to build the joint model. The first approach models each competing risk separately with a binary logistic model, treating all other events as censored, under a multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).
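The AIC comparison used above trades goodness of fit against parameter count: AIC = 2k − 2·ln L, lower is better. A minimal sketch with hypothetical log-likelihoods (not the study's fitted values), where a joint model that shares random effects attains a similar likelihood with fewer parameters than two separate models:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*lnL, lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted values for illustration only:
aic_joint = aic(log_likelihood=-1041.3, n_params=12)     # shared random effects
aic_separate = aic(log_likelihood=-1040.1, n_params=16)  # two univariate models
print(aic_joint < aic_separate)  # joint model preferred in this toy case
```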

  4. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
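For linear constant-coefficient models with Gaussian measurement noise, as in this record, maximizing the likelihood of the parameters given input/output time histories reduces to a least-squares fit. A toy sketch (not the SCIDNT code) on a simulated first-order discrete-time system:

```python
# Sketch: maximum-likelihood identification of x[k+1] = a*x[k] + b*u[k]
# from simulated input/output histories. With Gaussian noise, the ML
# estimate of (a, b) is the least-squares solution.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.normal(size=200)                      # input time history
x = np.zeros(201)
for k in range(200):                          # simulate the system with noise
    x[k + 1] = a_true * x[k] + b_true * u[k] + 0.01 * rng.normal()

# Regressor matrix: x[k+1] ~ [x[k], u[k]] @ [a, b]
A = np.column_stack([x[:-1], u])
a_hat, b_hat = np.linalg.lstsq(A, x[1:], rcond=None)[0]
print(round(float(a_hat), 2), round(float(b_hat), 2))
```

The restriction to linear constant-coefficient dynamics is what makes this closed-form solution possible; nonlinear models would require iterative maximization of the likelihood.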

  5. Time Independent Functional task Training: a case study on the effect of inter-joint coordination driven haptic guidance in stroke therapy.

    PubMed

    Brokaw, Elizabeth B; Murray, Theresa M; Nef, Tobias; Lum, Peter S; Brokaw, Elizabeth B; Nichols, Diane; Holley, Rahsaan J

    2011-01-01

    After a stroke abnormal joint coordination of the arm may limit functional movement and recovery. To aid in training inter-joint movement coordination a haptic guidance method for functional driven rehabilitation after stroke called Time Independent Functional Training (TIFT) has been developed for the ARMin III robot. The mode helps retraining inter-joint coordination during functional movements, such as putting an object on a shelf, pouring from a pitcher, and sorting objects into bins. A single chronic stroke subject was tested for validation of the modality. The subject was given 1.5 hrs of robotic therapy twice a week for 4 weeks. The therapy and the results of training the single stroke subject are discussed. The subject showed a decrease in training joint error for the sorting task across training sessions and increased self-selected movement time in training. In kinematic reaching analysis the subject showed improvements in range of motion and joint coordination in a reaching task, as well as improvements in supination-pronation range of motion at the wrist. © 2011 IEEE

  6. Configuration control of seven-degree-of-freedom arms

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor); Long, Mark K. (Inventor); Lee, Thomas S. (Inventor)

    1992-01-01

    A seven degree of freedom robot arm with a six degree of freedom end effector is controlled by a processor employing a 6 by 7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more) by 7 Jacobian matrix for defining 1 (or more) user specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more) by 7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7 by 7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another kinematic function constrains the location of the arm to perform collision avoidance. Generically, one kinematic function minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or gravity torques associated with individual joints.
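The augmented-Jacobian construction described above can be sketched numerically: stack the 6-by-7 end-effector Jacobian with a 1-by-7 constraint Jacobian to get a square 7-by-7 system, then solve for joint rates. The matrices below are random placeholders, not a real arm model:

```python
# Sketch of resolving redundancy with an augmented Jacobian:
# 6x7 task Jacobian + 1x7 constraint Jacobian -> square 7x7 system.
import numpy as np

rng = np.random.default_rng(1)
J_task = rng.normal(size=(6, 7))         # end-effector Jacobian (illustrative)
J_constraint = rng.normal(size=(1, 7))   # user-specified kinematic function
J_aug = np.vstack([J_task, J_constraint])  # augmented 7x7 Jacobian

# Desired end-effector rates (move 0.1 along x) plus a zero rate for the
# constraint, i.e. hold the user-defined kinematic function constant.
xdot = np.concatenate([[0.1, 0, 0, 0, 0, 0], [0.0]])
qdot = np.linalg.solve(J_aug, xdot)      # joint rates (assumes J_aug invertible)
print(np.allclose(J_aug @ qdot, xdot))
```

Near configurations where the augmented Jacobian loses rank, the solve becomes ill-conditioned; a practical controller would use a damped or regularized inverse there.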

  7. Configuration control of seven degree of freedom arms

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1995-01-01

    A seven-degree-of-freedom robot arm with a six-degree-of-freedom end effector is controlled by a processor employing a 6-by-7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more)-by-7 Jacobian matrix for defining 1 (or more) user-specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more)-by-7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7-by-7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another one of the kinematic functions constrains the location of the arm to perform collision avoidance. Generically, one of the kinematic functions minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or position errors or gravity torques associated with individual joints.

  8. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
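The prior-likelihood integration manipulated in this study can be illustrated with a conjugate Beta-binomial sketch (all numbers hypothetical): as sample size grows, the posterior mean of the reward probability moves away from the prior and toward the observed rate, mirroring the increased weighting of likelihood information reported above.

```python
# Sketch of Bayesian integration: Beta prior + binomial evidence.
# Posterior is Beta(alpha + wins, beta + n - wins); its mean shifts
# from the prior mean toward the empirical rate as n grows.
def posterior_mean(alpha, beta, wins, n):
    """Mean of the Beta posterior after observing `wins` in `n` trials."""
    return (alpha + wins) / (alpha + beta + n)

prior = posterior_mean(6, 4, wins=0, n=0)        # prior Beta(6,4): mean 0.6
small = posterior_mean(6, 4, wins=1, n=5)        # weak evidence at a 20% rate
large = posterior_mean(6, 4, wins=20, n=100)     # strong evidence at a 20% rate
print(prior, round(small, 2), round(large, 2))   # 0.6 drifts toward 0.2
```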

  9. Profitability of grazing versus mechanical forage harvesting on New York dairy farms.

    PubMed

    Gloy, B A; Tauer, L W; Knoblauch, W

    2002-09-01

    The profitability of rotational grazing versus mechanical harvesting of forages was estimated using data from 237 nongrazing and 57 grazing farms participating in the New York farm business summary program in the year 2000. The objective was to perform an empirical comparison of the profitability of grazing versus mechanical forage harvesting systems. A regression analysis technique that controls for treatment selection bias is used to determine the impact of grazing on the rate of return on assets. This is accomplished by joint maximum likelihood estimation of a probit adoption function and a profit function. The results indicate that treatment selection does not have an important impact on the estimate of the profitability of grazing. There were wide ranges and overlap of profitability among herds using the two systems. However, other things equal, farmers utilizing grazing systems were at least as profitable as, if not more profitable than, farmers not using grazing systems. After controlling for the factors influencing the decision to graze, we found that herd size, rate of milk production per cow, and prices received for milk have a strong positive impact on profitability. Farmers who perceive potential lifestyle benefits that might be obtained by implementing a grazing system likely do not have to pay an income penalty for adopting a grazing system.

  10. Joint resonant CMB power spectrum and bispectrum estimation

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin

    2016-02-01

    We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
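The Fisher-matrix shortcut described above amounts to drawing amplitude estimates from a multivariate Gaussian, with covariance given by the inverse Fisher matrix, instead of running full simulations. A toy sketch with a made-up 2x2 Fisher matrix for the sine and cosine amplitudes (all numbers illustrative):

```python
# Sketch: assess the significance of a candidate oscillation amplitude by
# drawing (sin, cos) amplitude estimates from a multivariate Gaussian whose
# covariance is the inverse of a toy Fisher matrix.
import numpy as np

rng = np.random.default_rng(2)
fisher = np.array([[4.0, 0.5],
                   [0.5, 2.0]])            # toy Fisher information matrix
cov = np.linalg.inv(fisher)                # Gaussian covariance of the estimates
draws = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=100_000)

# Null distribution of the combined amplitude; the p-value is the fraction
# of Gaussian draws exceeding the (hypothetical) observed amplitude.
amp = np.hypot(draws[:, 0], draws[:, 1])
observed = 2.0
p_value = float(np.mean(amp > observed))
print(p_value < 0.05)
```

Because drawing from a Gaussian is cheap compared with a full bispectrum simulation, large families of oscillating models can be scanned quickly this way, reserving a more precise likelihood treatment for the surviving candidates.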

  11. Beyond osteoarthritis: recognizing and treating infectious and other inflammatory arthropathies in your practice.

    PubMed

    Haile, Zewdu; Khatua, Sanjeeb

    2010-12-01

    About 15% of patients presenting in a primary care clinic have joint pain as their primary complaint (level B). Disseminated gonorrhea is the most common cause of infectious arthritis in sexually active, previously healthy patients (level B). Prompt arthrocentesis, microscopic examination, and the culture of any purulent material plus appropriate antibiotic therapy are the mainstay of treatment in infectious arthritis (level C). Detailed history, including family history, and comprehensive examination are more useful for accurate diagnosis of noninfectious arthritis than expensive laboratory and radiological investigations (level C). Regarding inflammatory noninfectious arthritis with the potential to cause destructive joint damage, early referral to a subspecialist, when indicated, increases the likelihood of an optimal outcome (level C). Nonsteroidal antiinflammatory drugs are the first line of therapeutic agents to reduce pain and swelling in the management of most noninfectious inflammatory arthritis seen in the primary care office (level C). Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Transponder-aided joint calibration and synchronization compensation for distributed radar systems.

    PubMed

    Wang, Wen-Qin

    2015-01-01

    High-precision radiometric calibration and synchronization compensation must be provided for distributed radar systems because their transmitters and receivers are separate. This paper proposes transponder-aided joint radiometric calibration, motion compensation and synchronization for distributed radar remote sensing. As the transponder signal can be separated from the normal radar returns, it is used to calibrate the distributed radar for radiometry. Meanwhile, distributed radar motion compensation and synchronization compensation algorithms that utilize the transponder signals are presented. This method requires no hardware modifications to either the normal radar transmitter or receiver and no change to the operating pulse repetition frequency (PRF). The distributed radar radiometric calibration and synchronization compensation require only one transponder, but the motion compensation requires six transponders because there are six independent variables in the distributed radar geometry. Furthermore, a maximum likelihood method is used to estimate the transponder signal parameters. The proposed methods are verified by simulation results.

  13. New color-based tracking algorithm for joints of the upper extremities

    NASA Astrophysics Data System (ADS)

    Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang

    2007-11-01

    To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm is proposed in this paper, which combines a color-based particle filter with a novel strategy for handling occlusions. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, acting as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples toward a region of high likelihood, so fewer samples are required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples and hence the computational cost, and achieves a better ability to handle complete occlusion over a few frames.

  14. The Effectiveness of a Functional Knee Brace on Joint-Position Sense in Anterior Cruciate Ligament-Reconstructed Individuals.

    PubMed

    Sugimoto, Dai; LeBlanc, Jessica C; Wooley, Sarah E; Micheli, Lyle J; Kramer, Dennis E

    2016-05-01

    It is estimated that approximately 350,000 individuals undergo anterior cruciate ligament (ACL) reconstruction surgery each year in the US. Even when ACL-reconstruction surgery and postoperative rehabilitation are successfully completed, deficits in postural control remain prevalent in ACL-reconstructed individuals. To compensate for the lack of balance ability and reduce the risk of retear of the reconstructed ACL, physicians often provide a functional knee brace on the patients' return to physical activity. However, it is not known whether use of the functional knee brace enhances knee-joint position sense in individuals with ACL reconstruction. Thus, the effect of a functional knee brace on knee-joint position sense in an ACL-reconstructed population needs to be critically appraised. After a systematic review of previously published literature, 3 studies that investigated the effect of a functional knee brace in ACL-reconstructed individuals using joint-position-sense measures were found. They were rated as level 2b evidence on the Centre for Evidence-Based Medicine Level of Evidence chart. Synthesis of the reviewed studies indicated inconsistent evidence for the effect of a functional knee brace on joint-position improvement after ACL reconstruction. More research is needed to provide sufficient evidence on the effect of a functional knee brace on joint-position sense after ACL reconstruction. Future studies need to measure joint-position sense in closed-kinetic-chain fashion, since ACL injury usually occurs under weight-bearing conditions.

  15. Likelihood-Ratio DIF Testing: Effects of Nonnormality

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…

  16. Evidence-based Diagnostics: Adult Septic Arthritis

    PubMed Central

    Carpenter, Christopher R.; Schuur, Jeremiah D.; Everett, Worth W.; Pines, Jesse M.

    2011-01-01

    Background Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. Objectives The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Methods Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. 
Results The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10⁹–25 × 10⁹/L was 0.33; for 25 × 10⁹–50 × 10⁹/L, 1.06; for 50 × 10⁹–100 × 10⁹/L, 3.59; and exceeding 100 × 10⁹/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (−LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10⁹/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Conclusions Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10⁹/L) can increase, but not decrease, the probability of septic arthritis. Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. PMID:21843213

  17. Evidence-based diagnostics: adult septic arthritis.

    PubMed

    Carpenter, Christopher R; Schuur, Jeremiah D; Everett, Worth W; Pines, Jesse M

    2011-08-01

    Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. 
The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10(9)-25 × 10(9)/L was 0.33; for 25 × 10(9)-50 × 10(9)/L, 1.06; for 50 × 10(9)-100 × 10(9)/L, 3.59; and exceeding 100 × 10(9)/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (-LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10(9)/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10(9)/L) can increase, but not decrease, the probability of septic arthritis. Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. © 2011 by the Society for Academic Emergency Medicine.
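The interval likelihood ratios in this record update the pretest probability through odds: posttest odds = pretest odds × LR. A minimal sketch using the reported 27% prevalence and two of the interval LRs quoted above:

```python
# Sketch: converting a pretest probability and a likelihood ratio into a
# posttest probability via odds (the standard Bayesian update for LRs).
def posttest_probability(pretest_p, lr):
    """Posttest probability given pretest probability and likelihood ratio."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

pretest = 0.27  # reported ED prevalence of nongonococcal septic arthritis
print(round(posttest_probability(pretest, 3.59), 2))  # sWBC 50-100: raises probability
print(round(posttest_probability(pretest, 0.33), 2))  # sWBC 0-25: lowers probability
```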

  18. Joint torques in a freely walking insect reveal distinct functions of leg joints in propulsion and posture control

    PubMed Central

    2016-01-01

    Determining the mechanical output of limb joints is critical for understanding the control of complex motor behaviours such as walking. In the case of insect walking, the neural infrastructure for single-joint control is well described. However, a detailed description of the motor output in form of time-varying joint torques is lacking. Here, we determine joint torques in the stick insect to identify leg joint function in the control of body height and propulsion. Torques were determined by measuring whole-body kinematics and ground reaction forces in freely walking animals. We demonstrate that despite strong differences in morphology and posture, stick insects show a functional division of joints similar to other insect model systems. Propulsion was generated by strong depression torques about the coxa–trochanter joint, not by retraction or flexion/extension torques. Torques about the respective thorax–coxa and femur–tibia joints were often directed opposite to fore–aft forces and joint movements. This suggests a posture-dependent mechanism that counteracts collapse of the leg under body load and directs the resultant force vector such that strong depression torques can control both body height and propulsion. Our findings parallel propulsive mechanisms described in other walking, jumping and flying insects, and challenge current control models of insect walking. PMID:26791608

  19. Joint health and functional ability in children with haemophilia who receive intensive replacement therapy.

    PubMed

    Groen, W; van der Net, J; Bos, K; Abad, A; Bergstrom, B-M; Blanchette, V S; Feldman, B M; Funk, S; Helders, P; Hilliard, P; Manco-Johnson, M; Petrini, P; Zourikian, N; Fischer, K

    2011-09-01

    Joint physical examination is an important outcome in haemophilia; however its relationship with functional ability is not well established in children with intensive replacement therapy. Boys aged 4-16 years were recruited from two European and three North American treatment centres. Joint physical structure and function were measured with the Haemophilia Joint Health Score (HJHS) while functional ability was measured with the revised Childhood Health Assessment Questionnaire (CHAQ₃₈). Two haemophilia-specific domains were created by selecting items of the CHAQ₃₈ that cover haemophilia-specific problems. Associations between CHAQ, HJHS, cumulative number of haemarthroses and age were assessed. A total of 226 subjects - mean 10.8 years old (SD 3.8) - participated; the majority (68%) had severe haemophilia. Most severe patients (91%) were on prophylactic treatment. Lifetime number of haemarthroses [median=5; interquartile range (IQR)=1-12] and total HJHS (median=5; IQR=1-12) correlated strongly (ρ=0.51). Total HJHS did not correlate with age and only weakly (ρ=-0.19) with functional ability scores (median=0; IQR=-0.06-0). Overall, haemarthroses were reported most frequently in the ankles. Detailed analysis of ankle joint health scores revealed moderate associations (ρ=0.3-0.5) of strength, gait and atrophy with lower extremity tasks (e.g. stair climbing). In this population, the HJHS summed over six joints did not perform as well as individual joint scores; however, certain elements of ankle impairment, specifically muscle strength, atrophy and gait, were significantly associated with functional loss in lower extremity activities. Mild abnormalities in ankle assessment by HJHS may lead to functional loss. Therefore, ankle joints may warrant special attention in the follow up of these children. © 2011 Blackwell Publishing Ltd.

  20. Penalized likelihood and multi-objective spatial scans for the detection and inference of irregular clusters

    PubMed Central

    2010-01-01

    Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistic have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistic and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compared different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also built multi-objective scans using those three functions and compared them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance regarding power, sensitivity and positive predictive value.
The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
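The penalized-likelihood idea above can be sketched in a few lines: Kulldorff's Poisson log-likelihood ratio for a candidate zone is multiplied by a geometric compactness penalty, so that of two zones with the same case excess, the more regular one scores higher. This is an illustrative sketch only; the case counts, areas and the specific penalty form below are assumptions, not values or formulas from the paper.

```python
import math

def kulldorff_llr(c, e, C):
    """Poisson log-likelihood ratio for a candidate zone.

    c: observed cases inside the zone
    e: expected cases inside the zone (under the null)
    C: total observed cases on the whole map
    """
    if c <= e:  # scan for high-rate clusters only
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

def compactness(area, perimeter):
    """Geometric regularity penalty in (0, 1]: 1 for a circle,
    approaching 0 for very elongated or ragged zones."""
    return 4.0 * math.pi * area / perimeter ** 2

def penalized_score(c, e, C, area, perimeter):
    """Penalized likelihood: LLR down-weighted by geometric irregularity."""
    return kulldorff_llr(c, e, C) * compactness(area, perimeter)

# Two hypothetical zones with the same case excess but different shapes:
round_zone = penalized_score(30, 15, 200, area=10.0, perimeter=12.0)
ragged_zone = penalized_score(30, 15, 200, area=10.0, perimeter=40.0)
```

With identical counts, the compact zone outranks the sprawling one, which is exactly the behavior the penalty is meant to enforce.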

  1. Impact of divorce on children: developmental considerations.

    PubMed

    Kleinsorge, Christy; Covitz, Lynne M

    2012-04-01

    Although divorce can have a significant negative impact on children, a variety of protective factors can increase the likelihood of long-term positive psychological adjustment. • Exposure to high levels of parental conflict is predictive of poor emotional adjustment by the child, regardless of the parents' marital status. • Epidemiologic data reveal that custody and parenting arrangements are evolving, with more emphasis on joint custody and the child's access to both parents. • Pediatricians' knowledge of childhood development is essential in providing anticipatory guidance to parents throughout the divorce process and beyond.

  2. Hell Is Other People? Gender and Interactions with Strangers in the Workplace Influence a Person’s Risk of Depression

    PubMed Central

    Fischer, Sebastian; Wiemer, Anita; Diedrich, Laura; Moock, Jörn; Rössler, Wulf

    2014-01-01

    We suggest that interactions with strangers at work influence the likelihood of depressive disorders, as they serve as an environmental stressor, which is a necessary condition for the onset of depression according to diathesis-stress models of depression. We examined a large dataset (N = 76,563 in K = 196 occupations) from the German pension insurance program and the Occupational Information Network dataset on occupational characteristics. We used a multilevel framework with individuals and occupations as levels of analysis. We found that occupational environments influence employees’ risks of depression. In line with the quotation that ‘hell is other people’, frequent conflictual contacts were related to greater likelihoods of depression in both males and females (OR = 1.14, p<.05). However, interactions with the public were related to greater likelihoods of depression for males but lower likelihoods of depression for females (ORinteraction = 1.21, p<.01). We theorize that some occupations may involve interpersonal experiences with negative emotional tones that make functional coping difficult and increase the risk of depression. In other occupations, these experiences have neutral tones and allow for functional coping strategies. Functional strategies are more often found in women than in men. PMID:25075855

  3. Uncertainty analysis of wavelet-based feature extraction for isotope identification on NaI gamma-ray spectra

    DOE PAGES

    Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao

    2017-03-02

    Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance of existing low-resolution isotope identifiers should be improvable by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
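The KDE-based likelihood term can be illustrated with a toy two-class version. The isotope names, feature values, and uniform class priors below are invented for the sketch (the real system trains its kernel density estimates on MCNP6-simulated spectra, not on Gaussian toy data):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical training data: simulated (peak area, area uncertainty)
# pairs for two isotope classes, standing in for MCNP6-derived spectra.
sim = {
    "Cs-137": np.vstack([rng.normal(662, 15, 500), rng.normal(20, 4, 500)]),
    "Co-60":  np.vstack([rng.normal(1250, 25, 500), rng.normal(35, 6, 500)]),
}

# A kernel density estimate of the joint (area, uncertainty) distribution
# serves as the likelihood term p(features | isotope).
kdes = {iso: gaussian_kde(data) for iso, data in sim.items()}
prior = {iso: 0.5 for iso in sim}  # uniform prior over isotopes

def classify(peak_area, area_uncertainty):
    """Return posterior probabilities p(isotope | observed features)."""
    x = np.array([[peak_area], [area_uncertainty]])
    post = {iso: prior[iso] * float(kdes[iso](x)) for iso in kdes}
    total = sum(post.values())
    return {iso: p / total for iso, p in post.items()}

# A feature vector near the Cs-137 training cloud:
posterior = classify(660, 22)
```

Because the likelihood is built from the empirical joint distribution of peak area *and* its uncertainty, a poorly constrained peak contributes appropriately less evidence than a well-measured one.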

  4. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    PubMed

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility falls short in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.
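The hybrid random/fuzzy Monte Carlo idea can be sketched minimally. All loads and parameter distributions below are invented for illustration, and a simple triangular spread on the limiting safety factor stands in for the paper's credibility distribution functions; the point is only that each trial mixes randomness of the inputs with fuzziness of the failure criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical design variables for one dam section (illustrative values,
# not from the paper): friction coefficient and cohesion are random.
f = rng.normal(1.0, 0.08, n)       # friction coefficient
c = rng.normal(450.0, 60.0, n)     # cohesion contribution, kN
V = 1200.0                         # effective vertical load, kN
H = 1400.0                         # horizontal (sliding) load, kN

# Sliding safety factor per Monte Carlo trial.
safety_factor = (f * V + c) / H

# Fuzzy failure criterion: instead of a crisp SF < 1 rule, the limiting
# value is sampled from a triangular spread around 1.0 in each trial.
limit = rng.triangular(0.9, 1.0, 1.1, n)
risk = float(np.mean(safety_factor < limit))
```

The estimated `risk` then reflects both sources of uncertainty at once, which is the qualitative behavior the credibility-theory formulation aims for.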

  5. Personal and Workplace Environmental Factors Associated With Reduced Worker Productivity Among Older Workers With Chronic Knee Pain: A Cross-Sectional Survey.

    PubMed

    Agaliotis, Maria; Mackey, Martin G; Heard, Robert; Jan, Stephen; Fransen, Marlene

    2017-04-01

    The aim of this study was to explore personal and workplace environmental factors as predictors of reduced worker productivity among older workers with chronic knee pain. A questionnaire-based survey was conducted among 129 older workers who had participated in a randomized clinical trial evaluating dietary supplements. Multivariable analyses were used to explore predictors of reduced work productivity among older workers with chronic knee pain. The likelihood of presenteeism was higher in those reporting knee pain (≥3/10) or problems with other joints, and lower in those reporting job insecurity. The likelihood of work transitions was higher in people reporting knee pain (≥3/10), a high comorbidity score or low coworker support, and lower in those having an occupation involving sitting more than 30% of the day. Allowing access to sitting and promoting positive affiliations between coworkers are likely to provide an enabling workplace environment for older workers with chronic knee pain.

  6. Silence That Can Be Dangerous: A Vignette Study to Assess Healthcare Professionals’ Likelihood of Speaking up about Safety Concerns

    PubMed Central

    Schwappach, David L. B.; Gehring, Katrin

    2014-01-01

    Purpose To investigate the likelihood of speaking up about patient safety in oncology and to clarify the effect of clinical and situational context factors on the likelihood of voicing concerns. Patients and Methods 1013 nurses and doctors in oncology rated four clinical vignettes describing coworkers’ errors and rule violations in a self-administered factorial survey (65% response rate). Multiple regression analysis was used to model the likelihood of speaking up as an outcome of vignette attributes, responders’ evaluations of the situation and personal characteristics. Results Respondents reported a high likelihood of speaking up about patient safety, but the variation between and within types of errors and rule violations was substantial. Staff without managerial function reported significantly higher levels of decision difficulty and discomfort in speaking up. Based on the information presented in the vignettes, 74%−96% would speak up towards a supervisor failing to check a prescription, 45%−81% would point out a missed hand disinfection to a coworker, 82%−94% would speak up towards nurses who violate a safety rule in medication preparation, and 59%−92% would question a doctor violating a safety rule in lumbar puncture. Several vignette attributes predicted the likelihood of speaking up. Perceived potential harm, anticipated discomfort, and decision difficulty were significant predictors of the likelihood of speaking up. Conclusions Clinicians’ willingness to speak up about patient safety is considerably affected by contextual factors. Physicians and nurses without managerial function report substantial discomfort with speaking up. Oncology departments should provide staff with clear guidance and training on when and how to voice safety concerns. PMID:25116338

  7. Strategy of arm movement control is determined by minimization of neural effort for joint coordination.

    PubMed

    Dounskaia, Natalia; Shimansky, Yury

    2016-06-01

    Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient and increased utilization of this pattern during skillful movements suggests optimality of the trailing pattern. The goal of this study is to determine the cost function minimization of which predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications for future research of the suggested interpretation of the trailing joint control pattern and the theory of joint coordination underlying it are discussed.

  8. COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration

    2015-11-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we couple COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts, without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
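The core ABC loop that COSMOABC builds on can be shown with a minimal rejection sampler. The Gaussian toy model, prior range and tolerance below are assumptions for the demo; COSMOABC adds the Population Monte Carlo adaptive-importance-sampling layer on top of this basic scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" catalogue: generated with a true parameter we pretend not
# to know (mu = 3.0 is an assumption for this demo).
observed = rng.normal(3.0, 1.0, 500)
obs_summary = observed.mean()

def simulator(mu):
    """Forward-simulate a mock catalogue for parameter mu."""
    return rng.normal(mu, 1.0, 500)

def distance(sim_data):
    """Distance between observed and synthetic summary statistics."""
    return abs(sim_data.mean() - obs_summary)

# Plain rejection ABC: draw from the prior, simulate mock data, and keep
# draws whose synthetic catalogue lands within epsilon of the observation.
prior_draws = rng.uniform(0.0, 6.0, 20_000)
epsilon = 0.05
posterior = np.array([mu for mu in prior_draws
                      if distance(simulator(mu)) < epsilon])
```

The accepted draws approximate the posterior without ever evaluating a likelihood; shrinking `epsilon` sharpens the approximation at the cost of more rejected simulations, which is the inefficiency the Population Monte Carlo variant addresses.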

  9. A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets

    NASA Astrophysics Data System (ADS)

    JafarGandomi, Arash; Binley, Andrew

    2013-09-01

    We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods; however, extending the approach to different sets of geophysical parameters and methods is straightforward. The different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach with practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, which are collected using a two-dimensional acquisition geometry, are recast as a set of equivalent vertical electric soundings. The different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or by constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed one-dimensional model estimates and for mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool, identifying the appropriate combination of geophysical tools and showing whether application of an individual method for further investigation of a specific site is beneficial.
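For independent data errors, a joint likelihood across methods reduces to a sum of per-dataset log-likelihoods. A minimal sketch, with invented toy "forward operators" standing in for real ERT/EMI solvers (the numbers and operators below are assumptions, not from the paper):

```python
import numpy as np

def gaussian_loglike(observed, predicted, sigma):
    """Gaussian data-misfit log-likelihood for one geophysical dataset."""
    r = (observed - predicted) / sigma
    return -0.5 * np.sum(r**2) - len(r) * np.log(sigma * np.sqrt(2 * np.pi))

def joint_loglike(model, datasets):
    """Joint likelihood across methods: with independent errors, the
    log-likelihoods of the individual datasets simply add."""
    return sum(gaussian_loglike(d["obs"], d["forward"](model), d["sigma"])
               for d in datasets.values())

# Toy forward operators mapping a 1-D resistivity model to predicted data.
datasets = {
    "ERT": {"obs": np.array([100.0, 120.0]), "sigma": 5.0,
            "forward": lambda m: np.array([m[0], 0.5 * m[0] + 0.5 * m[1]])},
    "EMI": {"obs": np.array([110.0]), "sigma": 8.0,
            "forward": lambda m: np.array([m.mean()])},
}

good = joint_loglike(np.array([100.0, 130.0]), datasets)
bad = joint_loglike(np.array([300.0, 300.0]), datasets)
```

A model that fits both datasets scores higher than one that fits neither, and any sampler (such as the trans-dimensional McMC above) can use this combined score directly.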

  10. Gene-expression changes in knee-joint tissues with aging and menopause: implications for the joint as an organ

    PubMed Central

    Rollick, Natalie C; Lemmex, Devin B; Ono, Yohei; Reno, Carol R; Hart, David A; Lo, Ian KY; Thornton, Gail M

    2018-01-01

    Background When considering the “joint as an organ”, the tissues in a joint act as complementary components of an organ, and the “set point” is the cellular activity for homeostasis of the joint tissues. Even in the absence of injury, joint tissues have adaptive responses to processes, like aging and menopause, which result in changes to the set point. Purpose The purpose of this study in a preclinical model was to investigate age-related and menopause-related changes in knee-joint tissues with the hypothesis that tissues will change in unique ways that reflect their differing contributions to maintaining joint function (as measured by joint laxity) and the differing processes of aging and menopause. Methods Rabbit knee-joint tissues from three groups were evaluated: young adult (gene expression, n=8; joint laxity, n=7; water content, n=8), aging adult (gene expression, n=6; joint laxity, n=7; water content, n=5), and menopausal adult (gene expression, n=8; joint laxity, n=7; water content, n=8). Surgical menopause was induced with ovariohysterectomy surgery and gene expression was assessed using reverse-transcription quantitative polymerase chain reaction. Results Aging resulted in changes to 37 of the 150 gene–tissue combinations evaluated, and menopause resulted in changes to 39 of the 150. Despite the similar number of changes, only eleven changes were the same in both aging and menopause. No differences in joint laxity were detected comparing young adult rabbits with aging adult rabbits or with menopausal adult rabbits. Conclusion Aging and menopause affected the gene-expression patterns of the tissues of the knee joint differently, suggesting unique changes to the set point of the knee. Interestingly, aging and menopause did not affect knee-joint laxity, suggesting that joint function was maintained, despite changes in gene expression. 
Taken together, these findings support the theory of the joint as an organ where the tissues of the joint adapt to maintain joint function. PMID:29535510

  11. Near-Source Mechanism for Creating Shear Content from Buried Explosions

    NASA Astrophysics Data System (ADS)

    Steedman, D. W.; Bradley, C. R.

    2017-12-01

    The Source Physics Experiment (SPE) has the goal of developing a greater understanding of explosion phenomenology at various spatial scales, from near-source to the far field. SPE Phase I accomplished a series of six chemical explosive tests of varying scaled depth of burial within a borehole in granite. The testbed included an extensive array of triaxial accelerometers. Velocity traces derived from these accelerometers allow for detailed study of the shock environment close to the explosion. A specific goal of SPE is to identify the various mechanisms for generating shear within the propagation environment and to determine how these might inform the identification of explosive events that otherwise fail the historic compression-wave/shear-wave energy (P/S) event discriminant. One of these sources was hypothesized to derive from slippage along joint sets near the source. Velocity traces from SPE Phase I events indicate that motion tangential to a theoretically spherical shock wave is initially quiescent after shock arrival, but this period of quiescence is followed by a sudden increase in amplitude that consistently occurs just after the peak of the radial velocity (i.e., the onset of shock unloading). The likelihood of occurrence of this response is related to yield-scaled depth of burial (SDOB). We describe a mechanism in which unloading facilitates dilation of closed joints accompanied by a release of shear energy stored during compression. However, occurrence of this mechanism relies on the relative amplitudes of the shock loading at a point and the in situ stress: at too large an SDOB the stored energy is insufficient to overcome the combination of the overburden stress and traction on the joint, whereas at too small an SDOB the in situ stress is insufficient to keep joints from storing stress, thus overriding the release mechanism and mitigating rupture-like slippage. We develop a notional relationship between SPE Phase I SDOB and the likelihood of shear release. We then compare this to the six recorded DPRK events in terms of where these events fall relative to the accepted mb:MS discriminant, using estimated SDOB values for those events. To first order, SPE SDOBs resulting in shear release appear to map to estimated DPRK SDOBs that display excessive shear magnitude. LA-UR-17-29528.

  12. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas of seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for the 3C geophones. Only a small number of high-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event-arrival likelihood is inferred using an SNR function, and it is migrated in time and space to determine the hypocenter and origin-time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to too small an aperture, a minimum aperture threshold is employed. The algorithm refines location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset, in which the method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers. Strain rate along the borehole axis is computed from the particle velocity as DAS microseismic synthetic data. The likelihood function formed by both DAS and geophones behaves as expected, with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays to monitor microseismic activity. A study using a recently acquired dataset is planned.
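The migration step of a CMM-style scan can be sketched in a simplified setting. The 1-D geometry, constant velocity, and Gaussian arrival-likelihood traces below are all assumed for illustration: each receiver's likelihood trace is shifted by the candidate travel time and stacked, and the stack maximum estimates location and origin time.

```python
import numpy as np

v = 2000.0        # m/s, assumed constant velocity
dt = 0.001        # s, sample interval
receivers = np.array([0.0, 500.0, 1000.0])  # receiver depths, m
nt = 2000         # samples per trace

def arrival_likelihood(onset_sample, nt, width=20):
    """Gaussian-shaped likelihood-of-arrival trace around an onset."""
    t = np.arange(nt)
    return np.exp(-0.5 * ((t - onset_sample) / width) ** 2)

# Synthesize traces for a true source at 700 m, origin-time sample 300.
true_x, true_t0 = 700.0, 300
traces = [arrival_likelihood(true_t0 + abs(true_x - r) / v / dt, nt)
          for r in receivers]

# Migrate: for each candidate location, undo the travel time at each
# receiver and stack; the stack maximum estimates x and t0.
candidates = np.arange(0.0, 1501.0, 10.0)
best = None
for x in candidates:
    stack = np.zeros(nt)
    for r, tr in zip(receivers, traces):
        shift = int(round(abs(x - r) / v / dt))
        stack[:nt - shift] += tr[shift:]
    peak = stack.max()
    if best is None or peak > best[0]:
        best = (peak, x, int(stack.argmax()))

_, est_x, est_t0 = best
```

Restricting the inner loop to receivers whose peak SNR passes a threshold is the dynamic-aperture refinement the abstract describes; here all receivers are kept for simplicity.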

  13. Soldier-relevant body borne loads increase knee joint contact force during a run-to-stop maneuver.

    PubMed

    Ramsay, John W; Hancock, Clifford L; O'Donovan, Meghan P; Brown, Tyler N

    2016-12-08

    The purpose of this study was to understand the effects of load carriage on human performance, specifically during a run-to-stop (RTS) task. Using OpenSim analysis tools, knee joint contact force, ground reaction force, leg stiffness and lower extremity joint angles and moments were determined for nine male military personnel performing a RTS under three load configurations (light, ~6kg; medium, ~20kg; and heavy, ~40kg). Subject-based means for each biomechanical variable were submitted to repeated-measures ANOVA to test the effects of load. During the RTS, body borne load significantly increased peak knee joint contact force by 1.2 BW (p<0.001) and peak vertical (p<0.001) and anterior-posterior (p=0.002) ground reaction forces by 0.6 BW and 0.3 BW, respectively. Body borne load also had a significant effect on hip (p=0.026) posture with the medium load and knee (p=0.046) posture with the heavy load. With the heavy load, participants exhibited a substantial, albeit non-significant, increase in leg stiffness (p=0.073, d=0.615). Increases in joint contact force exhibited during the RTS were primarily due to the greater GRFs that impact the soldier with each incremental addition of body borne load. The stiff leg, extended knee and large braking force the soldiers exhibited with the heavy load suggest their injury risk may be greatest with that specific load configuration. Further work is needed to determine if the biomechanical profile exhibited with the heavy load configuration translates to unsafe shear forces at the knee joint and, consequently, a higher likelihood of injury. Published by Elsevier Ltd.

  14. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

    Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is (N−1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃0_BF(v_min) (which is an integral of the speed distribution) with at most (N−1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃_BF(v_min, t) function (and a time-averaged η̃0_BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.

  15. Hypermobility in Adolescent Athletes: Pain, Functional Ability, Quality of Life, and Musculoskeletal Injuries.

    PubMed

    Schmidt, Heidi; Pedersen, Trine Lykke; Junge, Tina; Engelbert, Raoul; Juul-Kristensen, Birgit

    2017-10-01

    Study Design Cross-sectional. Background Generalized joint hypermobility (GJH) may increase pain and the likelihood of injuries, and also decrease function and health-related quality of life (HRQoL), in elite-level adolescent athletes. Objective To assess the prevalence of GJH in elite-level adolescent athletes, and to study the association of GJH with pain, function, HRQoL, and musculoskeletal injuries. Methods A total of 132 elite-level adolescent athletes (36 adolescent boys, 96 adolescent girls; mean ± SD age, 14.0 ± 0.9 years), including ballet dancers (n = 22), TeamGym gymnasts (n = 57), and team handball players (n = 53), participated in the study. Generalized joint hypermobility was classified by Beighton score as GJH4 (4/9 or greater), GJH5 (5/9 or greater), and GJH6 (6/9 or greater). Function of the lower extremity, musculoskeletal injuries, and HRQoL were assessed with self-reported questionnaires, and physical performance was assessed in part by 4 postural-sway tests and 2 single-legged hop-for-distance tests. Results Overall prevalence rates for GJH4, GJH5, and GJH6 were 27.3%, 15.9%, and 6.8%, respectively, with a higher prevalence of GJH4 in ballet dancers (68.2%) and TeamGym gymnasts (24.6%) than in team handball players (13.2%). There was no significant difference in lower extremity function, injury prevalence and related factors (exacerbation, recurrence, and absence from training), HRQoL, or hop-test lengths between those with and without GJH. However, the GJH group had a significantly larger center-of-pressure path length across sway tests. Conclusion For ballet dancers and TeamGym gymnasts, the prevalence of GJH4 was higher than that of team handball players. For ballet dancers, the prevalence of GJH5 and GJH6 was higher than that of team handball players and the general adolescent population. The GJH group demonstrated larger sway in the balance tests, which, in the current cross-sectional study, was not associated with injuries or HRQoL. Whether the larger sway in the GJH group increases the risk of (ankle) injuries must be studied in future longitudinal studies. J Orthop Sports Phys Ther 2017;47(10):792-800. doi:10.2519/jospt.2017.7682.

  16. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  17. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    ERIC Educational Resources Information Center

    Heesacker, Martin

    The importance of high levels of involvement in counseling has been related to theories of interpersonal influence. To examine differing effects of counselor credibility as a function of how personally involved counselors are, the Elaboration Likelihood Model (ELM) of attitude change was applied to counseling pretreatment. Students (N=256) were…

  18. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  19. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  20. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    ERIC Educational Resources Information Center

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  1. Effects of Habitual Anger on Employees’ Behavior during Organizational Change

    PubMed Central

    Bönigk, Mareike; Steffgen, Georges

    2013-01-01

    Organizational change is a particularly emotional event for those being confronted with it. Anger is a frequently experienced emotion under these conditions. This study analyses the influence of employees’ habitual anger reactions on their reported behavior during organizational change. It was explored whether anger reactions conducive to recovering or increasing individual well-being will enhance the likelihood of functional change behavior. Dysfunctional regulation strategies in terms of individual well-being are expected to decrease the likelihood of functional change behavior, mediated by the commitment to change. Four hundred and twelve employees of different organizations in Luxembourg undergoing organizational change participated in the study. Findings indicate that the anger regulation strategies venting and humor increase the likelihood of deviant resistance to change. Downplaying the incident’s negative impact and feedback increase the likelihood of active support for change. The mediating effect of commitment to change was found for humor and submission. The empirical findings suggest that a differentiated conceptualization of resistance to change is required. Specific implications for practical change management and for future research are discussed. PMID:24287849

  2. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  3. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
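    The cost-minimization idea discussed in this report can be sketched with a toy output-error fit: a single hypothetical "derivative" a in a first-order model, recovered by grid search over a noise-free simulated response (real estimators use six-degree-of-freedom dynamics and iterative quasilinearization, not a grid).

```python
import math

# Toy output-error fit: recover a single "stability derivative" a in the
# model x' = a * x by minimizing a least-squares ML cost over a grid.
# The model and data are invented for illustration.
dt, a_true = 0.1, -0.5
times = [i * dt for i in range(50)]
data = [math.exp(a_true * t) for t in times]        # noise-free response

def cost(a):
    pred = [math.exp(a * t) for t in times]
    return sum((d - p) ** 2 for d, p in zip(data, pred))

grid = [i / 100.0 for i in range(-200, 1)]          # candidates in [-2, 0]
a_hat = min(grid, key=cost)
print(a_hat)   # -0.5: the cost minimum recovers the true derivative
```

    With measurement noise the cost minimum scatters around the true value, which is what the covariance matrices mentioned in the abstract quantify.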

  4. Effects of habitual anger on employees' behavior during organizational change.

    PubMed

    Bönigk, Mareike; Steffgen, Georges

    2013-11-25

    Organizational change is a particularly emotional event for those being confronted with it. Anger is a frequently experienced emotion under these conditions. This study analyses the influence of employees' habitual anger reactions on their reported behavior during organizational change. It was explored whether anger reactions conducive to recovering or increasing individual well-being will enhance the likelihood of functional change behavior. Dysfunctional regulation strategies in terms of individual well-being are expected to decrease the likelihood of functional change behavior, mediated by the commitment to change. Four hundred and twelve employees of different organizations in Luxembourg undergoing organizational change participated in the study. Findings indicate that the anger regulation strategies venting and humor increase the likelihood of deviant resistance to change. Downplaying the incident's negative impact and feedback increase the likelihood of active support for change. The mediating effect of commitment to change was found for humor and submission. The empirical findings suggest that a differentiated conceptualization of resistance to change is required. Specific implications for practical change management and for future research are discussed.

  5. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
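    A minimal sketch of the superposition property, assuming invented Gaussian-shaped harmonic intensities and a hypothetical two-note chord (not the paper's actual peak model):

```python
import math

# Each note is modeled as a nonhomogeneous Poisson process over frequency;
# a chord is then the Poisson process with the summed intensity.
# Harmonic shapes, heights, and widths here are invented.
def note_intensity(freqs, f0, n_harmonics=4, height=5.0, width=10.0):
    return [sum(height * math.exp(-((f - k * f0) ** 2) / (2 * width ** 2))
                for k in range(1, n_harmonics + 1)) for f in freqs]

def chord_intensity(freqs, fundamentals):
    per_note = [note_intensity(freqs, f0) for f0 in fundamentals]
    return [sum(vals) for vals in zip(*per_note)]   # intensities simply add

def log_likelihood(peaks, freqs, fundamentals):
    lam = chord_intensity(freqs, fundamentals)
    step = freqs[1] - freqs[0]
    nearest = lambda f: min(range(len(freqs)), key=lambda i: abs(freqs[i] - f))
    # Inhomogeneous Poisson log-likelihood: log-intensities at the observed
    # peaks minus the integrated intensity over the frequency axis.
    return sum(math.log(lam[nearest(p)]) for p in peaks) - sum(lam) * step

freqs = [float(f) for f in range(50, 2000, 5)]
peaks = [220.0, 440.0, 330.0, 660.0]                # observed spectral peaks
candidates = [(a, b) for a in range(200, 241, 10) for b in range(310, 351, 10)]
best = max(candidates,
           key=lambda fs: log_likelihood(peaks, freqs, [float(v) for v in fs]))
print(best)
```

    Maximizing this likelihood over candidate fundamentals is the maximum likelihood estimation step the abstract describes; no explicit association of harmonics to peaks is needed.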

  6. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an m-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  7. Examining Functioning and Contextual Factors in Individuals with Joint Contractures from the Health Professional Perspective Using the ICF: An International Internet-Based Qualitative Expert Survey.

    PubMed

    Fischer, Uli; Müller, Martin; Strobl, Ralf; Bartoszek, Gabriele; Meyer, Gabriele; Grill, Eva

    2016-01-01

    The aim of this study was to identify disease-related aspects of functioning and disability in people with joint contractures from a health professionals' perspective and to describe the findings, using categories of the International Classification of Functioning, Disability, and Health (ICF). The study used an Internet-based expert survey: we asked international health professionals for typical problems in functioning and important contextual factors of individuals with joint contractures using an open-ended questionnaire. All answers were linked to the ICF according to established rules. Absolute and relative frequencies of the linked ICF categories were reported. Eighty experts named 1785 meaning units which could be linked to 256 ICF categories. Among the categories, 24.2% belonged to the component Body Functions, 20.7% to Body Structures, 36.3% to Activities and Participation, and 18.8% to Environmental Factors. Health professionals addressed a large variety of functional problems and multifaceted aspects due to the symptom of joint contractures. International health professionals reported a large variety of aspects of functioning and health which are related to joint contractures. © 2014 Association of Rehabilitation Nurses.

  8. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Simulation-based Bayesian inference for latent traits of item response models: Introduction to the ltbayes package for R.

    PubMed

    Johnson, Timothy R; Kuhn, Kristine M

    2015-12-01

    This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
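    The kind of computation the package automates can be sketched in Python (this is not the R package's API; the 2PL item parameters and response pattern below are invented):

```python
import math

# Posterior of a latent trait theta for one binary response pattern under a
# two-parameter logistic (2PL) item response model, with a standard normal
# prior, profiled on a grid.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_posterior(theta, responses, a_params, b_params):
    lp = -0.5 * theta ** 2                 # N(0, 1) prior, up to a constant
    for x, a, b in zip(responses, a_params, b_params):
        p = p_correct(theta, a, b)
        lp += x * math.log(p) + (1 - x) * math.log(1 - p)
    return lp

a_params = [1.0, 1.2, 0.8, 1.5]            # discriminations (invented)
b_params = [-1.0, 0.0, 0.5, 1.0]           # difficulties, easy to hard
responses = [1, 1, 0, 0]                   # easy items right, hard items wrong

grid = [i / 100.0 for i in range(-400, 401)]
mode = max(grid, key=lambda t: log_posterior(t, responses, a_params, b_params))
```

    Profiling log_posterior over the grid gives the posterior density shape; ltbayes additionally offers simulation-based summaries, information functions, and profile-likelihood intervals.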

  10. Articular soft tissue anatomy of the archosaur hip joint: Structural homology and functional implications.

    PubMed

    Tsai, Henry P; Holliday, Casey M

    2015-06-01

    Archosaurs evolved a wide diversity of locomotor postures, body sizes, and hip joint morphologies. The two extant archosaurs clades (birds and crocodylians) possess highly divergent hip joint morphologies, and the homologies and functions of their articular soft tissues, such as ligaments, cartilage, and tendons, are poorly understood. Reconstructing joint anatomy and function of extinct vertebrates is critical to understanding their posture, locomotor behavior, ecology, and evolution. However, the lack of soft tissues in fossil taxa makes accurate inferences of joint function difficult. Here, we describe the soft tissue anatomies and their osteological correlates in the hip joint of archosaurs and their sauropsid outgroups, and infer structural homology across the extant taxa. A comparative sample of 35 species of birds, crocodylians, lepidosaurs, and turtles ranging from hatchling to skeletally mature adult were studied using dissection, imaging, and histology. Birds and crocodylians possess topologically and histologically consistent articular soft tissues in their hip joints. Epiphyseal cartilages, fibrocartilages, and ligaments leave consistent osteological correlates. The archosaur acetabulum possesses distinct labrum and antitrochanter structures on the supraacetabulum. The ligamentum capitis femoris consists of distinct pubic- and ischial attachments, and is homologous with the ventral capsular ligament of lepidosaurs. The proximal femur has a hyaline cartilage core attached to the metaphysis via a fibrocartilaginous sleeve. This study provides new insight into soft tissue structures and their osteological correlates (e.g., the antitrochanter, the fovea capitis, and the metaphyseal collar) in the archosaur hip joint. The topological arrangement of fibro- and hyaline cartilage may provide mechanical support for the chondroepiphysis. 
The osteological correlates identified here will inform systematic and functional analyses of archosaur hindlimb evolution and provide the anatomical foundation for biomechanical investigations of joint tissues. © 2014 Wiley Periodicals, Inc.

  11. Kinematic and kinetic synergies of the lower extremities during the pull in olympic weightlifting.

    PubMed

    Kipp, Kristof; Redden, Josh; Sabick, Michelle; Harris, Chad

    2012-07-01

    The purpose of this study was to identify multijoint lower extremity kinematic and kinetic synergies in weightlifting and compare these synergies between joints and across different external loads. Subjects completed sets of the clean exercise at loads equal to 65, 75, and 85% of their estimated 1-RM. Functional data analysis was used to extract principal component functions (PCFs) for hip, knee, and ankle joint angles and moments of force during the pull phase of the clean at all loads. The PCF scores were then compared between joints and across loads to determine how much of each PCF was present at each joint and how it differed across loads. The analyses extracted two kinematic and four kinetic PCFs. The statistical comparisons indicated that all kinematic and two of the four kinetic PCFs did not differ across load, but scaled according to joint function. The PCFs captured a set of joint- and load-specific synergies that quantified biomechanical function of the lower extremity during Olympic weightlifting and revealed important technical characteristics that should be considered in sports training and future research.

  12. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
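    The plain vertex method that the OVM builds on can be sketched as follows (the response function and input intervals are invented; the optimized variant reduces the number of vertex evaluations):

```python
import itertools

# Plain vertex method: at a given alpha-cut each fuzzy input is an interval,
# and the response range is taken from the function values at all interval
# corners. This is exact when the response is monotone in each variable,
# as the invented toy function below is.
def vertex_method(func, intervals):
    values = [func(*corner) for corner in itertools.product(*intervals)]
    return min(values), max(values)

response = lambda load, stiffness: load ** 2 / stiffness
lo, hi = vertex_method(response, [(2.0, 3.0), (1.0, 4.0)])
print(lo, hi)   # 1.0 9.0: one alpha-cut of the response membership function
```

    Repeating this over a sweep of alpha-cuts builds up the full fuzzy response membership function described in the abstract.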

  13. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  14. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
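    One widely used primary performance characteristic for LR methods is the log-likelihood-ratio cost (Cllr); a minimal sketch with invented LR values:

```python
import math

# Log-likelihood-ratio cost (Cllr), computed from ground-truth labelled
# comparisons: same-source LRs should be large, different-source LRs small.
# The LR values below are invented for illustration.
def cllr(same_source_lrs, different_source_lrs):
    pen_ss = sum(math.log2(1.0 + 1.0 / lr) for lr in same_source_lrs)
    pen_ds = sum(math.log2(1.0 + lr) for lr in different_source_lrs)
    return 0.5 * (pen_ss / len(same_source_lrs)
                  + pen_ds / len(different_source_lrs))

good = cllr([100.0, 50.0, 200.0], [0.01, 0.02, 0.005])
uninformative = cllr([1.0, 1.0], [1.0, 1.0])   # LR = 1 carries no evidence
print(round(good, 3), round(uninformative, 3))
```

    Lower Cllr is better; a method that always outputs LR = 1 costs exactly 1 bit, so well-calibrated, discriminating methods should score well below that.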

  15. The Value of History, Physical Examination, and Radiographic Findings in the Diagnosis of Symptomatic Meniscal Tear among Middle-Age Subjects with Knee Pain

    PubMed Central

    Katz, Jeffrey N.; Smith, Savannah R.; Yang, Heidi Y.; Martin, Scott D.; Wright, John; Donnell-Fink, Laurel A.; Losina, Elena

    2016-01-01

    Objective: To evaluate the utility of clinical history, radiographic and physical exam findings in the diagnosis of symptomatic meniscal tear (SMT) in patients over age 45, in whom concomitant osteoarthritis is prevalent. Methods: In a cross-sectional study of patients from two orthopedic surgeons’ clinics, we assessed clinical history, physical examination, and radiographic findings in patients over 45 with knee pain. The orthopedic surgeons rated their confidence that subjects’ symptoms were due to MT; we defined the diagnosis of SMT as at least 70% confidence. We used logistic regression to identify factors independently associated with diagnosis of SMT, and we used the regression results to construct an index of the likelihood of SMT. Results: In 174 participants, six findings were associated independently with the expert clinician having ≥70% confidence that symptoms were due to MT: localized pain, ability to fully bend the knee, pain duration <1 year, lack of varus alignment, lack of pes planus, and absence of joint space narrowing on radiographs. The index identified a low-risk group with 3% likelihood of SMT. Conclusion: While clinicians traditionally rely upon mechanical symptoms in this diagnostic setting, our findings did not support the conclusion that mechanical symptoms were associated with the expert’s confidence that symptoms were due to MT. An index that includes history of localized pain, full flexion, duration <1 year, pes planus, varus alignment, and joint space narrowing can be used to stratify patients according to their risk of SMT, and it identifies a subgroup with very low risk. PMID:27390312

  16. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  17. The effects of velocities and lensing on moments of the Hubble diagram

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Davis, T. M.; Scovacricchi, D.; Bacon, D.; Collett, T.; Nichol, R. C.

    2017-05-01

    We consider the dispersion on the supernova distance-redshift relation due to peculiar velocities and gravitational lensing, and the sensitivity of these effects to the amplitude of the matter power spectrum. We use the Method-of-the-Moments (MeMo) lensing likelihood developed by Quartin et al., which accounts for the characteristic non-Gaussian distribution caused by lensing magnification with measurements of the first four central moments of the distribution of magnitudes. We build on the MeMo likelihood by including the effects of peculiar velocities directly into the model for the moments. In order to measure the moments from sparse numbers of supernovae, we take a new approach using kernel density estimation to estimate the underlying probability density function of the magnitude residuals. We also describe a bootstrap re-sampling approach to estimate the data covariance matrix. We then apply the method to the joint light-curve analysis (JLA) supernova catalogue. When we impose only that the intrinsic dispersion in magnitudes is independent of redshift, we find σ8 = 0.44 (+0.63, −0.44) at the one-standard-deviation level, although we note that in tests on simulations, this model tends to overestimate the magnitude of the intrinsic dispersion, and underestimate σ8. We note that the degeneracy between intrinsic dispersion and the effects of σ8 is more pronounced when lensing and velocity effects are considered simultaneously, due to a cancellation of redshift dependence when both effects are included. Keeping the model of the intrinsic dispersion fixed as a Gaussian distribution of width 0.14 mag, we find σ8 = 1.07 (+0.50, −0.76).

  18. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.

  19. Hunting high and low: disentangling primordial and late-time non-Gaussianity with cosmic densities in spheres

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.

    2018-03-01

    Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.

  20. Clinical Findings and Pain Symptoms as Potential Risk Factors for Chronic TMD: Descriptive Data and Empirically Identified Domains from the OPPERA Case-Control Study

    PubMed Central

    Ohrbach, Richard; Fillingim, Roger B.; Mulkey, Flora; Gonzalez, Yoly; Gordon, Sharon; Gremillion, Henry; Lim, Pei-Feng; Ribeiro-Dasilva, Margarete; Greenspan, Joel D.; Knott, Charles; Maixner, William; Slade, Gary

    2011-01-01

    Clinical characteristics might be associated with temporomandibular disorders (TMD) because they are antecedent risk factors that increase the likelihood of a healthy person developing the condition or because they represent signs or symptoms of either subclinical or overt TMD. In this baseline case-control study of the multisite Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) project, 1,633 controls and 185 cases with chronic, painful TMD completed questionnaires and received clinical examinations. Odds ratios measuring association between each clinical factor and TMD were computed, with adjustment for study-site as well as age, sex, and race/ethnicity. Compared to controls, TMD cases reported more trauma, greater parafunction, more headaches and other pain disorders, more functional limitation in using the jaw, more nonpain symptoms in the facial area, more temporomandibular joint noises and jaw locking, more neural or sensory medical conditions, and worse overall medical status. They also exhibited on examination reduced jaw mobility, more joint noises, and a greater number of painful masticatory, cervical, and body muscles upon palpation. The results indicated that TMD cases differ substantially from controls across almost all variables assessed. Future analyses of follow-up data will determine whether these clinical characteristics predict increased risk for developing first-onset pain-related TMD. Perspective: Clinical findings from OPPERA’s baseline case-control study indicate significant differences between chronic TMD cases and controls with respect to trauma history, parafunction, other pain disorders, health status, and clinical examination data. Future analyses will examine their contribution to TMD onset. PMID:22074750

  1. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
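The workflow above (a Gaussian data likelihood plus an l2 model-structure constraint, explored by MCMC) can be sketched in miniature. Everything below is a hypothetical stand-in: a 4-cell 1-D "model", a simple averaging operator in place of the costly plane-wave EM forward solver, and a fixed (not hierarchical) regularization weight `lam`:

```python
import math
import random

random.seed(0)

# Hypothetical 4-cell model; the forward operator averages neighbouring
# cells as a toy stand-in for the plane-wave EM forward solver.
true_model = [1.0, 1.0, 3.0, 3.0]

def forward(m):
    # each datum is the average of two neighbouring cells
    return [(m[i] + m[i + 1]) / 2 for i in range(len(m) - 1)]

sigma = 0.1
data = [d + random.gauss(0, sigma) for d in forward(true_model)]

def log_posterior(m, lam=2.0):
    # Gaussian (l2-norm) data likelihood ...
    misfit = sum((d - f) ** 2 for d, f in zip(data, forward(m)))
    # ... plus an l2 model-roughness constraint scaled by a
    # regularization weight lam (fixed here for simplicity)
    rough = sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))
    return -misfit / (2 * sigma ** 2) - lam * rough

# Random-walk Metropolis sampling of the constrained posterior
m = [2.0] * 4
lp = log_posterior(m)
samples = []
for it in range(20000):
    prop = [v + random.gauss(0, 0.2) for v in m]
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:   # Metropolis accept step
        m, lp = prop, lp_prop
    if it >= 5000:                                  # discard burn-in
        samples.append(list(m))

post_mean = [sum(s[i] for s in samples) / len(samples) for i in range(4)]
```

Increasing `lam` visibly narrows the posterior spread per cell, which is the uncertainty-reduction (and potential over-constraining) effect the abstract describes.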

  2. Looking through the same lens: Shear calibration for LSST, Euclid, and WFIRST with stage 4 CMB lensing

    NASA Astrophysics Data System (ADS)

    Schaan, Emmanuel; Krause, Elisabeth; Eifler, Tim; Doré, Olivier; Miyatake, Hironao; Rhodes, Jason; Spergel, David N.

    2017-06-01

    The next-generation weak lensing surveys (i.e., LSST, Euclid, and WFIRST) will require exquisite control over systematic effects. In this paper, we address shear calibration and present the most realistic forecast to date for LSST/Euclid/WFIRST and CMB lensing from a stage 4 CMB experiment ("CMB S4"). We use the cosmolike code to simulate a joint analysis of all the two-point functions of galaxy density, galaxy shear, and CMB lensing convergence. We include the full Gaussian and non-Gaussian covariances and explore the resulting joint likelihood with Monte Carlo Markov chains. We constrain shear calibration biases while simultaneously varying cosmological parameters, galaxy biases, and photometric redshift uncertainties. We find that CMB lensing from CMB S4 enables the calibration of the shear biases down to 0.2%-3% in ten tomographic bins for LSST (below the ˜0.5 % requirements in most tomographic bins), down to 0.4%-2.4% in ten bins for Euclid, and 0.6%-3.2% in ten bins for WFIRST. For a given lensing survey, the method works best at high redshift where shear calibration is otherwise most challenging. This self-calibration is robust to Gaussian photometric redshift uncertainties and to a reasonable level of intrinsic alignment. It is also robust to changes in the beam and the effectiveness of the component separation of the CMB experiment, and slowly dependent on its depth, making it possible with third-generation CMB experiments such as AdvACT and SPT-3G, as well as the Simons Observatory.

  3. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
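The toolbox itself is MATLAB; as a rough illustration of the likelihood being maximized (not of the adaptive UML updating), here is a pure-Python sketch of maximum-likelihood estimation of threshold, slope, and lapse rate. The logistic form, the stimulus levels, and the "true" parameters are assumptions for the demo:

```python
import math
import random

random.seed(1)

def psychometric(x, alpha, beta, lam, gamma=0.5):
    # 2AFC psychometric function: guess rate gamma, lapse rate lam,
    # threshold alpha, and slope beta of a logistic core
    core = 1.0 / (1.0 + math.exp(-beta * (x - alpha)))
    return gamma + (1.0 - gamma - lam) * core

# Simulate trials from hypothetical "true" parameters
true_alpha, true_beta, true_lam = 0.0, 2.0, 0.02
levels = [-2, -1, -0.5, 0, 0.5, 1, 2]
trials = [(x, random.random() < psychometric(x, true_alpha, true_beta, true_lam))
          for x in levels for _ in range(100)]

def nll(alpha, beta, lam):
    # negative log-likelihood of the Bernoulli responses
    total = 0.0
    for x, correct in trials:
        p = min(max(psychometric(x, alpha, beta, lam), 1e-9), 1 - 1e-9)
        total -= math.log(p) if correct else math.log(1 - p)
    return total

# Brute-force grid search over (threshold, slope, lapse); the UML
# procedure instead updates this likelihood adaptively, trial by trial
grid = [(a / 5, b / 2, l / 50)
        for a in range(-10, 11) for b in range(1, 9) for l in range(0, 4)]
alpha_hat, beta_hat, lam_hat = min(grid, key=lambda p: nll(*p))
```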

  4. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model.

    PubMed

    Safakish, Ramin

    2017-01-01

    Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following estimate best reflects our patient population. Sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared. In addition, we compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.

  5. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. 
The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, L, of equally spaced values of carrier phase. Used in this way, L is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as L approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
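A minimal sketch of the approximate classifier follows, with the carrier-phase integral replaced by a sum over equally spaced phase hypotheses. The unit amplitude, noise level, and sample count are hypothetical, and the log-sum-exp step is added purely for numerical stability:

```python
import cmath
import math
import random

random.seed(2)

def log_likelihood(samples, M, L=16, amp=1.0, sigma=0.3):
    # Approximate the integral over carrier phase by a sum over L equally
    # spaced phase hypotheses; L trades complexity against accuracy.
    log_terms = []
    for k in range(L):
        theta = 2 * math.pi * k / L
        lp = 0.0
        for r in samples:
            # average the Gaussian noise density over the M equiprobable symbols
            dens = sum(
                math.exp(-abs(r - amp * cmath.exp(1j * (2 * math.pi * m / M + theta))) ** 2
                         / (2 * sigma ** 2))
                for m in range(M))
            lp += math.log(dens / M + 1e-300)
        log_terms.append(lp)
    peak = max(log_terms)  # log-sum-exp for numerical stability
    return peak + math.log(sum(math.exp(t - peak) for t in log_terms) / L)

# Simulate 50 noisy QPSK (M = 4) samples with an unknown carrier phase
phase = random.uniform(0, 2 * math.pi)
samples = [cmath.exp(1j * (2 * math.pi * random.randrange(4) / 4 + phase))
           + complex(random.gauss(0, 0.3), random.gauss(0, 0.3))
           for _ in range(50)]

# Classify by comparing the two approximate likelihoods (a likelihood ratio test)
decision = 2 if log_likelihood(samples, 2) > log_likelihood(samples, 4) else 4
```

Raising `L` tightens the approximation to the phase integral at proportional computational cost, which is the trade-off the article describes.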

  6. Identification of patients with suboptimal results after hip arthroplasty: development of a preliminary prediction algorithm.

    PubMed

    Lungu, Eugen; Vendittoli, Pascal-André; Desmeules, François

    2015-10-05

    The ability to predict preoperatively the identity of patients undergoing hip arthroplasty at risk of suboptimal outcomes could help implement interventions targeted at improving surgical results. The objective was to develop a preliminary prediction algorithm (PA) allowing the identification of patients at risk of unsatisfactory outcomes one to two years following hip arthroplasty. Retrospective data on a cohort of 265 patients having undergone primary unilateral hip replacement (188 total arthroplasties and 77 resurfacing arthroplasties) from 2004 to 2010 were collected from our arthroplasty database. Hip pain and function, as measured by the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) were collected, as well as self-reported hip joint perception after surgery. Demographic and clinical variables recorded at the time of the surgery were considered as potential predictors. Patients were considered as having a suboptimal surgical outcome if they were in the worst quartile of the postoperative total WOMAC score and perceived their operated hip as artificial with minimal or major limitations. The PA was developed using recursive partitioning. Mean postoperative surgical follow-up was 446 ± 171 days. Forty patients (15.1 %) had a postoperative total WOMAC score in the worst quartile (>11.5/100) and perceived their joint as artificial with minimal or major restrictions. A PA consisting of the following variables achieved the most acceptable level of prediction: gender, age at the time of surgery, body mass index (BMI), and three items of the preoperative WOMAC (degree of pain with walking on a flat surface and during the night as well as degree of difficulty with putting socks or stockings). 
The rule had a sensitivity of 75.0 % (95 % CI: 59.8-85.8), a specificity of 77.8 % (95 % CI: 71.9-82.7), a positive predictive value of 37.5 % (95 % CI: 27.7-48.5), a negative predictive value of 94.6 % (95 % CI: 90.3-97.0) and positive and negative likelihood ratios of 3.38 (95 % CI: 2.49-4.57) and 0.34 (95 % CI: 0.19-0.55) respectively. The preliminary PA shows promising results at identifying patients at risk of significant functional limitations, increased pain and inadequate joint perception after hip arthroplasty. Clinical use should not be implemented before additional validation and refining.
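As a quick check, the reported likelihood ratios follow from the reported sensitivity and specificity. The recomputed LR− comes out nearer 0.32 than the reported 0.34; small gaps like this can arise when the published ratios are computed from raw counts before rounding:

```python
# Recompute the likelihood ratios from the reported sensitivity and specificity
sens, spec = 0.750, 0.778

lr_pos = sens / (1 - spec)        # positive likelihood ratio
lr_neg = (1 - sens) / spec        # negative likelihood ratio
```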

  7. Return to preinjury sports participation following anterior cruciate ligament reconstruction: contributions of demographic, knee impairment, and self-report measures.

    PubMed

    Lentz, Trevor A; Zeppieri, Giorgio; Tillman, Susan M; Indelicato, Peter A; Moser, Michael W; George, Steven Z; Chmielewski, Terese L

    2012-11-01

    Cross-sectional cohort. (1) To examine differences in clinical variables (demographics, knee impairments, and self-report measures) between those who return to preinjury level of sports participation and those who do not at 1 year following anterior cruciate ligament reconstruction, (2) to determine the factors most strongly associated with return-to-sport status in a multivariate model, and (3) to explore the discriminatory value of clinical variables associated with return to sport at 1 year postsurgery. Demographic, physical impairment, and psychosocial factors individually prohibit return to preinjury levels of sports participation. However, it is unknown which combination of factors contributes to sports participation status. Ninety-four patients (60 men; mean age, 22.4 years) 1 year post-anterior cruciate ligament reconstruction were included. Clinical variables were collected and included demographics, knee impairment measures, and self-report questionnaire responses. Patients were divided into "yes return to sports" or "no return to sports" groups based on their answer to the question, "Have you returned to the same level of sports as before your injury?" Group differences in demographics, knee impairments, and self-report questionnaire responses were analyzed. Discriminant function analysis determined the strongest predictors of group classification. Receiver-operating-characteristic curves determined the discriminatory accuracy of the identified clinical variables. Fifty-two of 94 patients (55%) reported yes return to sports. Patients reporting return to preinjury levels of sports participation were more likely to have had less knee joint effusion, fewer episodes of knee instability, lower knee pain intensity, higher quadriceps peak torque-body weight ratio, higher score on the International Knee Documentation Committee Subjective Knee Evaluation Form, and lower levels of kinesiophobia. 
Knee joint effusion, episodes of knee instability, and score on the International Knee Documentation Committee Subjective Knee Evaluation Form were identified as the factors most strongly associated with self-reported return-to-sport status. The highest positive likelihood ratio for the yes-return-to-sports group classification (14.54) was achieved when patients met all of the following criteria: no knee effusion, no episodes of instability, and International Knee Documentation Committee Subjective Knee Evaluation Form score greater than 93. In multivariate analysis, the factors most strongly associated with return-to-sport status included only self-reported knee function, episodes of knee instability, and knee joint effusion.

  8. Early Osteoarthritis of the Trapeziometacarpal Joint Is Not Associated With Joint Instability during Typical Isometric Loading

    PubMed Central

    Halilaj, Eni; Moore, Douglas C.; Patel, Tarpit K.; Ladd, Amy L.; Weiss, Arnold-Peter C.; Crisco, Joseph J.

    2015-01-01

    The saddle-shaped trapeziometacarpal (TMC) joint contributes importantly to the function of the human thumb. A balance between mobility and stability is essential in this joint, which experiences high loads and is prone to osteoarthritis (OA). Since instability is considered a risk factor for TMC OA, we assessed TMC joint instability during the execution of three isometric functional tasks (key pinch, jar grasp, and jar twist) in 76 patients with early TMC OA and 44 asymptomatic controls. Computed tomography images were acquired while subjects held their hands relaxed and while they applied 80% of their maximum effort for each task. Six degree-of-freedom rigid body kinematics of the metacarpal with respect to the trapezium from the unloaded to the loaded task positions were computed in terms of a TMC joint coordinate system. Joint instability was expressed as a function of the metacarpal translation and the applied force. We found that the TMC joint was more unstable during a key pinch task than during a jar grasp or a jar twist task. Sex, age, and early OA did not have an effect on TMC joint instability, suggesting that instability during these three tasks is not a predisposing factor in TMC OA. PMID:25941135

  9. Optimal Methods for Classification of Digitally Modulated Signals

    DTIC Science & Technology

    2013-03-01

    Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. Blind demodulation was used to develop classification algorithms for a wider set of signal types; two methodologies were employed: the likelihood ratio test and the KL divergence. (Acronyms defined in the report include ALRT, average likelihood ratio test; BPSK, binary phase-shift keying; BPSK-SS, BPSK spread spectrum or CDMA; and DKL, Kullback-Leibler information divergence.)
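The entry above replaces the likelihood ratio with the Kullback-Leibler divergence; the divergence itself is straightforward to compute for discrete distributions. The 8-bin phase histograms below are hypothetical templates, used only to show classification by smallest divergence:

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i p_i * ln(p_i / q_i), in nats
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical 8-bin phase histograms: BPSK mass concentrates in 2 bins,
# QPSK mass in 4; an observed histogram is assigned to the template
# with the smaller divergence.
bpsk = [0.4625, 0.0125, 0.0125, 0.0125, 0.4625, 0.0125, 0.0125, 0.0125]
qpsk = [0.24, 0.01, 0.24, 0.01, 0.24, 0.01, 0.24, 0.01]
observed = [0.40, 0.02, 0.05, 0.02, 0.42, 0.04, 0.03, 0.02]

closer_to_bpsk = kl_divergence(observed, bpsk) < kl_divergence(observed, qpsk)
```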

  10. A spatially explicit capture-recapture estimator for single-catch traps.

    PubMed

    Distiller, Greg; Borchers, David L

    2015-11-01

    Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.

  11. Subjective results of joint lavage and viscosupplementation in hemophilic arthropathy

    PubMed Central

    de Rezende, Márcia Uchoa; Rosa, Thiago Bittencourt Carvalho; Pasqualin, Thiago; Frucchi, Renato; Okazaki, Erica; Villaça, Paula Ribeiro

    2015-01-01

    OBJECTIVE: To assess whether joint lavage, viscosupplementation and triamcinolone improve joint pain, function and quality of life in patients with severe hemophilic arthropathy. METHODS: Fourteen patients with knee and/or ankle hemophilic arthritis with and without involvement of other joints underwent joint lavage and subsequent injection of hylan G-F20 and triamcinolone in all affected joints. The patients answered algo-functional questionnaires (Lequesne and WOMAC), visual analog scale for pain (VAS) and SF-36 preoperatively, and at one, three, six and twelve months postoperatively. RESULTS: Sixteen knees, 15 ankles, 8 elbows and one shoulder were treated in 14 patients. Six patients had musculoskeletal bleeding [ankle (1), leg muscle (2) and knees (4)] at 3 months affecting the results. Pain did not improve significantly. Function improved (WOMAC p=0.02 and Lequesne p=0.01). The physical component of SF-36 improved at all time points except at 3 months, with best results at one-year follow-up (baseline = 33.4; 1 month = 39.6; 3 months= 37.6; 6 months 39.6 and 1 year = 44.6; p < 0.001). CONCLUSION: Joint lavage followed by injection of triamcinolone and hylan G-F20 improves function and quality of life progressively up to a year, even in severe hemophilic arthropathy. Level of Evidence IV, Case Series. PMID:26207096

  12. A Functional Model of the Digital Extensor Mechanism: Demonstrating Biomechanics with Hair Bands

    ERIC Educational Resources Information Center

    Cloud, Beth A.; Youdas, James W.; Hellyer, Nathan J.; Krause, David A.

    2010-01-01

    The action of muscles about joints can be explained through analysis of their spatial relationship. A functional model of these relationships can be valuable in learning and understanding the muscular action about a joint. A model can be particularly helpful when examining complex actions across multiple joints such as in the digital extensor…

  13. JANUS: Joint Academic Network Using Satellite. Brief Description of Project. IET Papers on Broadcasting: No. 287.

    ERIC Educational Resources Information Center

    Bates, A. W.

    The JANUS (Joint Academic Network Using Satellite) satellite network is being planned to link European institutions wishing to jointly produce distance teaching materials. Earth stations with capabilities for transmit/receive functions, voice/data functions, two 64 kbs channels, and connection to local telephone exchange and computer networks will…

  14. Mobile sensing of point-source fugitive methane emissions using Bayesian inference: the determination of the likelihood function

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Albertson, J. D.

    2016-12-01

    Natural gas is considered a bridge fuel towards clean energy due to its potentially lower greenhouse gas emissions compared with other fossil fuels. Despite numerous efforts, an efficient and cost-effective approach to monitor fugitive methane emissions along the natural gas production-supply chain has not yet been developed. Recently, mobile methane measurement has been introduced, which applies a Bayesian approach to probabilistically infer methane emission rates and update estimates recursively when new measurements become available. However, the likelihood function, especially the error term which determines the shape of the estimate uncertainty, has not been rigorously defined and evaluated with field data. To address this issue, we performed a series of near-source (< 30 m) controlled methane release experiments using a specialized vehicle mounted with fast-response methane analyzers and a GPS unit. Methane concentrations were measured at two different heights along mobile traversals downwind of the sources, and concurrent wind and temperature data were recorded by nearby 3-D sonic anemometers. With known methane release rates, the measurements were used to determine the functional form and the parameterization of the likelihood function in the Bayesian inference scheme under different meteorological conditions.
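The recursive Bayesian scheme described above can be sketched on a discrete grid of candidate emission rates. The rate grid, the per-traversal dispersion factor, and the Gaussian error term `sigma` are toy assumptions; calibrating that error term is precisely what the controlled-release experiments address:

```python
import math
import random

random.seed(3)

# Hypothetical setup: candidate emission rates (g/s), a toy per-traversal
# dispersion factor, and Gaussian measurement error of width sigma.
rates = [q / 10 for q in range(1, 51)]            # 0.1 .. 5.0 g/s
posterior = [1.0 / len(rates)] * len(rates)       # flat prior
true_q, sigma = 2.0, 0.3

def likelihood(c_obs, c_model):
    # Gaussian likelihood; the error term sigma sets the width of the
    # posterior, i.e. the shape of the estimate uncertainty
    return math.exp(-(c_obs - c_model) ** 2 / (2 * sigma ** 2))

for _ in range(25):                               # 25 mobile traversals
    a = random.uniform(0.5, 1.5)                  # toy dispersion factor
    c_obs = true_q * a + random.gauss(0, sigma)   # simulated concentration
    posterior = [p * likelihood(c_obs, q * a) for p, q in zip(posterior, rates)]
    z = sum(posterior)
    posterior = [p / z for p in posterior]        # recursive Bayes update

q_map = rates[max(range(len(rates)), key=lambda i: posterior[i])]
```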

  15. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    NASA Astrophysics Data System (ADS)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
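The multiplicative combination of likelihoods can be illustrated on a sensitivity grid. The lognormal shapes and parameters below are hypothetical, not the paper's fitted three-parameter PDFs, and no noninformative-prior derivation is attempted:

```python
import math

# Grid of climate sensitivity values, 0.5 K to 10 K
grid = [0.5 + 0.01 * i for i in range(951)]

def lognormal_like(s, mu, tau):
    # hypothetical likelihood shape for one source of evidence
    return math.exp(-(math.log(s) - mu) ** 2 / (2 * tau ** 2)) / s

instrumental = [lognormal_like(s, math.log(1.9), 0.45) for s in grid]
paleoclimate = [lognormal_like(s, math.log(2.6), 0.60) for s in grid]

# Multiplicatively combine the two (independent) likelihoods and normalize
combined = [a * b for a, b in zip(instrumental, paleoclimate)]
z = sum(combined)
combined = [c / z for c in combined]

# 5-95% range of the combined estimate
cdf, lo, hi = 0.0, None, None
for s, p in zip(grid, combined):
    cdf += p
    if lo is None and cdf >= 0.05:
        lo = s
    if hi is None and cdf >= 0.95:
        hi = s
```

The combined 5-95% range is narrower than either source alone, which is the constraining effect the abstract describes.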

  16. Examining social support, rumination, and optimism in relation to binge eating among Caucasian and African-American college women.

    PubMed

    Mason, Tyler B; Lewis, Robin J

    2017-12-01

    Binge eating is a significant concern among college-age women, both Caucasian and African-American. Research has shown that social support, coping, and optimism are associated with engaging in fewer negative health behaviors, including binge eating, among college students. However, the impact of sources of social support (i.e., support from family, friends, and a special person), rumination, and optimism on binge eating as a function of race/ethnicity has received less attention. The purpose of this study was to examine the association between social support, rumination, and optimism and binge eating among Caucasian and African-American women, separately. Caucasian (n = 100) and African-American (n = 84) women from a university in the Mid-Atlantic US completed an online survey about eating behaviors and psychosocial health. Social support from friends was associated with less likelihood of binge eating among Caucasian women. Social support from family was associated with less likelihood of binge eating among African-American women, but greater likelihood of binge eating among Caucasian women. Rumination was associated with greater likelihood of binge eating among Caucasian and African-American women. Optimism was associated with less likelihood of binge eating among African-American women. These results demonstrate similarities and differences in correlates of binge eating as a function of race/ethnicity.

  17. Association between marijuana exposure and pulmonary function over 20 years.

    PubMed

    Pletcher, Mark J; Vittinghoff, Eric; Kalhan, Ravi; Richman, Joshua; Safford, Monika; Sidney, Stephen; Lin, Feng; Kertesz, Stefan

    2012-01-11

    Marijuana smoke contains many of the same constituents as tobacco smoke, but whether it has similar adverse effects on pulmonary function is unclear. To analyze associations between marijuana (both current and lifetime exposure) and pulmonary function. The Coronary Artery Risk Development in Young Adults (CARDIA) study, a longitudinal study collecting repeated measurements of pulmonary function and smoking over 20 years (March 26, 1985-August 19, 2006) in a cohort of 5115 men and women in 4 US cities. Mixed linear modeling was used to account for individual age-based trajectories of pulmonary function and other covariates including tobacco use, which was analyzed in parallel as a positive control. Lifetime exposure to marijuana joints was expressed in joint-years, with 1 joint-year of exposure equivalent to smoking 365 joints or filled pipe bowls. Forced expiratory volume in the first second of expiration (FEV(1)) and forced vital capacity (FVC). Marijuana exposure was nearly as common as tobacco exposure but was mostly light (median, 2-3 episodes per month). Tobacco exposure, both current and lifetime, was linearly associated with lower FEV(1) and FVC. In contrast, the association between marijuana exposure and pulmonary function was nonlinear (P < .001): at low levels of exposure, FEV(1) increased by 13 mL/joint-year (95% CI, 6.4 to 20; P < .001) and FVC by 20 mL/joint-year (95% CI, 12 to 27; P < .001), but at higher levels of exposure, these associations leveled or even reversed. The slope for FEV(1) was -2.2 mL/joint-year (95% CI, -4.6 to 0.3; P = .08) at more than 10 joint-years and -3.2 mL per marijuana smoking episode/mo (95% CI, -5.8 to -0.6; P = .02) at more than 20 episodes/mo. With very heavy marijuana use, the net association with FEV(1) was not significantly different from baseline, and the net association with FVC remained significantly greater than baseline (eg, at 20 joint-years, 76 mL [95% CI, 34 to 117]; P < .001). 
Occasional and low cumulative marijuana use was not associated with adverse effects on pulmonary function.

  18. Interprofessional approach for teaching functional knee joint anatomy.

    PubMed

    Meyer, Jakob J; Obmann, Markus M; Gießler, Marianne; Schuldis, Dominik; Brückner, Ann-Kathrin; Strohm, Peter C; Sandeck, Florian; Spittau, Björn

    2017-03-01

    Profound knowledge in functional and clinical anatomy is a prerequisite for efficient diagnosis in medical practice. However, anatomy teaching does not always consider functional and clinical aspects. Here we introduce a new interprofessional approach to effectively teach the anatomy of the knee joint. The presented teaching approach involves anatomists, orthopaedists and physical therapists to teach anatomy of the knee joint in small groups under functional and clinical aspects. The knee joint courses were implemented during early stages of the medical curriculum and medical students were grouped with students of physical therapy to sensitize students to the importance of interprofessional work. Evaluation results clearly demonstrate that medical students and physical therapy students appreciated this teaching approach. First evaluations of following curricular anatomy exams suggest a benefit of course participants in knee-related multiple choice questions. Together, the interprofessional approach presented here proves to be a suitable approach to teach functional and clinical anatomy of the knee joint and further trains interprofessional work between prospective physicians and physical therapists as a basis for successful healthcare management. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.

  19. No evidence for the use of DIR, D–D fusions, chromosome 15 open reading frames or VH replacement in the peripheral repertoire was found on application of an improved algorithm, JointML, to 6329 human immunoglobulin H rearrangements

    PubMed Central

    Ohm-Laursen, Line; Nielsen, Morten; Larsen, Stine R; Barington, Torben

    2006-01-01

    Antibody diversity is created by imprecise joining of the variability (V), diversity (D) and joining (J) gene segments of the heavy and light chain loci. Analysis of rearrangements is complicated by somatic hypermutations and uncertainty concerning the sources of gene segments and the precise way in which they recombine. It has been suggested that D genes with irregular recombination signal sequences (DIR) and chromosome 15 open reading frames (OR15) can replace conventional D genes, that two D genes or inverted D genes may be used and that the repertoire can be further diversified by heavy chain V gene (VH) replacement. Safe conclusions require large, well-defined sequence samples and algorithms minimizing stochastic assignment of segments. Two computer programs were developed for analysis of heavy chain joints. JointHMM is a profile hidden Markov model, while JointML is a maximum-likelihood-based method taking the lengths of the joint and the mutational status of the VH gene into account. The programs were applied to a set of 6329 clonally unrelated rearrangements. A conventional D gene was found in 80% of unmutated sequences and 64% of mutated sequences, while D-gene assignment was kept below 5% in artificial (randomly permutated) rearrangements. No evidence for the use of DIR, OR15, multiple D genes or VH replacements was found, while inverted D genes were used in less than 1‰ of the sequences. JointML was shown to have a higher predictive performance for D-gene assignment in mutated and unmutated sequences than four other publicly available programs. An online version 1·0 of JointML is available at http://www.cbs.dtu.dk/services/VDJsolver. PMID:17005006

  20. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occur often in regression analysis, which arises frequently in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
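
Empirical likelihood and GMM are both ways to combine over-identified unbiased estimating equations. As a stand-in for the paper's EL machinery, the sketch below (two hypothetical noisy measurements of the same parameter, not the paper's covariate-missing setup) uses two-step GMM with the inverse moment covariance as the weight, which illustrates the same "more equations than parameters" idea:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
n = 2000
x = theta_true + rng.normal(0, 1.0, n)   # precise measurement
y = theta_true + rng.normal(0, 3.0, n)   # noisy measurement

def gmm_combine(x, y):
    """Combine the unbiased moments E[x - theta] = 0 and E[y - theta] = 0,
    weighting by the inverse covariance of the moment functions."""
    theta0 = x.mean()                           # preliminary estimate
    g = np.column_stack([x - theta0, y - theta0])
    W = np.linalg.inv(np.cov(g, rowvar=False))  # optimal weight matrix
    one, m = np.ones(2), np.array([x.mean(), y.mean()])
    # Minimizer of (m - theta*1)' W (m - theta*1) in closed form:
    return (one @ W @ m) / (one @ W @ one)

theta_hat = gmm_combine(x, y)
print(theta_hat)  # close to 2.0, downweighting the noisier moment
```

Empirical likelihood attains the same first-order efficiency as this optimally weighted combination while avoiding explicit estimation of the weight matrix.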

  1. Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach

    PubMed Central

    Seeley, Matthew K.; Francom, Devin; Reese, C. Shane; Hopkins, J. Ty

    2017-01-01

    In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. When using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function. PMID:29339984
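
A minimal illustration of the "evaluate the entire curve" idea: pointwise paired t-tests over synthetic gait-like curves (made-up data, and a crude stand-in for the FANOVA machinery with no multiplicity correction) locate where in the cycle two conditions differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 101)     # normalized stance phase
n = 19                         # subjects per condition

# Synthetic joint-angle curves: conditions differ only around mid-stance.
base = 10 * np.sin(2 * np.pi * t)
control = base + rng.normal(0, 1.0, (n, t.size))
effusion = (base + 2.0 * np.exp(-((t - 0.5) / 0.08) ** 2)
            + rng.normal(0, 1.0, (n, t.size)))

# Pointwise paired t-tests across the whole curve reveal both the
# location and magnitude of the difference, not just a single peak value.
tstat, p = stats.ttest_rel(control, effusion, axis=0)
print("most significant point:", t[np.argmin(p)])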

  2. Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach.

    PubMed

    Park, Jihong; Seeley, Matthew K; Francom, Devin; Reese, C Shane; Hopkins, J Ty

    2017-12-01

    In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. When using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function.

  3. Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jihong; Seeley, Matthew K.; Francom, Devin

    In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. Thus when using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function.

  4. Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach

    DOE PAGES

    Park, Jihong; Seeley, Matthew K.; Francom, Devin; ...

    2017-12-28

    In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. Thus when using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function.

  5. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  6. AGN host galaxy mass function in COSMOS. Is AGN feedback responsible for the mass-quenching of galaxies?

    NASA Astrophysics Data System (ADS)

    Bongiorno, A.; Schulze, A.; Merloni, A.; Zamorani, G.; Ilbert, O.; La Franca, F.; Peng, Y.; Piconcelli, E.; Mainieri, V.; Silverman, J. D.; Brusa, M.; Fiore, F.; Salvato, M.; Scoville, N.

    2016-04-01

    We investigate the role of supermassive black holes in the global context of galaxy evolution by measuring the host galaxy stellar mass function (HGMF) and the specific accretion rate (λSAR) distribution function (SARDF), up to z ~ 2.5 with ~1000 X-ray selected AGN from XMM-COSMOS. Using a maximum likelihood approach, we jointly fit the stellar mass function and specific accretion rate distribution function, with the X-ray luminosity function as an additional constraint. Our best-fit model characterizes the SARDF as a double power-law with mass-dependent but redshift-independent break, whose low λSAR slope flattens with increasing redshift while the normalization increases. This implies that for a given stellar mass, higher λSAR objects have a peak in their space density at an earlier epoch than the lower λSAR objects, following and mimicking the well-known AGN cosmic downsizing as observed in the AGN luminosity function. The mass function of active galaxies is described by a Schechter function with an almost constant M⋆ and a low-mass slope α that flattens with redshift. Compared to the stellar mass function, we find that the HGMF has a similar shape and that up to log (M⋆/M⊙) ~ 11.5, the ratio of AGN host galaxies to star-forming galaxies is basically constant (~10%). Finally, the comparison of the AGN HGMF for different luminosity and specific accretion rate subclasses with a previously published phenomenological model prediction for the "transient" population, which are galaxies in the process of being mass-quenched, reveals that low-luminosity AGN do not appear to be able to contribute significantly to the quenching and that at least at high masses, that is, M⋆ > 1010.7 M⊙, feedback from luminous AGN (log Lbol ≳ 46 [erg/s]) may be responsible for the quenching of star formation in the host galaxy.
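
For reference, the Schechter form used for such mass functions can be written per dex of stellar mass as Φ(log M) = ln(10) φ* x^(α+1) e^(−x) with x = 10^(log M − log M*). The sketch below evaluates it with illustrative parameter values (not the paper's best fit):

```python
import numpy as np

def schechter_logmass(logM, logMstar, phistar, alpha):
    """Schechter mass function per dex:
    Phi(logM) = ln(10) * phistar * x**(alpha + 1) * exp(-x),
    with x = 10**(logM - logMstar)."""
    x = 10.0 ** (logM - logMstar)
    return np.log(10.0) * phistar * x ** (alpha + 1) * np.exp(-x)

# Illustrative parameters: a power-law rise below the knee at
# log(M*/Msun) = 10.8 and an exponential cutoff above it.
logM = np.linspace(9, 12, 7)
phi = schechter_logmass(logM, logMstar=10.8, phistar=1e-3, alpha=-0.4)
print(phi)
```

The knee M⋆ sets where the exponential cutoff takes over, while α controls the low-mass slope that the paper finds flattening with redshift.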

  7. [Fetal bone and joint disorders].

    PubMed

    Jakobovits, Akos

    2008-12-21

    The article discusses the physiology and pathology of fetal bone and joint development and functions. The bones provide static support for the body. The skull and the bones of the spinal column encase the central and part of the peripheral nervous system. The ribs and the sternum shield the heart and the lungs, while the bones of the pelvis protect the intraabdominal organs. Pathological changes of these bony structures may impair the functions of the respective systems or internal organs. Movements of the bones are brought about by muscles. The resulting motions are facilitated by joints. Bony anomalies of the extremities limit their effective functions. Apart from skeletal and joint abnormalities, akinesia may also be caused by neurological, muscular and skin diseases that secondarily affect the functions of bones and joints. Such pathological changes may lead to various degrees of physical disability and even to death. Some of the mentioned anomalies are recognizable in utero by ultrasound. The diagnosis may serve as medical indication for abortion in those instances when the identified abnormality is incompatible with independent life.

  8. Model selection and parameter estimation in structural dynamics using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith

    2018-01-01

    This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be approached in a closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
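
The core ABC rejection loop is short enough to sketch. The toy below infers a Gaussian mean using the sample mean as the summary statistic (the structural-dynamics applications in the paper would swap in their forward model and summary statistics):

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(3.0, 1.0, 200)     # data from an unknown mean (3.0)
s_obs = observed.mean()                  # summary statistic of the data

def abc_rejection(s_obs, n_draws=20000, eps=0.05):
    """ABC rejection: sample from the prior, run the forward model, and
    keep draws whose simulated summary lands within eps of the observed
    summary -- no likelihood evaluation required."""
    theta = rng.uniform(0, 6, n_draws)   # prior on the mean
    sims = rng.normal(theta, 1.0, (200, n_draws)).mean(axis=0)
    return theta[np.abs(sims - s_obs) < eps]

posterior = abc_rejection(s_obs)
print(posterior.mean())  # close to the true mean 3.0
```

The choices of summary statistic, distance metric, and tolerance eps are exactly the tuning knobs the abstract highlights; shrinking eps sharpens the approximation at the cost of more rejected draws.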

  9. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  10. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a "chevron" shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
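
The "errors from the curvature of the likelihood at the maximum" idea can be shown on a far simpler model than a photon trajectory. For a Bernoulli success probability, the observed information (negative second derivative of the log-likelihood at the MLE) directly yields the standard error:

```python
import math

def bernoulli_mle_with_error(k, n):
    """MLE of a success probability and its standard error from the
    curvature of the log-likelihood at its maximum."""
    p = k / n  # maximizer of log L(p) = k*log(p) + (n-k)*log(1-p)
    # d^2/dp^2 log L = -k/p^2 - (n-k)/(1-p)^2; observed information is
    # its negative, and the variance estimate is its inverse.
    info = k / p ** 2 + (n - k) / (1 - p) ** 2
    return p, 1.0 / math.sqrt(info)

p_hat, se = bernoulli_mle_with_error(40, 100)
print(p_hat, se)  # 0.4 and sqrt(0.4 * 0.6 / 100) ≈ 0.049
```

The photon-sequence likelihood in the abstract is far less tractable, but the recipe is the same: sharper curvature at the maximum means smaller parameter uncertainty.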

  11. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692

  12. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  13. Uncertainty analysis of signal deconvolution using a measured instrument response function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartouni, E. P.; Beeman, B.; Caggiano, J. A.

    2016-10-05

    A common analysis procedure maximizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). For the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers investigated here, the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
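
A minimal sketch of fitting a physical parameter through a measured IRF: the forward model convolves an exponential decay with the IRF, and a Gaussian ln-likelihood reduces to least squares (synthetic data and an illustrative decay model; the NIF analysis additionally propagates IRF measurement uncertainty, which this sketch omits):

```python
import numpy as np
from scipy.optimize import minimize_scalar

t = np.arange(0, 50, 1.0)
irf = np.exp(-0.5 * ((t - 5) / 1.5) ** 2)   # "measured" instrument response
irf /= irf.sum()

def forward(rate):
    """Physics model (exponential decay) as seen through the instrument."""
    signal = np.exp(-rate * t)
    return np.convolve(signal, irf)[: t.size]

rng = np.random.default_rng(3)
data = forward(0.2) + rng.normal(0, 0.005, t.size)  # synthetic observation

# With Gaussian errors, maximizing the ln-likelihood is least squares.
nll = lambda rate: np.sum((data - forward(rate)) ** 2)
fit = minimize_scalar(nll, bounds=(0.01, 1.0), method="bounded")
print(fit.x)  # close to the true decay rate 0.2
```

Treating the IRF as exact, as done here, understates the parameter uncertainty whenever the IRF itself was measured with finite precision, which is the abstract's central point.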

  14. Functional anatomy of the temporomandibular joint (I).

    PubMed

    Sava, Anca; Scutariu, Mihaela Monica

    2012-01-01

    Jaw movement is analyzed as the action between two rigid components jointed together in a particular way, the movable mandible against the stabilized cranium. Jaw articulation is distinguished from most other synovial joints of the body by the coincidence of certain characteristic features. Its articular surfaces are not covered by hyaline cartilage as elsewhere. The two jointed components carry teeth, the shape, position and occlusion of which have a unique influence on specific positions and movements within the joint. A fibrocartilaginous disc is interposed between upper and lower articular surfaces; this disc compensates for the incongruities in opposing parts and allows sliding, pivoting, and rotating movements between the bony components. These are the reasons for our review of the functional anatomy of the temporomandibular joint.

  15. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
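
A crude illustration of pointwise order-constrained survivor estimation (not the paper's estimator, which is defined through constrained maximization at each t): compute unconstrained Kaplan-Meier curves for two groups on a common grid, then, wherever the raw curves cross, pull both to their pointwise average so that S1 ≥ S2 holds everywhere:

```python
import numpy as np

def kaplan_meier(times, events, grid):
    """Kaplan-Meier survivor estimate evaluated on a common time grid."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv = np.empty_like(grid, dtype=float)
    s, at_risk, j = 1.0, len(times), 0
    for i, t in enumerate(grid):
        while j < len(times) and times[j] <= t:
            if events[j]:                 # event: multiply in survival factor
                s *= 1.0 - 1.0 / at_risk
            at_risk -= 1                  # event or censoring leaves risk set
            j += 1
        surv[i] = s
    return surv

rng = np.random.default_rng(4)
grid = np.linspace(0, 3, 31)
t1, e1 = rng.exponential(2.0, 80), rng.random(80) > 0.2  # longer-lived group
t2, e2 = rng.exponential(1.0, 80), rng.random(80) > 0.2
s1 = kaplan_meier(t1, e1, grid)
s2 = kaplan_meier(t2, e2, grid)

# Pointwise projection onto the order constraint S1 >= S2: where the raw
# curves cross, both are pulled to their average; elsewhere unchanged.
mid = (s1 + s2) / 2
s1_c, s2_c = np.maximum(s1, mid), np.minimum(s2, mid)
```

Note that the constrained curves remain nonincreasing in t, the property the paper establishes for its pointwise estimator.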

  16. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  17. Functional roles of lower-limb joint moments while walking in water.

    PubMed

    Miyoshi, Tasuku; Shirota, Takashi; Yamamoto, Shin-Ichiro; Nakazawa, Kimitaka; Akai, Masami

    2005-02-01

    The aim of this study was to clarify the functional roles of lower-limb joint moments and their contribution to support and propulsion tasks while walking in water compared with walking on land. Sixteen healthy, young subjects walked on land and in water at several different speeds with and without additional loads. Walking in water is a major rehabilitation therapy for patients with orthopedic disorders. However, the functional role of lower-limb joint moments while walking in water is still unclear. Kinematics, electromyographic activities in the biceps femoris and gluteus maximus, and ground reaction forces were measured under the following conditions: walking on land and in water at a self-determined pace, slow walking on land, and fast walking in water with or without additional loads (8 kg). The hip, knee, and ankle joint moments were calculated by inverse dynamics. Increased walking speed increased the hip extension moment, and the additional load increased the ankle plantar flexion and knee extension moments. The major functional role was different in each lower-limb joint muscle: that of the muscle group in the ankle is to support the body against gravity, and that of the muscle group involved in hip extension is to contribute to propulsion. In addition, walking in water not only reduced the joint moments but also completely changed the inter-joint coordination. It is of value for clinicians to be aware that the greater viscosity of water produces a greater load on the hip joint during fast walking in water.

  18. Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series

    NASA Astrophysics Data System (ADS)

    Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.

    2018-03-01

    Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
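
The zero-inflation idea is a mixture likelihood: a point mass at zero with probability π0 plus a continuous density for the nonzero residuals. A minimal sketch with a Gaussian error model (illustrative only; the ZI-GL builds on the Generalized Likelihood's heteroscedastic, skewed error distribution rather than a fixed Gaussian):

```python
import numpy as np
from scipy import stats

def zi_gaussian_loglik(residuals, pi0, sigma, tol=1e-9):
    """Zero-inflated error log-likelihood: point mass pi0 at zero mixed
    with a Gaussian density for the nonzero residuals."""
    is_zero = np.abs(residuals) < tol
    ll_zero = is_zero.sum() * np.log(pi0)
    ll_rest = np.sum(np.log1p(-pi0)
                     + stats.norm.logpdf(residuals[~is_zero], 0.0, sigma))
    return ll_zero + ll_rest

# Residual series with exact zeros, as in interception modeling.
res = np.array([0.0, 0.0, 0.0, 0.1, -0.2, 0.05])
print(zi_gaussian_loglik(res, pi0=0.5, sigma=0.15))
```

Because half of these residuals are exactly zero, the likelihood is maximized over π0 near 0.5; a purely continuous error model would assign those observations zero density.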

  19. A scan statistic for binary outcome based on hypergeometric probability model, with an application to detecting spatial clusters of Japanese encephalitis.

    PubMed

    Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong

    2013-01-01

    As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this likelihood function offers an alternative, indirect way to identify the potential cluster, with the test statistic being the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation with independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
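
A sketch of the scanning step under the hypergeometric null: conditioning on the margins, the case count in a candidate window is Hypergeometric(N, K, n), and the most unusual window is the one with the smallest null pmf. Toy one-dimensional data with contiguous windows stand in for the spatial circles a real scan statistic would use:

```python
import numpy as np
from scipy.stats import hypergeom

cases = np.array([2, 1, 3, 15, 12, 2, 1, 2])      # cases per region (toy)
pop = np.array([100, 120, 90, 110, 100, 95, 105, 100])
N, K = pop.sum(), cases.sum()                     # total population, cases

# Scan all contiguous windows; under the null hypothesis the window case
# count k given window population n follows Hypergeometric(N, K, n).
best, best_p = None, 1.0
for i in range(len(cases)):
    for j in range(i, len(cases)):
        k, n = cases[i:j + 1].sum(), pop[i:j + 1].sum()
        p = hypergeom.pmf(k, N, K, n)
        if p < best_p:
            best, best_p = (i, j), p
print(best)  # the window covering the elevated regions 3-4
```

A full implementation would calibrate the extreme value by Monte Carlo, redistributing the K cases under the null and re-running the scan many times.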

  20. Topics in inference and decision-making with partial knowledge

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.

  1. Effects of balance training by knee joint motions on muscle activity in adult men with functional ankle instability.

    PubMed

    Nam, Seung-Min; Kim, Won-Bok; Yun, Chang-Kyo

    2016-05-01

    [Purpose] This study examined the effects of balance training by applying knee joint movements on muscle activity in male adults with functional ankle instability. [Subjects and Methods] 28 adults with functional ankle instability, divided randomly into an experimental group, which performed balance training by applying knee joint movements for 20 minutes and ankle joint exercises for 10 minutes, and a control group, which performed ankle joint exercise for 30 minutes. Exercises were completed three times a week for 8 weeks. Electromyographic values of the tibialis anterior, peroneus longus, peroneus brevis, and the lateral gastrocnemius muscles were obtained to compare and analyze muscle activity before and after the experiments in each group. [Results] The experimental group had significant increases in muscle activity in the tibialis anterior, peroneus longus, and lateral gastrocnemius muscles, while muscle activity in the peroneus brevis increased without significance. The control group had significant increases in muscle activity in the tibialis anterior and peroneus longus, while muscle activity in the peroneus brevis and lateral gastrocnemius muscles increased without significance. [Conclusion] In conclusion, balance training by applying knee joint movements can be recommended as a treatment method for patients with functional ankle instability.

  2. 24 CFR 943.128 - How does a consortium carry out planning and reporting functions?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... HOUSING, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT PUBLIC HOUSING AGENCY CONSORTIA AND JOINT VENTURES... the consortium agreement, the consortium must submit joint five-year Plans and joint Annual Plans for... the joint PHA Plan. ...

  3. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify some factors such as age and gender that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for the changes in MRL functions with right censored data. Two real examples are also given: Veterans' administration lung cancer study and Stanford heart transplant to illustrate the detecting procedure. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  5. Effects of hook plate on shoulder function after treatment of acromioclavicular joint dislocation.

    PubMed

    Chen, Chang-Hong; Dong, Qi-Rong; Zhou, Rong-Kui; Zhen, Hua-Qing; Jiao, Ya-Jun

    2014-01-01

    Internal fixation with hook plate has been used to treat acromioclavicular joint dislocation. This study aims to evaluate the effect of its use on shoulder function, to further analyze the contributing factors, and provide a basis for selection and design of improved internal fixation treatment of the acromioclavicular joint dislocation in the future. A retrospective analysis was performed on patients treated with a hook plate for acromioclavicular joint dislocation in our hospital from January 2010 to February 2013. There were 33 cases in total, including 25 males and 8 females, with mean age of 48.27 ± 8.7 years. There were 29 cases of Rockwood type III acromioclavicular dislocation, 4 cases of type V. The Constant-Murley shoulder function scoring system was used to evaluate the shoulder function recovery status after surgery. Anteroposterior shoulder X-ray was used to assess the position of the hook plate, status of acromioclavicular joint reduction and the occurrence of postoperative complications. According to the Constant-Murley shoulder function scoring system, the average scores were 78 ± 6 points 8 to 12 months after the surgery and before the removal of the hook plate, the average scores were 89 ± 5 minutes two months after the removal of hook plate. Postoperative X-ray imaging showed osteolysis in 10 cases (30.3%), osteoarthritis in six cases (18.1%), osteolysis associated with osteoarthritis in four cases(12.1%), and steel hook broken in one case (3%). The use of hook plate on open reduction and internal fixation of the acromioclavicular joint dislocation had little adverse effect on shoulder function and is an effective method for the treatment of acromioclavicular joint dislocation. Osteoarthritis and osteolysis are the two common complications after hook plate use, which are associated with the impairment of shoulder function. Shoulder function will be improved after removal of the hook plate.

  6. Effects of hook plate on shoulder function after treatment of acromioclavicular joint dislocation

    PubMed Central

    Chen, Chang-Hong; Dong, Qi-Rong; Zhou, Rong-Kui; Zhen, Hua-Qing; Jiao, Ya-Jun

    2014-01-01

    Introduction: Internal fixation with hook plate has been used to treat acromioclavicular joint dislocation. This study aims to evaluate the effect of its use on shoulder function, to further analyze the contributing factors, and provide a basis for selection and design of improved internal fixation treatment of the acromioclavicular joint dislocation in the future. Methods: A retrospective analysis was performed on patients treated with a hook plate for acromioclavicular joint dislocation in our hospital from January 2010 to February 2013. There were 33 cases in total, including 25 males and 8 females, with mean age of 48.27 ± 8.7 years. There were 29 cases of Rockwood type III acromioclavicular dislocation, 4 cases of type V. The Constant-Murley shoulder function scoring system was used to evaluate the shoulder function recovery status after surgery. Anteroposterior shoulder X-ray was used to assess the position of the hook plate, status of acromioclavicular joint reduction and the occurrence of postoperative complications. Results: According to the Constant-Murley shoulder function scoring system, the average scores were 78 ± 6 points 8 to 12 months after the surgery and before the removal of the hook plate, the average scores were 89 ± 5 minutes two months after the removal of hook plate. Postoperative X-ray imaging showed osteolysis in 10 cases (30.3%), osteoarthritis in six cases (18.1%), osteolysis associated with osteoarthritis in four cases(12.1%), and steel hook broken in one case (3%). Conclusion: The use of hook plate on open reduction and internal fixation of the acromioclavicular joint dislocation had little adverse effect on shoulder function and is an effective method for the treatment of acromioclavicular joint dislocation. Osteoarthritis and osteolysis are the two common complications after hook plate use, which are associated with the impairment of shoulder function. Shoulder function will be improved after removal of the hook plate. 
PMID:25356110

  7. Effect of Stretching Combined With Ultrashort Wave Diathermy on Joint Function and Its Possible Mechanism in a Rabbit Knee Contracture Model.

    PubMed

    Zhang, Quan Bing; Zhou, Yun; Zhong, Hua Zhang; Liu, Yi

    2018-05-01

    The aim of this study was to investigate the therapeutic effect of stretching combined with ultrashort wave on joint contracture and explore its possible mechanism. Thirty-two rabbits underwent unilateral immobilization of a knee joint at full extension to cause joint contracture. At 6 wks after immobilization, the rabbits were randomly divided into the following four groups: natural recovery group, stretching treatment group, ultrashort wave treatment group, and combined treatment group. For comparison, eight control group animals of corresponding age were also examined. The effect of stretching and ultrashort wave treatment on joint contracture was assessed by measuring the joint range of motion, evaluating the collagen deposition of joint capsule and assessing the mRNA and protein levels for transforming growth factor β1 in the joint capsule. The combined treatment group led to the best recovery of joint function. The combined treatment with stretching and ultrashort wave was more effective than stretching or ultrashort wave treatment alone against the synovial thickening of suprapatellar joint capsule, the collagen deposition of anterior joint capsule, and the elevated expression of transforming growth factor β1 in the joint capsule. Stretching combined with ultrashort wave treatment was effective in improving joint range of motion, reducing the biomechanical, histological, and molecular manifestations of joint capsule fibrosis in a rabbit model of extending joint contracture.

  8. An anatomically based protocol for the description of foot segment kinematics during gait.

    PubMed

    Leardini, A; Benedetti, M G; Catani, F; Simoncini, L; Giannini, S

    1999-10-01

    To design a technique for the in vivo description of ankle and other foot joint rotations to be applied in routine functional evaluation using non-invasive stereophotogrammetry. Position and orientation of tibia/fibula, calcaneus, mid-foot, 1st metatarsal and hallux segments were tracked during the stance phase of walking in nine asymptomatic subjects. Rigid clusters of reflective markers were used for foot segment pose estimation. Anatomical landmark calibration was applied for the reconstruction of anatomical landmarks. Previous studies have analysed only a limited number of joints or have proposed invasive techniques. Anatomical landmark trajectories were reconstructed in the laboratory frame using data from the anatomical calibration procedure. Anatomical co-ordinate frames were defined using the obtained landmark trajectories. Joint co-ordinate systems were used to calculate corresponding joint rotations in all three anatomical planes. The patterns of the joint rotations were highly repeatable within subjects. Consistent patterns between subjects were also exhibited at most of the joints. The method proposed enables a detailed description of ankle and other foot joint rotations on an anatomical base. Joint rotations can therefore be expressed in the well-established terminology necessary for their clinical interpretation. Functional evaluation of patients affected by foot diseases has recently called for more detailed and non-invasive protocols for the description of foot joint rotations during gait. The proposed method can help clinicians to distinguish between normal and pathological pattern of foot joint rotations, and to quantitatively assess the restoration of normal function after treatment.

  9. Evaluation of Radiographic Instability of the Trapeziometacarpal Joint in Women With Carpal Tunnel Syndrome.

    PubMed

    Kim, Jeong Hwan; Gong, Hyun Sik; Kim, Youn Ho; Rhee, Seung Hwan; Kim, Jihyoung; Baek, Goo Hyun

    2015-07-01

    To determine whether median nerve dysfunction measured by electrophysiologic studies in carpal tunnel syndrome (CTS) is associated with thumb trapeziometacarpal (TMC) joint instability. We evaluated 71 women with CTS and 31 asymptomatic control women. Patients with generalized laxity or TMC joint osteoarthritis were excluded. We classified the electrophysiologic severity of CTS based on nerve conduction time and amplitude and assessed radiographic instability of the TMC joint based on TMC joint stress radiographs. We compared subluxation ratio between patients with CTS and controls and performed correlation analysis of the relationship between the electrophysiologic grade and subluxation ratio. Thirty-one patients were categorized into the mild CTS subgroup and 41 into the severe CTS subgroup. There was no significant difference in subluxation ratio between the control group and CTS patients or between the control group and CTS subgroup patients. Furthermore, there was no significant correlation between electrophysiologic grade and subluxation ratio. This study demonstrated that patients with CTS did not have greater radiographic TMC joint instability compared with controls, and suggests that TMC joint stability is not affected by impaired median nerve function. Further studies could investigate how to better evaluate proprioceptive function of TMC joint and whether other nerves have effects on TMC joint motor/proprioceptive function, to elucidate the relationship between neuromuscular control of the TMC joint, its stability, and its progression to osteoarthritis. Diagnostic II. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  10. Multiple Cognitive Control Effects of Error Likelihood and Conflict

    PubMed Central

    Brown, Joshua W.

    2010-01-01

    Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873

  11. Improving joint pain and function in osteoarthritis.

    PubMed

    Owens, Claire; Conaghan, Philip G

    2016-12-01

    Osteoarthritis has become a major chronic pain condition. It affects more than 10% of adults and accounts for almost 10% of health service resources. The impact of osteoarthritis is amplified by underuse of effective muscle strengthening exercises and a focus on often less effective and poorly tolerated analgesic therapies. Although traditionally considered to be primarily a disease of cartilage, there is now ample evidence that typical clinical osteoarthritis involves multiple tissue pathologies. Increased BMI is associated with a higher incidence of knee osteoarthritis. Anatomical abnormalities such as valgus alignment or previous joint trauma including meniscectomy, anterior cruciate ligament rupture and fracture through the joint are also associated with increased incidence of osteoarthritis. Pain is the main presenting symptom. However, we still have a poor understanding of the causes of pain in osteoarthritis. In patients aged 45 or over the diagnosis should be made clinically without investigations if the patient has activity-related joint pain in addition to early morning joint stiffness lasting less than 30 minutes. Muscle strengthening and aerobic exercise have been shown to improve joint pain and function. Weight loss not only improves joint pain and function but has a myriad of other health benefits, reducing the incidence of lifestyle associated diseases such as cardiovascular disease and type 2 diabetes, and mechanical stress on the joints.

  12. Altered neuromuscular control and ankle joint kinematics during walking in subjects with functional instability of the ankle joint.

    PubMed

    Delahunt, Eamonn; Monaghan, Kenneth; Caulfield, Brian

    2006-12-01

    The ankle joint requires very precise neuromuscular control during the transition from terminal swing to the early stance phase of the gait cycle. Altered ankle joint arthrokinematics and muscular activity have been cited as potential factors that may lead to an inversion sprain during the aforementioned time periods. However, to date, no study has investigated patterns of muscle activity and 3D joint kinematics simultaneously in a group of subjects with functional instability compared with a noninjured control group during these phases of the gait cycle. To compare the patterns of lower limb 3D joint kinematics and electromyographic activity during treadmill walking in a group of subjects with functional instability with those observed in a control group. Controlled laboratory study. Three-dimensional angular velocities and displacements of the hip, knee, and ankle joints, as well as surface electromyography of the rectus femoris, peroneus longus, tibialis anterior, and soleus muscles, were recorded simultaneously while subjects walked on a treadmill at a velocity of 4 km/h. Before heel strike, subjects with functional instability exhibited a decrease in vertical foot-floor clearance (12.62 vs 22.84 mm; P < .05), as well as exhibiting a more inverted position of the ankle joint before, at, and immediately after heel strike (1.69 degrees , 2.10 degrees , and -0.09 degrees vs -1.43 degrees , -1.43 degrees , and -2.78 degrees , respectively [minus value = eversion]; P < .05) compared with controls. Subjects with functional instability were also observed to have an increase in peroneus longus integral electromyography during the post-heel strike time period (107.91%.millisecond vs 64.53%.millisecond; P < .01). The altered kinematics observed in this study could explain the reason subjects with functional instability experience repeated episodes of ankle inversion injury in situations with only slight or no external provocation. 
It is hypothesized that the observed increase in peroneus longus activity may be the result of a change in preprogrammed feed-forward motor control.

  13. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826

  14. Massive Halos in Millennium Gas Simulations: Multivariate Scaling Relations

    NASA Astrophysics Data System (ADS)

    Stanek, R.; Rasia, E.; Evrard, A. E.; Pearce, F.; Gazzola, L.

    2010-06-01

    The joint likelihood of observable cluster signals reflects the astrophysical evolution of the coupled baryonic and dark matter components in massive halos, and its knowledge will enhance cosmological parameter constraints in the coming era of large, multiwavelength cluster surveys. We present a computational study of intrinsic covariance in cluster properties using halo populations derived from Millennium Gas Simulations (MGS). The MGS are re-simulations of the original 500 h -1 Mpc Millennium Simulation performed with gas dynamics under two different physical treatments: shock heating driven by gravity only (GO) and a second treatment with cooling and preheating (PH). We examine relationships among structural properties and observable X-ray and Sunyaev-Zel'dovich (SZ) signals for samples of thousands of halos with M 200 >= 5 × 1013 h -1 M sun and z < 2. While the X-ray scaling behavior of PH model halos at low redshift offers a good match to local clusters, the model exhibits non-standard features testable with larger surveys, including weakly running slopes in hot gas observable-mass relations and ~10% departures from self-similar redshift evolution for 1014 h -1 M sun halos at redshift z ~ 1. We find that the form of the joint likelihood of signal pairs is generally well described by a multivariate, log-normal distribution, especially in the PH case which exhibits less halo substructure than the GO model. At fixed mass and epoch, joint deviations of signal pairs display mainly positive correlations, especially the thermal SZ effect paired with either hot gas fraction (r = 0.88/0.69 for PH/GO at z = 0) or X-ray temperature (r = 0.62/0.83). The levels of variance in X-ray luminosity, temperature, and gas mass fraction are sensitive to the physical treatment, but offsetting shifts in the latter two measures maintain a fixed 12% scatter in the integrated SZ signal under both gas treatments. 
We discuss halo mass selection by signal pairs, and find a minimum mass scatter of 4% in the PH model by combining thermal SZ and gas fraction measurements.

  15. Kinematic functions for the 7 DOF robotics research arm

    NASA Technical Reports Server (NTRS)

    Kreutz, K.; Long, M.; Seraji, Homayoun

    1989-01-01

    The Robotics Research Model K-1207 manipulator is a redundant 7R serial link arm with offsets at all joints. To uniquely determine joint angles for a given end-effector configuration, the redundancy is parameterized by a scalar variable which corresponds to the angle between the manipulator elbow plane and the vertical plane. The forward kinematic mappings from joint-space to end-effector configuration and elbow angle, and the augmented Jacobian matrix which gives end-effector and elbow angle rates as a function of joint rates, are also derived.

  16. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensionalsinusoidal gratings (0.4, 2, and 10 cpd) and bidimensionalGaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimulithat differed in grating's spatial frequency or envelope's standard deviation. We tested 21 different envelope's standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating's spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. 
These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158

  17. Knee joint laxity does not moderate the relationship between quadriceps strength and physical function in knee osteoarthritis patients: A cross-sectional study.

    PubMed

    Altubasi, Ibrahim M

    2018-06-07

    Knee osteoarthritis is a common and a disabling musculoskeletal disorder. Patients with knee osteoarthritis have activity limitations which are linked to the strength of the quadriceps muscle. Previous research reported that the relationship between quadriceps muscle strength and physical function is moderated by the level of knee joint frontal plane laxity. The purpose of the current study is to reexamine the moderation effect of the knee joint laxity as measured by stress radiographs on the relationship between quadriceps muscle strength and physical function. One-hundred and sixty osteoarthritis patients participated in this cross-sectional study. Isometric quadriceps muscle strength was measured using an isokinetic dynamometer. Self-rated and performance-based physical function were measured using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) physical function subscale and Get Up and Go test, respectively. Stress radiographs which were taken while applying varus and valgus loads to knee using the TELOS device. Knee joint laxity was determined by measuring the distance between joint surfaces on the medial and lateral sides. Hierarchical multiple regression models were constructed to study the moderation effect of laxity on the strength function relationship. Two regression models were constructed for self-rated and performance-based function. After controlling for demographics, strength contributed significantly in the models. The addition of laxity and laxity-strength interaction did not add significant contributions in the regression models. Frontal plane knee joint laxity measured by stress radiographs does not moderate the relationship between quadriceps muscle strength and physical function in patients with osteoarthritis. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensionalsinusoidal gratings (0.4, 2, and 10 cpd) and bidimensionalGaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimulithat differed in grating's spatial frequency or envelope's standard deviation. We tested 21 different envelope's standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating's spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. 
These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised.

  19. Joint carrier phase and frequency-offset estimation with parallel implementation for dual-polarization coherent receiver.

    PubMed

    Lu, Jianing; Li, Xiang; Fu, Songnian; Luo, Ming; Xiang, Meng; Zhou, Huibin; Tang, Ming; Liu, Deming

    2017-03-06

    We present dual-polarization complex-weighted, decision-aided, maximum-likelihood algorithm with superscalar parallelization (SSP-DP-CW-DA-ML) for joint carrier phase and frequency-offset estimation (FOE) in coherent optical receivers. By pre-compensation of the phase offset between signals in dual polarizations, the performance can be substantially improved. Meanwhile, with the help of modified SSP-based parallel implementation, the acquisition time of FO and the required number of training symbols are reduced by transferring the complex weights of the filters between adjacent buffers, where differential coding/decoding is not required. Simulation results show that the laser linewidth tolerance of our proposed algorithm is comparable to traditional blind phase search (BPS), while a complete FOE range of ± symbol rate/2 can be achieved. Finally, performance of our proposed algorithm is experimentally verified under the scenario of back-to-back (B2B) transmission using 10 Gbaud DP-16/32-QAM formats.

  20. Trends and Patterns in Age Discrimination in Employment Act (ADEA) Charges.

    PubMed

    von Schrader, Sarah; Nazarov, Zafar E

    2016-07-01

    The Age Discrimination in Employment Act (ADEA) protects individuals aged 40 years and over from discrimination throughout the employment process. Using data on ADEA charges from the Equal Employment Opportunity Commission from 1993 to 2010, we present labor force-adjusted charge rates demonstrating that the highest charge rates are among those in the preretirement age range, and only the rate of charges among those aged 65 years and older has not decreased. We examine characteristics of ADEA charges including the prevalence of different alleged discriminatory actions (or issues) and highlight the increasing proportion of age discrimination charges that are jointly filed with other antidiscrimination statutes. Through a regression analysis, we find that the likelihood of citing various issues differs by charging party characteristics, such as age, gender, and minority status, and on charges that cite only age discrimination as compared to those that are jointly filed. Implications of these findings for employers are discussed. © The Author(s) 2016.

  1. Joint trajectories of cigarette smoking and depressive symptoms from the mid-20s to the mid-30s predicting generalized anxiety disorder.

    PubMed

    Lee, Jung Yeon; Brook, Judith S; Finch, Stephen J; De La Rosa, Mario; Brook, David W

    2017-01-01

    The current study examines longitudinal patterns of cigarette smoking and depressive symptoms as predictors of generalized anxiety disorder using data from the Harlem Longitudinal Development Study. There were 674 African American (53%) and Puerto Rican (47%) participants. Among the 674 participants, 60% were females. In the logistic regression analyses, the indicators of membership in each of the joint trajectories of cigarette smoking and depressive symptoms from the mid-20s to the mid-30s were used as the independent variables, and the diagnosis of generalized anxiety disorder in the mid-30s was used as the dependent variable. The high cigarette smoking with high depressive symptoms group and the low cigarette smoking with high depressive symptoms group were associated with an increased likelihood of having generalized anxiety disorder as compared to the no cigarette smoking with low depressive symptoms group. The findings shed light on the prevention and treatment of generalized anxiety disorder.

  2. Joint trajectories of cigarette smoking and depressive symptoms from the mid-twenties to the mid-thirties predicting generalized anxiety disorder

    PubMed Central

    Lee, Jung Yeon; Brook, Judith S.; Finch, Stephen J.; De La Rosa, Mario; Brook, David W.

    2017-01-01

    The current study examines the longitudinal patterns of both cigarette smoking and depressive symptoms as predictors of generalized anxiety disorder (GAD) using data from the Harlem Longitudinal Development Study. There were 674 African American (53%) and Puerto Rican (47%) participants. Among the 674 participants, 60% were females. In the logistic regression analyses, the indicator variables of membership in each of the joint trajectories of cigarette smoking and depressive symptoms from the mid 20s to the mid 30s were used as the independent variables, and the diagnosis of GAD in the mid 30s was used as the dependent variable. The high cigarette smoking with high depressive symptoms group and the low cigarette smoking with high depressive symptoms group were associated with an increased likelihood of having GAD as compared to the no cigarette smoking with low depressive symptoms group. The findings shed light on the prevention and treatment of GAD. PMID:28281938

  3. [The application of delayed skin grafting combined traction in severe joint cicatricial contracture].

    PubMed

    Xu, Zihan; Zhang, Zhenxin; Wang, Benfeng; Sun, Yaowen; Guo, Yadong; Gao, Wenjie; Qin, Gaoping

    2014-11-01

    To investigate the effect of delayed skin grafting combined with traction in severe joint cicatricial contracture. At the first stage, the joint cicatricial contracture was released completely with protection of vessels, nerves, and tendons. The wound was covered with allogeneic skin or biomaterials. After skin traction for 7-14 days, the joint could reach the extension position. Then skin grafting was performed on the wound. Twenty-five cases were treated from March 2000 to May 2013. Primary healing was achieved at the second stage in all cases. The skin grafts had satisfactory color and elasticity, and joint function was normal. All the patients were followed up for 3 months to 11 years with no hypertrophic scar or contraction relapse, except for one case who did not perform enough active exercise of the shoulder joint. Delayed skin grafting combined with traction can effectively increase the skin graft survival rate and improve joint function recovery.

  4. [Treatment of ulnar collateral ligament avulsion fracture of thumb metacarpophalangeal joint using a combination of Kirschner wire and silk tension band].

    PubMed

    Gao, Shunhong; Feng, Shiming; Jiao, Cheng

    2012-12-01

    To investigate the effectiveness of Kirschner wire combined with silk tension band in the treatment of ulnar collateral ligament avulsion fracture of the thumb metacarpophalangeal joint. Between September 2008 and October 2011, 14 patients with ulnar collateral ligament avulsion fracture of the thumb metacarpophalangeal joint were treated using a combination of Kirschner wire and silk tension band. There were 8 males and 6 females, aged 23-55 years (mean, 40.8 years). The causes of injury were machinery twist injury in 5 cases, manual twist injury in 4 cases, falling in 4 cases, and sports injury in 1 case. The time from injury to operation ranged from 2 hours to 14 days. All the patients presented with pain over the ulnar aspect of the metacarpophalangeal joint of the thumb, limitation of motion, and joint instability with pinch and grip. The lateral stress test of the metacarpophalangeal joint was positive. Function training was given at 2 weeks after operation. All incisions healed by first intention, and the lateral stress test of the metacarpophalangeal joint became negative. All the patients were followed up for 6-18 months (mean, 13.1 months). X-ray films showed good fracture reduction and healing, with an average healing time of 7 weeks (range, 4-10 weeks). At last follow-up, the thumbs had stable flexion and extension of the metacarpophalangeal joint, normal opposition function, and normal grip and pinch strengths. According to the Saetta et al. criteria for functional assessment, the results were excellent in 11 cases and good in 3 cases; the excellent and good rate was 100%. Treating ulnar collateral ligament avulsion fracture of the thumb metacarpophalangeal joint with Kirschner wire combined with silk tension band is simple and easy, and it can restore good finger function.

  5. The effect of lower body burns on physical function.

    PubMed

    Benjamin, Nicole C; Andersen, Clark R; Herndon, David N; Suman, Oscar E

    2015-12-01

    To attenuate burn-induced catabolism, patients are often enrolled in a resistance exercise program as part of their physical rehabilitation. This study assessed how lower body burn locations affect strength and cardiopulmonary function. Children who enrolled in an exercise study between 2003 and 2013, were 7-18 years of age, and had burns covering ≥30% of their total body surface area were included. Analysis of variance was used to model the relationship of lower body strength (PTW) and cardiopulmonary function (VO2peak) to burns that traverse the subject's lower body joints. There was a significant relationship between PTW and burns at the hip and toe joints, showing a 26 N m/kg (p=0.010) and 33 N m/kg (p=0.013) decrease in peak torque, respectively. Burns at the hip joint corresponded to a significant decrease in VO2peak of 4.9 ml kg(-1) min(-1) (p=0.010). Physical function and performance are detrimentally affected by burns that traverse specific lower body joints. The strongest relationship with exercise performance was that of hip joint burns, which affected both strength and cardiopulmonary measurements. Ultimately, burns at the hip and toe joints need to be considered when interpreting exercise test results involving the lower body. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.

  6. The Effect of Lower Body Burns on Physical Function

    PubMed Central

    Benjamin, Nicole C.; Andersen, Clark R.; Herndon, David N.; Suman, Oscar E.

    2015-01-01

    Objective To attenuate burn-induced catabolism, patients are often enrolled in a resistance exercise program as part of their physical rehabilitation. This study assessed how lower body burn locations affected strength and cardiopulmonary function. Methods Children who enrolled in an exercise study between 2003 and 2013, were 7–18 years of age, and had burns covering ≥ 30% of their total body surface area were included. Analysis of variance was used to model the relationship of lower body strength (PTW) and cardiopulmonary function (VO2peak) to burns that traverse the subject’s lower body joints. Results There was a significant relationship between PTW and burns at the hip and toe joints, showing a 26 Newton·meters/kilogram (p=0.010) and 33 Newton·meters/kilogram (p=0.013) decrease in peak torque, respectively. Burns at the hip joint corresponded to a significant decrease in VO2peak of 4.9 mL·kg−1·min−1 (p=0.010). Conclusion Physical function and performance are detrimentally affected by burns that traverse specific lower body joints. The strongest relationship with exercise performance was that of hip joint burns, which affected both strength and cardiopulmonary measurements. Ultimately, burns at the hip and toe joints need to be considered when interpreting exercise test results involving the lower body. PMID:26421695

  7. Pain and pain management in haemophilia

    PubMed Central

    Auerswald, Günter; Dolan, Gerry; Duffy, Anne; Hermans, Cedric; Jiménez-Yuste, Victor; Ljung, Rolf; Morfini, Massimo; Lambert, Thierry; Šalek, Silva Zupančić

    2016-01-01

    Joint pain is common in haemophilia and may be acute or chronic. Effective pain management in haemophilia is essential to reduce the burden that pain imposes on patients. However, the choice of appropriate pain-relieving measures is challenging, as there is a complex interplay of factors affecting pain perception. This can manifest as differences in patients’ experiences and response to pain, which require an individualized approach to pain management. Prophylaxis with factor replacement reduces the likelihood of bleeds and bleed-related pain, whereas on-demand therapy ensures rapid bleed resolution and pain relief. Although use of replacement or bypassing therapy is often the first intervention for pain, additional pain relief strategies may be required. There is an array of analgesic options, but consideration should be paid to the adverse effects of each class. Nevertheless, a combination of medications that act at different points in the pain pathway may be beneficial. Nonpharmacological measures may also help patients and include active coping strategies; rest, ice, compression, and elevation; complementary therapies; and physiotherapy. Joint aspiration may also reduce acute joint pain, and joint steroid injections may alleviate chronic pain. In the longer term, increasing use of prophylaxis or performing surgery may be necessary to reduce the burden of pain caused by the degenerative effects of repeated bleeds. Whichever treatment option is chosen, it is important to monitor pain and adjust patient management accordingly. Beyond specific pain management approaches, ongoing collaboration between multidisciplinary teams, which should include physiotherapists and pain specialists, may improve outcomes for patients. PMID:27439216

  8. Diagnosis, treatment, and prevention of gout.

    PubMed

    Hainer, Barry L; Matheson, Eric; Wilkes, R Travis

    2014-12-15

    Gout is characterized by painful joint inflammation, most commonly in the first metatarsophalangeal joint, resulting from precipitation of monosodium urate crystals in a joint space. Gout is typically diagnosed using clinical criteria from the American College of Rheumatology. Diagnosis may be confirmed by identification of monosodium urate crystals in synovial fluid of the affected joint. Acute gout may be treated with nonsteroidal anti-inflammatory drugs, corticosteroids, or colchicine. To reduce the likelihood of recurrent flares, patients should limit their consumption of certain purine-rich foods (e.g., organ meats, shellfish) and avoid alcoholic drinks (especially beer) and beverages sweetened with high-fructose corn syrup. Consumption of vegetables and low-fat or nonfat dairy products should be encouraged. The use of loop and thiazide diuretics can increase uric acid levels, whereas the use of the angiotensin receptor blocker losartan increases urinary excretion of uric acid. Reduction of uric acid levels is key to avoiding gout flares. Allopurinol and febuxostat are first-line medications for the prevention of recurrent gout, and colchicine and/or probenecid are reserved for patients who cannot tolerate first-line agents or in whom first-line agents are ineffective. Patients receiving urate-lowering medications should be treated concurrently with nonsteroidal anti-inflammatory drugs, colchicine, or low-dose corticosteroids to prevent flares. Treatment should continue for at least three months after uric acid levels fall below the target goal in those without tophi, and for six months in those with a history of tophi.

  9. Quantitative histological grading methods to assess subchondral bone and synovium changes subsequent to medial meniscus transection in the rat

    PubMed Central

    Kloefkorn, Heidi E.; Allen, Kyle D.

    2017-01-01

    Aim of the Study The importance of the medial meniscus to knee health is demonstrated by studies which show meniscus injuries significantly increase the likelihood of developing osteoarthritis (OA), and knee OA can be modeled in rodents using simulated meniscus injuries. Traditionally, histological assessments of OA in these models have focused on damage to the articular cartilage; however, OA is now viewed as a disease of the entire joint as an organ system. The aim of this study was to develop quantitative histological measures of bone and synovial changes in a rat medial meniscus injury model of knee OA. Materials and Methods To initiate OA, a medial meniscus transection (MMT) and a medial collateral ligament transection (MCLT) were performed in 32 male Lewis rats (MMT group). MCLT alone served as the sham procedure in 32 additional rats (MCLT sham group). At weeks 1, 2, 4, and 6 post-surgery, histological assessment of subchondral bone and synovium was performed (n = 8 per group per time point). Results Trabecular bone area and the ossification width at the osteochondral interface increased in both the MMT and MCLT groups. Subintimal synovial cell morphology also changed in MMT and MCLT groups relative to naïve animals. Conclusions OA affects the joint as an organ system, and quantifying changes throughout an entire joint can improve our understanding of the relationship between joint destruction and painful OA symptoms following meniscus injury. PMID:27797605

  10. Classification of cassava genotypes based on qualitative and quantitative data.

    PubMed

    Oliveira, E J; Oliveira Filho, O S; Santos, V S

    2015-02-02

    We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative (continuous) traits. We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura, evaluating them for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted a joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t2, and maximum likelihood criteria, we identified five and four groups based on the quantitative trait analysis and the joint analysis, respectively. The smaller number of groups identified in the joint analysis may be related to the nature of the data. On the other hand, quantitative data are more subject to environmental effects on phenotype expression, which can produce differentiation among accessions even in the absence of genetic differences. For most of the accessions, the maximum probability of classification was >0.90, independent of the traits analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data imply that analyses of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. Moreover, when joint analysis was used, the means and ranges of the genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when several phenotypic traits are available, as is the case in genetic resources and breeding programs.

  11. Altered joint tribology in osteoarthritis: Reduced lubricin synthesis due to the inflammatory process. New horizons for therapeutic approaches.

    PubMed

    Szychlinska, M A; Leonardi, R; Al-Qahtani, M; Mobasheri, A; Musumeci, G

    2016-06-01

    Osteoarthritis (OA) is the most common form of joint disease. This review aimed to consolidate the current evidence that implicates the inflammatory process in the attenuation of synovial lubrication and joint tissue homeostasis in OA. Moreover, with these findings, we propose some evidence for novel therapeutic strategies for preventing and/or treating this complex disorder. The studies reviewed support that inflammatory mediators participate in the onset and progression of OA after joint injury. The flow of pro-inflammatory cytokines following an acute injury seems to be directly associated with altered lubricating ability in the joint tissue. The latter is associated with reduced level of lubricin, one of the major joint lubricants. Future research should focus on the development of new therapies that attenuate the inflammatory process and restore lubricin synthesis and function. This approach could support joint tribology and synovial lubrication leading to improved joint function and pain relief. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  12. An interdigit signalling centre instructs coordinate phalanx-joint formation governed by 5′Hoxd–Gli3 antagonism

    PubMed Central

    Huang, Bau-Lin; Trofka, Anna; Furusawa, Aki; Norrie, Jacqueline L.; Rabinowitz, Adam H.; Vokes, Steven A.; Mark Taketo, M.; Zakany, Jozsef; Mackem, Susan

    2016-01-01

    The number of phalanges and joints are key features of digit ‘identity' and are central to limb functionality and evolutionary adaptation. Prior chick work indicated that digit phalanges and their associated joints arise in a different manner than the more sparsely jointed long bones, and their identity is regulated by differential signalling from adjacent interdigits. Currently, there is no genetic evidence for this model, and the molecular mechanisms governing digit joint specification remain poorly understood. Using genetic approaches in mouse, here we show that functional 5′Hoxd–Gli3 antagonism acts indirectly, through Bmp signalling from the interdigital mesenchyme, to regulate specification of joint progenitors, which arise in conjunction with phalangeal precursors at the digit tip. Phalanx number, although co-regulated, can be uncoupled from joint specification. We propose that 5′Hoxd genes and Gli3 are part of an interdigital signalling centre that sets net Bmp signalling levels from different interdigits to coordinately regulate phalanx and joint formation. PMID:27713395

  13. QM-8 field joint protection system, volume 7

    NASA Technical Reports Server (NTRS)

    Hale, Elgie

    1989-01-01

    The pre-launch functioning data of the Field Joint Protection System (JPS) used on QM-8 are presented. Also included is the post fire condition of the JPS components following the test firing of the motor. The JPS components are: field joint heaters; field joint sensors; field joint moisture seal; moisture seal kevlar retaining straps; field joint external extruded cork insulation; vent valve; power cables; and igniter heater.

  14. Postoperative Therapy for Chronic Thumb Carpometacarpal (CMC) Joint Dislocation.

    PubMed

    Wollstein, Ronit; Michael, Dafna; Harel, Hani

    2016-01-01

    Surgical arthroplasty of thumb carpometacarpal (CMC) joint osteoarthritis is commonly performed. Postoperative therapeutic protocols aim to improve range of motion and function of the revised thumb. We describe a case in which the thumb CMC joint had been chronically dislocated before surgery, with shortening of the soft-tissue dynamic and static stabilizers of the joint. The postoperative protocol addressed the soft tissues using splinting and exercises aimed at lengthening and strengthening these structures, with good results. It may be beneficial to evaluate soft-tissue tension and the pattern of thumb use after surgery for thumb CMC joint osteoarthritis to improve postoperative functional results. Copyright © 2016 by the American Occupational Therapy Association, Inc.

  15. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of censored data on cancer patients after receiving treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Combining a Gamma prior with the exponential likelihood yields a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by using the Linex approximation. After obtaining λ̂_BL, the estimators of the hazard function, ĥ_BL, and the survival function, Ŝ_BL, can be derived. Finally, we compare the results of maximum likelihood estimation (MLE) and the Linex approximation, judging the better method for this data set by the smaller MSE. The results show that the MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator outperforms the MLE.
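    Under the modeling assumptions stated above (exponential likelihood, Gamma prior), the Linex-loss Bayes estimator of the rate λ has a closed form via the Gamma moment-generating function. The sketch below is illustrative only: it uses uncensored data, and the prior parameters `alpha0`, `beta0` and Linex shape `c` are assumptions, not the paper's actual settings.

    ```python
    import math

    def linex_bayes_rate(data, alpha0=1.0, beta0=1.0, c=0.5):
        """Bayes estimator of an exponential rate under Linex loss.

        An exponential likelihood with a Gamma(alpha0, beta0) prior (rate
        parameterization) gives a Gamma(alpha0 + n, beta0 + sum(x)) posterior.
        Under Linex loss L(d) = exp(c*d) - c*d - 1, with d = estimate - lambda,
        the Bayes estimator is -(1/c) * log E[exp(-c*lambda) | data]; the Gamma
        MGF E[exp(-c*lambda)] = (beta / (beta + c))**alpha yields the closed form.
        """
        n, s = len(data), sum(data)
        alpha, beta = alpha0 + n, beta0 + s
        return (alpha / c) * math.log(1.0 + c / beta)

    def mle_rate(data):
        """Maximum likelihood estimator of the exponential rate: n / sum(x)."""
        return len(data) / sum(data)
    ```

    As c → 0 the Linex estimator approaches the posterior mean α/β (squared-error Bayes estimate); for c > 0 it is pulled below the mean, penalizing overestimation more heavily.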

  16. Analysis of an all-digital maximum likelihood carrier phase and clock timing synchronizer for eight phase-shift keying modulation

    NASA Astrophysics Data System (ADS)

    Degaudenzi, Riccardo; Vanghi, Vieri

    1994-02-01

    An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder, the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived, and the numerical results are checked by means of computer simulations. An approximated expression for the steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramer-Rao bound is also discussed. The mean acquisition time of the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.

  17. Northwest Climate Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mote, P.; Dalton, M. M.; Snover, A. K.

    2012-12-01

    As part of the US National Climate Assessment, the Northwest region undertook a process of climate risk assessment. This process included an expert evaluation of previously identified impacts, their likelihoods, and consequences, and engaged experts from both academia and natural resource management practice (federal, tribal, state, local, private, and non-profit) in a workshop setting. An important input was a list of 11 risks compiled by state agencies in Oregon and similar adaptation efforts in Washington. By considering jointly the likelihoods, consequences, and adaptive capacity, participants arrived at an approximately ranked list of risks which was further assessed and prioritized through a series of risk scoring exercises to arrive at the top three climate risks facing the Northwest: 1) changes in amount and timing of streamflow related to snowmelt, causing far-reaching ecological and socioeconomic consequences; 2) coastal erosion and inundation, and changing ocean acidity, combined with low adaptive capacity in the coastal zone to create large risks; and 3) the combined effects of wildfire, insect outbreaks, and diseases will cause large areas of forest mortality and long-term transformation of forest landscapes.

  18. Substance Use, Education, Employment, and Criminal Activity Outcomes of Adolescents in Outpatient Chemical Dependency Programs

    PubMed Central

    Balsa, Ana I.; Homer, Jenny F.; French, Michael T.; Weisner, Constance M.

    2010-01-01

    Although the primary outcome of interest in clinical evaluations of addiction treatment programs is usually abstinence, participation in these programs can have a wide range of consequences. This study evaluated the effects of treatment initiation on substance use, school attendance, employment, and involvement in criminal activity at 12 months post-admission for 419 adolescents (aged 12 to 18) enrolled in chemical dependency recovery programs in a large managed care health plan. Instrumental variables estimation methods were used to account for unobserved selection into treatment by jointly modeling the likelihood of participation in treatment and the odds of attaining a certain outcome or level of an outcome. Treatment initiation significantly increased the likelihood of attending school, promoted abstinence, and decreased the probability of adolescent employment, but it did not significantly affect participation in criminal activity at the 12-month follow-up. These findings highlight the need to address selection in a non-experimental study and demonstrate the importance of considering multiple outcomes when assessing the effectiveness of adolescent treatment. PMID:18064572

  19. Maximum likelihood sequence estimation for optical complex direct modulation.

    PubMed

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
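    The MLSE step described above can be illustrated with a generic Viterbi search over a channel with memory. The two-tap real channel `h` and binary alphabet below are illustrative stand-ins for the paper's chirp-based transition model: under Gaussian noise, the maximum-likelihood sequence is the trellis path minimizing accumulated squared error.

    ```python
    def mlse_viterbi(received, h, symbols=(-1.0, 1.0)):
        """Maximum likelihood sequence estimation over a 2-tap ISI channel.

        Model: y[k] = h[0]*s[k] + h[1]*s[k-1] + noise, so the trellis state is
        the previous symbol. The branch metric is the squared prediction error,
        which is proportional to the negative log-likelihood for Gaussian noise.
        """
        # state -> (accumulated metric, best symbol path reaching that state)
        states = {s: (0.0, []) for s in symbols}
        for y in received:
            nxt = {}
            for cur in symbols:                       # candidate current symbol
                best = None
                for prev, (m, path) in states.items():
                    metric = m + (y - h[0] * cur - h[1] * prev) ** 2
                    if best is None or metric < best[0]:
                        best = (metric, path + [cur])
                nxt[cur] = best                       # survivor path per state
            states = nxt
        return min(states.values())[1]                # path with smallest metric
    ```

    On a noiseless channel the true sequence attains metric zero and is recovered exactly; with noise, the same search returns the ML sequence for the assumed model.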

  20. MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES

    PubMed Central

    Su, Yu-Ru; Wang, Jane-Ling

    2018-01-01

    There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right-censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case involving baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome these difficulties for right-censored data, but this approach faces substantial additional challenges in the presence of left truncation. We therefore propose an alternative likelihood and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias of the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122

  1. Modeling Progressive Failure of Bonded Joints Using a Single Joint Finite Element

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Bednarcyk, Brett A.

    2010-01-01

    Enhanced finite elements are elements with an embedded analytical solution which can capture detailed local fields, enabling more efficient, mesh-independent finite element analysis. In the present study, an enhanced finite element is applied to generate a general framework capable of modeling an array of joint types. The joint field equations are derived using the principle of minimum potential energy, and the resulting solutions for the displacement fields are used to generate shape functions and a stiffness matrix for a single joint finite element. This single finite element thus captures the detailed stress and strain fields within the bonded joint, but it can function within a broader structural finite element model. The costs associated with a fine mesh of the joint can thus be avoided while still obtaining a detailed solution for the joint. Additionally, the capability to model non-linear adhesive constitutive behavior has been included within the method, and progressive failure of the adhesive can be modeled by using a strain-based failure criteria and re-sizing the joint as the adhesive fails. Results of the model compare favorably with experimental and finite element results.

  2. Relationship of Antibiotic Treatment to Recovery after Acute FEV1 Decline in Children with Cystic Fibrosis.

    PubMed

    Morgan, Wayne J; Wagener, Jeffrey S; Pasta, David J; Millar, Stefanie J; VanDevanter, Donald R; Konstan, Michael W

    2017-06-01

    Children with cystic fibrosis often experience acute declines in lung function. We previously showed that such declines are not always treated with antibiotics, but we did not assess whether treatment improves the likelihood of recovery. To determine whether new antibiotic treatment was associated with recovery from acute FEV1 decline, we studied episodes of FEV1 decline (≥10% from baseline) in the Epidemiologic Study of Cystic Fibrosis. Treatments were hospitalization, home intravenous antibiotics, new inhaled antibiotics, new oral quinolones, or other oral antibiotics. We used logistic regression to evaluate whether treatment was associated with recovery to baseline or near baseline. Logistic regression of 9,875 patients showed that new antibiotic treatment was associated with an increased likelihood of recovery to 90% of baseline (P < 0.001), especially for hospitalization compared with no new antibiotic (odds ratio [OR], 2.79; 95% confidence interval, 2.41-3.23). All four outpatient treatments were associated with a greater likelihood of recovery compared with no treatment (OR, 1.27-1.64). Inpatient treatment was better than outpatient treatment (OR, 1.94; 95% confidence interval, 1.68-2.23). Treatment-type ORs were similar across recovery criteria and levels of baseline lung function. New antibiotic therapy, and especially inpatient treatment, is associated with a greater likelihood of recovery after acute decline in FEV1. The benefits extend across all disease stages and are especially important in patients with high lung function, who are at the greatest risk for FEV1 decline.
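    The odds ratios with 95% confidence intervals reported above come from logistic regression. As a minimal illustration of how an OR and a Wald interval are formed, here is a sketch for an unadjusted 2x2 recovered/not-recovered table; the counts are made up for the example, not the study's data.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table.

        a = treated & recovered,   b = treated & not recovered,
        c = untreated & recovered, d = untreated & not recovered.
        The interval is formed on the log scale: log(OR) +/- z * SE,
        with SE = sqrt(1/a + 1/b + 1/c + 1/d), then exponentiated.
        """
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi
    ```

    An unadjusted table like this gives a crude OR; the study's ORs are adjusted regression coefficients (exponentiated), which the same log-scale interval construction applies to.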

  3. Joint attention responses of children with autism spectrum disorder to simple versus complex music.

    PubMed

    Kalas, Amy

    2012-01-01

    Joint attention deficits are viewed as one of the earliest manifestations and most characteristic features of the social deficits in Autism Spectrum Disorder (ASD). The purpose of this study was to examine the effect of simple versus complex music on joint attention in children with ASD. Thirty children with a diagnosis of ASD participated in this study; 15 were diagnosed with severe ASD and 15 with mild/moderate ASD. Each participant took part in six 10-minute individual music conditions (3 simple and 3 complex) over a 3-week period. Each condition was designed to elicit responses to joint attention. Results indicated a statistically significant interaction between music modality and functioning level; the effect of simple versus complex music therefore depended on functioning level. Specifically, the simple music condition was more effective in eliciting responses to joint attention (RJA) for children diagnosed with severe ASD, whereas the complex music condition was more effective in eliciting RJA for children diagnosed with mild/moderate ASD. These results indicate that for children in the severe range of functioning, music that is simple, with clear and predictable patterns, may be most effective in eliciting responses to bids for joint attention. In contrast, for children in the mild/moderate range of functioning, music that is more complex and variable may be most effective. These results demonstrate that careful manipulation of specific musical elements can help provide the optimal conditions for facilitating joint attention with children with ASD.

  4. Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System

    PubMed Central

    Noy, Lior; Weiser, Netta; Friedman, Jason

    2017-01-01

    In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in the leader's motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confident (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of CC motion: smooth, jitter-less and synchronized motion indicative of co-predictive control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability of performing CC with three data sets of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. They also show that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047

  5. Lifestyle Cardiovascular Risk Score, Genetic Risk Score, and Myocardial Infarction in Hispanic/Latino Adults Living in Costa Rica.

    PubMed

    Sotos-Prieto, Mercedes; Baylin, Ana; Campos, Hannia; Qi, Lu; Mattei, Josiemer

    2016-12-20

    A lifestyle cardiovascular risk score (LCRS) and a genetic risk score (GRS) have been independently associated with myocardial infarction (MI) in Hispanics/Latinos. Interaction or joint association between these scores has not been examined. Thus, our aim was to assess interactive and joint associations between LCRS and GRS, and each individual lifestyle risk factor, on likelihood of MI. Data included 1534 Costa Rican adults with nonfatal acute MI and 1534 matched controls. The LCRS used estimated coefficients as weights for each factor: unhealthy diet, physical inactivity, smoking, elevated waist:hip ratio, low/high alcohol intake, and low socioeconomic status. The GRS included 14 MI-associated risk alleles. Conditional logistic regressions were used to calculate adjusted odds ratios. The odds ratios for MI were 2.72 (95% CI 2.33, 3.17) per LCRS unit and 1.13 (95% CI 1.06, 1.21) per GRS unit. A significant joint association between the highest GRS tertile combined with the highest LCRS tertile and odds of MI was detected (odds ratio=5.43 [95% CI 3.71, 7.94]; P<1.00×10⁻⁷), compared to both lowest tertiles. The odds ratios were 1.74 (95% CI 1.22, 2.49) under an optimal lifestyle but unfavorable genetic profile, and 5.02 (95% CI 3.46, 7.29) under an unhealthy lifestyle but advantageous genetic profile. Significant joint associations were observed for the highest GRS tertile and the highest risk category of each lifestyle component. The interaction term was nonsignificant (P=0.33). Lifestyle risk factors and genetics are jointly associated with higher odds of MI among Hispanics/Latinos, with lifestyle risk factors, individually and combined, showing the stronger associations. Efforts to improve lifestyle behaviors could help prevent MI regardless of genetic susceptibility. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  6. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
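
    The linear basis-function model and EM estimation described in this abstract can be illustrated on a toy problem. The sketch below is a minimal Python illustration, assuming noiseless data, a Kronecker spatial-temporal basis, and no scanner projection model; all dimensions and values are illustrative, not details from the paper. It fits nonnegative coefficients with the standard multiplicative EM (MLEM) update for a Poisson model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 6 voxels, 4 time frames, 2 spatial and 2 temporal basis functions.
n_vox, n_frames, n_s, n_t = 6, 4, 2, 2
B_s = rng.uniform(0.1, 1.0, (n_vox, n_s))     # kernel-like spatial basis (nonnegative)
B_t = rng.uniform(0.1, 1.0, (n_frames, n_t))  # spectral temporal basis (nonnegative)

# Vectorised forward model: dynamic data y = kron(B_t, B_s) @ c, with c the
# coefficients of the joint spatial-temporal expansion (no scanner geometry here).
A = np.kron(B_t, B_s)
c_true = np.array([1.0, 2.0, 0.5, 1.5])
y = A @ c_true  # noiseless "measured" counts for the sketch

# Multiplicative EM (MLEM) update for a Poisson model with nonnegative coefficients.
c = np.ones(n_s * n_t)
sens = A.sum(axis=0)
for _ in range(5000):
    c *= (A.T @ (y / (A @ c))) / sens
```

    On noiseless, consistent data the update converges to coefficients that reproduce the measurements; with real PET data, a system matrix and noise would replace the identity projection assumed here.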

  8. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The Bayesian decomposition of the posterior distribution was used to develop a likelihood function that corrects bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used to parametrize the growth data. Numerical illustrations are provided, and alternative applications of the likelihood for estimating selectivity parameters are discussed.
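
    The correction idea can be sketched generically: if sizes x are drawn from a density f(x|θ) but retained with probability s(x), the likelihood of an observed size is f(x|θ)s(x)/Z(θ) with Z(θ) = ∫ f(u|θ)s(u) du. The Python sketch below assumes a normal size distribution and a logistic selectivity curve purely for illustration (the paper itself uses positive distributions with time as a parameter):

```python
import numpy as np
from scipy import optimize, stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)

# Size-specific selectivity: larger animals are more likely to be retained
# (a logistic curve chosen purely for illustration).
def selectivity(x):
    return 1.0 / (1.0 + np.exp(-(x - 55.0) / 5.0))

# True size distribution, then size-selective sampling by rejection.
true_mu, true_sd = 50.0, 10.0
pool = rng.normal(true_mu, true_sd, 100_000)
sample = pool[rng.uniform(size=pool.size) < selectivity(pool)]

grid = np.linspace(0.0, 120.0, 4001)  # grid for the normalising constant Z(theta)

def nll(theta):
    # Observed-data likelihood: f(x|theta) * s(x) / Z(theta); the log s(x)
    # term is constant in theta and is therefore dropped.
    mu, log_sd = theta
    sd = np.exp(log_sd)
    z = trapezoid(stats.norm.pdf(grid, mu, sd) * selectivity(grid), grid)
    return -(np.sum(stats.norm.logpdf(sample, mu, sd)) - sample.size * np.log(z))

res = optimize.minimize(nll, x0=[sample.mean(), np.log(sample.std())],
                        method="Nelder-Mead")
mu_corr, sd_corr = res.x[0], np.exp(res.x[1])
mu_naive = sample.mean()  # biased upward by the size-selective sampling
```

    The naive mean of the retained sample is biased toward the sizes the gear selects, while the selectivity-corrected maximum likelihood estimate recovers the population parameters.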

  9. Cox Regression Models with Functional Covariates for Survival Data.

    PubMed

    Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M

    2015-06-01

    We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time-varying coefficients, and missing or unequally spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.

  10. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005%) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.

  11. Hearing loss and disability exit: Measurement issues and coping strategies.

    PubMed

    Christensen, Vibeke Tornhøj; Datta Gupta, Nabanita

    2017-02-01

    Hearing loss is one of the most common conditions related to aging, and previous descriptive evidence links it to early exit from the labor market. These studies are usually based on self-reported hearing difficulties, which are potentially endogenous to labor supply. We use unique representative data collected in the spring of 2005 through in-home interviews. The data contain both self-reported functional hearing and clinically measured hearing ability for a representative sample of the Danish population aged 50-64. We estimate the causal effect of hearing loss on early retirement via disability benefits, taking into account the endogeneity of functional hearing. Our identification strategy involves the simultaneous estimation of labor supply, functional hearing, and coping strategies (i.e. accessing assistive devices at work or informing one's employer about the problem). We use hearing aids as an instrument for functional hearing. Our main empirical findings are that endogeneity bias is more severe for men than women and that functional hearing problems significantly increase the likelihood of receiving disability benefits for both men and women. However, relative to the baseline the effect is larger for men (47% vs. 20%, respectively). Availability of assistive devices in the workplace decreases the likelihood of receiving disability benefits, whereas informing an employer about hearing problems increases this likelihood. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant sources of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the computational cost of a single likelihood evaluation.
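
    The idea of profiling a Gaussian approximate likelihood in r can be illustrated with a deliberately oversimplified model: each delensed B-mode coefficient is treated as an independent Gaussian with variance r·S + N, where S is a unit tensor template and N is the residual lensing-plus-noise floor. The covariance factorization, pixel-space structure, and foregrounds of the actual method are omitted; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: independent B-mode coefficients with variance r*S + N.
r_true, S, N = 0.01, 1.0, 0.10
n_modes = 10_000
b = rng.normal(0.0, np.sqrt(r_true * S + N), n_modes)

# Gaussian log-likelihood profile over a grid of r values.
r_grid = np.linspace(0.0, 0.05, 501)
var = r_grid[:, None] * S + N                        # (n_grid, 1)
loglike = -0.5 * np.sum(b**2 / var + np.log(var), axis=1)
r_hat = r_grid[np.argmax(loglike)]                   # profile maximum
```

    The profile peaks near the input r; in the paper's method the same kind of profile is obtained over the full pixel-space covariance, at the cost of one likelihood evaluation.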

  13. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and a vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institution, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
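
    The distributed-likelihood loop described above can be sketched in a few lines: each simulated "device" holds its owner's data in a closure and exposes only a function returning a local likelihood value, which a central optimizer aggregates. The normal model, parameter, and sample sizes below are illustrative assumptions, not part of MIDDLE itself.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(4)

# Each "device" keeps its owner's data private and exposes only a function
# that returns the local negative log-likelihood for a proposed parameter.
def make_device(private_data):
    def local_nll(mu):
        # Normal model with known unit variance; the raw data never leave the closure.
        return 0.5 * np.sum((private_data - mu) ** 2)
    return local_nll

participants = [make_device(rng.normal(3.0, 1.0, size=rng.integers(20, 60)))
                for _ in range(25)]

# Central optimizer: aggregate only the returned likelihood values.
def total_nll(mu):
    return sum(device(mu) for device in participants)

res = optimize.minimize_scalar(total_nll, bounds=(-10.0, 10.0), method="bounded")
mu_hat = res.x  # converges to the pooled maximum likelihood estimate
```

    The optimizer never sees individual observations, only sums of likelihood values, yet it recovers the same estimate a pooled analysis would.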

  14. Comprehensive joint feedback control for standing by functional neuromuscular stimulation-a simulation study.

    PubMed

    Nataraj, Raviraj; Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J

    2010-12-01

    Previous investigations of feedback control of standing after spinal cord injury (SCI) using functional neuromuscular stimulation (FNS) have primarily targeted individual joints. This study assesses the potential efficacy of comprehensive (trunk, hips, knees, and ankles) joint feedback control against postural disturbances using a bipedal, 3-D computer model of SCI stance. Proportional-derivative feedback drove an artificial neural network trained to produce muscle excitation patterns consistent with maximal joint stiffness values achievable about neutral stance given typical SCI muscle properties. Feedback gains were optimized to minimize upper extremity (UE) loading required to stabilize against disturbances. Compared to the baseline case of maximum constant muscle excitations used clinically, the controller reduced UE loading by 55% in resisting external force perturbations and by 84% during simulated one-arm functional tasks. Performance was most sensitive to inaccurate measurements of ankle plantar/dorsiflexion position and hip ab/adduction velocity feedback. In conclusion, comprehensive joint feedback demonstrates potential to markedly improve FNS standing function. However, alternative control structures capable of effective performance with fewer sensor-based feedback parameters may better facilitate clinical usage.

  16. Receiver function HV ratio: a new measurement for reducing non-uniqueness of receiver function waveform inversion

    NASA Astrophysics Data System (ADS)

    Chong, Jiajun; Chu, Risheng; Ni, Sidao; Meng, Qingjun; Guo, Aizhi

    2018-02-01

    It is known that the receiver function places relatively weak constraints on absolute seismic wave velocity, and joint inversion of the receiver function with surface wave dispersion has therefore been widely applied to reduce the trade-off of velocity with interface depth. However, some studies indicate that the receiver function itself is capable of determining the absolute shear-wave velocity. In this study, we propose to measure the receiver function HV ratio, which takes advantage of the amplitude information of the receiver function to constrain the shear-wave velocity. Numerical analysis indicates that the receiver function HV ratio is sensitive to the average shear-wave velocity in the depth range it samples, and can help to reduce the non-uniqueness of receiver function waveform inversion. A joint inversion scheme has been developed, and both synthetic tests and a real data application demonstrate the feasibility of the joint inversion.

  17. Value of History, Physical Examination, and Radiographic Findings in the Diagnosis of Symptomatic Meniscal Tear Among Middle-Aged Subjects With Knee Pain.

    PubMed

    Katz, Jeffrey N; Smith, Savannah R; Yang, Heidi Y; Martin, Scott D; Wright, John; Donnell-Fink, Laurel A; Losina, Elena

    2017-04-01

    To evaluate the utility of clinical history, radiographic findings, and physical examination findings in the diagnosis of symptomatic meniscal tear (SMT) in patients over age 45 years, in whom concomitant osteoarthritis is prevalent. In a cross-sectional study of patients from 2 orthopedic surgeons' clinics, we assessed clinical history, physical examination findings, and radiographic findings in patients age >45 years with knee pain. The orthopedic surgeons rated their confidence that subjects' symptoms were due to meniscal tear; we defined the diagnosis of SMT as at least 70% confidence. We used logistic regression to identify factors independently associated with diagnosis of SMT, and we used the regression results to construct an index of the likelihood of SMT. In 174 participants, 6 findings were independently associated with the expert clinician having ≥70% confidence that symptoms were due to meniscal tear: localized pain, ability to fully bend the knee, pain duration <1 year, lack of varus alignment, lack of pes planus, and absence of joint space narrowing on radiographs. The index identified a low-risk group with 3% likelihood of SMT. While clinicians traditionally rely upon mechanical symptoms in this diagnostic setting, our findings did not show that mechanical symptoms were associated with the expert's confidence that symptoms were due to meniscal tear. An index that includes history of localized pain, full flexion, duration <1 year, pes planus, varus alignment, and joint space narrowing can be used to stratify patients according to their risk of SMT, and it identifies a subgroup with very low risk. © 2016, American College of Rheumatology.

  18. Influence of movie smoking exposure and team sports participation on established smoking.

    PubMed

    Adachi-Mejia, Anna M; Primack, Brian A; Beach, Michael L; Titus-Ernstoff, Linda; Longacre, Meghan R; Weiss, Julia E; Dalton, Madeline A

    2009-07-01

    To examine the joint effects of movie smoking exposure and team sports participation on established smoking. Longitudinal study. School- and telephone-based surveys in New Hampshire and Vermont between September 1999 through November 1999 and February 2006 through February 2007. A total of 2048 youths aged 16 to 21 years at follow-up. Main Exposures: Baseline movie smoking exposure categorized in quartiles assessed when respondents were aged 9 to 14 years and team sports participation assessed when respondents were aged 16 to 21 years. Main Outcome Measure: Established smoking (having smoked ≥100 cigarettes in one's lifetime) at follow-up. At follow-up, 353 respondents (17.2%) were established smokers. Exposure to the highest quartile of movie smoking compared with the lowest increased the likelihood of established smoking (odds ratio = 1.63; 95% confidence interval, 1.03-2.57), and team sports nonparticipants compared with participants were twice as likely to be established smokers (odds ratio = 2.01; 95% confidence interval, 1.47-2.74). The joint effects of movie smoking exposure and team sports participation revealed that at each quartile of movie smoking exposure, the odds of established smoking were greater for team sports nonparticipants than for participants. We saw a dose-response relationship of movie smoking exposure for established smoking only among team sports participants. Team sports participation clearly plays a protective role against established smoking, even in the face of exposure to movie smoking. However, movie smoking exposure increases the risk of established smoking among both team sports participants and nonparticipants. Parents, teachers, coaches, and clinicians should be aware that encouraging team sports participation in tandem with minimizing early exposure to movie smoking may offer the greatest likelihood of preventing youth smoking.

  20. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, the high-order Boltzmann machine and its variants have shown great potential in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which makes them not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relations, we develop two improved variants of the CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification by using a set of latent variables to block the pathway from pairwise input samples to output relation labels, and 2) gated CHBM, which untangles factors of variation in data relation by exploiting a set of latent variables to multiplicatively gate the classification of the CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize the high-order parameter tensors into multiple matrices. We then develop efficient supervised learning algorithms that first pretrain the models using the joint likelihood to provide good parameter initialization, and then fine-tune them using the conditional likelihood to enhance the discriminative ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve performance.
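
    The parameter-tensor factorization mentioned in the abstract can be sketched numerically: the full three-way interaction x^T W_r y between two inputs and a relation label is replaced by per-factor loadings, dropping the parameter count from d·d·k to F·(2d + k). The dimensions and random loadings below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

# Full three-way interaction tensor between two inputs (dim d) and relation
# labels (dim k) has d*d*k parameters; the factorized form uses F factors.
d, k, F = 128, 2, 64
U = rng.standard_normal((d, F))   # factor loadings for the first input
V = rng.standard_normal((d, F))   # factor loadings for the second input
P = rng.standard_normal((k, F))   # factor loadings for the relation label

def interaction_energy(x, y):
    # Energy per relation label via factorized multiplicative interactions:
    # E_r = sum_f (U^T x)_f (V^T y)_f P_rf, instead of x^T W_r y with a full W_r.
    return P @ ((U.T @ x) * (V.T @ y))

x, y = rng.standard_normal(d), rng.standard_normal(d)
energies = interaction_energy(x, y)

full_params = d * d * k           # parameters of the unfactorized tensor
factored_params = F * (2 * d + k) # parameters after factorization
```

    The factorized energy is exactly x^T W_r y for W_r = Σ_f P_rf u_f v_f^T, so the approximation in practice comes from restricting the number of factors F, not from the algebra.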

  1. A pilot study on the improvement of the lying area of finishing pigs by a soft lying mat.

    PubMed

    Savary, Pascal; Gygax, Lorenz; Jungbluth, Thomas; Wechsler, Beat; Hauser, Rudolf

    2011-01-01

    In this pilot study, we tested whether a soft mat (foam covered with a heat-sealed thermoplastic) reduces alterations and injuries of the skin and the leg joints. The soft mat in the lying area of partly slatted pens was compared to a lying area consisting of either bare or slightly littered (100 g straw per pig per day) concrete flooring. We focused on skin lesions on the legs of finishing pigs as indicators of impaired welfare. Pigs were kept in 19 groups of 8-10 individuals and were examined for skin lesions around the carpal and tarsal joints either at a weight of <35 kg or at close to 100 kg. The likelihood of hairless patches and wounds at the tarsal joints was significantly lower in pens with the soft lying mat than in pens with a bare concrete floor. Pens with a littered concrete floor did not differ from pens with a bare concrete floor. The soft lying mat thus improved floor quality in the lying area in terms of preventing skin lesions compared to bare and slightly littered concrete flooring. Such soft lying mats therefore have the potential to improve lying comfort and welfare of finishing pigs.

  2. Does Talocrural Joint-Thrust Manipulation Improve Outcomes After Inversion Ankle Sprain?

    PubMed

    Krueger, Brett; Becker, Laura; Leemkuil, Greta; Durall, Christopher

    2015-08-01

    Clinical Scenario: Ankle sprains account for roughly 10% of sport-related injuries in the active population. The majority of these injuries occur from excessive ankle inversion, leading to lateral ligamentous injury. In addition to pain and swelling, limitations in ankle range of motion (ROM) and self-reported function are common findings. These limitations are thought to be due in part to loss of mobility in the talocrural joint. Accordingly, some investigators have reported using high-velocity, low-amplitude thrust-manipulation techniques directed at the talocrural joint to address deficits in dorsiflexion (DF) ROM and function. This review was conducted to ascertain the impact of talocrural joint-thrust manipulation (TJM) on DF ROM, self-reported function, and pain in patients with a history of ankle sprain. Focused Clinical Question: In patients with a history of inversion ankle sprain, does TJM improve outcomes in DF ROM, self-reported function, and/or pain?

  3. Maximum voluntary joint torque as a function of joint angle and angular velocity: model development and application to the lower limb.

    PubMed

    Anderson, Dennis E; Madigan, Michael L; Nussbaum, Maury A

    2007-01-01

    Measurements of human strength can be important during analyses of physical activities. Such measurements have often taken the form of the maximum voluntary torque at a single joint angle and angular velocity. However, the available strength varies substantially with joint position and velocity. When examining dynamic activities, strength measurements should account for these variations. A model is presented of maximum voluntary joint torque as a function of joint angle and angular velocity. The model is based on well-known physiological relationships between muscle force and length and between muscle force and velocity and was tested by fitting it to maximum voluntary joint torque data from six different exertions in the lower limb. Isometric, concentric and eccentric maximum voluntary contractions were collected during hip extension, hip flexion, knee extension, knee flexion, ankle plantar flexion and dorsiflexion. Model parameters are reported for each of these exertion directions by gender and age group. This model provides an efficient method by which strength variations with joint angle and angular velocity may be incorporated into comparisons between joint torques calculated by inverse dynamics and the maximum available joint torques.
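
    The general shape of such a model can be sketched as a peak isometric torque scaled by a torque-angle factor and a torque-velocity factor, following the well-known force-length and force-velocity relationships. The sketch below is a minimal illustration of that structure, not the authors' fitted model; every parameter name and value (`t_max`, `theta_opt`, `width`, `omega_max`, `curvature`) is a hypothetical placeholder.

```python
import math

def max_voluntary_torque(theta, omega, t_max=150.0, theta_opt=1.2,
                         width=0.8, omega_max=10.0, curvature=0.25):
    """Illustrative maximum voluntary torque (N*m) at joint angle theta (rad)
    and angular velocity omega (rad/s), modeled as peak isometric torque
    scaled by a Gaussian torque-angle factor and a Hill-type
    torque-velocity factor. All parameter values are hypothetical."""
    # Torque-angle relationship: peaks at the optimal angle, falls off around it.
    f_angle = math.exp(-((theta - theta_opt) / width) ** 2)
    if omega >= 0:
        # Concentric: Hill-type hyperbolic decay toward zero at omega_max.
        f_vel = max(0.0, (omega_max - omega) / (omega_max + omega / curvature))
    else:
        # Eccentric: torque plateaus slightly above the isometric value.
        f_vel = 1.3 - 0.3 * (omega_max / (omega_max - omega))
    return t_max * f_angle * f_vel

# Isometric torque at the optimal angle equals the peak value.
print(round(max_voluntary_torque(1.2, 0.0), 1))  # 150.0
```

    Such a surface can then be compared point-by-point against joint torques computed by inverse dynamics, which is the use case the abstract describes.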

  4. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased but is less efficient than the sample average, with a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
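
    The phenomenon described here — apparent bias when conditioning on the realized sample size, but near-unbiasedness marginally — can be reproduced with a small simulation. This sketch is not taken from the paper; it uses a hypothetical two-stage design (N = n or N = 2n) with a deterministic stopping rule, and the threshold and sample sizes are arbitrary.

```python
import random

def simulate_trial(mu=0.0, sigma=1.0, n=50, threshold=0.2, rng=random):
    """One group sequential trial with a deterministic stopping rule:
    stop at N = n if the interim mean exceeds `threshold`, otherwise
    continue to N = 2n. Returns (sample_average, realized_sample_size)."""
    first = [rng.gauss(mu, sigma) for _ in range(n)]
    m1 = sum(first) / n
    if m1 > threshold:          # early stopping
        return m1, n
    second = [rng.gauss(mu, sigma) for _ in range(n)]
    return (sum(first) + sum(second)) / (2 * n), 2 * n

random.seed(0)
results = [simulate_trial() for _ in range(20000)]
marginal = sum(x for x, _ in results) / len(results)
early = [x for x, size in results if size == 50]
cond_early = sum(early) / len(early)
# Marginalized over N, the sample average stays close to the true mean (0),
# while conditioning on early stopping gives the false impression of bias.
print(abs(marginal) < 0.03, cond_early > 0.2)
```

    The conditional average among early-stopped trials is pushed well above the threshold, while the marginal average carries only a small finite-sample bias that vanishes asymptotically, consistent with the abstract's point about simulation-based impressions of bias.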

  5. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
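
    For Poisson-distributed photon counts, a maximum-likelihood two-class decision reduces to cross-correlating the photon-count image with a reference function formed from the log ratio of the two class intensity patterns. The sketch below illustrates that reduction on a hypothetical four-pixel example; the class patterns and photon budget are invented for illustration and are not from the paper.

```python
import math
import random

# Two hypothetical "image classes" as normalized pixel intensity patterns.
class_a = [0.50, 0.30, 0.15, 0.05]
class_b = [0.05, 0.15, 0.30, 0.50]

# Maximum-likelihood reference function: for Poisson photon statistics, the
# log-likelihood ratio is a cross correlation of counts with log(p_a / p_b).
reference = [math.log(a / b) for a, b in zip(class_a, class_b)]

def classify(photon_counts):
    """Return 'A' if the correlation with the ML reference is positive."""
    score = sum(n * r for n, r in zip(photon_counts, reference))
    return 'A' if score > 0 else 'B'

def sample_photons(pattern, total=20, rng=random):
    """Simulate a sparse photon-limited image: `total` photon arrivals
    distributed over pixels according to the class intensity pattern."""
    counts = [0] * len(pattern)
    cum = [sum(pattern[:i + 1]) for i in range(len(pattern))]
    for _ in range(total):
        u = rng.random()
        counts[next(i for i, c in enumerate(cum) if u <= c)] += 1
    return counts

random.seed(1)
hits = sum(classify(sample_photons(class_a)) == 'A' for _ in range(500))
print(hits > 450)  # a sparse sampling already discriminates the classes well
```

    Even with only 20 photons per image, the expected log-likelihood-ratio score is several standard deviations from the decision boundary, which mirrors the paper's finding that sparse samplings suffice for good class discrimination.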

  6. Shear joint capability versus bolt clearance

    NASA Technical Reports Server (NTRS)

    Lee, H. M.

    1992-01-01

    The results of a conservative analysis approach to determining shear joint strength capability for typical space-flight hardware as a function of the bolt-hole clearance specified in the design are presented. These joints consist of high-strength steel fasteners and abutments constructed of aluminum alloys familiar to the aerospace industry. A general analytical expression was first derived that relates bolt-hole clearance to the bolt shear load required to place all joint fasteners into a shear-transferring position. Extension of this work allowed the analytical development of joint load capability as a function of the number of fasteners, shear strength of the bolt, bolt-hole clearance, and the desired factor of safety. Analysis results clearly indicate that a typical space-flight hardware joint can withstand significant loading when less than ideal bolt-hole clearances are used in the design.

  7. [Staple fixation for the treatment of hamate metacarpal joint injury].

    PubMed

    Tang, Yang-Hua; Zeng, Lin-Ru; Huang, Zhong-Ming; Yue, Zhen-Shuang; Xin, Da-Wei; Xu, Can-Da

    2014-03-01

    To investigate the efficacy of staple fixation for the treatment of hamate metacarpal joint injury. From May 2009 to November 2012, 16 patients with hamate metacarpal joint injury were treated with staple fixation, including 10 males and 6 females with an average age of 33.6 years (range, 21 to 57 years). Among them, 11 cases were fourth or fifth metacarpal base dislocations without fractures, and 5 cases were fourth or fifth metacarpal base dislocations with avulsion fractures of the dorsal hamate. Regular X-ray review was used to observe fracture healing, joint reduction, and the position of the staple fixation. The function of the carpometacarpal and metacarpophalangeal joints was evaluated according to the total active movement (TAM) evaluation method. All incisions healed well with no infection. All patients were followed up for 16 to 24 months, with an average of (10.0 +/- 2.7) months. No dislocation recurred, the position of the internal fixator was good, and no broken nails or screw withdrawal occurred. The five patients with avulsion fractures of the dorsal hamate achieved bone healing. The function of the carpometacarpal and metacarpophalangeal joints was excellent in 10 cases, good in 5 cases, and moderate in 1 case. The application of the staple for the treatment of hamate metacarpal joint injury has the advantages of simple operation, minimal trauma, reliable fixation, and early postoperative functional exercise, making it an ideal operative method for hamate metacarpal joint injury.

  8. Individuals with knee impairments identify items in need of clarification in the Patient Reported Outcomes Measurement Information System (PROMIS®) pain interference and physical function item banks - a qualitative study.

    PubMed

    Lynch, Andrew D; Dodds, Nathan E; Yu, Lan; Pilkonis, Paul A; Irrgang, James J

    2016-05-11

    The content and wording of the Patient Reported Outcome Measurement Information System (PROMIS) Physical Function and Pain Interference item banks have not been qualitatively assessed by individuals with knee joint impairments. The purpose of this investigation was to identify items in the PROMIS Physical Function and Pain Interference Item Banks that are irrelevant, unclear, or otherwise difficult to respond to for individuals with impairment of the knee and to suggest modifications based on cognitive interviews. Twenty-nine individuals with knee joint impairments qualitatively assessed items in the Pain Interference and Physical Function Item Banks in a mixed-methods cognitive interview. Field notes were analyzed to identify themes, and frequency counts were calculated to identify items not relevant to individuals with knee joint impairments. Issues with clarity were identified in 23 items in the Physical Function Item Bank, resulting in the creation of 43 new or modified items, typically changing words within the item to be clearer. Interpretation issues included whether or not the knee joint played a significant role in overall health and age/gender differences in items. One quarter of the original items (31 of 124) in the Physical Function Item Bank were identified as irrelevant to the knee joint. All 41 items in the Pain Interference Item Bank were identified as clear, although individuals without significant pain substituted other symptoms that interfered with their lives. The Physical Function Item Bank would benefit from additional items that are relevant to individuals with knee joint impairments and, by extension, to other lower extremity impairments. Several issues in clarity were identified that are likely to be present in other patient cohorts as well.

  9. Single-joint outcome measures: preliminary validation of patient-reported outcomes and physical examination.

    PubMed

    Heald, Alison E; Fudman, Edward J; Anklesaria, Pervin; Mease, Philip J

    2010-05-01

    To assess the validity, responsiveness, and reliability of single-joint outcome measures for determining target joint (TJ) response in patients with inflammatory arthritis. Patient-reported outcomes (PRO), consisting of responses to single questions about TJ global status on a 100-mm visual analog scale (VAS; TJ global score), function on a 100-mm VAS (TJ function score), and pain on a 5-point Likert scale (TJ pain score) were piloted in 66 inflammatory arthritis subjects in a phase 1/2 clinical study of an intraarticular gene transfer agent and compared to physical examination measures (TJ swelling, TJ tenderness) and validated function questionnaires (Disabilities of the Arm, Shoulder and Hand scale, Rheumatoid Arthritis Outcome Score, and the Health Assessment Questionnaire). Construct validity was assessed by evaluating the correlation between the single-joint outcome measures and validated function questionnaires using Spearman's rank correlation. Responsiveness or sensitivity to change was assessed through calculating effect size and standardized response means (SRM). Reliability of physical examination measures was assessed by determining interobserver agreement. The single-joint PRO were highly correlated with each other and correlated well with validated functional measures. The TJ global score exhibited modest effect size and modest SRM that correlated well with the patient's assessment of response on a 100-mm VAS. Physical examination measures exhibited high interrater reliability, but correlated less well with validated functional measures and the patient's assessment of response. Single-joint PRO, particularly the TJ global score, are simple to administer and demonstrate construct validity and responsiveness in patients with inflammatory arthritis. (ClinicalTrials.gov identifier NCT00126724).

  10. 3D Traffic Scene Understanding From Movable Platforms.

    PubMed

    Geiger, Andreas; Lauer, Martin; Wojek, Christian; Stiller, Christoph; Urtasun, Raquel

    2014-05-01

    In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

  11. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  12. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
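
    The "ecological distance" idea can be made concrete: compute a least-cost path over a resistance surface and plug that distance into a half-normal encounter probability model in place of Euclidean distance. The grid, resistance values, and detection parameters below are hypothetical, and this is only a schematic of the approach, not the authors' estimation scheme.

```python
import heapq
import math

def least_cost_distance(resistance, start, goal):
    """Least-cost ('ecological') distance between two cells on a resistance
    grid via Dijkstra's algorithm; moving into a cell costs its resistance."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), math.inf):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return math.inf

# A high-resistance ridge down the middle of an otherwise uniform landscape.
grid = [[1, 1, 9, 1, 1],
        [1, 1, 9, 1, 1],
        [1, 1, 1, 1, 1]]

d_ecol = least_cost_distance(grid, (0, 0), (0, 4))
d_eucl = 4.0  # Euclidean distance in cell units

# Half-normal encounter probability between a trap and an activity center:
# ecological distance lowers detection relative to the Euclidean-distance
# model when the intervening landscape is resistant.
p0, sigma = 0.8, 3.0
p_ecol = p0 * math.exp(-d_ecol**2 / (2 * sigma**2))
p_eucl = p0 * math.exp(-d_eucl**2 / (2 * sigma**2))
print(d_ecol > d_eucl and p_ecol < p_eucl)  # True
```

    In the full SCR model, the resistance (cost) parameters are estimated jointly with density by maximizing the capture-recapture likelihood, which is what allows inference about connectivity from standard detection data.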

  13. Joining by plating: optimization of occluded angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dini, J.W.; Johnson, H.R.; Kan, Y.R.

    1978-11-01

    An empirical method has been developed for predicting the minimum angle required for maximum joint strength for materials joined by plating. This is done through a proposed power law failure function, whose coefficients are taken from ring shear and conical head tensile data for plating/substrate combinations and whose exponent is determined from one set of plated-joint data. Experimental results are presented for Al-Ni-Al (7075-T6) and AM363-Ni-AM363 joints, and the failure function is used to predict joint strengths for Al-Ni-Al (2024-T6), UTi-Ni-UTi, and Be-Ti-Be.

  14. Meniscal Preservation is Important for the Knee Joint

    PubMed Central

    Patil, Shantanu Sudhakar; Shekhar, Anshu; Tapasvi, Sachin Ramchandra

    2017-01-01

    Native joint preservation has gained importance in recent years, largely in response to the limitations of arthroplasty. In the knee joint, the menisci perform critical functions, adding stability during range of motion and efficiently transferring load across the tibiofemoral articulation while protecting the cartilage. Meniscal tears are among the most common injuries seen by orthopedic surgeons, especially in younger, active patients. Advances in technology and in our knowledge of knee joint function have made meniscus repair an important mode of treatment. This review summarizes the various techniques of meniscus tear repair and also describes biological enhancements of healing. PMID:28966381

  15. Patient characteristics as predictors of clinical outcome of distraction in treatment of severe ankle osteoarthritis.

    PubMed

    Marijnissen, A C A; Hoekstra, M C L; Pré, B C du; van Roermund, P M; van Melkebeek, J; Amendola, A; Maathuis, P; Lafeber, F P J G; Welsing, P M J

    2014-01-01

    Osteoarthritis (OA) is a slowly progressive joint disease. Joint distraction can be a treatment of choice in cases of severe OA. Prediction of failure will facilitate implementation of joint distraction in clinical practice. Patients with severe ankle OA who underwent joint distraction were included. Survival analysis was performed over 12 years (n = 25 after 12 years). Regression analyses were used to predict failures and clinical benefit at 2 years after joint distraction (n = 111). Survival analysis showed that 44% of the patients failed: 17% within 2 years and 37% within 5 years after joint distraction (n = 48 after 5 years). Survival analysis in subgroups showed that the percentage of failures differed only between women (30% after 2 years) and men (still no 30% failure after 11 years). In the multivariate analyses, female gender was predictive of failure 2 years after joint distraction. Gender and functional disability at baseline predicted more pain. Functional disability and pain at baseline were associated with more functional disability. Joint distraction shows long-term clinical benefit. However, the failure rate is considerable over the years. Female patients have a higher chance of failure during follow-up. Unfortunately, not all potential predictors could be investigated, and other clinically significant predictors were not found. © 2013 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  16. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

    The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields, assuming that their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding agreement at the 2 per cent level for a wide range of velocity divergences and densities in the mildly non-linear regime (˜10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and the conditional velocity PDF subject to a given over/underdensity, which are of interest for understanding the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields; these estimators have smaller relative errors than the sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f), or it can be combined with the galaxy density PDF to extract bias parameters.

  17. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
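
    The claim that softmax units compute exact Bayesian posteriors when their biases and weights encode log probabilities is easy to verify numerically. A minimal sketch with hypothetical priors and likelihoods for two competing hypotheses:

```python
import math

# Two hypotheses (e.g., candidate words) with prior probabilities, and the
# likelihood of a discrete sensory input under each. Numbers are hypothetical.
priors = [0.7, 0.3]
likelihoods = [0.2, 0.6]   # P(input | hypothesis)

# Exact Bayesian posterior via Bayes' rule.
joint = [p * l for p, l in zip(priors, likelihoods)]
posterior = [j / sum(joint) for j in joint]

# A softmax unit whose bias is log(prior) and whose net input carries
# log(likelihood) computes exactly the same posterior.
net = [math.log(p) + math.log(l) for p, l in zip(priors, likelihoods)]
z = sum(math.exp(n) for n in net)
softmax = [math.exp(n) / z for n in net]

print(all(abs(a - b) < 1e-12 for a, b in zip(posterior, softmax)))  # True
```

    The identity holds because exponentiating log(prior) + log(likelihood) recovers the joint probability, and the softmax normalization is exactly the division by the marginal in Bayes' rule.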

  18. Biomechanics of Hyperflexion and Kneeling before and after Total Knee Arthroplasty

    PubMed Central

    2014-01-01

    The capacity to perform certain activities is frequently compromised after total knee arthroplasty (TKA) due to a functional decline resulting from decreased range of motion and a diminished ability to kneel. In this manuscript, the current biomechanical understanding of hyperflexion and kneeling before and after TKA will be discussed. Patellofemoral and tibiofemoral joint contact area, contact pressure, and kinematics were evaluated in cadaveric studies using a Tekscan pressure measuring system and Microscribe. Testing was performed on intact knees and following cruciate retaining and posterior stabilized TKA at knee flexion angles of 90°, 105°, 120°, and 135°. Three loading conditions were used to simulate squatting, double stance kneeling, and single stance kneeling. Following TKA with double stance kneeling, patellofemoral contact areas did not increase significantly at high knee flexion angle (135°). Kneeling resulted in tibial posterior translation and external rotation at all flexion angles. Moving from double to single stance kneeling tended to increase pressures in the cruciate retaining group, but decreased pressures in the posterior stabilized group. The cruciate retaining group had significantly larger contact areas than the posterior stabilized group, although no significant differences in pressures were observed comparing the two TKA designs (p < 0.05). If greater than 120° of postoperative knee range of motion can be achieved following TKA, then kneeling may be performed with less risk in the patellofemoral joint than was previously believed to be the case. However, kneeling may increase the likelihood of damage to cartilage and menisci in intact knees, and after TKA, increases in tibiofemoral contact area and pressures may lead to polyethylene wear if kneeling is performed on a chronic, repetitive basis. PMID:24900891

  19. Biomechanics of hyperflexion and kneeling before and after total knee arthroplasty.

    PubMed

    Lee, Thay Q

    2014-06-01

    The capacity to perform certain activities is frequently compromised after total knee arthroplasty (TKA) due to a functional decline resulting from decreased range of motion and a diminished ability to kneel. In this manuscript, the current biomechanical understanding of hyperflexion and kneeling before and after TKA will be discussed. Patellofemoral and tibiofemoral joint contact area, contact pressure, and kinematics were evaluated in cadaveric studies using a Tekscan pressure measuring system and Microscribe. Testing was performed on intact knees and following cruciate retaining and posterior stabilized TKA at knee flexion angles of 90°, 105°, 120°, and 135°. Three loading conditions were used to simulate squatting, double stance kneeling, and single stance kneeling. Following TKA with double stance kneeling, patellofemoral contact areas did not increase significantly at high knee flexion angle (135°). Kneeling resulted in tibial posterior translation and external rotation at all flexion angles. Moving from double to single stance kneeling tended to increase pressures in the cruciate retaining group, but decreased pressures in the posterior stabilized group. The cruciate retaining group had significantly larger contact areas than the posterior stabilized group, although no significant differences in pressures were observed comparing the two TKA designs (p < 0.05). If greater than 120° of postoperative knee range of motion can be achieved following TKA, then kneeling may be performed with less risk in the patellofemoral joint than was previously believed to be the case. However, kneeling may increase the likelihood of damage to cartilage and menisci in intact knees, and after TKA, increases in tibiofemoral contact area and pressures may lead to polyethylene wear if kneeling is performed on a chronic, repetitive basis.

  20. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.

  1. Central Sensitization and Neuropathic Features of Ongoing Pain in a Rat Model of Advanced Osteoarthritis.

    PubMed

    Havelin, Joshua; Imbert, Ian; Cormier, Jennifer; Allen, Joshua; Porreca, Frank; King, Tamara

    2016-03-01

    Osteoarthritis (OA) pain is most commonly characterized by movement-triggered joint pain. However, in advanced disease, OA pain becomes persistent, ongoing and resistant to treatment with nonsteroidal anti-inflammatory drugs (NSAIDs). The mechanisms underlying ongoing pain in advanced OA are poorly understood. We recently showed that intra-articular (i.a.) injection of monosodium iodoacetate (MIA) into the rat knee joint produces concentration-dependent outcomes. Thus, a low dose of i.a. MIA produces NSAID-sensitive weight asymmetry without evidence of ongoing pain and a high i.a. MIA dose produces weight asymmetry and NSAID-resistant ongoing pain. In the present study, palpation of the ipsilateral hind limb of rats treated 14 days previously with high, but not low, doses of i.a. MIA produced expression of the early oncogene, FOS, in the spinal dorsal horn. Inactivation of descending pain facilitatory pathways using a microinjection of lidocaine within the rostral ventromedial medulla induced conditioned place preference selectively in rats treated with the high dose of MIA. Conditioned place preference to intra-articular lidocaine was blocked by pretreatment with duloxetine (30 mg/kg, intraperitoneally at -30 minutes). These observations are consistent with the likelihood of a neuropathic component of OA that elicits ongoing, NSAID-resistant pain and central sensitization that is mediated, in part, by descending modulatory mechanisms. This model provides a basis for exploration of underlying mechanisms promoting neuropathic components of OA pain and for the identification of mechanisms that might guide drug discovery for treatment of advanced OA pain without the need for joint replacement. Difficulty in managing advanced OA pain often results in joint replacement therapy in these patients. Improved understanding of mechanisms driving NSAID-resistant ongoing OA pain might facilitate development of alternatives to joint replacement therapy. 
Our findings suggest that central sensitization and neuropathic features contribute to NSAID-resistant ongoing OA joint pain. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.

  2. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; Wang, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the prior model weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually, from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, the more efficient and elaborate DREAMzs sampling algorithm is integrated into the local sampling step. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse-grid stochastic collocation method is used to build surrogates for the original groundwater model.
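    The core loop of a nested sampling estimator can be sketched in a few lines. The following minimal Python implementation is illustrative only: the replacement step uses naive rejection sampling from the prior in place of the local M-H/DREAMzs sampler the abstract discusses, and the toy problem (uniform prior on (0, 1), Gaussian likelihood) and all parameter values are assumptions for the sketch.

```python
import math
import random

def logaddexp(a, b):
    """log(exp(a) + exp(b)) without overflow."""
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling_log_evidence(loglike, prior_sample, n_live=100, n_iter=600, seed=1):
    """Minimal nested sampling estimate of log Z = log integral of L(x) p(x) dx.

    The worst live point is replaced by a fresh prior draw above the current
    likelihood floor; a practical NSE would use a local MCMC step here."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    live_ll = [loglike(x) for x in live]
    log_z, log_x = -math.inf, 0.0  # accumulated evidence; log prior volume left
    for _ in range(n_iter):
        i = min(range(n_live), key=lambda k: live_ll[k])
        log_x_new = log_x - 1.0 / n_live  # deterministic volume shrinkage
        log_w = math.log(math.exp(log_x) - math.exp(log_x_new))
        log_z = logaddexp(log_z, log_w + live_ll[i])
        floor = live_ll[i]
        while True:  # rejection step: draw from the likelihood-constrained prior
            x = prior_sample(rng)
            if loglike(x) > floor:
                break
        live[i], live_ll[i] = x, loglike(x)
        log_x = log_x_new
    # remaining prior volume, filled with the mean live-point likelihood
    log_l_mean = math.log(sum(math.exp(ll) for ll in live_ll) / n_live)
    return logaddexp(log_z, log_x + log_l_mean)

# Toy check: uniform prior on (0, 1), Gaussian likelihood centred at 0.5 with
# sigma = 0.1, so the true evidence is very close to 1 (log Z near 0).
ll = lambda x: -0.5 * ((x - 0.5) / 0.1) ** 2 - math.log(0.1 * math.sqrt(2 * math.pi))
log_z = nested_sampling_log_evidence(ll, lambda rng: rng.random())
```

    The rejection step is exactly the bottleneck the abstract targets: as the likelihood floor rises, drawing an acceptable point from the raw prior becomes exponentially expensive, which motivates replacing it with an efficient local sampler.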

  3. Assessment of Musculoskeletal Function and its Correlation with Radiological Joint Score in Children with Hemophilia A.

    PubMed

    Gupta, Samriti; Garg, Kapil; Singh, Jagdish

    2015-12-01

    To evaluate the functional independence of children with hemophilia A and its correlation with radiological joint score. The present cross-sectional study was conducted at SPMCHI, SMS Medical College, Jaipur, India. Children aged 4-18 y affected with severe, moderate, or mild hemophilia A and with a history of hemarthrosis who attended the OPD or emergency department, or were admitted to the wards of SPMCHI, SMS Medical College, were examined. Musculoskeletal function was measured in 98 patients using the Functional Independence Score in Hemophilia (FISH), and index joints (the joints most commonly affected by repeated bleeding) were assessed radiologically with plain X-rays using the Pettersson score. The mean FISH score was 28.07 ± 3.90 (range 17-32), with squatting, running, and step climbing the most affected tasks. The mean Pettersson score was 3.8 ± 3.2. A significant correlation was found between the mean Pettersson score and FISH (r = -0.875, P < 0.001), with the knee and elbow having r = -0.810 and -0.861 respectively, but not for the ankle (r = -0.420, P = 0.174). The FISH and radiological joint (Pettersson) scores may be extremely useful in clinical practice in the absence of magnetic resonance imaging (MRI), which is considered very sensitive for detecting early joint damage but comes at a cost that makes it relatively inaccessible. FISH appears to be a reliable tool for assessment of functional independence in patients with hemophilia A.
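    The reported r values are plain Pearson correlations between paired scores. The sketch below reproduces the computation in form only; the paired Pettersson/FISH values are hypothetical, since the patient data are not given in the record.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired scores: greater radiological damage (higher Pettersson)
# tends to accompany lower functional independence (lower FISH), so r < 0.
pettersson = [0, 1, 2, 4, 6, 9]
fish = [32, 31, 30, 27, 24, 19]
r = pearson_r(pettersson, fish)
```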

  4. System identification and model reduction using modulating function techniques

    NASA Technical Reports Server (NTRS)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are developed for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean-square properties of the stochastic calculus: an equation-error signal model with white noise residuals, and a more realistic white measurement-noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed that links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits unique flexibility and versatility. Armed with this model reduction capability, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact of discrepancies in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejection properties. Using the flight data, comparisons among different MFT algorithms are tabulated, and the AWLS is found to be strongly favored in almost all facets.
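    The thesis's WLS estimator operates on modulated quantities with banded covariance weighting; as a generic illustration of the underlying weighted least-squares step, the sketch below solves the 2x2 normal equations for a straight-line model with per-sample weights. The model, data, and weights are assumptions for the example, not the thesis's signal models.

```python
def weighted_least_squares(t, y, w):
    """Solve for (a, b) minimizing sum_i w_i * (y_i - a - b*t_i)**2
    via the 2x2 weighted normal equations."""
    s0 = sum(w)
    s1 = sum(wi * ti for wi, ti in zip(w, t))
    s2 = sum(wi * ti * ti for wi, ti in zip(w, t))
    r0 = sum(wi * yi for wi, yi in zip(w, y))
    r1 = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    det = s0 * s2 - s1 * s1
    a = (s2 * r0 - s1 * r1) / det  # intercept
    b = (s0 * r1 - s1 * r0) / det  # slope
    return a, b

# Noise-free data on the line y = 1 + 2t: the exact parameters are
# recovered regardless of the (positive) weights chosen.
t = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
w = [1.0, 0.5, 2.0, 1.0]
a, b = weighted_least_squares(t, y, w)
```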

  5. [Clinical observation on the effect of joint mobilization in treating elderly patients after distal radius fractures operation].

    PubMed

    Jia, Xue-Feng; Cai, Hong-Xin; Lin, Ge-Sheng; Fang, Ji-Shi; Wang, Yong; Wu, Zhi-Yong; Tu, Xu-Hui

    2017-07-25

    To investigate the effect of joint mobilization on postoperative wrist joint function, pain and grip strength in elderly patients with distal radius fracture. From January 2015 to June 2016, a total of 67 elderly patients with distal radius fracture were randomly divided into a routine exercise group and a joint mobilization group. Among them, 37 patients in the routine exercise group underwent a conventional postoperative joint function exercise regimen for distal radius fracture, including 16 males and 21 females with a mean age of (67.8±3.2) years (range, 60 to 72 years); the injured side was dominant in 23 cases and non-dominant in 14 cases; the injury mechanism was a fall in 26 cases and a traffic accident in 11 cases; by AO type, 6 cases were type B3, 18 cases were type C1, 7 cases were type C2, and 6 cases were type C3. The other 30 patients in the joint mobilization group underwent joint mobilization in addition to the routine exercise regimen, including 14 males and 16 females with a mean age of (67.1±4.0) years (range, 61 to 74 years); the injured side was dominant in 21 cases and non-dominant in 9 cases; the injury mechanism was a fall in 25 cases and a traffic accident in 5 cases; by AO type, 8 cases were type B3, 13 cases were type C1, 6 cases were type C2, and 9 cases were type C3. Wrist joint activity, Gartland-Werley wrist joint function score, VAS pain score and grip strength were assessed at 3 months after treatment. After 3 months of treatment, the VAS in the routine exercise group was higher than that in the joint mobilization group (P<0.05). The grip strength of the affected side in both groups was lower than that of the contralateral side, but the average grip strength of the affected side in the joint mobilization group was higher than that in the routine exercise group (P<0.05). In the routine exercise group, the average angles of flexion, extension and radial deviation were significantly higher than those of the joint mobilization group (P<0.05). 
However, the ulnar deviation angle showed no significant difference between the routine exercise group and the joint mobilization group (P>0.05). In the item-by-item comparison of the Gartland-Werley score, there was no significant difference between the two groups in residual deformity and complications (P>0.05); the subjective score, objective score and total score in the routine exercise group were significantly higher than those in the joint mobilization group (P<0.05). After treatment, the Gartland-Werley wrist function score in the routine exercise group was excellent in 21 cases, good in 10 and fair in 6, while in the joint mobilization group it was excellent in 23, good in 6 and fair in 1 (P<0.05). The application of joint mobilization in the treatment of elderly patients with distal radius fracture can improve joint activity and achieve better wrist function after surgery.

  6. INFERRING THE ECCENTRICITY DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed

    2010-12-20

    Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts), so long as the measurements have been communicated as a likelihood function or a posterior sampling.
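    When each star's eccentricity posterior was obtained under a flat prior, the hierarchical likelihood of population parameters reduces to a product, over stars, of the posterior-sample average of the population density. The sketch below implements that idea with an assumed Beta-distribution population model and simulated data; the model choice, star count, and noise level are all illustrative, not the paper's.

```python
import math
import random

def beta_pdf(e, a, b):
    """Beta(a, b) density, used here as the population (hyper) model."""
    return (math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
            * e ** (a - 1.0) * (1.0 - e) ** (b - 1.0))

def log_hyperlikelihood(posterior_samplings, a, b):
    """log p(data | a, b): with per-star posterior samples drawn under a flat
    prior on e, each star contributes (up to a constant) the log of the
    posterior-sample average of the population density."""
    total = 0.0
    for samples in posterior_samplings:
        total += math.log(sum(beta_pdf(e, a, b) for e in samples) / len(samples))
    return total

# Simulate 40 stars with true e ~ Beta(1, 3) and noisy per-star posteriors.
rng = random.Random(0)
samplings = []
for _ in range(40):
    e_true = rng.betavariate(1.0, 3.0)
    samplings.append([min(0.999, max(0.001, e_true + rng.gauss(0.0, 0.02)))
                      for _ in range(50)])

# The true population parameters should score higher than a reversed model.
ll_true = log_hyperlikelihood(samplings, 1.0, 3.0)
ll_wrong = log_hyperlikelihood(samplings, 3.0, 1.0)
```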

  7. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
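    The paper derives inverse-quadratic and inverse-cubic shrinkage functions for generalized Laplacian and Gaussian priors; those exact forms are not reproduced here. As a simpler illustration of the same shrink-the-ML-update idea, the sketch below uses the textbook scalar case of a plain Laplacian prior, whose MAP update is soft-thresholding.

```python
def soft_threshold(x_ml, lam):
    """MAP 'shrink' of an unregularized ML update x_ml under a plain
    Laplacian prior: argmin_x 0.5 * (x - x_ml)**2 + lam * abs(x)."""
    if x_ml > lam:
        return x_ml - lam
    if x_ml < -lam:
        return x_ml + lam
    return 0.0

# Shrinking a noisy ML update suppresses small (noise-driven) values
# entirely while reducing strong (target-driven) ones by lam.
ml_update = [3.2, -0.4, 0.1, 5.0, -2.5]
map_update = [soft_threshold(v, 1.0) for v in ml_update]  # near [2.2, 0, 0, 4, -1.5]
```

    This is why regularized reconstruction trades a small bias on strong scatterers for a large reduction in background noise, improving detectability of high-Z targets.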

  8. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up the sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution, or χ2-surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  9. A theoretical framework for understanding neuromuscular response to lower extremity joint injury.

    PubMed

    Pietrosimone, Brian G; McLeod, Michelle M; Lepley, Adam S

    2012-01-01

    Neuromuscular alterations are common following lower extremity joint injury and often lead to decreased function and disability. These neuromuscular alterations manifest in inhibition or abnormal facilitation of the uninjured musculature surrounding an injured joint. Unfortunately, these neural alterations are poorly understood, which may affect clinical recognition and treatment of these injuries. Understanding how these neural alterations affect physical function may be important for proper clinical management of lower extremity joint injuries. Pertinent articles focusing on neuromuscular consequences and treatment of knee and ankle injuries were collected from peer-reviewed sources available on the Web of Science and Medline databases from 1975 through 2010. A theoretical model to illustrate potential relationships between neural alterations and clinical impairments was constructed from the current literature. Lower extremity joint injury affects upstream cortical and spinal reflexive excitability pathways as well as downstream muscle function and overall physical performance. Treatment targeting the central nervous system provides an alternate means of treating joint injury that may be effective for patients with neuromuscular alterations. Disability is common following joint injury. There is mounting evidence that alterations in the central nervous system may relate to clinical changes in biomechanics that may predispose patients to further injury, and novel clinical interventions that target neural alterations may improve therapeutic outcomes.

  10. A Theoretical Framework for Understanding Neuromuscular Response to Lower Extremity Joint Injury

    PubMed Central

    Pietrosimone, Brian G.; McLeod, Michelle M.; Lepley, Adam S.

    2012-01-01

    Background: Neuromuscular alterations are common following lower extremity joint injury and often lead to decreased function and disability. These neuromuscular alterations manifest in inhibition or abnormal facilitation of the uninjured musculature surrounding an injured joint. Unfortunately, these neural alterations are poorly understood, which may affect clinical recognition and treatment of these injuries. Understanding how these neural alterations affect physical function may be important for proper clinical management of lower extremity joint injuries. Methods: Pertinent articles focusing on neuromuscular consequences and treatment of knee and ankle injuries were collected from peer-reviewed sources available on the Web of Science and Medline databases from 1975 through 2010. A theoretical model to illustrate potential relationships between neural alterations and clinical impairments was constructed from the current literature. Results: Lower extremity joint injury affects upstream cortical and spinal reflexive excitability pathways as well as downstream muscle function and overall physical performance. Treatment targeting the central nervous system provides an alternate means of treating joint injury that may be effective for patients with neuromuscular alterations. Conclusions: Disability is common following joint injury. There is mounting evidence that alterations in the central nervous system may relate to clinical changes in biomechanics that may predispose patients to further injury, and novel clinical interventions that target neural alterations may improve therapeutic outcomes. PMID:23016066

  11. Managing Knee Osteoarthritis: The Effects of Body Weight Supported Physical Activity on Joint Pain, Function, and Thigh Muscle Strength.

    PubMed

    Peeler, Jason; Christian, Mathew; Cooper, Juliette; Leiter, Jeffrey; MacDonald, Peter

    2015-11-01

    To determine the effect of a 12-week lower body positive pressure (LBPP)-supported low-load treadmill walking program on knee joint pain, function, and thigh muscle strength in overweight patients with knee osteoarthritis (OA). Prospective, observational, repeated measures investigation. Community-based, multidisciplinary sports medicine clinic. Thirty-one patients aged between 55 and 75 years, with a body mass index ≥25 kg/m² and mild-to-moderate knee OA. Twelve-week LBPP-supported low-load treadmill walking regimen. Acute knee joint pain (visual analog scale) during full weight bearing treadmill walking, chronic knee pain, and joint function [Knee Injury and Osteoarthritis Outcome Score (KOOS) questionnaire] during normal activities of daily living, and thigh muscle strength (isokinetic testing). Appropriate methods of statistical analysis were used to compare data from baseline and follow-up evaluations. Participants reported significant improvements in knee joint pain and function and demonstrated significant increases in thigh muscle strength about the degenerative knee. Participants also experienced significant reductions in acute knee pain during full weight bearing treadmill walking and required dramatically less LBPP support to walk pain free on the treadmill. Data suggest that an LBPP-supported low-load exercise regimen can be used to significantly diminish knee pain, enhance joint function, and increase thigh muscle strength, while safely promoting pain-free walking exercise in overweight patients with knee OA. These findings have important implications for the development of nonoperative treatment strategies that can be used in the management of joint symptoms associated with progressive knee OA in at-risk patient populations. 
This research suggests that LBPP-supported low-load walking is a safe user-friendly mode of exercise that can be successfully used in the management of day-to-day joint symptoms associated with knee OA, helping to improve the physical health, quality of life, and social well-being of North America's aging population.

  12. [Treatment goals in FACE philosophy].

    PubMed

    Martin, Domingo; Maté, Amaia; Zabalegui, Paula; Valenzuela, Jaime

    2017-03-01

    The FACE philosophy is characterized by clearly defined treatment goals: facial esthetics, dental esthetics, periodontal health, functional occlusion, neuromuscular mechanism and joint function. The purpose is to establish an ideal occlusion with good facial esthetics and an orthopedically stable joint position. The authors present all the concepts of the FACE philosophy and illustrate them through one case report. Taking into account all the FACE philosophy concepts increases diagnostic ability and improves the quality and stability of treatment outcomes. The goal of this philosophy is to harmonize the facial profile, tooth alignment, periodontium, functional occlusion, neuromuscular mechanism and joint function. The evaluation and treatment approach to vertical problems is unique to the philosophy. © EDP Sciences, SFODF, 2017.

  13. Flexibility in the mouse middle ear: A finite element study of the frequency response

    NASA Astrophysics Data System (ADS)

    Gottlieb, Peter; Puria, Sunil

    2018-05-01

    The mammalian middle ear is comprised of three distinct ossicles, connected by joints, and suspended in an air-filled cavity. In most mammals, the ossicular joints are mobile synovial joints, which introduce flexibility into the ossicular chain. In some smaller rodents, however, these joints are less mobile, and in the mouse in particular, the malleus is additionally characterized by a large, thin plate known as the transversal lamina, which connects the manubrium to the incus-malleus joint (IMJ). We hypothesize that this feature acts as a functional joint, maintaining the benefits of a flexible ossicular chain despite a less-mobile IMJ, and tested this hypothesis with a finite element model of the mouse middle ear. The results showed that while fusing the ossicular joints had a negligible effect on sound transmission, stiffening the ossicular bone significantly reduced sound transmission, implying that bone flexibility plays a critical role in the normal function of the mouse middle ear.

  14. Evaluation of scattered light distributions of cw-transillumination for functional diagnostic of rheumatic disorders in interphalangeal joints

    NASA Astrophysics Data System (ADS)

    Prapavat, Viravuth; Schuetz, Rijk; Runge, Wolfram; Beuthan, Juergen; Mueller, Gerhard J.

    1995-12-01

    This paper presents in vitro studies using the scattered intensity distribution obtained by cw-transillumination to examine the condition of rheumatic disorders of interphalangeal joints. Inflammation of joints due to rheumatic diseases leads to changes in the synovial membrane, in synovia composition and content, and to anatomic-geometrical variations. Measurements have shown that these rheumatically induced inflammatory processes result in a variation in the optical properties of joint systems. With a scanning system, the interphalangeal joint is transilluminated with diode lasers (670 nm, 905 nm) perpendicular to the joint cavity. The entire distribution of the transmitted radiation intensity was detected with a CCD camera. As a function of the structure and optical properties of the transilluminated volume, we obtained scattered-radiation distributions that show characteristic variations in intensity and shape. Using signal and image processing procedures, we evaluated the measured scattering distributions with respect to their information content, shape, and scale features. Mathematical methods were used to find classification criteria for determining variations of the joint condition.

  15. Statistical inference of static analysis rules

    NASA Technical Reports Server (NTRS)

    Engler, Dawson Richards (Inventor)

    2009-01-01

    Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
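    The counting-and-ranking scheme above can be sketched directly. The validity score below is a simple Laplace-smoothed estimate standing in for the patent's likelihood-of-validity function (whose exact form is not given in this record); the rule name and tallies are hypothetical.

```python
def rank_violations(counts):
    """Rank rule violations so the most likely real errors come first.

    counts maps a code instance to (n_observed, n_violated) tallies of a
    correctness rule. The validity score estimates how likely the instance
    is genuinely required to observe the rule: many observances with few
    violations means a violation is probably a real bug."""
    scored = []
    for inst, (obs, viol) in counts.items():
        if viol == 0:
            continue  # nothing to report for this instance
        validity = (obs + 1) / (obs + viol + 2)  # Laplace-smoothed
        scored.append((validity, inst))
    scored.sort(reverse=True)
    return [inst for _, inst in scored]

# Hypothetical tallies for a "call unlock after lock" rule:
counts = {
    "lock_a": (120, 1),  # rule almost always observed: the violation is suspicious
    "lock_b": (3, 2),    # weak evidence the rule applies here at all
    "lock_c": (50, 0),   # fully consistent, nothing to rank
}
order = rank_violations(counts)  # ["lock_a", "lock_b"]
```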

  16. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
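    A profile-likelihood CI inverts the likelihood-ratio test: it keeps every parameter value whose deviance against the MLE stays below a chi-square critical value. The sketch below applies this to a binomial proportion rather than an IRT model, purely to show the mechanics; the counts (15 successes in 50 trials) are hypothetical.

```python
import math

def binom_loglike(p, k, n):
    """Binomial log-likelihood (dropping the constant binomial coefficient)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def profile_likelihood_ci(k, n, crit=3.841):
    """95% profile-likelihood CI for a binomial proportion: all p whose
    deviance 2*(loglike(p_hat) - loglike(p)) is below crit (chi-square,
    1 df, 0.95 quantile)."""
    p_hat = k / n
    ll_max = binom_loglike(p_hat, k, n)

    def deviance(p):
        return 2.0 * (ll_max - binom_loglike(p, k, n))

    def solve(lo, hi, lower_side):
        # Bisection for the p where deviance(p) crosses crit.
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if deviance(mid) > crit:
                lo, hi = (mid, hi) if lower_side else (lo, mid)
            else:
                lo, hi = (lo, mid) if lower_side else (mid, hi)
        return 0.5 * (lo + hi)

    return solve(1e-9, p_hat, True), solve(p_hat, 1.0 - 1e-9, False)

lower, upper = profile_likelihood_ci(15, 50)
```

    Unlike a Wald interval, which is symmetric about p_hat = 0.3 by construction, this interval is asymmetric because it follows the skew of the likelihood itself, which is exactly the advantage the abstract reports for PL CIs.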

  17. Beyond valence in the perception of likelihood: the role of emotion specificity.

    PubMed

    DeSteno, D; Petty, R E; Wegener, D T; Rucker, D D

    2000-03-01

    Positive and negative moods have been shown to increase likelihood estimates of future events matching these states in valence (e.g., E. J. Johnson & A. Tversky, 1983). In the present article, 4 studies provide evidence that this congruency bias (a) is not limited to valence but functions in an emotion-specific manner, (b) derives from the informational value of emotions, and (c) is not the inevitable outcome of likelihood assessment under heightened emotion. Specifically, Study 1 demonstrates that sadness and anger, 2 distinct, negative emotions, differentially bias likelihood estimates of sad and angering events. Studies 2 and 3 replicate this finding in addition to supporting an emotion-as-information (cf. N. Schwarz & G. L. Clore, 1983), as opposed to a memory-based, mediating process for the bias. Finally, Study 4 shows that when the source of the emotion is salient, a reversal of the bias can occur given greater cognitive effort aimed at accuracy.

  18. Swivel Joint For Liquid Nitrogen

    NASA Technical Reports Server (NTRS)

    Milner, James F.

    1988-01-01

    Swivel joint allows liquid-nitrogen pipe to rotate through angle of 100 degrees with respect to mating pipe. Functions without cracking hard foam insulation on lines. Pipe joint rotates on disks so mechanical stress is not transmitted to thick insulation on pipes. Inner disks ride on fixed outer disks. Disks help to seal pressurized liquid nitrogen flowing through joint.

  19. Identification of BRCA1 missense substitutions that confer partial functional activity: potential moderate risk variants?

    PubMed

    Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth T S; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A

    2007-01-01

    Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed intermediate transcriptional transactivation activity and a normal foci-formation response to DNA damage but induced centrosome amplification. 
These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer.
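    Multifactorial likelihood analysis combines independent lines of evidence as likelihood ratios on the odds scale. The following minimal sketch shows that arithmetic with entirely hypothetical numbers; the actual likelihood ratios and priors used for the BRCA1 variants are not given in this record.

```python
def posterior_probability(prior, likelihood_ratios):
    """Combine a prior probability of pathogenicity with independent
    likelihood ratios (e.g. from conservation, segregation, and tumour
    immunohistochemistry) via Bayes' theorem on the odds scale."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # each independent line of evidence multiplies the odds
    return odds / (1.0 + odds)

# Hypothetical: prior probability 0.1, three lines of evidence favouring
# pathogenicity at 5:1, 8:1, and 2:1.
p = posterior_probability(0.1, [5.0, 8.0, 2.0])  # 80/89, about 0.90
```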

  20. [Significance of the glenoid labrum for stability of the glenohumeral joint. An experimental study].

    PubMed

    Habermeyer, P; Schuller, U

    1990-01-01

    In human cadavers we were able to show that the glenohumeral joint is comparable to the model of a physical piston. The labrum glenoidale functions like a valve against atmospheric pressure. It is possible to characterize the behavior of intraarticular negative pressure by the equation (formula; see text). The calculations of the force F of atmospheric pressure tending to resist distraction of the joint surfaces leads to a 95% confidence interval from 6.9 to 22.9 kp. Under a general anesthetic, distraction of the healthy glenohumeral joint also produces negative intraarticular pressure in the area of the fossa glenoidalis in vivo. Joints with a labral tear (Bankart defect) and a chronic instability are not characterized by this phenomenon. A change in intraarticular pressure might stimulate intraarticular pressure receptors. This could be important in functioning as a neuromuscular protection reflex for the joint.
