Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Bae, Ji Yong; Park, Kyung Soon; Seon, Jong Keun; Jeon, Insu
2015-12-01
To show the causal relationship between normal walking after various lateral ankle ligament (LAL) injuries caused by acute inversion ankle sprains and alterations in ankle joint contact characteristics, finite element simulations of normal walking were carried out using an intact ankle joint model and LAL injury models. A walking experiment using a volunteer with a normal ankle joint was performed to obtain the boundary conditions for the simulations and to support the appropriateness of the simulation results. Contact pressure and strain on the talus articular cartilage and anteroposterior and mediolateral translations of the talus were calculated. Ankles with ruptured anterior talofibular ligaments (ATFLs) had a higher likelihood of experiencing increased ankle joint contact pressures, strains and translations than intact ankles. In particular, ankles with ruptured ATFL + calcaneofibular ligaments and ankles with all ligaments ruptured had a similar likelihood to the ATFL-ruptured ankles. The push-off phase of stance was the most likely situation for increased ankle joint contact pressures, strains and translations in LAL-injured ankles.
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
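As an illustration of the compression step described in this abstract, here is a minimal Python sketch of linear "score" compression to one summary per parameter; the fiducial mean, covariance, and derivative matrix are toy stand-ins, not the paper's pipeline.

```python
import numpy as np

def score_compress(data, mu, Cinv, dmu_dtheta):
    """Compress a data vector to one summary per parameter.

    Linear score compression about a fiducial model,
    t = (dmu/dtheta) C^{-1} (d - mu), is asymptotically optimal
    when the likelihood is approximately Gaussian near the fiducial point.
    """
    return dmu_dtheta @ Cinv @ (data - mu)

# Toy setup: 1000 data points, 2 parameters (all values illustrative).
rng = np.random.default_rng(0)
n_data, n_params = 1000, 2
mu = np.zeros(n_data)                      # fiducial mean vector
Cinv = np.eye(n_data)                      # inverse data covariance
dmu = rng.normal(size=(n_params, n_data))  # derivatives of the mean
d = mu + rng.normal(size=n_data)           # one mock data realization
t = score_compress(d, mu, Cinv, dmu)
print(t.shape)  # (2,): one number per parameter
```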
Analyzing Planck and low redshift data sets with advanced statistical methods
NASA Astrophysics Data System (ADS)
Eifler, Tim
The recent ESA/NASA Planck mission has provided a key data set to constrain cosmology that is most sensitive to physics of the early Universe, such as inflation and primordial non-Gaussianity (Planck 2015 results XIII). In combination with cosmological probes of the Large-Scale Structure (LSS), the Planck data set is a powerful source of information to investigate late time phenomena (Planck 2015 results XIV), e.g. the accelerated expansion of the Universe, the impact of baryonic physics on the growth of structure, and the alignment of galaxies in their dark matter halos. The main objectives of this proposal are 1) to re-analyze the archival Planck data with different, more recently developed statistical methods for cosmological parameter inference, and 2) to combine Planck and ground-based observations in an innovative way. We will make the corresponding analysis framework publicly available and believe that it will set a new standard for future CMB-LSS analyses. Advanced statistical methods, such as the Gibbs sampler (Jewell et al. 2004, Wandelt et al. 2004), have been critical in the analysis of Planck data. More recently, Approximate Bayesian Computation (ABC, see Weyant et al. 2012, Akeret et al. 2015, Ishida et al. 2015, for cosmological applications) has matured into an interesting tool for cosmological likelihood analyses. It circumvents several assumptions that enter the standard Planck (and most LSS) likelihood analyses, most importantly the assumption that the functional form of the likelihood of the CMB observables is a multivariate Gaussian. Beyond applying new statistical methods to Planck data in order to cross-check and validate existing constraints, we plan to combine Planck and DES data in a new and innovative way and run multi-probe likelihood analyses of CMB and LSS observables. The complexity of multi-probe likelihood analyses scales (non-linearly) with the level of correlations amongst the individual probes that are included. For the multi-probe analysis proposed here we will use the existing CosmoLike software, a computationally efficient analysis framework that is unique in its integrated ansatz of jointly analyzing probes of the large-scale structure (LSS) of the Universe. We plan to combine CosmoLike with publicly available CMB analysis software (CAMB, CLASS) to include modeling capabilities for CMB temperature, polarization, and lensing measurements. The resulting analysis framework will be capable of independently and jointly analyzing data from the CMB and from various probes of the LSS of the Universe. After completion we will utilize this framework to check for consistency amongst the individual probes and subsequently run a joint likelihood analysis of probes that are not in tension. The inclusion of Planck information in a joint likelihood analysis substantially reduces DES uncertainties in cosmological parameters, and allows for unprecedented constraints on parameters that describe astrophysics. In their recent review, Observational Probes of Cosmic Acceleration, Weinberg et al. (2013) emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. The work we propose follows exactly this idea: 1) cross-checking existing Planck results with alternative methods in the data analysis, 2) checking for consistency of Planck and DES data, and 3) running a joint analysis to constrain cosmology and astrophysics.
It is now expedient to develop and refine multi-probe analysis strategies that allow the comparison and inclusion of information from disparate probes to optimally extract cosmological and astrophysical constraints. Analyzing Planck and DES data offers an ideal opportunity for this purpose, and the corresponding lessons will be of great value for the science preparation of Euclid and WFIRST.
Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.
Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan
2018-03-01
Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows differences of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA, respectively, and MLEM, averaged over all regions of interest. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
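For reference, the MLEM baseline that serves as the gold standard above has a compact multiplicative update; the following is a minimal toy sketch (dense system matrix, no TOF, no attenuation or scatter model), not the clinical implementation.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM for emission data y ~ Poisson(A @ x).

    Multiplicative update: x <- x * A^T(y / Ax) / A^T 1,
    which monotonically increases the Poisson log-likelihood.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Toy system: 40 detector bins, 20 image pixels.
rng = np.random.default_rng(1)
A = rng.random((40, 20))
x_true = rng.random(20)
y = rng.poisson(A @ x_true)
print(mlem(A, y)[:5])
```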
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretations of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the parameters of the Weibull distribution; (2) Data for contour plots of the relative likelihood of the two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
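As a sketch of the core computation (the program itself is Fortran), the following Python snippet fits a two-parameter Weibull by maximum likelihood with type-I censoring: failures contribute the log-density, suspensions the log-survival function. Data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, failed):
    """Negative log-likelihood for a two-parameter Weibull under
    type-I censoring: failed units use the pdf, suspended units
    use the survival function S(t) = exp(-(t/lam)^k)."""
    k, lam = np.exp(params)  # optimize on the log scale for positivity
    z = (t / lam) ** k
    logpdf = np.log(k / lam) + (k - 1) * np.log(t / lam) - z
    return -(np.sum(logpdf[failed]) + np.sum(-z[~failed]))

# Toy fatigue-test data, suspended (censored) at t = 2.5.
rng = np.random.default_rng(2)
t_true = 2.0 * rng.weibull(1.5, size=100)
censor_time = 2.5
failed = t_true < censor_time
t_obs = np.minimum(t_true, censor_time)
res = minimize(weibull_negloglik, x0=[0.0, 0.0], args=(t_obs, failed))
k_hat, lam_hat = np.exp(res.x)
print(k_hat, lam_hat)
```

Likelihood-ratio confidence intervals of the kind the program produces follow by profiling this same function over one parameter at a time.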
NASA Astrophysics Data System (ADS)
Sadegh, M.; Moftakhari, H.; AghaKouchak, A.
2017-12-01
Many natural hazards are driven by multiple forcing variables, and concurrent/consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
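A minimal Python sketch of the copula machinery such a toolbox rests on (MhAST itself is a separate toolbox; the Gumbel marginals, Gaussian copula, and design levels here are illustrative choices): fit marginals, fit a copula on the probability scale, and convert a joint "AND" exceedance probability into a return period.

```python
import numpy as np
from scipy import stats

# Toy annual maxima of two hazard drivers (illustrative data).
rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)
x = stats.gumbel_r.ppf(stats.norm.cdf(z[:, 0]), loc=10, scale=2)
y = stats.gumbel_r.ppf(stats.norm.cdf(z[:, 1]), loc=5, scale=1)

# 1. Fit marginal distributions (a real toolbox would select among
#    many candidate families, e.g. by information criteria).
px, py = stats.gumbel_r.fit(x), stats.gumbel_r.fit(y)

# 2. Fit a Gaussian copula via the correlation of normal scores.
u, v = stats.gumbel_r.cdf(x, *px), stats.gumbel_r.cdf(y, *py)
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

# 3. Joint "AND" exceedance probability and return period at a design level.
xd, yd = 14.0, 7.0
ud, vd = stats.gumbel_r.cdf(xd, *px), stats.gumbel_r.cdf(yd, *py)
biv = stats.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
C = biv.cdf([stats.norm.ppf(ud), stats.norm.ppf(vd)])
p_and = 1 - ud - vd + C          # P(X > xd and Y > yd)
print("AND return period (years):", 1.0 / p_and)
```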
Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank
2018-02-01
The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series; whereas, Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71); whereas, Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
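The series/parallel arithmetic behind such combined test characteristics is straightforward if the tests are assumed conditionally independent given AC joint status; a minimal sketch with illustrative inputs (not the study's raw data):

```python
def combine_series(sens1, spec1, sens2, spec2):
    """Call positive only when BOTH tests are positive (confirms)."""
    return sens1 * sens2, 1 - (1 - spec1) * (1 - spec2)

def combine_parallel(sens1, spec1, sens2, spec2):
    """Call positive when EITHER test is positive (screens)."""
    return 1 - (1 - sens1) * (1 - sens2), spec1 * spec2

def likelihood_ratios(sens, spec):
    return sens / (1 - spec), (1 - sens) / spec  # (LR+, LR-)

# Hypothetical single-test characteristics, for illustration only.
sens_s, spec_s = combine_series(0.75, 0.80, 0.70, 0.85)
sens_p, spec_p = combine_parallel(0.75, 0.80, 0.70, 0.85)
print("series:  ", sens_s, spec_s, likelihood_ratios(sens_s, spec_s))
print("parallel:", sens_p, spec_p, likelihood_ratios(sens_p, spec_p))
```

Combining in series trades sensitivity for specificity (better for confirming), while combining in parallel does the reverse (better for screening), which is exactly the pattern the abstract reports.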
Song, Hui; Peng, Yingwei; Tu, Dongsheng
2017-04-01
Motivated by the joint analysis of longitudinal quality of life data and recurrence free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined in a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.
Further Iterations on Using the Problem-Analysis Framework
ERIC Educational Resources Information Center
Annan, Michael; Chua, Jocelyn; Cole, Rachel; Kennedy, Emma; James, Robert; Markusdottir, Ingibjorg; Monsen, Jeremy; Robertson, Lucy; Shah, Sonia
2013-01-01
A core component of applied educational and child psychology practice is the skilfulness with which practitioners are able to rigorously structure and conceptualise complex real world human problems. This is done in such a way that when they (with others) jointly work on them, there is an increased likelihood of positive outcomes being achieved…
Indirect detection constraints on s- and t-channel simplified models of dark matter
NASA Astrophysics Data System (ADS)
Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim
2016-09-01
Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for singular annihilation final states, such as bb̄ or τ⁺τ⁻. In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.
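A minimal sketch of the stacked joint-likelihood construction, with toy Gaussian flux likelihoods per dwarf and a log-normal J-factor nuisance parameter profiled on a grid (all numbers illustrative, not Fermi-LAT data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy per-dwarf inputs: observed flux, flux uncertainty,
# mean log10 J-factor, and its log-normal uncertainty.
flux_obs  = np.array([1.0e-12, 5.0e-13, 8.0e-13])
flux_err  = np.array([6.0e-13, 4.0e-13, 5.0e-13])
logJ_mean = np.array([19.0, 18.5, 18.8])
logJ_sig  = np.array([0.20, 0.30, 0.25])

def neg_joint_loglik(sigv_scaled, n_grid=81):
    """Joint -log L over dwarfs for a scaled annihilation cross section,
    profiling each dwarf's J-factor nuisance parameter on a grid."""
    total = 0.0
    for fo, fe, jm, js in zip(flux_obs, flux_err, logJ_mean, logJ_sig):
        logJ = np.linspace(jm - 4 * js, jm + 4 * js, n_grid)
        pred = sigv_scaled * 10.0 ** (logJ - 19.0)  # flux ~ <sigma v> * J
        ll = -0.5 * ((fo - pred) / fe) ** 2 - 0.5 * ((logJ - jm) / js) ** 2
        total -= ll.max()  # profile out logJ for this dwarf
    return total

res = minimize_scalar(neg_joint_loglik, bounds=(0.0, 1e-11), method="bounded")
print("best-fit scaled <sigma v>:", res.x)
```

An upper limit follows by scanning the cross-section parameter upward until the joint -log L rises by the appropriate amount (e.g., 1.35 for a one-sided 95% limit).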
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δℓ_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-ℓ) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region on cosmological parameters is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-ℓ applications.
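The identity underlying this construction can be written explicitly. For a chain x_1, …, x_n in which each variable depends only on its immediate predecessor,

```latex
p(x_1,\dots,x_n) \;=\; p(x_1)\prod_{i=2}^{n} p(x_i \mid x_{i-1})
\;=\; \frac{\prod_{i=2}^{n} p(x_{i-1}, x_i)}{\prod_{i=2}^{n-1} p(x_i)},
```

so the full joint distribution is determined by uni- and bivariate marginals alone, which is the property exploited for both the hybrid and the Blackwell-Rao estimators.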
Hybrid pairwise likelihood analysis of animal behavior experiments.
Cattelan, Manuela; Varin, Cristiano
2013-12-01
The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.
Efficient Exploration of the Space of Reconciled Gene Trees
Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-01-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510
Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L
2005-12-01
Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters relative to those evaluated by the full WMAP likelihood public package. The differences are not only due to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The central credible values of the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index nS.
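A minimal sketch of an exact pixel-space Gaussian likelihood of this kind (toy covariances; a real analysis builds the signal covariance from the theory power spectrum and beam, and concatenates the T, Q, U pixel vectors):

```python
import numpy as np

def pixel_loglike(m, S, N):
    """Exact Gaussian pixel likelihood for a CMB map.

    m : data vector of map pixels (e.g., concatenated T, Q, U)
    S : signal covariance implied by the theory power spectrum
    N : noise covariance
    -2 ln L = m^T C^{-1} m + ln det C + const,  with C = S + N.
    """
    C = S + N
    _, logdet = np.linalg.slogdet(C)
    chi2 = m @ np.linalg.solve(C, m)
    return -0.5 * (chi2 + logdet)

# Toy 3-pixel, temperature-only example.
rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3))
S = A @ A.T            # stand-in positive-definite signal covariance
N = 0.1 * np.eye(3)    # white-noise covariance
m = rng.multivariate_normal(np.zeros(3), S + N)
print(pixel_loglike(m, S, N))
```

Cosmological parameters enter through S, so the likelihood is re-evaluated with a new signal covariance at each point of the parameter chain.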
Development Risk Methodology for Whole Systems Trade Analysis
2016-08-01
The methodology evaluates trade-off dimensions and their consequences: performance, unit cost, operations & sustainment (O&S) cost, development risk, and growth potential.
Hu, Meng; Clark, Kelsey L.; Gong, Xiajing; Noudoost, Behrad; Li, Mingyao; Moore, Tirin
2015-01-01
Inferotemporal (IT) neurons are known to exhibit persistent, stimulus-selective activity during the delay period of object-based working memory tasks. Frontal eye field (FEF) neurons show robust, spatially selective delay period activity during memory-guided saccade tasks. We present a copula regression paradigm to examine neural interaction of these two types of signals between areas IT and FEF of the monkey during a working memory task. This paradigm is based on copula models that can account for both marginal distribution over spiking activity of individual neurons within each area and joint distribution over ensemble activity of neurons between areas. Considering the popular GLMs as marginal models, we developed a general and flexible likelihood framework that uses the copula to integrate separate GLMs into a joint regression analysis. Such joint analysis essentially leads to a multivariate analog of the marginal GLM theory and hence efficient model estimation. In addition, we show that Granger causality between spike trains can be readily assessed via the likelihood ratio statistic. The performance of this method is validated by extensive simulations, and compared favorably to the widely used GLMs. When applied to spiking activity of simultaneously recorded FEF and IT neurons during working memory task, we observed significant Granger causality influence from FEF to IT, but not in the opposite direction, suggesting the role of the FEF in the selection and retention of visual information during working memory. The copula model has the potential to provide unique neurophysiological insights about network properties of the brain. PMID:26063909
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
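Under the conditional-independence assumption, the combination rule at each pair of operating points reduces to multiplying likelihood ratios in odds form; a minimal sketch with illustrative numbers:

```python
def post_odds(pretest_odds, lrs):
    """Post-test odds = pre-test odds x product of likelihood ratios,
    valid when the classifiers are conditionally independent."""
    odds = pretest_odds
    for lr in lrs:
        odds *= lr
    return odds

# Two classifiers at chosen ROC operating points (sensitivity, specificity).
ops = [(0.85, 0.90), (0.70, 0.80)]
# Suppose classifier 1 is positive and classifier 2 is negative:
lrs = [ops[0][0] / (1 - ops[0][1]),   # LR+ of classifier 1 = 8.5
       (1 - ops[1][0]) / ops[1][1]]   # LR- of classifier 2 = 0.375
odds = post_odds(0.25, lrs)           # pre-test odds of 1:4
print(odds, "-> call positive" if odds > 1.0 else "-> call negative")
```

Sweeping the decision threshold on the combined odds traces out the combined ROC curve described in the abstract.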
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
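A minimal sketch of the joint ML idea in the data-aided case, where the ±1 symbols are known; this is an illustrative simplification, not the report's algorithm verbatim.

```python
import numpy as np

def ml_snr(y, d):
    """Joint ML estimates of signal amplitude and noise power from
    matched-filter outputs y_k = d_k * m + n_k with known symbols
    d_k = +/-1 and white Gaussian noise n_k."""
    m_hat = np.mean(d * y)                   # ML signal amplitude
    var_hat = np.mean((y - d * m_hat) ** 2)  # ML noise power
    return m_hat ** 2 / var_hat, m_hat, var_hat

rng = np.random.default_rng(5)
d = rng.choice([-1.0, 1.0], size=5000)          # biphase symbols
y = 0.8 * d + rng.normal(scale=1.0, size=5000)  # true SNR = 0.64
snr_hat, m_hat, var_hat = ml_snr(y, d)
print(snr_hat)
```

The estimated amplitude and noise power then set the combiner weight, since in maximal-ratio combining each channel's weight is proportional to its signal amplitude divided by its noise power.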
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
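A minimal sketch of the censoring correction for a single event, assuming station terms have already been removed and station magnitudes scatter normally about the network mb; stations that stayed silent contribute the probability that their amplitude fell below their noise threshold. All values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def neg_loglik_mb(mb, reported, silent_thresholds, sigma=0.35):
    """-log L for one event's network magnitude with censoring:
    reported stations contribute a normal density, silent stations
    the normal CDF at their detection threshold."""
    ll = np.sum(norm.logpdf(reported, loc=mb, scale=sigma))
    ll += np.sum(norm.logcdf(silent_thresholds, loc=mb, scale=sigma))
    return -ll

reported = np.array([5.1, 5.3, 5.0, 5.4])  # station mb values
silent = np.array([5.6, 5.5])              # thresholds of quiet stations
res = minimize_scalar(lambda m: neg_loglik_mb(m, reported, silent),
                      bounds=(3.0, 7.0), method="bounded")
print("ML network mb:", res.x)  # pulled below the naive mean of 5.2
```

Ignoring the silent stations would bias the network mean upward, which is exactly the censoring bias the joint inversion is designed to remove.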
BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.
Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong
2018-04-19
ChIP-seq experiments that are aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling. Currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data, that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.
What the Milky Way's dwarfs tell us about the Galactic Center extended gamma-ray excess
NASA Astrophysics Data System (ADS)
Keeley, Ryan E.; Abazajian, Kevork N.; Kwa, Anna; Rodd, Nicholas L.; Safdi, Benjamin R.
2018-05-01
The Milky Way's Galactic Center harbors a gamma-ray excess that is a candidate signal of annihilating dark matter. Dwarf galaxies remain predominantly dark in their expected commensurate emission. In this work we quantify the degree of consistency between these two observations through a joint likelihood analysis. In doing so we incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties in dark matter annihilation models for the Galactic Center extended gamma-ray excess (GCE) detected by the Fermi Gamma-Ray Space Telescope. The preferred range of annihilation rates and masses expands when including these unknowns. Even so, using two recent determinations of the Milky Way halo's local density leaves the GCE preferred region of single-channel dark matter annihilation models to be in strong tension with annihilation searches in combined dwarf galaxy analyses. A third, higher Milky Way density determination, alleviates this tension. Our joint likelihood analysis allows us to quantify this inconsistency. We provide a set of tools for testing dark matter annihilation models' consistency within this combined data set. As an example, we test a representative inverse Compton sourced self-interacting dark matter model, which is consistent with both the GCE and dwarfs.
Gallo, Jiri; Juranova, Jarmila; Svoboda, Michal; Zapletalova, Jana
2017-09-01
The aim of this study was to evaluate the characteristics of synovial fluid (SF) white cell count (SWCC) and neutrophil/lymphocyte percentage in the diagnosis of prosthetic joint infection (PJI) for particular threshold values. This was a prospective study of 391 patients in whom SF specimens were collected before total joint replacement revisions. SF was aspirated before joint capsule incision. The PJI diagnosis was based only on non-SF data. Receiver operating characteristic plots were constructed for the SWCC and differential counts of leukocytes in aspirated fluid. Logistic binomial regression was used to distinguish infected and non-infected cases in the combined data. PJI was diagnosed in 78 patients, and aseptic revision in 313 patients. The areas under the curve (AUC) for the SWCC and the neutrophil and lymphocyte percentages were 0.974, 0.962, and 0.951, respectively. The optimal cut-off for PJI was 3,450 cells/μL, 74.6% neutrophils, and 14.6% lymphocytes. Positive likelihood ratios for the SWCC and the neutrophil and lymphocyte percentages were 19.0, 10.4, and 9.5, respectively. Negative likelihood ratios for the SWCC and the neutrophil and lymphocyte percentages were 0.06, 0.076, and 0.092, respectively. Based on the AUC, the present study identified cut-off values for the SWCC and differential leukocyte count for the diagnosis of PJI. The likelihood ratio for a positive/negative SWCC can significantly change the pre-test probability of PJI.
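The reported likelihood ratios translate directly into post-test probabilities via Bayes' theorem in odds form; a minimal sketch using the study's SWCC likelihood ratios and an illustrative pre-test probability:

```python
def post_test_prob(pretest_prob, lr):
    """Update a pre-test probability of PJI with a likelihood ratio."""
    odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

# SWCC cut-off 3,450 cells/uL: LR+ = 19.0, LR- = 0.06 (from the study).
pretest = 0.20  # e.g., 78 PJI among 391 revisions is roughly 20%
print(post_test_prob(pretest, 19.0))  # ~0.83 after a positive SWCC
print(post_test_prob(pretest, 0.06))  # ~0.015 after a negative SWCC
```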
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
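For flavor, the fixed-effects core of such an ordinal model can be fit in Python with statsmodels' OrderedModel; this sketches only the ordinal logistic part, not MIXOR's random effects or its Fisher-scoring solution.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulate a three-category ordinal outcome from a latent logistic
# regression (illustrative data only).
rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
latent = 0.8 * x + rng.logistic(size=n)
y = pd.Series(np.digitize(latent, bins=[-1.0, 1.0]))  # categories 0, 1, 2

model = OrderedModel(y, x[:, None], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.params)  # slope followed by threshold parameters
```

A mixed-effects version adds cluster-specific random intercepts (and possibly slopes), whose variance-covariance Cholesky factor is estimated jointly with the regression coefficients, as the abstract describes.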
A joint frailty-copula model between tumour progression and death for meta-analysis.
Emura, Takeshi; Nakatochi, Masahiro; Murotani, Kenta; Rondeau, Virginie
2017-12-01
Dependent censoring often arises in biomedical studies when time to tumour progression (e.g., relapse of cancer) is censored by an informative terminal event (e.g., death). For meta-analysis combining existing studies, a joint survival model between tumour progression and death has been considered under semicompeting risks, which induces dependence through the study-specific frailty. Our paper here utilizes copulas to generalize the joint frailty model by introducing an additional source of dependence arising from intra-subject association between tumour progression and death. The practical value of the new model is particularly evident for meta-analyses in which only a few covariates are consistently measured across studies and hence there exists residual dependence. The covariate effects are formulated through the Cox proportional hazards model, and the baseline hazards are nonparametrically modeled on a basis of splines. The estimator is then obtained by maximizing a penalized log-likelihood function. We also show that the present methodologies are easily modified for competing risks or recurrent event data, and are generalized to accommodate left-truncation. Simulations are performed to examine the performance of the proposed estimator. The method is applied to a meta-analysis for assessing a recently suggested biomarker CXCL12 for survival in ovarian cancer patients. We implement our proposed methods in the R joint.Cox package.
A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.
Hu, Y J; Lin, D Y; Sun, W; Zeng, D
2014-10-01
Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.
Elashoff, Robert M.; Li, Gang; Li, Ning
2009-01-01
In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112
Consideration of Collision "Consequence" in Satellite Conjunction Assessment and Risk Analysis
NASA Technical Reports Server (NTRS)
Hejduk, M.; Laporte, F.; Moury, M.; Newman, L.; Shepperd, R.
2017-01-01
Classic risk management theory requires the assessment of both likelihood and consequence of deleterious events. Satellite conjunction risk assessment has produced a highly-developed theory for assessing collision likelihood but holds a completely static solution for collision consequence, treating all potential collisions as essentially equally worrisome. This may be true for the survival of the protected asset, but the amount of debris produced by the potential collision, and therefore the degree to which the orbital corridor may be compromised, can vary greatly among satellite conjunctions. This study leverages present work on satellite collision modeling to develop a method by which it can be estimated, to a particular confidence level, whether a particular collision is likely to produce a relatively large or relatively small amount of resultant debris and how this datum might alter conjunction remediation decisions. The more general question of orbital corridor protection is also addressed, and a preliminary framework presented by which both collision likelihood and consequence can be jointly considered in the risk assessment process.
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
Espin‐Garcia, Osvaldo; Craiu, Radu V.
2017-01-01
We evaluate two‐phase designs to follow up findings from a genome‐wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation‐maximization‐based inference under a semiparametric maximum likelihood formulation tailored for post‐GWAS inference. A GWAS‐SNP (where SNP is single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify efficiency and power of joint QT‐SNP‐dependent sampling and analysis under alternative sample allocations by simulations. Joint allocation balanced on SNP genotype and extreme‐QT strata yields significant power improvements compared to marginal QT‐ or SNP‐based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. PMID:29239496
Sun, Zhichao; Mukherjee, Bhramar; Estes, Jason P; Vokonas, Pantel S; Park, Sung Kyun
2017-08-15
Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for gene-environment (G × E) interactions. To date, literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data. In this work, we propose stratified sampling based on summaries of individual exposures and outcome trajectories and develop a full conditional likelihood approach for estimation that adjusts for the biased sample. We compare the performance of our proposed design and analysis with combinations of different sampling designs and estimation approaches via simulation. We observe that the full conditional likelihood provides improved estimates for the G × E interaction and joint exposure effects over uncorrected complete-case analysis, and the exposure enriched outcome trajectory dependent design outperforms other designs in terms of estimation efficiency and power for detection of the G × E interaction. We also illustrate our design and analysis using data from the Normative Aging Study, an ongoing longitudinal cohort study initiated by the Veterans Administration in 1963. Copyright © 2017 John Wiley & Sons, Ltd.
Measurement of the top-quark mass in all-hadronic decays in pp collisions at CDF II.
Aaltonen, T; Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Budroni, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carillo, S; Carlsmith, D; Carosi, R; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Cyr, D; Daronco, S; Datta, M; D'Auria, S; Davies, T; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; Dituro, P; Dörr, C; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Grundler, U; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kovalev, A; Kraan, A C; Kraus, J; Kravchenko, I; Kreps, M; Kroll, J; 
Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Manca, G; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moore, R; Morello, M; Fernandez, P Movilla; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nagano, A; Naganoma, J; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ranjan, N; Rappoccio, S; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Saltzberg, D; Sánchez, C; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Sjolin, J; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Veramendi, G; Veszpremi, V; Vidal, R; Vila, I; Vilar, R; Vine, T; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, R; Wang, S M; Warburton, A; Waschke, S; Waters, 
D; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S
2007-04-06
We present a measurement of the top-quark mass Mtop in the all-hadronic decay channel tt̄ → W⁺b W⁻b̄ → q₁q̄₂b q₃q̄₄b̄. The analysis is performed using 310 pb⁻¹ of √s = 1.96 TeV pp̄ collisions collected with the CDF II detector using a multijet trigger. The mass measurement is based on an event-by-event likelihood which depends on both the sample purity and the value of the top-quark mass, using the 90 possible jet-to-parton assignments in the six-jet final state. The joint likelihood of 290 selected events yields a value of Mtop = 177.1 ± 4.9 (stat) ± 4.7 (syst) GeV/c².
Classification of cassava genotypes based on qualitative and quantitative data.
Oliveira, E J; Oliveira Filho, O S; Santos, V S
2015-02-02
We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative (continuous) traits. We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura and evaluated them for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted a joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t², and maximum likelihood criteria, we identified five and four groups based on the quantitative-trait and joint analyses, respectively. The smaller number of groups identified by the joint analysis may be related to the nature of the data. Quantitative data, on the other hand, are more subject to environmental effects on phenotype expression, so part of the resulting differentiation among accessions may reflect non-genetic rather than genetic differences. For most of the accessions, the maximum probability of classification was >0.90, independent of the type of trait analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data implied that analyses of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. When joint analysis was used, the means and ranges of genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when several phenotypic traits are available, as in the case of genetic resources and breeding programs.
Benoit, Roland G.; Schacter, Daniel L.
2015-01-01
It has been suggested that the simulation of hypothetical episodes and the recollection of past episodes are supported by fundamentally the same set of brain regions. The present article specifies this core network via Activation Likelihood Estimation (ALE). Specifically, a first meta-analysis revealed joint engagement of core network regions during episodic memory and episodic simulation. These include parts of the medial surface, the hippocampus and parahippocampal cortex within the medial temporal lobes, and the lateral temporal and inferior posterior parietal cortices on the lateral surface. Both capacities also jointly recruited additional regions such as parts of the bilateral dorsolateral prefrontal cortex. All of these core regions overlapped with the default network. Moreover, it has further been suggested that episodic simulation may require a stronger engagement of some of the core network’s nodes as well as the recruitment of additional brain regions supporting control functions. A second ALE meta-analysis indeed identified such regions that were consistently more strongly engaged during episodic simulation than episodic memory. These comprised the core-network clusters located in the left dorsolateral prefrontal cortex and posterior inferior parietal lobe and other structures distributed broadly across the default and fronto-parietal control networks. Together, the analyses determine the set of brain regions that allow us to experience past and hypothetical episodes, thus providing an important foundation for studying the regions’ specialized contributions and interactions. PMID:26142352
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of an event-related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ from trial to trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at an expected delay error of 10°. PMID:27322267
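To make the joint ML idea concrete, here is a minimal Python sketch (our illustration, not the authors' two schemes; the function name, integer lags, and circular shifts are simplifying assumptions). Under i.i.d. Gaussian noise, jointly maximizing the likelihood over per-trial delays and a shared ERP template reduces to alternating cross-correlation alignment with re-averaging:

```python
import numpy as np

def joint_ml_delays(trials, max_lag, n_iter=10):
    """Jointly estimate per-trial delays and a shared ERP template.

    `trials` is (n_trials, n_samples). Alternates (1) aligning each
    trial to the current template by maximizing cross-correlation and
    (2) re-estimating the template as the mean of the aligned trials;
    each step cannot decrease the joint Gaussian likelihood.
    """
    template = trials.mean(axis=0)              # initial plain average
    delays = np.zeros(len(trials), dtype=int)
    lags = np.arange(-max_lag, max_lag + 1)
    for _ in range(n_iter):
        for i, x in enumerate(trials):
            scores = [np.dot(np.roll(x, -k), template) for k in lags]
            delays[i] = lags[int(np.argmax(scores))]
        template = np.mean([np.roll(x, -d)
                            for x, d in zip(trials, delays)], axis=0)
    return delays, template
```

Unlike schemes that use the raw average as a fixed reference, each refinement of the template here feeds back into the next round of delay estimates.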
Results and Analysis from Space Suit Joint Torque Testing
NASA Technical Reports Server (NTRS)
Matty, Jennifer E.; Aitchison, Lindsay
2009-01-01
A space suit's mobility is critical to an astronaut's ability to perform work efficiently. As mobility increases, the astronaut can perform tasks for longer durations with less fatigue. The term mobility, with respect to space suits, is defined in terms of two key components: joint range of motion and joint torque. Individually, these measures describe the path through which a joint travels and the force required to move it through that path. Previous space suit mobility requirements were defined as the collective result of these two measures and verified by the completion of discrete functional tasks. While a valid way to impose mobility requirements, such a method does necessitate a solid understanding of the operational scenarios in which the final suit will be performing. Because the Constellation space suit system requirements are being finalized with a relatively immature concept of operations, the Space Suit Element team elected to define mobility in terms of its constituent parts to increase the likelihood that the future pressure garment will be mobile enough to enable a broad scope of undefined exploration activities. The range of motion requirements were defined by measuring the ranges of motion test subjects achieved while performing a series of joint-maximizing tasks in a variety of flight and prototype space suits. The definition of joint torque requirements has proved more elusive. NASA evaluated several different approaches to the problem before deciding to generate requirements based on unmanned joint torque evaluations of six different space suit configurations being articulated through 16 separate joint movements. This paper discusses the experiment design, data analysis and results, and the process used to determine the final values for the Constellation pressure garment joint torque requirements.
Position of the prosthesis and the incidence of dislocation following total hip replacement.
He, Rong-xin; Yan, Shi-gui; Wu, Li-dong; Wang, Xiang-hua; Dai, Xue-song
2007-07-05
Dislocation is the second most common complication of hip replacement surgery, and impingement of the prosthesis is believed to be the fundamental cause. The present study employed Solidworks 2003 and MSC-Nastran software to analyze three-dimensional variables in order to investigate how to prevent dislocation following hip replacement surgery. Computed tomography (CT) imaging was used to collect femoral outline data, and Solidworks 2003 software was used to construct the cup model with variable parameters. Nastran software was used to evaluate dislocation at different prosthesis positions and different geometrical shapes. Three-dimensional movement and results from the finite element method were analyzed, and the values of the dislocation resistance index (DRI), range of motion to impingement (ROM-I), range of motion to dislocation (ROM-D), and peak resisting moment (PRM) were determined. Computer simulation was used to evaluate the range of motion of the hip joint at different prosthesis positions. Finite element analysis showed: (1) Increasing the head/neck ratio increased the ROM-I values and moderately increased the ROM-D and PRM values. Increasing the head size significantly increased the PRM and, to some extent, the ROM-I and ROM-D values, suggesting a lower likelihood of dislocation. (2) Increasing the anteversion angle increased the ROM-I, ROM-D, PRM, energy required for dislocation (Energy-D), and DRI values, which would increase the stability of the joint. (3) As the chamber angle was increased, ROM-I, ROM-D, PRM, Energy-D, and DRI values were increased, resulting in improved joint stability. Chamber angles exceeding 55 degrees, however, resulted in increases in ROM-I and ROM-D values but decreases in PRM, Energy-D, and DRI values, which, in turn, increased the likelihood of dislocation. (4) The cup, which was reduced posteriorly, reduced ROM-I values (2.1-5.3 degrees) and increased the DRI value (0.073), suggesting that the posterior high side had the effect of a 10-degree anteversion angle. Increasing the head/neck ratio increases joint stability. The posterior high side reduced the range of motion of the joint but increased joint stability. Increasing the anteversion angle increases DRI values and thus improves joint stability. Increasing the chamber angle increases DRI values and improves joint stability; however, at angles exceeding 55 degrees, further increases in the chamber angle result in decreased DRI values and reduce the stability of the joint.
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
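A drastically simplified, image-space illustration of such alternating updates follows (a hedged sketch: the actual method works with projection data and anatomical penalty terms; all names here are hypothetical):

```python
import numpy as np

def joint_roi_tac(voxel_tacs, n_rois, n_iter=50, seed=0):
    """Alternate ROI-label and mean-TAC updates (unpenalized sketch).

    `voxel_tacs` is (n_voxels, n_frames). Under i.i.d. Gaussian noise,
    (1) re-estimating each ROI's TAC as its members' mean curve and
    (2) re-assigning each voxel to the best-fitting TAC monotonically
    increase the joint log-likelihood, mirroring the paper's
    guaranteed-ascent property for its richer objective.
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_rois, size=len(voxel_tacs))
    for _ in range(n_iter):
        tacs = np.array([voxel_tacs[labels == r].mean(axis=0)
                         if np.any(labels == r)
                         else voxel_tacs[rng.integers(len(voxel_tacs))]
                         for r in range(n_rois)])
        d2 = ((voxel_tacs[:, None, :] - tacs[None, :, :]) ** 2).sum(axis=2)
        new = d2.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels, tacs
```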
A Comparative Study of Co-Channel Interference Suppression Techniques
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas
1997-01-01
We describe three methods of combating co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from "signal switching," which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is also discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.
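For intuition, when the channel has no memory and the amplitudes are known, the joint trellis for two BPSK signals collapses to an independent four-hypothesis search per symbol. The Python sketch below shows this reduced form (our simplification; the multipath case discussed in the paper re-introduces memory and a true trellis):

```python
import numpy as np
from itertools import product

def joint_ml_bpsk(r, a1, a2):
    """Symbol-by-symbol joint ML detection of two co-channel BPSK signals.

    For each received sample r[n] = a1*b1 + a2*b2 + noise, picks the
    bit pair (b1, b2) in {-1,+1}^2 minimizing the squared error, i.e.,
    maximizing the Gaussian likelihood.
    """
    hyps = np.array(list(product([-1, 1], repeat=2)))
    bits = np.empty((len(r), 2), dtype=int)
    for n, rn in enumerate(r):
        errs = np.abs(rn - (a1 * hyps[:, 0] + a2 * hyps[:, 1])) ** 2
        bits[n] = hyps[int(np.argmin(errs))]
    return bits
```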
Overcoming Barriers to Firewise Actions by Residents. Final Report to Joint Fire Science Program
James D. Absher; Jerry J. Vaske; Katie M. Lyon
2013-01-01
Encouraging the public to take action (e.g., creating defensible space) that can reduce the likelihood of wildfire damage and decrease the likelihood of injury is a common approach to increasing wildfire safety and damage mitigation. This study was designed to improve our understanding of both individual and community actions that homeowners currently do or might take...
Statistical characteristics of the sequential detection of signals in correlated noise
NASA Astrophysics Data System (ADS)
Averochkin, V. A.; Baranov, P. E.
1985-10-01
A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of deterministic and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.
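The two-threshold Wald rule itself is compact. A Python sketch for the simpler case of i.i.d. (uncorrelated) Gaussian samples with known means follows; the correlated-noise setting analyzed in the paper would replace the per-sample increments with whitened ones (names are ours):

```python
import numpy as np

def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential probability ratio test between two Gaussian means.

    Accumulates the log-likelihood ratio of H1 (mean mu1) versus H0
    (mean mu0) and stops at the first crossing of Wald's approximate
    thresholds log(beta/(1-alpha)) and log((1-beta)/alpha). Returns
    (decision, samples_used); the random stopping time is the duration
    whose distribution the paper characterizes.
    """
    lower = np.log(beta / (1 - alpha))
    upper = np.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```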
Working on a Standard Joint Unit: A pilot test.
Casajuana, Cristina; López-Pelayo, Hugo; Mercedes Balcells, María; Miquel, Laia; Teixidó, Lídia; Colom, Joan; Gual, Antoni
2017-09-29
Assessing cannabis consumption remains complex because there are no reliable registration systems. We tested the feasibility of establishing a Standard Joint Unit (SJU) that considers the main cannabinoids with implications for health, using a naturalistic approach. Methodology: a pilot study with current cannabis users from four settings in Barcelona: universities, nightclubs, an out-patient mental health service, and cannabis associations. We designed and administered a questionnaire on cannabis use patterns and determined the willingness to donate a joint for analysis. Descriptive statistics were used to analyze the data. Forty volunteers answered the questionnaire (response rate 95%); most were men (72.5%) and young adults (median age 24.5 years; IQR 8.75 years) who consume daily or nearly daily (70%). Most participants consume marihuana (85%) and roll their joints with a median of 0.25 g of marihuana. Two out of three (67.5%) stated they were willing to donate a joint. Obtaining an SJU with the planned methodology proved feasible. Pre-testing resulted in an improvement of the questionnaire and in remuneration to incentivize donations. Establishing an SJU is essential to improve our knowledge of cannabis-related outcomes.
Michelsen, Brigitte; Kristianslund, Eirik Klami; Sexton, Joseph; Hammer, Hilde Berner; Fagerli, Karen Minde; Lie, Elisabeth; Wierød, Ada; Kalstad, Synøve; Rødevand, Erik; Krøll, Frode; Haugeberg, Glenn; Kvien, Tore K
2017-11-01
To investigate the predictive value of baseline depression/anxiety on the likelihood of achieving joint remission in rheumatoid arthritis (RA) and psoriatic arthritis (PsA) as well as the associations between baseline depression/anxiety and the components of the remission criteria at follow-up. We included 1326 patients with RA and 728 patients with PsA from the prospective observational NOR-DMARD study starting first-time tumour necrosis factor inhibitors or methotrexate. The predictive value of depression/anxiety on remission was explored in prespecified logistic regression models and the associations between baseline depression/anxiety and the components of the remission criteria in prespecified multiple linear regression models. Baseline depression/anxiety according to EuroQoL-5D-3L, Short Form-36 (SF-36) Mental Health subscale ≤56 and SF-36 Mental Component Summary ≤38 negatively predicted 28-joint Disease Activity Score <2.6, Simplified Disease Activity Index ≤3.3, Clinical Disease Activity Index ≤2.8, ACR/EULAR Boolean and Disease Activity Index for Psoriatic Arthritis ≤4 remission after 3 and 6 months of treatment in RA (p≤0.008) and partly in PsA (p from 0.001 to 0.73). Baseline depression/anxiety was associated with increased patient's and evaluator's global assessment, tender joint count and joint pain in RA at follow-up, but not with swollen joint count and acute phase reactants. Depression and anxiety may reduce the likelihood of joint remission based on composite scores in RA and PsA and should be taken into account in individual patients when making a shared decision on a treatment target. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Exact Calculation of the Joint Allele Frequency Spectrum for Isolation with Migration Models.
Kern, Andrew D; Hey, Jody
2017-09-01
Population genomic datasets collected over the past decade have spurred interest in developing methods that can utilize massive numbers of loci for inference of demographic and selective histories of populations. The allele frequency spectrum (AFS) provides a convenient statistic for such analysis, and, accordingly, much attention has been paid to predicting theoretical expectations of the AFS under a number of different models. However, to date, exact solutions for the joint AFS of two or more populations under models of migration and divergence have not been found. Here, we present a novel Markov chain representation of the coalescent on the state space of the joint AFS that allows for rapid, exact calculation of the joint AFS under isolation with migration (IM) models. In turn, we show how our Markov chain method, in the context of composite likelihood estimation, can be used for accurate inference of parameters of the IM model using SNP data. Lastly, we apply our method to recent whole genome datasets from African Drosophila melanogaster. Copyright © 2017 Kern and Hey.
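Once the expected joint AFS has been computed (e.g., by the Markov chain method), composite-likelihood inference over SNPs reduces to a count-weighted sum of log-probabilities. A minimal Python sketch (our illustration; the expected AFS is assumed given):

```python
import numpy as np

def composite_loglik(obs_counts, expected_afs):
    """Composite log-likelihood of an observed joint AFS.

    `obs_counts[i, j]` is the number of SNPs with derived-allele count
    i in population 1 and j in population 2; `expected_afs` holds the
    model-predicted (unnormalized) weights of each joint frequency
    class. Sites are treated as independent, hence "composite".
    """
    p = expected_afs / expected_afs.sum()
    p = np.clip(p, 1e-300, None)          # guard against log(0)
    mask = obs_counts > 0
    return float(np.sum(obs_counts[mask] * np.log(p[mask])))
```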
Harbert, Robert S; Nixon, Kevin C
2015-08-01
Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterizations to estimate the expected climate. Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
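A one-dimensional sketch of the coexistence-likelihood idea (ours, and deliberately crude: each species' climate tolerance is summarized by a normal fit, whereas CRACLE's tolerance characterizations are richer):

```python
import numpy as np
from scipy.stats import norm

def cracle_estimate(species_obs, grid):
    """Climate value maximizing the joint coexistence log-likelihood.

    `species_obs` maps species -> array of climate values (e.g., mean
    annual temperature) at that species' occurrence records; `grid` is
    the set of candidate climate values to score.
    """
    joint = np.zeros_like(grid, dtype=float)
    for obs in species_obs.values():
        mu, sd = np.mean(obs), np.std(obs, ddof=1)
        joint += norm.logpdf(grid, loc=mu, scale=sd)  # sum of logs = product
    return grid[int(np.argmax(joint))]
```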
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ² distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
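In the Gaussian/vector-autoregressive case the equivalence is easy to exhibit: transfer entropy is half the log ratio of residual variances from two nested regressions, i.e., a per-sample log-likelihood ratio. A minimal univariate sketch (our names; order-p autoregressions):

```python
import numpy as np

def transfer_entropy_gaussian(x, y, p=1):
    """Gaussian transfer entropy from y to x via nested AR regressions.

    Fits x[t] on its own past (restricted) and on the past of both x
    and y (full); TE = 0.5*log(var_restricted/var_full). With N usable
    samples, 2*N*TE is the log-likelihood ratio statistic, asymptotically
    chi-squared with p degrees of freedom under zero transfer entropy.
    """
    n = len(x)
    x_past = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    y_past = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    def resid_var(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.mean((target - design @ beta) ** 2)
    full = np.hstack([x_past, y_past])
    return 0.5 * np.log(resid_var(x_past) / resid_var(full))
```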
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
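One way to make the compression/classification coupling concrete is a class-wise codebook, in which the selected codeword index simultaneously encodes a pixel and carries its terrain label. This is a hedged sketch of the general idea, not the paper's constrained two-stage design:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def class_wise_vq(X, y, codes_per_class=8):
    """Train one small MSE codebook per class; encode via nearest codeword.

    `X` is (n_pixels, n_bands) spectra, `y` their class labels. The
    returned encoder maps pixels to (code index, implied class), so
    decompression and classification use the same index stream.
    """
    books, code_labels = [], []
    for c in np.unique(y):
        centroids, _ = kmeans2(X[y == c].astype(float),
                               codes_per_class, minit="++")
        books.append(centroids)
        code_labels.extend([c] * len(centroids))
    codebook, code_labels = np.vstack(books), np.array(code_labels)

    def encode(pixels):
        d2 = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        return idx, code_labels[idx]

    return codebook, encode
```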
On the Existence and Uniqueness of JML Estimates for the Partial Credit Model
ERIC Educational Resources Information Center
Bertoli-Barsotti, Lucio
2005-01-01
A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…
Hip Implant Modified To Increase Probability Of Retention
NASA Technical Reports Server (NTRS)
Canabal, Francisco, III
1995-01-01
Modification in design of hip implant proposed to increase likelihood of retention of implant in femur after hip-repair surgery. Decreases likelihood of patient distress and expense associated with repetition of surgery after failed implant procedure. Intended to provide more favorable flow of cement used to bind implant in proximal extreme end of femur, reducing structural flaws causing early failure of implant/femur joint.
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods, including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE), over a wide range of sampling sparsity and exposure levels.
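Schematically, the alternation is two nested optimizations. The Python sketch below substitutes quadratic surrogates for the actual transmission log-likelihood and a generic user-supplied `warp` for the B-spline free-form deformation (all names and simplifications are ours, not the dPIRPLE implementation):

```python
import numpy as np
from scipy.optimize import minimize

def alternating_reg_recon(y, A, prior, warp, lam0, mu0,
                          beta=1.0, n_outer=10):
    """Alternate deformation updates and image updates (sketch).

    Minimizes the negative of a joint objective: a data-fit term
    ||y - A @ mu||^2 plus a penalty beta*||mu - warp(prior, lam)||^2
    tying the image `mu` to a deformed prior image.
    """
    lam = np.atleast_1d(np.asarray(lam0, float))
    mu = np.asarray(mu0, float)
    for _ in range(n_outer):
        # (1) registration step: fit deformation with the image fixed
        lam = minimize(lambda l: np.sum((mu - warp(prior, l)) ** 2), lam).x
        # (2) reconstruction step: fit image with the deformation fixed
        moved = warp(prior, lam)
        mu = minimize(lambda m: np.sum((y - A @ m) ** 2)
                      + beta * np.sum((m - moved) ** 2), mu).x
    return mu, lam
```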
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and a time-to-event outcome. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. Here, we propose an influence-function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been carried out to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Espin-Garcia, Osvaldo; Craiu, Radu V; Bull, Shelley B
2018-02-01
We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS SNP (single nucleotide polymorphism) serves as a surrogate covariate in inferring the association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify the efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulation. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared to marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. © 2017 The Authors. Genetic Epidemiology Published by Wiley Periodicals, Inc.
Ganachari, Malathesha; Ruiz-Morales, Jorge A; Gomez de la Torre Pretell, Juan C; Dinh, Jeffrey; Granados, Julio; Flores-Villanueva, Pedro O
2010-01-25
We previously reported that the -2518 MCP-1 genotype GG increases the likelihood of developing tuberculosis (TB) in non-BCG-vaccinated Mexicans and Koreans. Here, we tested the hypothesis that this genotype, alone or together with the -1607 MMP-1 functional polymorphism, increases the likelihood of developing TB in BCG-vaccinated individuals. We conducted population-based case-control studies of BCG-vaccinated individuals in Mexico and Peru that included 193 TB cases and 243 healthy tuberculin-positive controls from Mexico and 701 TB cases and 796 controls from Peru. We also performed immunohistochemistry (IHC) analysis of lymph nodes from carriers of relevant two-locus genotypes and in vitro studies to determine how these variants may operate to increase the risk of developing active disease. We report that a joint effect between the -2518 MCP-1 genotype GG and the -1607 MMP-1 genotype 2G/2G consistently increases the odds of developing TB 3.59-fold in Mexicans and 3.9-fold in Peruvians. IHC analysis of lymph nodes indicated that carriers of the two-locus genotype MCP-1 GG MMP-1 2G/2G express the highest levels of both MCP-1 and MMP-1. Carriers of these susceptibility genotypes might be at increased risk of developing TB because they produce high levels of MCP-1, which enhances the induction of MMP-1 production by M. tuberculosis-sonicate antigens to higher levels than in carriers of the other two-locus MCP-1 MMP-1 genotypes studied. This notion was supported by in vitro experiments and a luciferase-based promoter activity assay. MMP-1 may destabilize granuloma formation and promote tissue damage and disease progression early in the infection. Our findings may foster the development of new and personalized therapeutic approaches targeting MCP-1 and/or MMP-1.
Pseudo and conditional score approach to joint analysis of current count and current status data.
Wen, Chi-Chung; Chen, Yi-Hau
2018-04-17
We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censorship, also known in the literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent, with rates of the square and cubic roots of the sample size, respectively. Asymptotic normality with closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to data from a fracture-osteoporosis survey to identify risk factors jointly for fracture and osteoporosis in the elderly, while accounting for association between the two events within a subject. © 2018, The International Biometric Society.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
Assessment of articular disc displacement of temporomandibular joint with ultrasound.
Razek, Ahmed Abdel Khalek Abdel; Al Mahdy Al Belasy, Fouad; Ahmed, Wael Mohamed Said; Haggag, Mai Ahmed
2015-06-01
To assess the pattern of articular disc displacement in patients with internal derangement (ID) of the temporomandibular joint (TMJ) with ultrasound. A prospective study was conducted on 40 TMJs of 20 patients (3 male, 17 female; mean age 26.1 years) with ID of the TMJ. They underwent high-resolution ultrasound and MR imaging of the TMJ. The MR images were used as the gold standard for calculating the sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), and negative likelihood ratio (NLR) of ultrasound for the diagnosis of anterior or sideway displacement of the disc. Anterior disc displacement was seen in 26 joints at MR and 22 joints at ultrasound. The diagnosis of anterior displacement by ultrasound had a sensitivity of 79.3%, specificity of 72.7%, accuracy of 77.5%, PPV of 88.5%, NPV of 57.1%, PLR of 2.9, and NLR of 0.34. Sideway displacement of the disc was seen in four joints at MR and three joints at ultrasound. The diagnosis of sideway displacement by ultrasound had a sensitivity of 75%, specificity of 63.6%, accuracy of 66.7%, PPV of 42.8%, NPV of 87.5%, PLR of 2.06, and NLR of 0.39. We conclude that ultrasound is a non-invasive imaging modality suitable for assessment of anterior and sideway displacement of the articular disc in patients with ID of the TMJ.
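For reference, every index reported above derives from the 2x2 table of ultrasound findings against the MR gold standard; a small Python helper with the generic formulas (not tied to this study's counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Accuracy measures of a test against a gold standard."""
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "PPV": tp / (tp + fp),            # positive predictive value
        "NPV": tn / (tn + fn),            # negative predictive value
        "PLR": sens / (1 - spec),         # positive likelihood ratio
        "NLR": (1 - sens) / spec,         # negative likelihood ratio
    }
```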
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
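The synthetic likelihood is simple to compute once a simulator and summary statistics are chosen. A minimal Python sketch (the simulator and the choice of statistics are user-supplied assumptions; names are ours):

```python
import numpy as np

def synthetic_loglik(theta, observed_stats, simulate, n_sim=500):
    """Gaussian synthetic log-likelihood of parameters `theta`.

    `simulate(theta)` must return one vector of summary statistics
    from the dynamic model; replicates are used to estimate their mean
    and covariance, and the observed statistics are scored under the
    implied multivariate normal.
    """
    S = np.array([simulate(theta) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])
    d = observed_stats - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet
                   + len(d) * np.log(2 * np.pi))
```

This function can be plugged directly into a Metropolis-Hastings sampler in place of an exact log-likelihood.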
Exact likelihood evaluations and foreground marginalization in low resolution WMAP data
NASA Astrophysics Data System (ADS)
Slosar, Anže; Seljak, Uroš; Makarov, Alexey
2004-06-01
The large scale anisotropies of Wilkinson Microwave Anisotropy Probe (WMAP) data have attracted a lot of attention and have been a source of controversy, with many favorite cosmological models being apparently disfavored by the power spectrum estimates at low l. All the existing analyses of theoretical models are based on approximations for the likelihood function, which are likely to be inaccurate on large scales. Here we present exact evaluations of the likelihood of the low multipoles by direct inversion of the theoretical covariance matrix for low resolution WMAP maps. We project out the unwanted galactic contaminants using the WMAP derived maps of these foregrounds. This improves over the template based foreground subtraction used in the original analysis, which can remove some of the cosmological signal and may lead to a suppression of power. As a result we find an increase in power at low multipoles. For the quadrupole the maximum likelihood values are rather uncertain and vary between 140 and 220 μK². On the other hand, the probability distribution away from the peak is robust and, assuming a uniform prior between 0 and 2000 μK², the probability of having the true value above 1200 μK² (as predicted by the simplest cold dark matter model with a cosmological constant) is 10%, a factor of 2.5 higher than predicted by the WMAP likelihood code. We do not find the correlation function to be unusual beyond the low quadrupole value. We develop a fast likelihood evaluation routine that can be used instead of WMAP routines for low l values. We apply it to the Markov chain Monte Carlo analysis to compare the cosmological parameters between the two cases. The new analysis of WMAP either alone or jointly with the Sloan Digital Sky Survey (SDSS) and the Very Small Array (VSA) data reduces the evidence for running to less than 1σ, giving α_s = -0.022 ± 0.033 for the combined case. The new analysis prefers about a 1σ lower value of Ω_m, a consequence of an increased integrated Sachs-Wolfe (ISW) effect contribution required by the increase in the spectrum at low l. These results suggest that the details of foreground removal and full likelihood analysis are important for parameter estimation from the WMAP data. They are robust in the sense that they do not change significantly with frequency, mask, or details of foreground template marginalization. The marginalization approach presented here is the most conservative method to remove the foregrounds and should be particularly useful in the analysis of polarization, where foreground contamination may be much more severe.
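At its core, the exact evaluation is a pixel-space Gaussian log-likelihood in which foreground templates are marginalized by assigning them effectively infinite variance. A schematic Python sketch of that computation (our simplification, valid up to template-independent constants):

```python
import numpy as np

def map_loglik(data_map, cov_signal, cov_noise, fg_templates=(), big=1e6):
    """Exact Gaussian log-likelihood of a low-resolution sky map.

    The map is modeled as signal + noise with known pixel-pixel
    covariances; each foreground template is added to the covariance
    with a very large variance `big`, which effectively projects that
    mode out of the likelihood (conservative marginalization).
    """
    C = cov_signal + cov_noise
    for t in fg_templates:
        C = C + big * np.outer(t, t)
    _, logdet = np.linalg.slogdet(C)
    chi2 = data_map @ np.linalg.solve(C, data_map)
    return -0.5 * (chi2 + logdet + len(data_map) * np.log(2 * np.pi))
```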
Planning and conducting medical support to joint operations.
Hughes, A S
2000-01-01
Operations are core business for all of us, and the PJHQ medical cell is at the heart of this process. With the likelihood of a continuing UK presence in the Balkans for some time to come, the challenge of meeting this and any other new operational commitments will continue to demand a flexible and innovative approach from all concerned. These challenges, together with the Joint and multinational aspects of the job, make the PJHQ medical cell a demanding but rewarding place to work and provide a valuable Joint staff training opportunity for the RNMS.
Interpreting DNA mixtures with the presence of relatives.
Hu, Yue-Qing; Fung, Wing K
2003-02-01
The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.
Semiparametric time-to-event modeling in the presence of a latent progression event.
Rice, John D; Tsodikov, Alex
2017-06-01
In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. © 2016, The International Biometric Society.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
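The identity behind the method, log p(y) = ∫₀¹ E_β[log L] dβ, turns marginal-likelihood evaluation into a quadrature over power-posterior expectations. A minimal Python sketch (assuming MCMC draws at each power coefficient β are already available):

```python
import numpy as np

def thermodynamic_integration(loglik_draws, betas):
    """Marginal log-likelihood via path sampling.

    `betas` is an increasing grid from 0 (prior) to 1 (posterior);
    `loglik_draws[i]` holds log-likelihood values of samples from the
    power posterior proportional to L(theta)**betas[i] * prior(theta).
    Integrates the mean log-likelihood over beta by the trapezoidal rule.
    """
    betas = np.asarray(betas, float)
    means = np.array([np.mean(ll) for ll in loglik_draws])
    return float(np.sum(np.diff(betas) * (means[:-1] + means[1:]) / 2.0))
```

Denser β grids near 0, where the integrand typically changes fastest, improve the quadrature accuracy.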
Devesa, V; Rovesti, G L; Urrutia, P G; Sanroman, F; Rodriguez-Quiros, J
2015-06-01
The objective of this study was to evaluate the technical feasibility and efficacy of a joint distraction technique using a traction stirrup to facilitate shoulder arthroscopy, and to assess potential soft tissue damage. Twenty shoulders were evaluated radiographically before distraction. Distraction was applied with loads from 40 N up to 200 N, in 40 N increments, and the joint space was recorded at each step by radiographic images. The effects of joint flexion and intra-articular air injection at maximum load were evaluated. Radiographic evaluation was performed after distraction to evaluate ensuing joint laxity. Joint distraction by the traction stirrup technique produces a significant increase in the joint space; an increase in joint laxity could not be inferred from standard and stress radiographs. However, further clinical studies are required to evaluate potential neurovascular complications. A wider joint space may be useful to facilitate arthroscopy, reducing the likelihood of iatrogenic damage to intra-articular structures. Copyright © 2015 Elsevier Ltd. All rights reserved.
He, Y; Li, Y; Lai, J; Wang, D; Zhang, J; Fu, P; Yang, X; Qi, L
2013-10-01
To examine nationally representative dietary patterns and their joint effects with physical activity on the likelihood of metabolic syndrome (MS) among 20,827 Chinese adults. The CNNHS was a nationally representative cross-sectional observational study. Metabolic syndrome was defined according to the Joint Interim Statement definition. The "Green Water" dietary pattern, characterized by high intakes of rice and vegetables and moderate intakes of animal foods, was associated with the lowest prevalence of MS (15.9%). Compared with the "Green Water" dietary pattern, the "Yellow Earth" dietary pattern, characterized by high intakes of refined cereal products, tubers, cooking salt and salted vegetables, was associated with significantly elevated odds of MS (odds ratio 1.66, 95% CI: 1.40-1.96) after adjustment for age, sex, socioeconomic status and lifestyle factors. The "Western/new affluence" dietary pattern, characterized by higher consumption of beef/lamb, fruit, eggs, poultry and seafood, was also significantly associated with MS (odds ratio 1.37, 95% CI: 1.13-1.67). Physical activity showed significant interactions with the dietary patterns in relation to MS risk (P for interaction = 0.008). In the joint analysis, participants who combined sedentary activity with the "Yellow Earth" or the "Western/new affluence" dietary pattern had more than three times (95% CI: 2.8-6.1) higher odds of MS than physically active participants following the "Green Water" dietary pattern. Our findings from these large, nationally representative Chinese data indicate that dietary patterns affect the likelihood of MS, and that combining a healthy dietary pattern with an active lifestyle may offer greater benefit in the prevention of MS. Copyright © 2012 Elsevier B.V. All rights reserved.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. In this way, a multivariate posterior density can be reported efficiently by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this respect. Further, we select marginal posterior samples from Planck data with several distinct, strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
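As a one-dimensional illustration of the Gaussianization step, the snippet below applies a maximum-likelihood Box-Cox transform to skewed samples using SciPy. The paper's method generalizes this to multivariate mappings, so this toy is an assumption-laden stand-in rather than the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=1.5, size=5000)   # skewed "posterior" draws

# SciPy chooses the Box-Cox lambda by maximizing the profile log-likelihood.
transformed, lam = stats.boxcox(sample)
print(f"ML Box-Cox lambda: {lam:.3f}")
print(f"skewness before: {stats.skew(sample):.2f}, after: {stats.skew(transformed):.2f}")

# The Gaussianized draws can now be summarized analytically by their mean and
# covariance, enabling closed-form reporting and evidence-type integrals.
```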
Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.
Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien
2017-01-01
Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images, with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction and motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.
Robust joint score tests in the application of DNA methylation data analysis.
Li, Xuan; Fu, Yuejiao; Wang, Xiaogang; Qiu, Weiliang
2018-05-18
Recently, differential variability has been shown to be valuable in evaluating the association of DNA methylation with the risks of complex human diseases. Statistical tests based on both differential methylation level and differential variability can be more powerful than those based only on differential methylation level. Ahn and Wang (2013) proposed a joint score test (AW) to simultaneously detect differential methylation and differential variability. However, AW's method seems to be quite conservative and has not been fully compared with existing joint tests. We propose three improved joint score tests, namely iAW.Lev, iAW.BF, and iAW.TM, and make extensive comparisons with the joint likelihood ratio test (jointLRT), the Kolmogorov-Smirnov (KS) test, and the AW test. Systematic simulation studies showed that: (1) the three improved tests performed better (i.e., they had larger power while keeping nominal Type I error rates) than the other three tests for data with outliers and with different variances between cases and controls; and (2) for data from normal distributions, the three improved tests had slightly lower power than jointLRT and AW. Analyses of two Illumina HumanMethylation27 data sets (GSE37020 and GSE20080) and one Illumina Infinium MethylationEPIC data set (GSE107080) demonstrated that the three improved tests had higher true validation rates than jointLRT, KS, and AW. The three proposed joint score tests are thus robust against violation of the normality assumption and the presence of outlying observations in comparison with the three existing tests. Among the three proposed tests, iAW.BF appears to be the most robust and effective across all simulated scenarios and in the real data analyses.
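The snippet below sketches the general idea behind a joint location/scale test: combine a mean-difference statistic with a robust variability statistic into a single chi-squared test on 2 degrees of freedom. It is an illustrative stand-in, not the iAW.Lev, iAW.BF, or iAW.TM procedures, whose exact forms are defined in the paper.

```python
import numpy as np
from scipy import stats

def joint_mean_variance_test(cases, controls):
    # Mean component: Welch t-statistic, squared to give an approx chi2(1) term.
    t_mean, _ = stats.ttest_ind(cases, controls, equal_var=False)
    # Variability component: Brown-Forsythe (Levene with median centring),
    # which is robust to outliers and non-normality.
    w_var, _ = stats.levene(cases, controls, center='median')
    # Under normality the two components are approximately independent,
    # so their sum is approximately chi2(2) under the null (large samples).
    chi2 = t_mean ** 2 + w_var
    return chi2, stats.chi2.sf(chi2, df=2)

rng = np.random.default_rng(2)
cases = rng.normal(0.3, 1.5, 100)      # shifted mean and inflated variance
controls = rng.normal(0.0, 1.0, 100)
stat, p = joint_mean_variance_test(cases, controls)
print(f"joint statistic = {stat:.2f}, p = {p:.3g}")
```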
Henriksen, Marius; Creaby, Mark W; Lund, Hans; Juhl, Carsten; Christensen, Robin
2014-01-01
Objective: We performed a systematic review and meta-analysis and assessed the evidence supporting a causal link between knee joint loading during walking and structural knee osteoarthritis (OA) progression. Design: Systematic review, meta-analysis and application of Bradford Hill's considerations on causation. Data sources: We searched MEDLINE, Scopus, AMED, CINAHL and SportsDiscus for prospective cohort studies and randomised controlled trials (RCTs) from 1950 through October 2013. Study eligibility criteria: We selected cohort studies and RCTs in which estimates of knee joint loading during walking were used to predict structural knee OA progression assessed by X-ray or MRI. Data analyses: Meta-analysis was performed to estimate the combined OR for structural disease progression with higher baseline loading. The likelihood of a causal link between knee joint loading and OA progression was assessed from cohort studies using the Bradford Hill guidelines to derive a 0-4 causation score based on four criteria, and examined for confirmation in RCTs. Results: Of the 1078 potentially eligible articles, 5 prospective cohort studies were included. The studies included a total of 452 patients relating joint loading to disease progression over 12-72 months. There were very serious limitations in the methodological quality of the included studies. The combined OR for disease progression was 1.90 (95% CI 0.85 to 4.25; I² = 77%) for each one-unit increment in baseline knee loading. The combined causation score was 0, indicating no causal association between knee loading and knee OA progression. No RCTs were found to confirm or refute the findings from the cohort studies. Conclusions: There is very limited and low-quality evidence to support a causal link between knee joint loading during walking and structural progression of knee OA. Trial registration number: CRD42012003253. PMID:25031196
Model-based estimation for dynamic cardiac studies using ECT.
Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O
1994-01-01
The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
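For context on the Cramér-Rao comparison mentioned above, the sketch below evaluates the bound for a toy one-parameter Poisson projection model: the Fisher information is I(θ) = Σᵢ (∂λᵢ/∂θ)²/λᵢ and 1/I(θ) lower-bounds the variance of any unbiased estimator. The projection model and parameter names are illustrative assumptions, not the authors' heart model.

```python
import numpy as np

def fisher_information(theta, grad_lam, lam):
    """Poisson data: grad_lam and lam return d lambda_i/d theta and lambda_i."""
    return np.sum(grad_lam(theta) ** 2 / lam(theta))

bins = np.arange(64)
# Toy projection: Gaussian-shaped mean counts over background, centre = theta.
lam = lambda theta: 50.0 * np.exp(-((bins - theta) ** 2) / 200.0) + 5.0
grad_lam = lambda theta: (50.0 * np.exp(-((bins - theta) ** 2) / 200.0)
                          * (bins - theta) / 100.0)

crlb = 1.0 / fisher_information(32.0, grad_lam, lam)
print(f"Cramer-Rao lower bound on var(theta_hat): {crlb:.4f}")
```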
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to its probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model, after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
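A compact sketch of the optBINS criterion follows, using Knuth's relative log posterior for the number M of equal-width bins, given N data points with bin counts n_k; the grid search and random data are for illustration.

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, m):
    """Knuth's relative log posterior for m equal-width bins:
    N log m + logGamma(m/2) - m logGamma(1/2)
    - logGamma(N + m/2) + sum_k logGamma(n_k + 1/2)."""
    n_k, _ = np.histogram(data, bins=m)
    n = data.size
    return (n * np.log(m)
            + gammaln(m / 2.0) - m * gammaln(0.5) - gammaln(n + m / 2.0)
            + np.sum(gammaln(n_k + 0.5)))

rng = np.random.default_rng(3)
data = rng.normal(size=1000)
m_grid = np.arange(2, 101)
log_p = np.array([log_posterior_bins(data, m) for m in m_grid])
print("optimal number of bins:", m_grid[np.argmax(log_p)])
```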
Ohm-Laursen, Line; Nielsen, Morten; Larsen, Stine R; Barington, Torben
2006-01-01
Antibody diversity is created by imprecise joining of the variability (V), diversity (D) and joining (J) gene segments of the heavy and light chain loci. Analysis of rearrangements is complicated by somatic hypermutations and uncertainty concerning the sources of gene segments and the precise way in which they recombine. It has been suggested that D genes with irregular recombination signal sequences (DIR) and chromosome 15 open reading frames (OR15) can replace conventional D genes, that two D genes or inverted D genes may be used, and that the repertoire can be further diversified by heavy chain V gene (VH) replacement. Safe conclusions require large, well-defined sequence samples and algorithms minimizing stochastic assignment of segments. Two computer programs were developed for analysis of heavy chain joints. JointHMM is a profile hidden Markov model, while JointML is a maximum-likelihood-based method taking the length of the joint and the mutational status of the VH gene into account. The programs were applied to a set of 6329 clonally unrelated rearrangements. A conventional D gene was found in 80% of unmutated sequences and 64% of mutated sequences, while D-gene assignment was kept below 5% in artificial (randomly permutated) rearrangements. No evidence for the use of DIR, OR15, multiple D genes or VH replacements was found, while inverted D genes were used in less than 1‰ of the sequences. JointML was shown to have a higher predictive performance for D-gene assignment in mutated and unmutated sequences than four other publicly available programs. An online version (1.0) of JointML is available at http://www.cbs.dtu.dk/services/VDJsolver. PMID:17005006
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling, and it remains challenging to address it in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived assuming a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
Education and black-white interracial marriage.
Gullickson, Aaron
2006-11-01
This article examines competing theoretical claims regarding how an individual's education will affect his or her likelihood of interracial marriage. I demonstrate that prior models of interracial marriage have failed to adequately distinguish the joint and marginal effects of education on interracial marriage and present a model capable of distinguishing these effects. I test this model on black-white interracial marriages using 1980, 1990, and 2000 U.S. census data. The results reveal partial support for status exchange theory within black male-white female unions and strong isolation of lower-class blacks from the interracial marriage market. Structural assimilation theory is not supported because the educational attainment of whites is not related in any consistent fashion to the likelihood of interracial marriage. The strong isolation of lower-class blacks from the interracial marriage market has gone unnoticed in prior research because of the failure of prior methods to distinguish joint and marginal effects.
Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas
NASA Astrophysics Data System (ADS)
Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam
2017-08-01
Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis using copulas is well established, the effects of different parameter estimation methods on the obtained results have not yet been investigated. This research conducts a comparative analysis between the parametric maximum likelihood method and the non-parametric Kendall τ method for copula parameter estimation. The methods were employed to study joint severity-duration probabilities and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDCs) for the three stations. The Q75 index extracted from the FDCs was set as the threshold level to extract drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probability distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov and chi-square tests. The Archimedean Clayton and Frank copulas and the extreme-value Gumbel copula were employed to construct joint cumulative distribution functions (JCDFs) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at Huleilan and Polchehr were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood and root mean square error (RMSE) values. Based on the RMSE values, the nonparametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods for the parametric ML method than for the nonparametric Kendall τ estimation method. The results also showed that the stations located on tributaries (Huleilan and Polchehr) have similar return periods, while the station on the main river (Tang Sazbon) has smaller return periods for drought events with identical duration and severity.
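The sketch below illustrates the nonparametric route described above: invert Kendall's τ for the Gumbel copula parameter (θ = 1/(1 − τ)) and compute a joint "AND" return period via P(D > d, S > s) = 1 − u − v + C(u, v). The synthetic drought series, quantile levels, and mean interarrival time are placeholders, not the Karkheh data.

```python
import numpy as np
from scipy import stats

def gumbel_copula(u, v, theta):
    # Extreme-value Gumbel copula C(u, v; theta), theta >= 1
    return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

rng = np.random.default_rng(4)
duration = rng.gamma(2.0, 30.0, 200)                    # placeholder durations (days)
severity = 0.8 * duration + rng.gamma(2.0, 5.0, 200)    # correlated severities

tau, _ = stats.kendalltau(duration, severity)
theta = 1.0 / (1.0 - tau)                               # Gumbel: tau = 1 - 1/theta

# Non-exceedance probabilities of a design event, normally taken from fitted
# margins; simple quantile levels are used here for illustration.
u, v = 0.9, 0.9
mu = 1.5                                                # mean drought interarrival (years)
p_and = 1.0 - u - v + gumbel_copula(u, v, theta)        # P(D > d and S > s)
print(f"tau = {tau:.2f}, theta = {theta:.2f}, AND return period = {mu / p_and:.1f} years")
```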
NASA Technical Reports Server (NTRS)
Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
Application of logistic regression to case-control association studies involving two causative loci.
North, Bernard V; Curtis, David; Sham, Pak C
2005-01-01
Models in which two susceptibility loci jointly influence the risk of developing disease can be explored using logistic regression analysis. Comparison of likelihoods of models incorporating different sets of disease model parameters allows inferences to be drawn regarding the nature of the joint effect of the loci. We have simulated case-control samples generated assuming different two-locus models and then analysed them using logistic regression. We show that this method is practicable and that, for the models we have used, it can be expected to allow useful inferences to be drawn from sample sizes consisting of hundreds of subjects. Interactions between loci can be explored, but interactive effects do not exactly correspond with classical definitions of epistasis. We have particularly examined the issue of the extent to which it is helpful to utilise information from a previously identified locus when investigating a second, unknown locus. We show that for some models conditional analysis can have substantially greater power while for others unconditional analysis can be more powerful. Hence we conclude that in general both conditional and unconditional analyses should be performed when searching for additional loci.
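A minimal sketch of the nested-model comparison the abstract describes: fit two-locus logistic models with and without an interaction term and compare them by a likelihood-ratio test. The simulated genotype codes (0/1/2 minor-allele counts) and effect sizes are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n = 2000
g1 = rng.binomial(2, 0.3, n)                  # locus 1 genotype
g2 = rng.binomial(2, 0.2, n)                  # locus 2 genotype
logit = -1.0 + 0.4 * g1 + 0.3 * g2 + 0.35 * g1 * g2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # disease status

# Nested models: main effects only vs. main effects + interaction.
X_main = sm.add_constant(np.column_stack([g1, g2]))
X_full = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))
m_main = sm.Logit(y, X_main).fit(disp=0)
m_full = sm.Logit(y, X_full).fit(disp=0)

lr = 2.0 * (m_full.llf - m_main.llf)          # likelihood-ratio statistic
print(f"LRT for interaction: {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.3g}")
```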
Analysis of capture-recapture models with individual covariates using data augmentation
Royle, J. Andrew
2009-01-01
I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.
Root Cause Analysis: Learning from Adverse Safety Events.
Brook, Olga R; Kruskal, Jonathan B; Eisenberg, Ronald L; Larson, David B
2015-10-01
Serious adverse events continue to occur in clinical practice, despite our best preventive efforts. It is essential that radiologists, both as individuals and as a part of organizations, learn from such events and make appropriate changes to decrease the likelihood that such events will recur. Root cause analysis (RCA) is a process to (a) identify factors that underlie variation in performance or that predispose an event toward undesired outcomes and (b) allow for development of effective strategies to decrease the likelihood of similar adverse events occurring in the future. An RCA process should be performed within the environment of a culture of safety, focusing on underlying system contributors and, in a confidential manner, taking into account the emotional effects on the staff involved. The Joint Commission now requires that a credible RCA be performed within 45 days for all sentinel or major adverse events, emphasizing the need for all radiologists to understand the processes with which an effective RCA can be performed. Several RCA-related tools that have been found to be useful in the radiology setting include the "five whys" approach to determine causation; cause-and-effect, or Ishikawa, diagrams; causal tree mapping; affinity diagrams; and Pareto charts. © RSNA, 2015.
NASA Astrophysics Data System (ADS)
Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.
2013-10-01
At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method using an edge-preserving Huber penalty function with two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven cross-validation method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood reconstruction is visually sharper than that yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross-validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
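For reference, here is a sketch of an edge-preserving Huber penalty of the kind named above: quadratic for small neighbour differences (smoothing noise) and linear for large ones (preserving jumps). The function and parameter names are illustrative; δ and the regularization weight λ are the two parameters the cross-validation procedure must select.

```python
import numpy as np

def huber_penalty(t, delta):
    """Quadratic for |t| <= delta, linear beyond: preserves edges."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

def penalized_neg_log_likelihood(image, data_term, lam, delta):
    """data_term: callable returning the negative log-likelihood of `image`."""
    dx = np.diff(image, axis=0)        # vertical neighbour differences
    dy = np.diff(image, axis=1)        # horizontal neighbour differences
    penalty = huber_penalty(dx, delta).sum() + huber_penalty(dy, delta).sum()
    return data_term(image) + lam * penalty

# demo: penalty is small for a smooth ramp, large for pure noise
rng = np.random.default_rng(9)
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = rng.standard_normal((32, 32))
zero_data_term = lambda img: 0.0       # placeholder likelihood term
print(penalized_neg_log_likelihood(smooth, zero_data_term, lam=1.0, delta=0.1))
print(penalized_neg_log_likelihood(noisy, zero_data_term, lam=1.0, delta=0.1))
```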
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. The model is a stochastic state-space dynamical model, and the problem unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (over state variables and parameters) and the marginal likelihood function (with the state variables eliminated) are addressed.
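In the linear-Gaussian special case, the marginal likelihood with the states eliminated follows from the Kalman filter's one-step prediction errors. The univariate sketch below is a simplified stand-in for the commodity-price model, with illustrative parameter names (a: state transition, q/r: state/observation noise variances).

```python
import numpy as np

def kalman_marginal_loglik(y, a, q, r, m0=0.0, p0=1.0):
    """Marginal log-likelihood of a univariate linear-Gaussian state-space model."""
    m, p, ll = m0, p0, 0.0
    for obs in y:
        m_pred, p_pred = a * m, a * a * p + q          # predict
        s = p_pred + r                                 # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (obs - m_pred) ** 2 / s)
        k = p_pred / s                                 # Kalman gain
        m = m_pred + k * (obs - m_pred)                # update
        p = (1.0 - k) * p_pred
    return ll

# demo: simulate an AR(1) state observed with noise, evaluate the likelihood
rng = np.random.default_rng(10)
x, y = 0.0, []
for _ in range(200):
    x = 0.95 * x + np.sqrt(0.1) * rng.standard_normal()
    y.append(x + np.sqrt(0.5) * rng.standard_normal())
print(f"log-likelihood at true parameters: {kalman_marginal_loglik(y, 0.95, 0.1, 0.5):.1f}")
```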
Deep Learning for Population Genetic Inference.
Sheehan, Sara; Song, Yun S
2016-03-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
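The sketch below captures the regression flavor of this approach: train a multilayer network on simulated (summary statistics, parameter) pairs and apply it to observed summaries. The toy simulator and network size are illustrative assumptions; the study itself uses hundreds of correlated population-genetic summary statistics and a deeper architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def simulate_summaries(theta, n_stats=20):
    """Stand-in simulator: noisy nonlinear summaries of a scalar parameter."""
    base = np.linspace(0.5, 2.0, n_stats)
    return theta * base + 0.1 * np.sqrt(theta) * rng.standard_normal(n_stats)

thetas = rng.uniform(0.5, 5.0, 5000)                    # draws from the prior
X = np.array([simulate_summaries(t) for t in thetas])   # simulated training set
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, thetas)

theta_true = 2.7                                        # pretend-observed data
print("inferred:", net.predict(simulate_summaries(theta_true).reshape(1, -1))[0])
```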
Diagnostic accuracy of clinical tests of the hip: a systematic review with meta-analysis.
Reiman, Michael P; Goode, Adam P; Hegedus, Eric J; Cook, Chad E; Wright, Alexis A
2013-09-01
Hip physical examination (HPE) tests have long been used to diagnose a myriad of intra- and extra-articular pathologies of the hip joint. Adequate clinical utility is necessary if these tests are to support diagnostic imaging and subsequent surgical decision-making. We aimed to summarise and evaluate the current research on the diagnostic accuracy of HPE tests for the hip joint germane to sports-related injuries and pathology. A computer-assisted literature search of the MEDLINE, CINAHL and EMBASE databases (January 1966 to January 2012) was performed using keywords related to diagnostic accuracy of the hip joint. This systematic review with meta-analysis utilised the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for the search and reporting phases of the study. DerSimonian and Laird random-effects models were used to summarise sensitivities (SN), specificities (SP), likelihood ratios and diagnostic ORs. The search strategy revealed 25 potential articles, 10 of which demonstrated high quality. Fourteen articles qualified for meta-analysis. The meta-analysis demonstrated that most tests possess weak diagnostic properties, with the exception of the patellar-pubic percussion test, which had excellent pooled SN of 95% (95% CI 92% to 97%) and good SP of 86% (95% CI 78% to 92%). Several studies have investigated pathology in the hip, but few are of sufficient quality to dictate clinical decision-making. Currently, only the patellar-pubic percussion test is supported by the data as a stand-alone HPE test. Further studies involving high-quality designs are needed to fully assess the value of HPE tests for patients with intra- and extra-articular hip dysfunction.
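The pooled measures in this review derive from 2×2 tables of test results against a reference standard; the small helper below shows the standard definitions. The counts are hypothetical, chosen only to roughly match the patellar-pubic percussion test's pooled SN and SP.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Summary measures from a 2x2 diagnostic table."""
    sn = tp / (tp + fn)                  # sensitivity
    sp = tn / (tn + fp)                  # specificity
    return {
        "sensitivity": sn,
        "specificity": sp,
        "LR+": sn / (1.0 - sp),          # positive likelihood ratio
        "LR-": (1.0 - sn) / sp,          # negative likelihood ratio
        "DOR": (tp * tn) / (fp * fn),    # diagnostic odds ratio
    }

# e.g. a hypothetical test with SN = 0.95 and SP = 0.86:
print(diagnostic_accuracy(tp=95, fp=14, fn=5, tn=86))
```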
Doré, Adam L.; Golightly, Yvonne M.; Mercer, Vicki S.; Shi, Xiaoyan A.; Renner, Jordan B.; Jordan, Joanne M.; Nelson, Amanda E.
2014-01-01
Objective: Knee and hip osteoarthritis (OA) are known risk factors for falls, but whether together they additionally contribute to falls risk is unknown. This study utilizes a biracial cohort of men and women to examine the influence of lower limb OA burden on the risk for future falls. Methods: A longitudinal analysis was performed using data from 2 time points of a large cohort. The outcome of interest was falls at follow-up. Covariates included age, sex, race, body mass index, a history of prior falls, symptomatic OA of the hip and/or knee, a history of neurologic or pulmonary diseases, and current use of narcotic medications. Symptomatic OA was defined as patient-reported symptoms and radiographic evidence of OA in the same joint. Logistic regression analyses were used to determine associations between covariates and falls at follow-up. Results: The odds of falling increased with an increasing number of lower limb symptomatic OA joints: those with 1 joint had 53% higher odds, those with 2 joints had 74% higher odds, and those with 3-4 OA joints had 85% higher odds. When controlling for covariates, patients who had symptomatic knee or hip OA had an increased likelihood of falling (aOR 1.39, 95% CI 1.02-1.88; aOR 1.60, 95% CI 1.14-2.24, respectively). Conclusions: This study reveals that the risk for falls increases with additional symptomatic OA lower limb joints and confirms that symptomatic hip and knee OA are important risk factors for falls. PMID:25331686
Video Analysis of Anterior Cruciate Ligament (ACL) Injuries
Carlson, Victor R.; Sheehan, Frances T.; Boden, Barry P.
2016-01-01
Background: As the most viable method for investigating in vivo anterior cruciate ligament (ACL) rupture, video analysis is critical for understanding ACL injury mechanisms and advancing preventative training programs. Despite the limited number of published studies involving video analysis, much has been gained through evaluating actual injury scenarios. Methods: Studies meeting criteria for this systematic review were collected by performing a broad search of the ACL literature with use of variations and combinations of video recordings and ACL injuries. Both descriptive and analytical studies were included. Results: Descriptive studies have identified specific conditions that increase the likelihood of an ACL injury. These conditions include close proximity to opposing players or other perturbations, high shoe-surface friction, and landing on the heel or the flat portion of the foot. Analytical studies have identified high-risk joint angles on landing, such as a combination of decreased ankle plantar flexion, decreased knee flexion, and increased hip flexion. Conclusions: The high-risk landing position appears to influence the likelihood of ACL injury to a much greater extent than inherent risk factors. As such, on the basis of the results of video analysis, preventative training should be applied broadly. Kinematic data from video analysis have provided insights into the dominant forces that are responsible for the injury (i.e., axial compression with potential contributions from quadriceps contraction and valgus loading). With the advances in video technology currently underway, video analysis will likely lead to enhanced understanding of non-contact ACL injury. PMID:27922985
Schoenfeld, Elizabeth A.; Loving, Timothy J.
2012-01-01
We examined how daters' levels of relationship dependence interact with men's and women's degree of accommodation during a likelihood of marriage discussion to predict cortisol levels at the conclusion of the discussion. Upon arriving at the laboratory, couple members were separated and asked to graph their perceived likelihood of one day marrying each other. Couples were reunited and instructed to create a joint graph depicting their agreed-upon chance of marriage. For the majority of couples, negotiating their likelihood of marriage required one or both partners to accommodate each other's presumed likelihood of marriage. Multilevel analyses revealed a significant Dependence × Accommodation × Sex interaction. For women who increased their likelihood of marriage, feelings of dependence predicted heightened levels of cortisol relative to baseline; we suggest such a response is indicative of eustress. Among men, those who accommodated by decreasing their likelihood of marriage experienced significantly lower levels of cortisol to the extent they were less dependent on their partners. Discussion focuses on why men and women show different physiological reactions in response to seemingly favorable outcomes from a relationship discussion. PMID:22801249
Under-reported data analysis with INAR-hidden Markov chains.
Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David
2016-11-20
In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, together with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
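A generic log-domain Viterbi sketch of the reconstruction step named above follows; the INAR(1)-hidden Markov model's specific emission law is replaced by an arbitrary log-emission matrix, so this is a schematic of the algorithm rather than the paper's model.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most probable hidden state path.
    log_init[j]: log P(state_0 = j); log_trans[i, j]: log P(j | i);
    log_emit[t, j]: log P(observation at t | hidden state j)."""
    n_obs, n_states = log_emit.shape
    delta = log_init + log_emit[0]                  # best score ending in each state
    back = np.zeros((n_obs, n_states), dtype=int)   # best predecessor pointers
    for t in range(1, n_obs):
        scores = delta[:, None] + log_trans         # scores[i, j]: via predecessor i
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(n_states)] + log_emit[t]
    path = [int(np.argmax(delta))]                  # backtrack from the best end state
    for t in range(n_obs - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# tiny demo: 2 hidden states, 4 observations
log_init = np.log([0.6, 0.4])
log_trans = np.log([[0.8, 0.2], [0.3, 0.7]])
log_emit = np.log([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
print(viterbi(log_init, log_trans, log_emit))       # most probable state path
```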
Liu, Zhiyong; Li, Chao; Zhou, Ping; Chen, Xiuzhi
2016-10-07
Climate change significantly impacts vegetation growth and terrestrial ecosystems. Using satellite remote sensing observations, here we focus on investigating vegetation dynamics and the likelihood of vegetation-related drought under varying climate conditions across China. We first compare temporal trends of the Normalized Difference Vegetation Index (NDVI) and climatic variables over China. We find that there is in fact no significant change in vegetation over the cold regions where warming is significant. Then, we propose a joint probability model to estimate the likelihood of vegetation-related drought conditioned on different precipitation/temperature scenarios in the growing season across China. To the best of our knowledge, this study is the first to examine vegetation-related drought risk over China from a joint-probability perspective. Our results demonstrate risk patterns of vegetation-related drought under both low and high precipitation/temperature conditions. We further identify the variations in vegetation-related drought risk under different climate conditions and the sensitivity of drought risk to climate variability. These findings provide insights for decision makers to evaluate drought risk and develop vegetation-related drought mitigation strategies over China in a warming world. The proposed methodology also has great potential to be applied to vegetation-related drought risk assessment in other regions worldwide.
Localized cervical facet joint kinematics under physiological and whiplash loading.
Stemper, Brian D; Yoganandan, Narayan; Gennarelli, Thomas A; Pintar, Frank A
2005-12-01
Although facet joints have been implicated in the whiplash injury mechanism, no investigators have determined the degree to which joint motions in whiplash are nonphysiological. The purpose of this investigation was to quantify the correlation between facet joint and segmental motions under physiological and whiplash loading. Human cadaveric cervical spine specimens were tested under physiological extension loading, and intact human head-neck complexes were tested under whiplash loading, to correlate the localized component motions of the C4-5 facet joint with segmental extension. Facet joint shear and distraction kinematics demonstrated a linear correlation with segmental extension under both loading modes. Facet joints responded differently to whiplash and physiological loading, with significantly increased kinematics for the same segmental angulation. The limitations of this study include removal of superficial musculature and the limited sample size for physiological testing. The presence of increased facet joint motions indicates that synovial joint soft-tissue components (that is, the synovial membrane and capsular ligament) sustain increased distortion that may subject these tissues to a greater likelihood of injury. This finding is supported by clinical investigations in which lower cervical facet joint injury produced pain patterns similar to the most commonly reported whiplash symptoms.
Trends and Patterns in Age Discrimination in Employment Act (ADEA) Charges.
von Schrader, Sarah; Nazarov, Zafar E
2016-07-01
The Age Discrimination in Employment Act (ADEA) protects individuals aged 40 years and over from discrimination throughout the employment process. Using data on ADEA charges from the Equal Employment Opportunity Commission from 1993 to 2010, we present labor force-adjusted charge rates demonstrating that the highest charge rates are among those in the preretirement age range, and only the rate of charges among those aged 65 years and older has not decreased. We examine characteristics of ADEA charges including the prevalence of different alleged discriminatory actions (or issues) and highlight the increasing proportion of age discrimination charges that are jointly filed with other antidiscrimination statutes. Through a regression analysis, we find that the likelihood of citing various issues differs by charging party characteristics, such as age, gender, and minority status, and on charges that cite only age discrimination as compared to those that are jointly filed. Implications of these findings for employers are discussed. © The Author(s) 2016.
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
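One standard algorithm in the joint-sparse recovery family the abstract draws on is simultaneous orthogonal matching pursuit (SOMP), in which all sensors share a common sparse support over the discretized state space. The sketch below is illustrative (random dictionary, three hypothetical sensors), not the JSDLF pipeline.

```python
import numpy as np

def somp(A, Y, k):
    """Recover a row-sparse X (n_atoms x n_sensors) with A @ X ~= Y."""
    residual, support = Y.copy(), []
    for _ in range(k):
        # Joint atom score: norm of correlations across all sensor channels.
        corr = np.linalg.norm(A.T @ residual, axis=1)
        support.append(int(np.argmax(corr)))
        sub = A[:, support]
        X_s, *_ = np.linalg.lstsq(sub, Y, rcond=None)   # joint least-squares refit
        residual = Y - sub @ X_s
    return sorted(support), X_s

rng = np.random.default_rng(7)
A = rng.standard_normal((60, 200))                      # discretized state-space dictionary
X_true = np.zeros((200, 3))                             # 3 heterogeneous sensor channels
X_true[[17, 42, 99], :] = rng.standard_normal((3, 3))   # shared (joint) support
Y = A @ X_true + 0.01 * rng.standard_normal((60, 3))
print("recovered support:", somp(A, Y, k=3)[0])
```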
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies, and skewness and missingness are commonly observed in such data in practice. However, most joint models do not account for these features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for the model errors. To deal with missingness, we employ an informative missing-data model. We develop joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing-data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study; some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
Top-quark mass measurement from dilepton events at CDF II.
Abulencia, A; Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Ben-Haim, E; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Bourov, S; Boveia, A; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carron, S; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chapman, J; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, P H; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Connolly, A; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Cruz, A; Cuevas, J; Culbertson, R; Cyr, D; DaRonco, S; D'Auria, S; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'Orso, M; Demers, S; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J; Dituro, P; Dörr, C; Dominguez, A; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Ebina, K; Efron, J; Ehlers, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallinaro, M; Galyardt, J; Garcia, J E; Garcia Sciverez, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerchtein, E; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Haber, C; Hahn, S R; Hahn, K; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heijboer, A; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Kang, J; Karagoz-Unel, M; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, Y K; Kirby, M; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kobayashi, H; Kondo, K; Kong, D J; Konigsberg, J; Kordas, K; Korytov, A; Kotwal, A V; Kovalev, A; Kraus, J; Kravchenko, I; Kreps, M; Kreymer, A; 
Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Maki, T; Maksimovic, P; Manca, G; Margaroli, F; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P; McNamara, P; McNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; von der Mey, M; Miao, T; Miladinovic, N; Miles, J; Miller, R; Miller, J S; Mills, C; Milnik, M; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Mulhearn, M; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nahn, S; Nakano, I; Napier, A; Naumov, D; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Ogawa, T; Oh, S H; Oh, Y D; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Papikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K; Plager, C; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rakitin, A; Rappoccio, S; Ratnikov, F; Reisert, B; Rekovic, V; van Remortel, N; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Rott, C; Ruiz, A; Russ, J; Rusu, V; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Salto, O; Saltzberg, D; Sanchez, C; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sill, A; Sinervo, P; Sisakyan, A; Sjolin, J; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Vallecorsa, S; Varganov, A; Vataga, E; Velev, G; Veramendi, G; Veszpremi, V; Vickey, T; Vidal, R; Vila, I; Vilar, R; Vollrath, 
I; Volobouev, I; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Worm, S; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, Y; Yang, C; Yang, U K; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhang, X; Zhou, J; Zucchelli, S
2006-04-21
We report a measurement of the top-quark mass using events collected by the CDF II detector from pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron. We calculate a likelihood function for the top-quark mass in events that are consistent with tt̄ → bℓ⁻ν̄(ℓ) b̄ℓ′⁺ν(ℓ′) decays. The likelihood is formed as the convolution of the leading-order matrix element and detector resolution functions. The joint likelihood is the product of likelihoods for each of 33 events collected in 340 pb⁻¹ of integrated luminosity, yielding a top-quark mass M_t = 165.2 ± 6.1(stat) ± 3.4(syst) GeV/c². This first application of a matrix-element technique to tt̄ → bℓ⁺ν(ℓ) b̄ℓ′⁻ν̄(ℓ′) decays gives the most precise single measurement of M_t in dilepton events. Combined with other CDF Run II measurements using dilepton events, we measure M_t = 167.9 ± 5.2(stat) ± 3.7(syst) GeV/c².
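To make the joint-likelihood construction concrete, here is a minimal sketch (not the CDF analysis code) of how per-event likelihood curves combine multiplicatively into a single mass estimate; all numbers are invented.

```python
# Joint likelihood over events: total ln L(m) = sum_i ln L_i(m), maximized on a grid.
import numpy as np

def joint_mass_estimate(per_event_loglik, mass_grid):
    """per_event_loglik: array of shape (n_events, n_masses) with ln L_i(m)."""
    total = per_event_loglik.sum(axis=0)          # ln of the product likelihood
    i_max = np.argmax(total)
    in_band = total >= total[i_max] - 0.5         # crude Delta(ln L) = 0.5 interval
    return mass_grid[i_max], mass_grid[in_band].min(), mass_grid[in_band].max()

# toy usage: 33 events, each with a Gaussian-shaped log-likelihood in the mass
rng = np.random.default_rng(0)
grid = np.linspace(140.0, 190.0, 501)
peaks = rng.normal(165.0, 6.0, size=33)           # hypothetical per-event peaks
loglik = -0.5 * ((grid[None, :] - peaks[:, None]) / 12.0) ** 2
print(joint_mass_estimate(loglik, grid))
```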
NASA Technical Reports Server (NTRS)
Switzer, Eric Ryan; Watts, Duncan J.
2016-01-01
The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.
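As an illustration of the pixel-space likelihood idea, the following hedged sketch evaluates a Gaussian map likelihood with a parameter-dependent covariance; the linear signal-plus-noise model and the scalar amplitude parameter r are assumptions of the toy example, not the paper's pipeline.

```python
# Gaussian pixel-space log-likelihood: -0.5*(d^T C(r)^{-1} d + ln det C(r)) + const.
import numpy as np

def pixel_loglike(d, signal_template, noise_cov, r):
    C = r * signal_template + noise_cov           # assumed linear covariance model
    sign, logdet = np.linalg.slogdet(C)
    chi2 = d @ np.linalg.solve(C, d)
    return -0.5 * (chi2 + logdet)

rng = np.random.default_rng(1)
npix = 50
S, N = np.eye(npix), 0.1 * np.eye(npix)           # toy signal/noise templates
d = rng.multivariate_normal(np.zeros(npix), 0.3 * S + N)
rs = np.linspace(0.01, 1.0, 100)
print(rs[np.argmax([pixel_loglike(d, S, N, r) for r in rs])])  # near 0.3
```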
Diegoli, Toni Marie; Rohde, Heinrich; Borowski, Stefan; Krawczak, Michael; Coble, Michael D; Nothnagel, Michael
2016-11-01
Typing of X chromosomal short tandem repeat (X STR) markers has become a standard element of human forensic genetic analysis. Joint consideration of many X STR markers at a time increases their discriminatory power but, owing to physical linkage, requires inter-marker recombination rates to be accurately known. We estimated the recombination rates between 15 well established X STR markers using genotype data from 158 families (1041 individuals) and following a previously proposed likelihood-based approach that allows for single-step mutations. To meet the computational requirements of this family-based type of analysis, we modified a previous implementation so as to allow multi-core parallelization on a high-performance computing system. While we obtained recombination rate estimates larger than zero for all but one pair of adjacent markers within the four previously proposed linkage groups, none of the three X STR pairs defining the junctions of these groups yielded a recombination rate estimate of 0.50. Corroborating previous studies, our results therefore argue against a simple model of independent X chromosomal linkage groups. Moreover, the refined recombination fraction estimates obtained in our study will facilitate the appropriate joint consideration of all 15 investigated markers in forensic analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-08
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.
NASA Astrophysics Data System (ADS)
Degaudenzi, Riccardo; Vanghi, Vieri
1994-02-01
An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder, the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and the numerical results checked by means of computer simulations. An approximate expression for the steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramer-Rao bound is also discussed. The mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
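A generic decision-directed phase-tracking loop in the spirit of the MLE-DD scheme (not the paper's exact discriminator) can be sketched as follows; the loop gain and noise level are illustrative.

```python
# Decision-directed carrier phase tracking for 8PSK: the phase error is the
# angle between the derotated sample and the nearest hard decision.
import numpy as np

def dd_phase_track(samples, loop_gain=0.05):
    const = np.exp(1j * np.arange(8) * np.pi / 4)  # 8PSK constellation
    theta = 0.0
    for s in samples:
        z = s * np.exp(-1j * theta)                # derotate by current estimate
        dec = const[np.argmin(np.abs(z - const))]  # hard decision
        err = np.angle(z * np.conj(dec))           # DD phase error detector
        theta += loop_gain * err                   # first-order loop update
    return theta

rng = np.random.default_rng(2)
tx = np.exp(1j * rng.integers(0, 8, 2000) * np.pi / 4)
rx = tx * np.exp(1j * 0.3) + 0.05 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
print(dd_phase_track(rx))  # approaches the 0.3 rad offset (modulo pi/4 symmetry)
```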
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Adam Paul
The authors present a measurement of the mass of the top quark. The event sample is selected from proton-antiproton collisions, at 1.96 TeV center-of-mass energy, observed with the CDF detector at Fermilab's Tevatron. They consider a 318 pb⁻¹ dataset collected between March 2002 and August 2004. They select events that contain one energetic lepton, large missing transverse energy, exactly four energetic jets, and at least one displaced vertex b tag. The analysis uses leading-order tt̄ and background matrix elements along with parameterized parton showering to construct event-by-event likelihoods as a function of top quark mass. From the 63 events observed with the 318 pb⁻¹ dataset they extract a top quark mass of 172.0 ± 2.6(stat) ± 3.3(syst) GeV/c² from the joint likelihood. The mean expected statistical uncertainty is 3.2 GeV/c² for m_t = 178 GeV/c² and 3.1 GeV/c² for m_t = 172.5 GeV/c². The systematic error is dominated by the uncertainty of the jet energy scale.
Soldier-relevant body borne loads increase knee joint contact force during a run-to-stop maneuver.
Ramsay, John W; Hancock, Clifford L; O'Donovan, Meghan P; Brown, Tyler N
2016-12-08
The purpose of this study was to understand the effects of load carriage on human performance, specifically during a run-to-stop (RTS) task. Using OpenSim analysis tools, knee joint contact force, ground reaction force, leg stiffness and lower extremity joint angles and moments were determined for nine male military personnel performing an RTS under three load configurations (light, ~6 kg; medium, ~20 kg; heavy, ~40 kg). Subject-based means for each biomechanical variable were submitted to repeated measures ANOVA to test the effects of load. During the RTS, body borne load significantly increased peak knee joint contact force by 1.2 BW (p<0.001) and peak vertical (p<0.001) and anterior-posterior (p=0.002) ground reaction forces by 0.6 BW and 0.3 BW, respectively. Body borne load also had a significant effect on hip (p=0.026) posture with the medium load and knee (p=0.046) posture with the heavy load. With the heavy load, participants exhibited a substantial, albeit non-significant, increase in leg stiffness (p=0.073 and d=0.615). Increases in joint contact force exhibited during the RTS were primarily due to greater GRFs that impact the soldier with each incremental addition of body borne load. The stiff leg, extended knee and large braking force the soldiers exhibited with the heavy load suggest their injury risk may be greatest with that specific load configuration. Further work is needed to determine whether the biomechanical profile exhibited with the heavy load configuration translates to unsafe shear forces at the knee joint and, consequently, a higher likelihood of injury. Published by Elsevier Ltd.
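The repeated-measures ANOVA step described above can be sketched with statsmodels' AnovaRM; the column names, effect sizes, and data below are hypothetical.

```python
# Repeated-measures ANOVA on subject-based means, one within-subject factor (load).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
subjects = np.repeat(np.arange(9), 3)              # 9 participants x 3 loads
loads = np.tile(["light", "medium", "heavy"], 9)
effect = {"light": 0.0, "medium": 0.6, "heavy": 1.2}
force = np.array([4.0 + effect[l] for l in loads]) + rng.normal(0, 0.3, 27)

df = pd.DataFrame({"subject": subjects, "load": loads, "knee_force_bw": force})
res = AnovaRM(df, depvar="knee_force_bw", subject="subject", within=["load"]).fit()
print(res)                                         # F test for the load effect
```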
Evaluation of advanced multiplex short tandem repeat systems in pairwise kinship analysis.
Tamura, Tomonori; Osawa, Motoki; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi
2015-09-01
The AmpFLSTR Identifiler Kit, comprising 15 autosomal short tandem repeat (STR) loci, is commonly employed in forensic practice for calculating match probabilities and parentage testing. The conventional system provides insufficient power for kinship analyses such as sibship testing because of the limited number of examined loci. This study evaluated the power of the PowerPlex Fusion System, GlobalFiler Kit, and PowerPlex 21 System, which comprise more than 20 autosomal STR loci, to estimate pairwise blood relatedness (i.e., parent-child, full siblings, second-degree relatives, and first cousins). The genotypes of all 24 STR loci in 10,000 putative pedigrees were constructed by simulation. The likelihood ratio for each locus was calculated from joint probabilities for relatives and non-relatives. The combined likelihood ratio was calculated according to the product rule. The addition of STR loci improved separation between relatives and non-relatives. However, these systems extended less effectively to inference for first cousins. In conclusion, these advanced systems will be useful in forensic personal identification, especially in the evaluation of full siblings and second-degree relatives. Moreover, the additional loci may give rise to two major issues: more frequent mutational events and several pairs of linked loci on the same chromosome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
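The product rule for combining per-locus likelihood ratios is simple enough to show directly; the per-locus values below are made up.

```python
# Combined kinship LR across independent loci: the product of per-locus LRs,
# accumulated in log space to avoid under/overflow over many loci.
import numpy as np

def combined_lr(per_locus_lr):
    return float(np.exp(np.sum(np.log(per_locus_lr))))

lrs = [2.1, 0.8, 3.4, 1.5, 0.9, 2.7]               # hypothetical per-locus LRs
print(combined_lr(lrs))                            # joint support for kinship
```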
A Systematic Bayesian Integration of Epidemiological and Genetic Data
Lau, Max S. Y.; Marion, Glenn; Streftaris, George; Gibson, Gavin
2015-01-01
Genetic sequence data on pathogens have great potential to inform inference of their transmission dynamics ultimately leading to better disease control. Where genetic change and disease transmission occur on comparable timescales additional information can be inferred via the joint analysis of such genetic sequence data and epidemiological observations based on clinical symptoms and diagnostic tests. Although recently introduced approaches represent substantial progress, for computational reasons they approximate genuine joint inference of disease dynamics and genetic change in the pathogen population, capturing partially the joint epidemiological-evolutionary dynamics. Improved methods are needed to fully integrate such genetic data with epidemiological observations, for achieving a more robust inference of the transmission tree and other key epidemiological parameters such as latent periods. Here, building on current literature, a novel Bayesian framework is proposed that infers simultaneously and explicitly the transmission tree and unobserved transmitted pathogen sequences. Our framework facilitates the use of realistic likelihood functions and enables systematic and genuine joint inference of the epidemiological-evolutionary process from partially observed outbreaks. Using simulated data it is shown that this approach is able to infer accurately joint epidemiological-evolutionary dynamics, even when pathogen sequences and epidemiological data are incomplete, and when sequences are available for only a fraction of exposures. These results also characterise and quantify the value of incomplete and partial sequence data, which has important implications for sampling design, and demonstrate the abilities of the introduced method to identify multiple clusters within an outbreak. The framework is used to analyse an outbreak of foot-and-mouth disease in the UK, enhancing current understanding of its transmission dynamics and evolutionary process. PMID:26599399
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the underestimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.
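A generic percentile-bootstrap confidence interval, of the kind the paper builds on its derived sampling distribution, can be sketched as follows; the data are invented.

```python
# Percentile bootstrap CI: resample with replacement, recompute the statistic,
# and take empirical quantiles of the bootstrap replicates.
import numpy as np

rng = np.random.default_rng(10)
counts = rng.binomial(1, 0.4, size=200)            # toy complete counts

def bootstrap_ci(data, stat, b=2000, alpha=0.05):
    n = len(data)
    reps = [stat(data[rng.integers(0, n, n)]) for _ in range(b)]
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

print(bootstrap_ci(counts, np.mean))               # 95% CI for the success rate
```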
NASA Astrophysics Data System (ADS)
Pellejero-Ibanez, Marcos; Chuang, Chia-Hsun; Rubiño-Martín, J. A.; Cuesta, Antonio J.; Wang, Yuting; Zhao, Gongbo; Ross, Ashley J.; Rodríguez-Torres, Sergio; Prada, Francisco; Slosar, Anže; Vazquez, Jose A.; Alam, Shadab; Beutler, Florian; Eisenstein, Daniel J.; Gil-Marín, Héctor; Grieb, Jan Niklas; Ho, Shirley; Kitaura, Francisco-Shu; Percival, Will J.; Rossi, Graziano; Salazar-Albornoz, Salvador; Samushia, Lado; Sánchez, Ariel G.; Satpathy, Siddharth; Seo, Hee-Jong; Tinker, Jeremy L.; Tojeiro, Rita; Vargas-Magaña, Mariana; Brownstein, Joel R.; Nichol, Robert C.; Olmstead, Matthew D.
2017-07-01
We develop a new computationally efficient methodology called double-probe analysis with the aim of minimizing informative priors (those coming from extra probes) in the estimation of cosmological parameters. Using our new methodology, we extract the dark energy model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and Planck cosmic microwave background (CMB) measurements. We measure the mean values and covariance matrix of {R, l_a, Ω_b h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)}, which give an efficient summary of the Planck data and two-point statistics from the BOSS galaxy sample. The CMB shift parameters are R = √(Ω_m H_0²) r(z*) and l_a = π r(z*)/r_s(z*), where z* is the redshift at the last scattering surface, and r(z*) and r_s(z*) denote our comoving distance to z* and the sound horizon at z*, respectively; Ω_b is the baryon fraction at z = 0. This approximate methodology guarantees that we will not need to put informative priors on the cosmological parameters that galaxy clustering is unable to constrain, i.e. Ω_b h² and n_s. The main advantage is that the computational time required for extracting these parameters is decreased by a factor of 60 with respect to exact full-likelihood analyses. The results obtained show no tension with the flat Λ cold dark matter (ΛCDM) cosmological paradigm. By comparing with the full-likelihood exact analysis with fixed dark energy models, on one hand we demonstrate that the double-probe method provides robust cosmological parameter constraints that can be conveniently used to study dark energy models, and on the other hand we provide a reliable set of measurements assuming dark energy models to be used, for example, in distance estimations. We extend our study to measure the sum of the neutrino mass using different methodologies, including double-probe analysis (introduced in this study), full-likelihood analysis and single-probe analysis. From full-likelihood analysis, we obtain Σm_ν < 0.12 (68 per cent) assuming ΛCDM and Σm_ν < 0.20 (68 per cent) assuming owCDM. We also find that there is degeneracy between observational systematics and neutrino masses, which suggests that one should take great care when estimating these parameters in the case of not having control over the systematics of a given sample.
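For readers who want to reproduce the shift-parameter definitions numerically, here is a hedged toy computation of R and l_a for a flat ΛCDM cosmology; the decoupling redshift, the photon density, and the neglect of radiation in H(z) are simplifying assumptions, not the paper's pipeline.

```python
# Toy CMB shift parameters: R = sqrt(Om) H0 r(z*)/c and la = pi r(z*)/rs(z*).
import numpy as np
from scipy.integrate import quad

c = 299792.458                                     # km/s
H0, Om, Obh2 = 67.7, 0.31, 0.0221
Ogh2 = 2.47e-5                                     # photon density (assumed)

def H(z):                                          # flat LCDM, radiation ignored
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def cs(z):                                         # baryon-loaded sound speed
    Rb = 0.75 * (Obh2 / Ogh2) / (1 + z)
    return c / np.sqrt(3 * (1 + Rb))

z_star = 1089.0                                    # assumed decoupling redshift
r_star = quad(lambda z: c / H(z), 0, z_star)[0]    # comoving distance to z*
r_s = quad(lambda z: cs(z) / H(z), z_star, np.inf)[0]  # sound horizon at z*

print(np.sqrt(Om) * H0 * r_star / c,               # R, roughly 1.7 here
      np.pi * r_star / r_s)                        # la, roughly 300 here
```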
Joint Stability in Total Knee Arthroplasty: What Is the Target for a Stable Knee?
Wright, Timothy M
2017-02-01
Instability remains a common cause of failure in total knee arthroplasty. Although approaches for surgical treatment of instability exist, the target for initial stability remains elusive, increasing the likelihood that failures will persist because adequate stability is not restored when performing the primary arthroplasty. Although the mechanisms that stabilize the knee joint (contact between the articular surfaces, ligamentous constraints, and muscle forces) are well defined, their relative importance and the interplay among them throughout functions of daily living are poorly understood. The problem is exacerbated by the complex multiplanar motions that occur across the joint and the large variations in these motions across the population, suggesting that stability targets may need to be patient-specific.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries, a product may come from more than one production line, which calls for comparative life tests. Sampling from the different production lines then gives rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a joint type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
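For the single-sample building block of this scheme, the type-II censored MLE of a Pareto shape parameter has a standard closed form, sketched below with a known scale; the joint two-sample scheme in the article generalizes this idea.

```python
# Type-II censored Pareto MLE (known scale sigma): observe the r smallest of n
# lifetimes; alpha-hat = r / [sum log(x_i/sigma) + (n-r) log(x_(r)/sigma)].
import numpy as np

def pareto_mle_type2(x_ordered, n, sigma):
    """x_ordered: the r smallest observations in increasing order."""
    r = len(x_ordered)
    t = np.sum(np.log(x_ordered / sigma)) + (n - r) * np.log(x_ordered[-1] / sigma)
    return r / t

rng = np.random.default_rng(4)
alpha, sigma, n, r = 2.5, 1.0, 40, 25
full = sigma * (1 - rng.uniform(size=n)) ** (-1 / alpha)  # Pareto draws
x = np.sort(full)[:r]                              # censor at the r-th failure
print(pareto_mle_type2(x, n, sigma))               # should be near 2.5
```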
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
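The likelihood ratio tests mentioned above follow the usual nested-model recipe; a minimal sketch, with invented log-likelihood values and degrees of freedom:

```python
# LR test of a restricted (nested) specification against the full model:
# 2*(llf_full - llf_restricted) is asymptotically chi-squared with df equal
# to the number of restrictions.
from scipy.stats import chi2

def lr_test(llf_full, llf_restricted, df):
    stat = 2.0 * (llf_full - llf_restricted)
    return stat, chi2.sf(stat, df)

stat, p = lr_test(llf_full=-812.4, llf_restricted=-824.9, df=6)
print(stat, p)   # reject the restricted specification if p is small
```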
Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels
NASA Astrophysics Data System (ADS)
Fusco, Tilde; Petrella, Angelo; Tanda, Mario
2009-12-01
The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.
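A correlation-based CFO estimate from a twice-repeated preamble captures the flavor of the closed-form AML solution (this is a generic estimator, not the paper's OFDM/OQAM derivation):

```python
# Data-aided CFO estimate from two identical preamble halves of length N:
# the angle of their correlation, divided by 2*pi*N, gives cycles/sample.
import numpy as np

def cfo_estimate(r, N):
    corr = np.sum(np.conj(r[:N]) * r[N:2 * N])
    return np.angle(corr) / (2 * np.pi * N)

rng = np.random.default_rng(5)
N = 64
half = np.exp(1j * 2 * np.pi * rng.uniform(size=N))
tx = np.concatenate([half, half])
eps = 1e-3                                         # true CFO, cycles/sample
rx = tx * np.exp(1j * 2 * np.pi * eps * np.arange(2 * N))
rx += 0.01 * (rng.normal(size=2 * N) + 1j * rng.normal(size=2 * N))
print(cfo_estimate(rx, N))                         # close to 1e-3
```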
Nantel-Vivier, Amélie; Pihl, Robert O; Côté, Sylvana; Tremblay, Richard E
2014-10-01
Research on associations between children's prosocial behaviour and mental health has provided mixed evidence. The present study sought to describe and predict the joint development of prosocial behaviour with externalizing and internalizing problems (physical aggression, anxiety and depression) from 2 to 11 years of age. Data were drawn from the National Longitudinal Survey of Children and Youth (NLSCY). Biennial prosocial behaviour, physical aggression, anxiety and depression maternal ratings were sought for 10,700 children aged 0 to 9 years at the first assessment point. While a negative association was observed between prosociality and physical aggression, more complex associations emerged with internalizing problems. Being a boy decreased the likelihood of membership in the high prosocial trajectory. Maternal depression increased the likelihood of moderate aggression, but also of joint high prosociality/low aggression. Low family income predicted the joint development of high prosociality with high physical aggression and high depression. Individual differences exist in the association of prosocial behaviour with mental health. While high prosociality tends to co-occur with low levels of mental health problems, high prosociality and internalizing/externalizing problems can co-occur in subgroups of children. Child, mother and family characteristics are predictive of individual differences in prosocial behaviour and mental health development. Mechanisms underlying these associations warrant future investigations. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.
Martín, Irene; Loza, Estibaliz; Carmona, Loreto; Ivorra, José; Narváez, José Antonio; Hernández-Gañán, Javier; Alía, Pedro
2016-01-01
Objective: To analyze the association between circulating osteoprotegerin (OPG) and Dickkopf-related protein 1 (DKK-1) and radiological progression in patients with tightly controlled rheumatoid arthritis (RA). Methods: Serum levels of OPG and DKK-1 were measured in 97 RA patients who were treated according to a treat-to-target strategy (T2T) aimed at remission (DAS28 < 2.6). Radiologic joint damage progression was assessed by changes in the total Sharp-van der Heijde score (SHS) on serial radiographs of the hands and feet. The independent association between these biomarker levels and the structural damage endpoint was examined using regression analysis. Results: The mean age of the 97 RA patients (68 women) at the time of the study was 54 ± 14 years, and the mean disease duration was 1.6 ± 1.5 years. Most patients were seropositive for either RF or ACPA, and the large majority (76%) were in remission or had low disease activity. After a mean follow-up time of 3.3 ± 1.5 years (range, 1-7.5 yrs.), the mean total SHS annual progression was 0.88 ± 2.20 units. Fifty-two percent of the patients had no progression (defined as a change in total SHS of zero). The mean serum OPG level did not change significantly over the study period (from 3.9 ± 1.8 to 4.07 ± 2.23 pmol/L), whereas the mean serum DKK-1 level decreased, although not significantly (from 29.9 ± 10.9 to 23.6 ± 18.8 pmol/L). In the multivariate analysis, the predictive factors increasing the likelihood of total SHS progression were age (OR per year = 1.10; p = 0.003) and a high mean C-reactive protein level over the study period (OR = 1.29; p = 0.005). Circulating OPG showed a protective effect, reducing the likelihood of joint space narrowing by 60% (95% CI: 0.38-0.94) and the total SHS progression by 48% (95% CI: 0.28-0.83). The DKK-1 levels were not associated with radiological progression. Conclusion: In patients with tightly controlled RA, serum OPG was inversely associated with progression of joint destruction. This biomarker may be useful in combination with other risk factors to improve prediction in patients in clinical remission or low disease activity state. PMID:27911913
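Odds ratios and confidence intervals like those reported above come from exponentiating logistic-regression coefficients; a sketch with hypothetical variables and simulated data:

```python
# Logistic regression ORs: exp(coefficients) and exp(CI bounds). All data toy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 97
age = rng.normal(54, 14, n)
opg = rng.normal(4.0, 2.0, n)
logit = -3 + 0.09 * (age - 54) - 0.7 * (opg - 4.0)     # invented true effects
progressed = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([age, opg]))
fit = sm.Logit(progressed, X).fit(disp=0)
print(np.exp(fit.params[1:]))                          # ORs for age and OPG
print(np.exp(fit.conf_int()[1:]))                      # 95% CIs on the OR scale
```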
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
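A minimal sketch of the pairwise composite log-likelihood under the FGM copula, with the dependence parameter decaying in pairwise distance; the exponential decay form and all inputs are illustrative, not the paper's specification.

```python
# FGM copula density c(u, v; theta) = 1 + theta*(1-2u)*(1-2v), theta in [-1, 1];
# the composite log-likelihood sums log-density over all pairs.
import numpy as np

def fgm_density(u, v, theta):
    return 1.0 + theta * (1 - 2 * u) * (1 - 2 * v)

def pairwise_composite_loglik(u, dist, theta0, phi):
    """u: survival-function values per subject; dist: pairwise distances."""
    n, ll = len(u), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            theta = np.clip(theta0 * np.exp(-dist[i, j] / phi), -1, 1)
            ll += np.log(fgm_density(u[i], u[j], theta))
    return ll

rng = np.random.default_rng(7)
u = rng.uniform(size=20)
pts = rng.uniform(0, 10, size=(20, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(pairwise_composite_loglik(u, dist, theta0=0.8, phi=5.0))
```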
Women's Inheritance Rights and Intergenerational Transmission of Resources in India
ERIC Educational Resources Information Center
Deininger, Klaus; Goyal, Aparajita; Nagarajan, Hari
2013-01-01
We use inheritance patterns over three generations of individuals to assess the impact of changes in the Hindu Succession Act that grant daughters equal coparcenary birth rights in joint family property that were denied to daughters in the past. We show that the amendment significantly increased daughters' likelihood to inherit land, but that…
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems where sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. Each sensor data stream is transformed into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
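A toy likelihood-map construction: each sensor reading updates every grid cell by the log-likelihood of that reading if a target were at the cell. The range-only sensor model and its noise level are invented for illustration.

```python
# Spatial log-likelihood map accumulated over sensors; the map's argmax
# indicates the most likely target location.
import numpy as np
from scipy.stats import norm

def likelihood_map(sensors, readings, grid, sigma=0.5):
    """sensors: (k, 2) positions; readings: k range-like measurements."""
    llmap = np.zeros(len(grid))
    for s, y in zip(sensors, readings):
        predicted = np.linalg.norm(grid - s, axis=1)   # expected reading per cell
        llmap += norm.logpdf(y, loc=predicted, scale=sigma)
    return llmap

xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
sensors = np.array([[2.0, 2.0], [8.0, 3.0], [5.0, 9.0]])
target = np.array([6.0, 4.0])
readings = np.linalg.norm(sensors - target, axis=1)    # noiseless for clarity
print(grid[np.argmax(likelihood_map(sensors, readings, grid))])  # near (6, 4)
```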
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model’s parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
Bayesian Analysis of a Simple Measurement Model Distinguishing between Types of Information
NASA Astrophysics Data System (ADS)
Lira, Ignacio; Grientschnig, Dieter
2015-12-01
Let a quantity of interest, Y, be modeled in terms of a quantity X and a set of other quantities Z. Suppose that for Z there is type B information, by which we mean that it leads directly to a joint state-of-knowledge probability density function (PDF) for that set, without reference to likelihoods. Suppose also that for X there is type A information, which signifies that a likelihood is available. The posterior for X is then obtained by updating its prior with said likelihood by means of Bayes' rule, where the prior encodes whatever type B information there may be available for X. If there is no such information, an appropriate non-informative prior should be used. Once the PDFs for X and Z have been constructed, they can be propagated through the measurement model to obtain the PDF for Y, either analytically or numerically. But suppose that, at the same time, there is also information of type A, type B or both types together for the quantity Y. By processing such information in the manner described above we obtain another PDF for Y. Which one is right? Should both PDFs be merged somehow? Is there another way of applying Bayes' rule such that a single PDF for Y is obtained that encodes all existing information? In this paper we examine what we believe should be the proper ways of dealing with such a (not uncommon) situation.
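The two-step procedure described above can be sketched numerically: update the prior for X with the type A likelihood on a grid, then propagate samples of X and Z through a toy model Y = f(X, Z) = X + Z. All distributions and data here are invented.

```python
# Grid-based Bayes update for X, then Monte Carlo propagation to Y.
import numpy as np

rng = np.random.default_rng(8)
xg = np.linspace(-5, 5, 2001)
prior = np.exp(-0.5 * (xg / 2.0) ** 2)                 # type B prior for X
data = np.array([0.9, 1.3, 1.1])                       # observations of X
lik = np.prod(np.exp(-0.5 * ((data[:, None] - xg) / 0.5) ** 2), axis=0)
post = prior * lik
post /= np.trapz(post, xg)                             # posterior PDF for X

xs = rng.choice(xg, size=100000, p=post / post.sum())  # sample the X posterior
zs = rng.normal(0.0, 0.3, size=100000)                 # type B PDF for Z
ys = xs + zs                                           # propagate through f
print(ys.mean(), ys.std())                             # summary of the Y PDF
```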
Bhadra, Dhiman; Daniels, Michael J.; Kim, Sungduk; Ghosh, Malay; Mukherjee, Bhramar
2014-01-01
In a typical case-control study, exposure information is collected at a single time-point for the cases and controls. However, case-control studies are often embedded in existing cohort studies containing a wealth of longitudinal exposure history on the participants. Recent medical studies have indicated that incorporating past exposure history, or a constructed summary measure of cumulative exposure derived from the past exposure history, when available, may lead to more precise and clinically meaningful estimates of the disease risk. In this paper, we propose a flexible Bayesian semiparametric approach to model the longitudinal exposure profiles of the cases and controls and then use measures of cumulative exposure based on a weighted integral of this trajectory in the final disease risk model. The estimation is done via a joint likelihood. In the construction of the cumulative exposure summary, we introduce an influence function, a smooth function of time to characterize the association pattern of the exposure profile on the disease status with different time windows potentially having differential influence/weights. This enables us to analyze how the present disease status of a subject is influenced by his/her past exposure history conditional on the current ones. The joint likelihood formulation allows us to properly account for uncertainties associated with both stages of the estimation process in an integrated manner. Analysis is carried out in a hierarchical Bayesian framework using Reversible jump Markov chain Monte Carlo (RJMCMC) algorithms. The proposed methodology is motivated by, and applied to a case-control study of prostate cancer where longitudinal biomarker information is available for the cases and controls. PMID:22313248
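The cumulative-exposure summary is a weighted integral of the trajectory; a minimal sketch with a made-up recency-weighting influence function:

```python
# Cumulative exposure as integral of influence(t) * exposure(t) dt.
import numpy as np

def cumulative_exposure(times, exposure, influence):
    return np.trapz(influence(times) * exposure, times)

t = np.linspace(0, 10, 200)                            # years of follow-up
x = 1.0 + 0.1 * t + 0.2 * np.sin(t)                    # toy biomarker trajectory
w = lambda s: np.exp(-(t[-1] - s) / 3.0)               # emphasizes recent exposure
print(cumulative_exposure(t, x, w))
```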
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-18
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p ≥ 10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p < 10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
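The closed-form likelihood the marginal model exploits is the beta-binomial; a sketch fitting it by maximum likelihood with scipy, on made-up study counts:

```python
# Maximum likelihood fit of a beta-binomial to event counts across studies.
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

events = np.array([12, 5, 20, 9, 14])                  # events per study (toy)
totals = np.array([50, 30, 80, 40, 60])                # study sample sizes

def negloglik(params):
    a, b = np.exp(params)                              # keep a, b positive
    return -np.sum(betabinom.logpmf(events, totals, a, b))

fit = minimize(negloglik, x0=[0.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(a_hat / (a_hat + b_hat))                         # marginal mean estimate
```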
Semiparametric Time-to-Event Modeling in the Presence of a Latent Progression Event
Rice, John D.; Tsodikov, Alex
2017-01-01
In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. PMID:27556886
Estimating unbiased magnitudes for the announced DPRK nuclear tests, 2006-2016
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Bowers, David
2017-04-01
The seismic disturbances generated by the five (2006-2016) announced nuclear test explosions by the Democratic People's Republic of Korea (DPRK) are of moderate magnitude (body-wave magnitude mb 4-5) by global earthquake standards. An upward bias of the network mean mb of low- to moderate-magnitude events is long established, and is caused by the censoring of readings from stations where the signal was below noise level at the time of the predicted arrival. This sampling bias can be overcome by maximum-likelihood methods using station thresholds at detecting (and non-detecting) stations. Bias in the mean mb can also be introduced by differences in the network of stations recording each explosion; this bias can be reduced by using station corrections. We apply a joint maximum-likelihood (JML) inversion that jointly estimates station corrections and unbiased network mb for the five DPRK explosions recorded by the CTBTO International Monitoring System (IMS) network of seismic stations. The thresholds can either be directly measured from the noise preceding the observed signal, or determined by statistical analysis of bulletin amplitudes. The network mb of the first and smallest explosion is reduced significantly relative to the mean mb (to < 4.0 mb) by removal of the censoring bias.
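The censoring-aware likelihood has a simple structure: detecting stations contribute a Gaussian density for their magnitude estimate, while non-detecting stations contribute the probability of the signal falling below their threshold. A hedged sketch, with station corrections omitted and all values invented:

```python
# Censored maximum-likelihood network magnitude (Tobit-like structure).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def network_mb(detected, thresholds, sigma=0.3):
    def negll(mb):
        ll = norm.logpdf(detected, loc=mb, scale=sigma).sum()    # detections
        ll += norm.logcdf(thresholds, loc=mb, scale=sigma).sum() # non-detections
        return -ll
    return minimize_scalar(negll, bounds=(2.0, 7.0), method="bounded").x

det = np.array([4.1, 4.3, 4.0, 4.5])               # station mb estimates
thr = np.array([4.2, 4.4])                         # non-detecting station thresholds
print(network_mb(det, thr))                        # pulled below the naive mean
```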
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
Additivity and maximum likelihood estimation of nonlinear component biomass models
David L.R. Affleck
2015-01-01
Since Parresol's (2001) seminal paper on the subject, it has become common practice to develop nonlinear tree biomass equations so as to ensure compatibility among total and component predictions and to fit equations jointly using multi-step least squares (MSLS) methods. In particular, many researchers have specified total tree biomass models by aggregating the...
Joint Modeling Approach for Semicompeting Risks Data with Missing Nonterminal Event Status
Hu, Chen; Tsodikov, Alex
2014-01-01
Semicompeting risks data, where a subject may experience sequential non-terminal and terminal events, and the terminal event may censor the non-terminal event but not vice versa, are widely available in many biomedical studies. We consider the situation when a proportion of subjects’ non-terminal events is missing, such that the observed data become a mixture of “true” semicompeting risks data and partially observed terminal event only data. An illness-death multistate model with proportional hazards assumptions is proposed to study the relationship between non-terminal and terminal events, and provide covariate-specific global and local association measures. Maximum likelihood estimation based on semiparametric regression analysis is used for statistical inference, and asymptotic properties of proposed estimators are studied using empirical process and martingale arguments. We illustrate the proposed method with simulation studies and data analysis of a follicular cell lymphoma study. PMID:24430204
Novel joint cupping clinical maneuver for ultrasonographic detection of knee joint effusions.
Uryasev, Oleg; Joseph, Oliver C; McNamara, John P; Dallas, Apostolos P
2013-11-01
Knee effusions occur due to traumatic and atraumatic causes. Clinical diagnosis currently relies on several provocative techniques to demonstrate knee joint effusions. Portable bedside ultrasonography (US) is becoming an adjunct to diagnosis of effusions. We hypothesized that a US approach with a clinical joint cupping maneuver increases sensitivity in identifying effusions as compared to US alone. Using unembalmed cadaver knees, we injected fluid to create effusions up to 10 mL. Each effusion volume was measured in a lateral transverse location with respect to the patella. For each effusion we applied a joint cupping maneuver from an inferior approach, and re-measured the effusion. With increased volume of saline infusion, the mean depth of effusion on ultrasound imaging increased as well. Using a 2-mm cutoff, we visualized an effusion without the joint cupping maneuver at 2.5 mL and with the joint cupping technique at 1 mL. Mean effusion diameter increased on average 0.26 cm for the joint cupping maneuver as compared to without the maneuver. The effusion depth was statistically different at 2.5 and 7.5 mL (P < .05). Utilizing a joint cupping technique in combination with US is a valuable tool in assessing knee effusions, especially those of subclinical levels. Effusion measurements are complicated by uneven distribution of effusion fluid. A clinical joint cupping maneuver concentrates the fluid in one recess of the joint, increasing the likelihood of fluid detection using US. © 2013 Elsevier Inc. All rights reserved.
Impact of Moving From a Widespread to Multisite Pain Definition on Other Fibromyalgia Symptoms.
Dean, Linda E; Arnold, Lesley; Crofford, Leslie; Bennett, Robert; Goldenberg, Don; Fitzcharles, Mary-Ann; Paiva, Eduardo S; Staud, Roland; Clauw, Dan; Sarzi-Puttini, Piercarlo; Jones, Gareth T; Ayorinde, Abimbola; Flüß, Elisa; Beasley, Marcus; Macfarlane, Gary J
2017-12-01
To investigate whether associations between pain and the additional symptoms associated with fibromyalgia are different in persons with chronic widespread pain (CWP) compared to multisite pain (MSP), with or without joint areas. Six studies were used: 1958 British birth cohort, Epidemiology of Functional Disorders, Kid Low Back Pain, Managing Unexplained Symptoms (Chronic Widespread Pain) in Primary Care: Involving Traditional and Accessible New Approaches, Study of Health and its Management, and Women's Health Study (WHEST; females). MSP was defined as the presence of pain in ≥8 body sites in adults (≥10 sites in children) indicated on 4-view body manikins, assessed first including joint areas (positive joints) and second excluding them (negative joints). The relationship between pain and fatigue, sleep disturbance, somatic symptoms, and mood impairment was assessed using logistic regression. Results are presented as odds ratios (ORs) with 95% confidence intervals (95% CIs). There were 34,818 participants across the study populations (adult age range 42-56 years, male 43-51% [excluding WHEST], and CWP prevalence 12-17%). Among those reporting MSP, the proportion reporting CWP ranged between 62% and 76%. Among those reporting the symptoms associated with fibromyalgia, there was an increased likelihood of reporting pain, the magnitude of which was similar regardless of the definition used. For example, within WHEST, reporting moderate/severe fatigue (Chalder fatigue scale 4-11) was associated with a >5-fold increase in likelihood of reporting pain (CWP OR 5.2 [95% CI 3.9-6.9], MSP-positive joints OR 6.5 [95% CI 5.0-8.6], and MSP-negative joints OR 6.5 [95% CI 4.7-9.0]). This large-scale study demonstrates that regardless of the pain definition used, the magnitude of association between pain and other associated symptoms of fibromyalgia is similar. This finding supports the continued collection of both when classifying fibromyalgia, but highlights that pain need not follow the definition outlined in the 1990 American College of Rheumatology criteria. © 2017, American College of Rheumatology.
Evidence-based Diagnostics: Adult Septic Arthritis
Carpenter, Christopher R.; Schuur, Jeremiah D.; Everett, Worth W.; Pines, Jesse M.
2011-01-01
Background: Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. Objectives: The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Methods: Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. Results: The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10⁹-25 × 10⁹/L was 0.33; for 25 × 10⁹-50 × 10⁹/L, 1.06; for 50 × 10⁹-100 × 10⁹/L, 3.59; and exceeding 100 × 10⁹/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (-LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10⁹/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Conclusions: Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10⁹/L) can increase, but not decrease, the probability of septic arthritis.
Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. PMID:21843213
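A worked example of applying an interval likelihood ratio: converting the 27% pretest probability through the LR of 3.59 for sWBC 50-100 × 10⁹/L via odds.

```python
# Posttest probability from pretest probability and a likelihood ratio:
# convert to odds, multiply by the LR, convert back to probability.
def posttest_probability(pretest, lr):
    odds = pretest / (1 - pretest)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

print(posttest_probability(0.27, 3.59))   # ~0.57 probability of septic arthritis
```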
Evidence-based diagnostics: adult septic arthritis.
Carpenter, Christopher R; Schuur, Jeremiah D; Everett, Worth W; Pines, Jesse M
2011-08-01
Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0 × 10(9)-25 × 10(9)/L was 0.33; for 25 × 10(9)-50 × 10(9)/L, 1.06; for 50 × 10(9)-100 × 10(9)/L, 3.59; and exceeding 100 × 10(9)/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (-LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10(9)/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10(9)/L) can increase, but not decrease, the probability of septic arthritis. 
Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. © 2011 by the Society for Academic Emergency Medicine.
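To make the likelihood ratio arithmetic above concrete, the short Python sketch below converts the reported pretest prevalence (27%) into posttest probabilities via the odds form of Bayes' rule. The function name and loop are illustrative; the numbers are the point estimates quoted in the abstract.

def posttest_probability(pretest_p, lr):
    # Bayes' rule in odds form: posttest odds = pretest odds x likelihood ratio.
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.27   # ED prevalence quoted in the abstract
for label, lr in [("recent joint surgery, +LR 6.9", 6.9),
                  ("skin infection over prosthetic joint, +LR 15.0", 15.0),
                  ("sWBC 50-100e9/L, interval LR 3.59", 3.59),
                  ("sWBC 0-25e9/L, interval LR 0.33", 0.33)]:
    print(f"{label}: posttest probability {posttest_probability(pretest, lr):.2f}")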
Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia
2009-06-30
Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to explore them easily even with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
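As a toy illustration of the likelihood-free machinery underlying ABCμ, the following Python sketch runs plain ABC rejection on a Poisson model. The explicit error-term priors and model-diagnostic layer of ABCμ are omitted, and all numbers, names, and the toy model are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.poisson(4.0, size=200)          # stand-in "observed" data
s_obs = y_obs.mean()                        # checking function / summary

def simulate(theta, n=200):
    return rng.poisson(theta, size=n)

# Plain ABC rejection: keep prior draws whose simulated summary lands close
# to the observed one. ABCmu goes further by placing a prior on the error
# term itself, so the retained discrepancies double as model diagnostics.
draws, errors = [], []
for _ in range(20_000):
    theta = rng.uniform(0.0, 10.0)          # prior draw
    eps = simulate(theta).mean() - s_obs    # discrepancy on the summary
    if abs(eps) < 0.2:
        draws.append(theta)
        errors.append(eps)

print(f"accepted {len(draws)}; posterior mean ~ {np.mean(draws):.2f}")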
The high prevalence of pathologic calcium crystals in pre-operative knees.
Derfus, Beth A; Kurian, Jason B; Butler, Jeffrey J; Daft, Laureen J; Carrera, Guillermo F; Ryan, Lawrence M; Rosenthal, Ann K
2002-03-01
Calcium pyrophosphate dihydrate (CPPD) and basic calcium phosphate (BCP) crystals are important in the pathogenesis of osteoarthritis (OA) but are under-recognized even in end-stage disease. We determined the prevalence of these calcium crystals in synovial fluid (SF) of persons undergoing total knee arthroplasty for degenerative arthritis. SF samples were obtained from 53 knee joints undergoing total arthroplasty for a pre-operative diagnosis of OA. SF was analyzed via compensated light microscopy for CPPD crystals and a semiquantitative radiometric assay for BCP crystals. Fifty pre-operative radiographs were analyzed and graded according to the scale of Kellgren and Lawrence. Patients had an average age of 70 years at the time of surgery. CPPD and/or BCP crystals were identified in 60% of SF. Overall radiographic scores correlated with mean concentrations of BCP crystals. Higher mean radiographic scores correlated with the presence of calcium-containing crystals of either type in SF. Radiographic chondrocalcinosis was identified in only 31% of those with SF CPPD. Pathologic calcium crystals were present in a majority of SF at the time of total knee arthroplasty. Intraoperative SF analysis could conveniently identify pathologic calcium crystals, providing information that may be relevant to the future care of the patient's replaced joint and that of other joints. This information could also potentially aid in predicting the likelihood of the need for contralateral total knee arthroplasty.
Adaptive control of center of mass (global) motion and its joint (local) origin in gait.
Yang, Feng; Pai, Yi-Chung
2014-08-22
Dynamic gait stability can be quantified by the relationship of the motion state (i.e. the position and velocity) between the body center of mass (COM) and its base of support (BOS). Humans learn how to adaptively control stability by regulating the absolute COM motion state and/or by controlling the BOS (through stepping) in a predictable manner, or by doing both simultaneously following an external perturbation that disrupts their regular relationship. Post repeated-slip perturbation training, for instance, older adults learned to shift their COM position forward while walking with a reduced step length, hence reducing their likelihood of slip-induced falls. How and to what extent each individual joint influences such adaptive alterations is mostly unknown. A three-dimensional individualized human kinematic model was established. Based on this model, sensitivity analysis was used to systematically quantify the influence of each lower limb joint on the COM position relative to the BOS and on the step length during gait. It was found that the leading foot had the greatest effect on regulating the COM position relative to the BOS, and both hips bore the most influence on the step length. These findings could guide cost-effective and efficient fall-reduction training paradigms for the older population. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multivariate longitudinal data analysis with censored and intermittent missing responses.
Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun
2018-05-08
The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.
Probability of spacesuit-induced fingernail trauma is associated with hand circumference.
Opperman, Roedolph A; Waldie, James M A; Natapoff, Alan; Newman, Dava J; Jones, Jeffrey A
2010-10-01
A significant number of astronauts sustain hand injuries during extravehicular activity training and operations. These hand injuries have been known to cause fingernail delamination (onycholysis) that requires medical intervention. This study investigated correlations between the anthropometrics of the hand and susceptibility to injury. The analysis explored the hypothesis that crewmembers with a high finger-to-hand size ratio are more likely to experience injuries. A database of 232 crewmembers' injury records and anthropometrics was sourced from NASA Johnson Space Center. No significant effect of finger-to-hand size was found on the probability of injury, but circumference and width of the metacarpophalangeal (MCP) joint were found to be significantly associated with injuries by the Kruskal-Wallis test. A multivariate logistic regression showed that hand circumference has the dominant effect on the likelihood of onycholysis. Male crewmembers with a hand circumference > 22.86 cm (9") have a 19.6% probability of finger injury, but those with hand circumferences ≤ 22.86 cm (9") have only a 5.6% chance of injury. Findings were similar for female crewmembers. This increased probability may be due to constriction at large MCP joints by the current NASA Phase VI glove. Constriction may lead to occlusion of vascular flow to the fingers that may increase the chances of onycholysis. Injury rates are lower on gloves such as the superseded series 4000 and the Russian Orlan that provide more volume for the MCP joint. This suggests that onycholysis could be reduced by modifying the design of the current gloves at the MCP joint.
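A logistic regression of this kind is straightforward to sketch. The Python example below fits injury probability against hand circumference on synthetic data loosely calibrated to the 5.6% vs 19.6% split reported above; the coefficients and dataset are invented, not the study's data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 232                                    # cohort size quoted above
circ = rng.normal(22.0, 1.5, n)            # hand circumference in cm (synthetic)
# Synthetic outcome; the intercept and slope are invented for illustration.
p_true = 1.0 / (1.0 + np.exp(-(-13.8 + 0.52 * circ)))
injury = rng.binomial(1, p_true)

model = LogisticRegression().fit(circ.reshape(-1, 1), injury)
for c in (21.0, 22.86, 24.0):
    prob = model.predict_proba([[c]])[0, 1]
    print(f"hand circumference {c:5.2f} cm -> P(onycholysis) ~ {prob:.3f}")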
Cosmology with the largest galaxy cluster surveys: going beyond Fisher matrix forecasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khedekar, Satej; Majumdar, Subhabrata, E-mail: satej@mpa-garching.mpg.de, E-mail: subha@tifr.res.in
2013-02-01
We make the first detailed MCMC likelihood study of cosmological constraints that are expected from some of the largest, ongoing and proposed, cluster surveys in different wave-bands and compare the estimates to the prevalent Fisher matrix forecasts. Mock catalogs of cluster counts expected from the surveys (eROSITA, WFXT, RCS2, DES, and Planck), along with a mock dataset of follow-up mass calibrations, are analyzed for this purpose. A fair agreement between MCMC and Fisher results is found only in the case of minimal models. However, for many cases, the marginalized constraints obtained from Fisher and MCMC methods can differ by factors of 30-100%. The discrepancy can be alarmingly large for a time-dependent dark energy equation of state, w(a); the Fisher methods are seen to under-estimate the constraints by as much as a factor of 4-5. Typically, Fisher estimates become more and more inappropriate as we move away from ΛCDM, to a constant-w dark energy, to varying-w dark energy cosmologies. Fisher analysis also predicts incorrect parameter degeneracies. There are noticeable offsets in the likelihood contours obtained from Fisher methods, caused by an asymmetry in the posterior likelihood distribution as seen through an MCMC analysis. From the point of view of mass-calibration uncertainties, a high value of unknown scatter about the mean mass-observable relation, and its redshift dependence, is seen to have large degeneracies with the cosmological parameters σ₈ and w(a) and can degrade the cosmological constraints considerably. We find that the addition of mass-calibrated cluster datasets can improve dark energy and σ₈ constraints by factors of 2-3 over what can be obtained from CMB+SNe+BAO alone. Finally, we show that a joint analysis of the datasets of two (or more) different cluster surveys would significantly tighten cosmological constraints from using clusters only. Since details of future cluster surveys are still being planned, we emphasize that optimal survey design must be done using MCMC analysis rather than Fisher forecasting.
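A one-parameter toy model shows why Fisher forecasts can misrepresent count-based constraints: the Fisher error is the curvature of the log-likelihood at the fiducial point, while a likelihood scan (what an MCMC maps out) retains the asymmetry. Everything below (bins, counts, the model) is invented for illustration.

import numpy as np

# Toy cluster-count forecast: counts in three mass bins follow
# N_i ~ Poisson(mu_i(theta)) with a deliberately nonlinear model.
def mu(theta):
    return np.array([80.0, 30.0, 8.0]) * theta**1.5

obs = np.array([85, 27, 9])

def loglike(theta):
    m = mu(theta)
    return float(np.sum(obs * np.log(m) - m))

# Fisher forecast: Gaussian error from the curvature of lnL at the fiducial.
t0, h = 1.0, 1e-4
curv = (loglike(t0 + h) - 2.0 * loglike(t0) + loglike(t0 - h)) / h**2
print(f"Fisher sigma: {1.0 / np.sqrt(-curv):.3f}")

# Full likelihood scan, which keeps the asymmetry the Fisher ellipse discards.
grid = np.linspace(0.7, 1.4, 2001)
lnl = np.array([loglike(t) for t in grid])
lnl -= lnl.max()
inside = grid[lnl > -0.5]                  # Delta lnL = 1/2 interval
print(f"likelihood-scan interval: [{inside.min():.3f}, {inside.max():.3f}]")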
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
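The hybrid idea can be sketched in a few lines: cohort counts contribute likelihood terms involving prevalence, sensitivity, and specificity, while case-control counts inform sensitivity and specificity only. This toy version fits one common parameter set by maximum likelihood and omits the random effects and composite-likelihood refinements of the paper; the counts are made up.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

# cohort studies: full 2x2 tables (TP, FN, FP, TN); invented counts
cohort = [(45, 10, 12, 133), (30, 8, 9, 103)]
# case-control studies: (TP, FN) among cases and (TN, FP) among controls
case_control = [(60, 15, 80, 20)]

def negloglik(x):
    pi, se, sp = expit(x)                 # map unconstrained params to (0,1)
    ll = 0.0
    for tp, fn, fp, tn in cohort:         # cohorts inform prevalence too
        ll += tp * np.log(pi * se) + fn * np.log(pi * (1 - se))
        ll += tn * np.log((1 - pi) * sp) + fp * np.log((1 - pi) * (1 - sp))
    for tp, fn, tn, fp in case_control:   # no prevalence information here
        ll += tp * np.log(se) + fn * np.log(1 - se)
        ll += tn * np.log(sp) + fp * np.log(1 - sp)
    return -ll

fit = minimize(negloglik, logit(np.array([0.3, 0.8, 0.9])), method="Nelder-Mead")
pi, se, sp = expit(fit.x)
ppv = pi * se / (pi * se + (1 - pi) * (1 - sp))
print(f"prevalence {pi:.3f}, sensitivity {se:.3f}, specificity {sp:.3f}, PPV {ppv:.3f}")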
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
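The core of length bias is easy to demonstrate numerically: sampling subjects with probability proportional to their survival time inflates the naive mean, and an inverse-size (Horvitz-Thompson style) weighting recovers the truth. This is a hedged sketch of the sampling mechanism only, unrelated to the authors' EM implementation.

import numpy as np

rng = np.random.default_rng(2)
# Underlying survival times T ~ Exponential(mean 2.0). Length-biased sampling
# observes a subject with probability proportional to T, so long survivors
# are over-represented: f_LB(t) = t f(t) / E[T].
t = rng.exponential(2.0, size=200_000)
accept = rng.uniform(0.0, t.max(), size=t.size) < t
biased = t[accept]

naive = biased.mean()                      # estimates E[T^2]/E[T] = 4, not 2
corrected = 1.0 / np.mean(1.0 / biased)    # inverse-size weighting recovers E[T]
print(f"true mean 2.00 | naive {naive:.2f} | weighted {corrected:.2f}")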
Clinical Evaluation and Physical Exam Findings in Patients with Anterior Shoulder Instability.
Lizzio, Vincent A; Meta, Fabien; Fidai, Mohsin; Makhni, Eric C
2017-12-01
The goal of this paper is to provide an overview of evaluating the patient with suspected or known anteroinferior glenohumeral instability. There is a high rate of recurrent subluxation or dislocation in young patients with a history of anterior shoulder dislocation, and recurrent instability increases the likelihood of further damage to the glenohumeral joint. Proper identification and treatment of anterior shoulder instability can dramatically reduce the rate of recurrent dislocation and prevent subsequent complications. Overall, the anterior release or surprise test demonstrates the best sensitivity and specificity for clinically diagnosing anterior shoulder instability, although other tests also have favorable sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and inter-rater reliabilities. Anterior shoulder instability is a relatively common injury in the young and athletic population. Combining the history with the apprehension, relocation, release or surprise, anterior load, and anterior drawer exam maneuvers optimizes sensitivity and specificity for accurately diagnosing anterior shoulder instability in clinical practice.
Dual baseline search for muon antineutrino disappearance at 0.1 eV² < Δm² < 100 eV²
Cheng, G.; Huelsnitz, W.; Aguilar-Arevalo, A. A.; ...
2012-09-25
The MiniBooNE and SciBooNE collaborations report the results of a joint search for short baseline disappearance of ν̄μ at Fermilab's Booster Neutrino Beamline. The MiniBooNE Cherenkov detector and the SciBooNE tracking detector observe antineutrinos from the same beam, therefore the combined analysis of their data sets serves to partially constrain some of the flux and cross section uncertainties. Uncertainties in the νμ background were constrained by neutrino flux and cross section measurements performed in both detectors. A likelihood ratio method was used to set a 90% confidence level upper limit on ν̄μ disappearance that dramatically improves upon prior limits in the Δm² = 0.1-100 eV² region.
Raj, Anita; Silverman, Jay G; Klugman, Jeni; Saggurti, Niranjan; Donta, Balaiah; Shakya, Holly B
2018-01-01
The purpose of this study was to assess via longitudinal analysis whether women's economic empowerment and financial inclusion predict incident IPV. This prospective study involved analysis of three waves of survey data collected from rural young married women (n = 853) in Maharashtra at baseline and at 9- and 18-month follow-ups. This study, which was in the field from 2012 to 2014, was conducted as part of a larger family planning evaluation study unrelated to economic empowerment. Participants were surveyed on economic empowerment, as measured by items on women's income generation and joint decision-making over the husband's income, and financial inclusion, as measured by bank account ownership. Women's land ownership and participation in microloan programs were also assessed but were too rare (2-3% reporting) to be included in analyses. Longitudinal regression models assessed whether women's economic empowerment predicted incident IPV at follow-up. At Wave 1 (baseline), one in ten women reported IPV in the past six months; 23% reported income generation; 58% reported having their own money; 61% reported joint control over the husband's money; and 10% reported bank account ownership. Women's income generation and having their own money did not predict IPV over time. However, women maintaining joint control over their husband's income were at a 60% reduced risk for subsequent incident IPV (AOR = 0.40; 95% CI = 0.18, 0.90), and women gaining joint control over time were at a 70% reduced risk for subsequent incident IPV (AOR = 0.30; 95% CI = 0.13, 0.72), relative to women whose husbands maintained sole control over their income. Women who initiated a new bank account by Wave 3 also had a 56% reduced likelihood of reporting incident IPV in that wave (AOR = 0.44; 95% CI = 0.22, 0.93), relative to those who maintained no bank account at Waves 1 and 3. These findings suggest that women's joint control over the husband's income and her financial inclusion, as indicated by bank account ownership, appear to reduce risk for IPV, whereas her income generation or control over her own income do not. Awareness of and participation in financial inclusion services may help reduce women's risk for IPV in rural India and elsewhere. Copyright © 2017. Published by Elsevier Ltd.
Guy, Gery P; M Johnston, Emily; Ketsche, Patricia; Joski, Peter; Adams, E Kathleen
2017-03-01
Numerous states have implemented policies expanding public insurance eligibility or subsidizing private insurance for parents. Our objective was to assess the impact of parental health insurance expansions from 1999 to 2012 on the likelihood that parents are insured; that their children are insured; that both the parent and child within a family unit are insured; and on the type of insurance. We conducted a cross-sectional analysis of the 2000-2013 March supplements to the Current Population Survey, with data from the Medical Expenditure Panel Survey-Insurance Component and the Area Resource File. Cross-state and within-state multivariable regression models estimated the effects of health insurance expansions targeting parents using two-way fixed-effects and difference-in-differences models. All analyses controlled for household, parent, child, and local area characteristics that could affect insurance status. Expansions increased parental coverage by 2.5 percentage points and increased the likelihood of both parent and child being insured by 2.1 percentage points. Substantial variation was observed by type of expansion. Public expansions without premiums and special subsidized plan expansions had the largest effects on parental coverage and increased the likelihood of jointly insuring both the parent and child. Higher premiums were a substantial deterrent to parents' insurance. Our findings suggest that premiums and the type of insurance expansion can have a substantial impact on the insurance status of the family. These findings can help inform states as they continue to make decisions about expanding Medicaid under the Affordable Care Act to cover all family members.
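A difference-in-differences estimate of this kind reduces to the interaction coefficient in a two-way regression. The sketch below, on synthetic data with a built-in 2.5-point effect echoing the abstract, is a minimal linear-probability illustration, not the authors' actual specification or data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 4000
expansion = rng.binomial(1, 0.5, n)        # lives in a state that expanded
post = rng.binomial(1, 0.5, n)             # observed after the policy change
# Linear-probability outcome with a built-in 2.5-point effect (synthetic).
p = 0.70 + 0.03 * expansion + 0.01 * post + 0.025 * expansion * post
insured = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([expansion, post, expansion * post]))
fit = sm.OLS(insured, X).fit(cov_type="HC1")   # robust standard errors
print(f"difference-in-differences estimate: {fit.params[3]:+.4f}")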
Improving estimates of genetic maps: a meta-analysis-based approach.
Stewart, William C L
2007-07-01
Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
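The combination rule at the heart of this meta-analysis approach is inverse-variance weighting, which is easy to state in code; the estimates and variances below are invented for the example.

import numpy as np

# Two independent estimates of the same inter-marker map distance (cM),
# combined by inverse-variance weighting as in a fixed-effect meta-analysis.
est = np.array([12.4, 10.9])     # map estimates from two studies (made-up)
var = np.array([1.2, 0.5])       # their sampling variances

w = 1.0 / var
combined = np.sum(w * est) / np.sum(w)
combined_var = 1.0 / np.sum(w)   # always smaller than either input variance
print(f"combined estimate {combined:.2f} cM, variance {combined_var:.3f}")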
Fermi LAT Stacking Analysis of Swift Localized GRBs
Ackermann, M.; Ajello, M.; Anderson, B.; ...
2016-05-05
In this paper, we perform a comprehensive stacking analysis of data collected by the Fermi Large Area Telescope (LAT) of γ-ray bursts (GRBs) localized by the Swift spacecraft, which were not detected by the LAT but which fell within the instrument's field of view at the time of trigger. We examine a total of 79 GRBs by comparing the observed counts over a range of time intervals to that expected from designated background orbits, as well as by using a joint likelihood technique to model the expected distribution of stacked counts. We find strong evidence for subthreshold emission at MeV to GeV energies using both techniques. This observed excess is detected during intervals that include and exceed the durations typically characterizing the prompt emission observed at keV energies and lasts at least 2700 s after the co-aligned burst trigger. By utilizing a novel cumulative likelihood analysis, we find that although a burst's prompt γ-ray and afterglow X-ray flux both correlate with the strength of the subthreshold emission, the X-ray afterglow flux measured by Swift's X-ray Telescope at 11 hr post trigger correlates far more significantly. Overall, the extended nature of the subthreshold emission and its connection to the burst's afterglow brightness lend further support to the external forward shock origin of the late-time emission detected by the LAT. Finally, these results suggest that the extended high-energy emission observed by the LAT may be a relatively common feature but remains undetected in a majority of bursts owing to instrumental threshold effects.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2016-01-15
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
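A minimal version of the marginal idea: each outcome gets a closed-form beta-binomial marginal, and the composite log-likelihood is simply the sum of the two marginal log-likelihoods, with no joint distribution specified. The sketch below fits made-up counts and omits the covariates and correlation parameter treated in the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

def log_betabinom(k, n, a, b):
    # Closed-form beta-binomial log-pmf: C(n,k) * B(k+a, n-k+b) / B(a, b)
    logC = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return logC + betaln(k + a, n - k + b) - betaln(a, b)

# One marginal per outcome (invented counts across three studies).
k1, n1 = np.array([40, 55, 62]), np.array([50, 70, 75])
k2, n2 = np.array([80, 66, 90]), np.array([90, 72, 100])

def neg_composite_loglik(params):
    a1, b1, a2, b2 = np.exp(params)        # keep shape parameters positive
    return -(log_betabinom(k1, n1, a1, b1).sum()
             + log_betabinom(k2, n2, a2, b2).sum())

fit = minimize(neg_composite_loglik, np.zeros(4), method="Nelder-Mead")
print("fitted shapes (a1, b1, a2, b2):", np.round(np.exp(fit.x), 2))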
Smith, Toby O; Simpson, Michael; Ejindu, Vivian; Hing, Caroline B
2013-04-01
The purpose of this study was to assess the diagnostic test accuracy of magnetic resonance imaging (MRI), magnetic resonance arthrography (MRA) and multidetector CT arthrography (MDCT) for assessing chondral lesions in the hip joint. A review of the published and unpublished literature databases was performed to identify all studies reporting the diagnostic test accuracy (sensitivity/specificity) of MRI, MRA or MDCT for the assessment of adults with chondral (cartilage) lesions of the hip, with surgical comparison (arthroscopic or open) as the reference test. All included studies were reviewed using the quality assessment of diagnostic accuracy studies appraisal tool. Pooled sensitivity, specificity, likelihood ratios and diagnostic odds ratios were calculated with 95% confidence intervals using a random-effects meta-analysis for MRI, MRA and MDCT imaging. Eighteen studies satisfied the eligibility criteria. These included 648 hips from 637 patients. MRI showed a pooled sensitivity of 0.59 (95% CI: 0.49-0.70) and specificity of 0.94 (95% CI: 0.90-0.97), and MRA sensitivity and specificity values were 0.62 (95% CI: 0.57-0.66) and 0.86 (95% CI: 0.83-0.89), respectively. The diagnostic test accuracy for the detection of hip joint cartilage lesions is currently superior for MRI compared with MRA. There were insufficient data to perform meta-analysis for MDCT or CT arthrography protocols. Based on the currently limited diagnostic test accuracy of magnetic resonance and CT techniques, arthroscopy remains the most accurate method of assessing chondral lesions in the hip joint.
Ten Cate, D F; Jacobs, J W G; Swen, W A A; Hazes, J M W; de Jager, M H; Basoski, N M; Haagsma, C J; Luime, J J; Gerards, A H
2018-01-30
At present, there are no prognostic parameters unequivocally predicting treatment failure in early rheumatoid arthritis (RA) patients. We investigated whether baseline ultrasonography (US) findings of joints, when added to baseline clinical, laboratory, and radiographical data, could improve prediction of failure to achieve Disease Activity Score assessing 28 joints (DAS28) remission (<2.6) at 1 year in newly diagnosed RA patients. A multicentre cohort of newly diagnosed RA patients was followed prospectively for 1 year. US of the hands, wrists, and feet was performed at baseline. Clinical, laboratory, and radiographical parameters were recorded. The primary analysis used logistic regression to predict the absence of DAS28 remission 12 months after diagnosis and the start of therapy. Of 194 patients included, 174 were used for the analysis, with complete data available for 159. In a multivariate model with baseline DAS28 (odds ratio (OR) 1.6, 95% confidence interval (CI) 1.2-2.2), the presence of rheumatoid factor (OR 2.3, 95% CI 1.1-5.1), and type of monitoring strategy (OR 0.2, 95% CI 0.05-0.85), the addition of baseline US results for joints (OR 0.96, 95% CI 0.89-1.04) did not significantly improve the prediction of failure to achieve DAS28 remission (likelihood ratio test, 1.04; p = 0.31). In an early RA population, adding baseline ultrasonography of the hands, wrists, and feet to commonly available baseline characteristics did not improve prediction of failure to achieve DAS28 remission at 12 months. Clinicaltrials.gov, NCT01752309. Registered on 19 December 2012.
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event considering both fuzziness and randomness of the failure criterion, design parameters, and measured data. The credibility distribution function is introduced as a novel way to represent the uncertainty of factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty and is suitable as an index value.
Joint Inference of Population Assignment and Demographic History
Choi, Sang Chul; Hey, Jody
2011-01-01
A new approach to assigning individuals to populations using genetic data is described. Most existing methods work by maximizing Hardy–Weinberg and linkage equilibrium within populations, neither of which will apply for many demographic histories. By including a demographic model, within a likelihood framework based on coalescent theory, we can jointly study demographic history and population assignment. Genealogies and population assignments are sampled from a posterior distribution using a general isolation-with-migration model for multiple populations. A measure of partition distance between assignments facilitates not only the summary of a posterior sample of assignments, but also the estimation of the posterior density for the demographic history. It is shown that joint estimates of assignment and demographic history are possible, including estimation of population phylogeny for samples from three populations. The new method is compared to results of a widely used assignment method, using simulated and published empirical data sets. PMID:21775468
Feature Screening in Ultrahigh Dimensional Cox's Model.
Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne
Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property: that is, with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite-sample performance of the proposed procedure and to compare it with existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.
Code of Federal Regulations, 2014 CFR
2014-10-01
... not designed to be used in situations where there is a reasonable likelihood of a collision with much... rail crossing at grade, a shared method of train control, or shared highway-rail grade crossings. 4.... You should explain the nature of such simultaneous joint use, the system of train control, the...
Azagba, Sunday; Sharaf, Mesbah F
2014-03-01
Research has shown that smoking menthol cigarettes promotes smoking initiation and hinders cessation efforts, especially among youth. The objective of this paper is to examine the association between menthol cigarette smoking and substance use among adolescent students in Canada. A nationally representative cross-sectional sample of 4466 Canadian students in grades 7 to 12 from the 2010-2011 Youth Smoking Survey is analyzed. A bivariate probit model is used to jointly examine the associations of menthol smoking status with binge drinking and marijuana use. Among current smokers in grades 7 to 12, 32% smoke mentholated cigarettes, 73% are binge drinkers, and 79% use marijuana. Results of the bivariate probit regression analysis, controlling for other covariates, show statistically significant differences in the likelihood of binge drinking and marijuana use between menthol and non-menthol smokers. Menthol cigarette smokers are 6% (ME = 0.06, 95% CI = 0.03-0.09) more likely to binge drink and 7% (ME = 0.07, 95% CI = 0.05-0.10) more likely to use marijuana. Smoking menthol cigarettes is associated with a higher likelihood of binge drinking and marijuana use among Canadian adolescents. Banning menthol in cigarettes may be beneficial to public health. Copyright © 2013 Elsevier Ltd. All rights reserved.
Long, Yi; Du, Zhi-Jiang; Chen, Chao-Feng; Dong, Wei; Wang, Wei-Dong
2017-07-01
The most important step for a lower extremity exoskeleton is to infer human motion intent (HMI), which contributes to achieving human-exoskeleton collaboration. Since the user is in the control loop, the relationship between human-robot interaction (HRI) information and HMI is nonlinear and complicated, and is therefore difficult to model using mathematical approaches. The nonlinear approximation can instead be learned using machine learning approaches. Gaussian Process (GP) regression is suitable for high-dimensional, small-sample nonlinear regression problems, but is restrictive for large data sets due to its computational complexity. In this paper, an online sparse GP algorithm is constructed to learn the HMI. The original training dataset is collected while the user wears the exoskeleton system, with friction compensation, and performs movement as unconstrained as possible. The dataset has two kinds of data: (1) physical HRI, collected by torque sensors placed at the interaction cuffs of the active joints, i.e., the knee joints; and (2) joint angular position, measured by optical position sensors. To reduce the computational complexity of GP, grey relational analysis (GRA) is utilized to refine the original dataset and provide the final training dataset. The hyper-parameters are optimized offline by maximizing the marginal likelihood and are then applied in the online GP regression algorithm. The HMI, i.e., the angular position of the human joints, is regarded as the reference trajectory for the mechanical legs. To verify the effectiveness of the proposed algorithm, experiments were performed on a subject walking at a natural speed. The experimental results show that the HMI can be obtained in real time, and the approach can be extended to and employed in similar exoskeleton systems.
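The regression core of this pipeline, GP hyperparameters tuned by maximizing the log marginal likelihood, can be sketched with scikit-learn. The sparsification and GRA subset-selection steps are omitted, and the torque-to-angle data below are synthetic stand-ins, not the paper's measurements.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
# Synthetic stand-in for the HRI -> intent mapping: scaled interaction
# torque in, knee-joint angle out.
x = rng.uniform(-1.0, 1.0, (80, 1))
y = 0.6 * np.sin(2.5 * x[:, 0]) + 0.05 * rng.normal(size=80)

# Hyperparameters (length scale, noise level) are tuned by maximizing the
# log marginal likelihood inside fit(), mirroring the offline step above.
kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)

mean, std = gp.predict(np.array([[0.3]]), return_std=True)
print(f"predicted angle {mean[0]:.3f} rad (+/- {std[0]:.3f})")
print("optimized kernel:", gp.kernel_)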
Hackney, James; Brummel, Sara; Newman, Mary; Scott, Shannon; Reinagel, Matthew; Smith, Jennifer
2015-09-01
We carried out a study to investigate how low-stiffness flooring may help prevent overuse injuries of the lower extremity in dancers. It was hypothesized that performing a ballet jump (sauté) on a reduced-stiffness dance floor would decrease maximum joint flexion angles and negative angular velocities at the hips, knees, or ankles compared to performing the same jump on a harder floor. The participants were 15 young adult female dancers (age range 18 to 28, mean = 20.89 ± 2.93 years) with at least 5 years of continuous ballet experience and without history of serious lower body injury, surgery, or recent pain. They performed sautés on a (low-stiffness) Harlequin® WoodSpring Floor and on a vinyl-covered hardwood-on-concrete floor. Maximum joint flexion angles and negative velocities at the bilateral hips, knees, and ankles were measured with the Ariel Performance Analysis System (APAS). Paired one-tailed t-tests yielded significant decreases in maximum knee angle (average decrease = 3.4° ± 4.2°, p = 0.026) and negative angular velocity of the ankles (average decrease = 18.7°/sec ± 27.9°/sec, p = 0.009) with low-stiffness flooring. If the knee angle is less acute, then the external knee flexion moment arm will also be shorter, resulting in a smaller external knee flexion moment for an equal landing force. Also, the high velocities of eccentric muscle contraction necessary to control negative angular velocity of the ankle joint are associated with higher risk of musculotendinous injury. Hence, our findings indicate that reduced floor stiffness may indeed help decrease the likelihood of lower extremity injuries.
NASA Astrophysics Data System (ADS)
Gaál, Ladislav; Szolgay, Ján; Bacigál, Tomáš; Kohnová, Silvia
2010-05-01
Copula-based estimation methods for hydro-climatological extremes have increasingly been gaining the attention of researchers and practitioners over the last couple of years. Unlike the traditional estimation methods, which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible statistical tool that allows for modelling dependencies between two or more variables, such as flood peaks and flood volumes, without making strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for the estimation of the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. Therefore, it is advisable to use statistical methods that improve any part of the copula construction process and result in more reliable design values of hydrological variables. The scarcity of the data sample, mostly in the extreme tail of the joint CDF, can be bypassed, e.g., by using a considerably larger amount of data simulated by rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In this paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Markov Chain Monte Carlo (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g., flood data from systematic measurements and historical flood records) in terms of a product of the corresponding likelihood functions. The MCMC algorithm, in turn, is a numerical approach for sampling from the likelihood distributions. Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in parameters and quantile metrics of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood prevention of Prague, the capital city of the Czech Republic. The record length of the available flood data is 126 years, from the period 1877-2002, and the flood event observed in 2002 that caused extensive damage and numerous casualties is treated as a historic one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness of fit is evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists on the Vltava River.
A power study of bivariate LOD score analysis of a complex trait and fear/discomfort with strangers
Ji, Fei; Lee, Dayoung; Mendell, Nancy Role
2005-01-01
Complex diseases are often reported along with disease-related traits (DRT). Sometimes investigators consider both disease and DRT phenotypes separately and sometimes they consider individuals as affected if they have either the disease or the DRT, or both. We propose instead to consider the joint distribution of the disease and the DRT and do a linkage analysis assuming a pleiotropic model. We evaluated our results through analysis of the simulated datasets provided by Genetic Analysis Workshop 14. We first conducted univariate linkage analysis of the simulated disease, Kofendrerd Personality Disorder and one of its simulated associated traits, phenotype b (fear/discomfort with strangers). Subsequently, we considered the bivariate phenotype, which combined the information on Kofendrerd Personality Disorder and fear/discomfort with strangers. We developed a program to perform bivariate linkage analysis using an extension to the Elston-Stewart peeling method of likelihood calculation. Using this program we considered the microsatellites within 30 cM of the gene pleiotropic for this simulated disease and DRT. Based on 100 simulations of 300 families we observed excellent power to detect linkage within 10 cM of the disease locus using the DRT and the bivariate trait. PMID:16451570
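For readers unfamiliar with LOD scores, the sketch below computes the classic single-marker version, log10 of the likelihood ratio between a recombination fraction θ and free recombination (θ = 0.5), for toy counts; the bivariate Elston-Stewart peeling extension described above is far more involved.

import numpy as np

# Single-marker LOD score for a fully informative backcross-style setting:
# k recombinants among n meioses; counts are invented for illustration.
def lod(k, n, theta):
    return (k * np.log10(theta) + (n - k) * np.log10(1.0 - theta)
            - n * np.log10(0.5))

k, n = 12, 100
thetas = np.linspace(0.01, 0.5, 500)
scores = lod(k, n, thetas)
print(f"max LOD {scores.max():.2f} at theta = {thetas[np.argmax(scores)]:.2f}")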
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics (AO) image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame AO images based on Gaussian image noise models. Combining the observing conditions and AO system characteristics, a predicted PSF model accounting for the wavefront phase effect is developed; we then derive iterative solution formulas for the AO image based on the proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for real AO image restoration.
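The Richardson-Lucy baseline the paper compares against is compact enough to show in full. This is the standard multiplicative update, not the authors' JMAP algorithm, and the image and PSF below are synthetic.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    # Standard multiplicative Richardson-Lucy update.
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

rng = np.random.default_rng(5)
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 1.0                                  # point-like source
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()                                           # Gaussian PSF
observed = fftconvolve(truth, psf, mode="same") + 0.01 * rng.normal(size=(64, 64))
restored = richardson_lucy(np.clip(observed, 1e-6, None), psf)
print(f"peak before {observed.max():.3f} vs after {restored.max():.3f}")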
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2011-01-01
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10⁻²⁶ cm³/s at 5 GeV to about 5 × 10⁻²³ cm³/s at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (approximately 3 × 10⁻²⁶ cm³/s for a purely s-wave cross section), without assuming additional boost factors.
Islam, Ahmed Zohirul; Rahman, Mosiur; Mostofa, Md Golam
2017-10-01
This study aimed to explore the association between socio-demographic factors and contraceptive use among fecund women under 25 years old. The study utilized cross-sectional data (n = 3744) extracted from the Bangladesh Demographic and Health Survey 2011. Differences in the use of contraceptives by socio-demographic characteristics were assessed by χ² analyses. Binary logistic regression was used to identify the determinants of contraceptive use among young women. This study observed that 71% of fecund women aged below 25 years used contraceptives. Obtaining family planning (FP) methods from FP workers increases the likelihood of using contraceptives among young women, because outreach activities by FP workers and accessibility of FP-related information pave the way for contraceptive use. Husband-wife joint participation in decision-making on health care increases the likelihood of using contraceptives. Participation of women in decision-making on health care could be achieved by promoting higher education and gainful employment for women. Reproductive and sex education should be introduced in schools to prepare the young for healthy and responsible living. Moreover, policy makers should focus on developing negotiation skills in young women by creating educational and employment opportunities, since husband-wife joint participation in decision-making increases contraceptive use. Copyright © 2017 Elsevier B.V. All rights reserved.
Vet, Raymond; de Wit, John B F; Das, Enny
2014-02-01
This study assessed the separate and joint effects of having a goal intention and the completeness of implementation intention formation on the likelihood of attending an appointment to obtain vaccination against the hepatitis B virus among men who have sex with men (MSM) in the Netherlands. Extending previous research, it was hypothesized that, to be effective in promoting vaccination, implementation intention formation requires not only a strong goal intention but also complete details specifying when, where, and how to make an appointment to obtain hepatitis B virus vaccination. MSM at risk for hepatitis B virus (N = 616), with strong or weak intentions to obtain hepatitis B virus vaccination, were randomly assigned to form an implementation intention or not. Completeness of implementation intentions was rated, and hepatitis B virus vaccination uptake was assessed through data linkage with the joint vaccination registry of the collaborating Public Health Services. Having a strong goal intention to obtain hepatitis B virus vaccination and forming an implementation intention each significantly and independently increased the likelihood of MSM obtaining hepatitis B virus vaccination. In addition, MSM who formed complete implementation intentions were more successful in obtaining vaccination (p < 0.01). The formation of complete implementation intentions was promoted by strong goal intentions (p < 0.01).
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally distributed response data, including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at different numbers of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. The model can also be used for analysis of clustered data, where the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is estimated jointly with the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix is reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
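A minimal sketch of the kind of marginal likelihood MIXREG maximizes may help fix ideas: a random-intercept model with AR(1) errors, fit to simulated unbalanced longitudinal data by direct numerical optimization. This is illustrative only (MIXREG itself combines EM with Fisher scoring), and all names here are ours:

```python
# Sketch: marginal log-likelihood of a random-intercept model with AR(1)
# residuals, y_i ~ N(X_i b, s2_u * J + s2_e * R(rho)), maximized numerically.
import numpy as np
from scipy import optimize

def ar1_corr(n, rho):
    """AR(1) correlation matrix: R[j, k] = rho**|j - k|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def neg_loglik(theta, y_list, X_list):
    b = theta[:-3]
    s2_u, s2_e = np.exp(theta[-3]), np.exp(theta[-2])  # log-variances keep positivity
    rho = np.tanh(theta[-1])                           # keeps |rho| < 1
    nll = 0.0
    for y, X in zip(y_list, X_list):
        n = len(y)
        V = s2_u * np.ones((n, n)) + s2_e * ar1_corr(n, rho)
        r = y - X @ b
        _, logdet = np.linalg.slogdet(V)
        nll += 0.5 * (logdet + r @ np.linalg.solve(V, r) + n * np.log(2 * np.pi))
    return nll

# Unbalanced data: subjects measured at different numbers of timepoints
rng = np.random.default_rng(0)
y_list, X_list = [], []
for _ in range(40):
    n = rng.integers(3, 8)
    t = np.sort(rng.uniform(0, 5, n))
    X = np.column_stack([np.ones(n), t])
    e = rng.multivariate_normal(np.zeros(n), ar1_corr(n, 0.5))
    y = X @ np.array([2.0, 0.8]) + rng.normal() + e
    y_list.append(y); X_list.append(X)

res = optimize.minimize(neg_loglik, np.zeros(5), args=(y_list, X_list),
                        method="Nelder-Mead", options={"maxiter": 5000})
print("fixed effects:", res.x[:2], " AR(1) rho:", np.tanh(res.x[-1]))
```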
Precision growth index using the clustering of cosmic structures and growth data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pouri, Athina; Basilakos, Spyros; Plionis, Manolis, E-mail: athpouri@phys.uoa.gr, E-mail: svasil@academyofathens.gr, E-mail: mplionis@physics.auth.gr
2014-08-01
We use the clustering properties of Luminous Red Galaxies (LRGs) and the growth rate data provided by various galaxy surveys in order to constrain the growth index γ of the linear matter fluctuations. We perform a standard χ²-minimization procedure between theoretical expectations and data, followed by a joint likelihood analysis, and we find γ = 0.56 ± 0.05, perfectly consistent with the expectations of the ΛCDM model, and Ω_m0 = 0.29 ± 0.01, in very good agreement with the latest Planck results. Our analysis provides significantly more stringent growth index constraints than previous studies, the corresponding uncertainty being only ∼0.09γ. Finally, allowing γ to vary with redshift in two manners (a Taylor expansion around z = 0 and a Taylor expansion around the scale factor), we find that the combined statistical analysis of our clustering and literature growth data alleviates the degeneracy and yields more stringent constraints than other recent studies.
Multisite EPR oximetry from multiple quadrature harmonics.
Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C
2012-01-01
Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.
Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E
2017-11-10
A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
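The central point, that colored residuals invalidate the naive parameter covariance, can be illustrated for an ordinary linear model with a sandwich-type covariance built from sample residual autocovariances. This is a generic sketch under our own simplifying assumptions, not the expression derived in the paper:

```python
# Sketch: parameter covariance for least-squares/ML estimates when residuals
# are colored, via a sandwich form with a Toeplitz residual covariance built
# from sample autocovariances (illustrative linear-model stand-in).
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
# Colored noise: AR(1) with rho = 0.8
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.5)
y = X @ np.array([1.0, 2.0, -1.0]) + e

beta = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ beta

# Sample autocovariances of the residuals, truncated at lag L
L = 50
acov = np.array([np.mean(r[: n - k] * r[k:]) for k in range(L)])
Sigma = toeplitz(np.concatenate([acov, np.zeros(n - L)]))

XtX_inv = np.linalg.inv(X.T @ X)
cov_white = XtX_inv * np.mean(r**2)                  # naive white-noise covariance
cov_colored = XtX_inv @ X.T @ Sigma @ X @ XtX_inv    # colored-residual correction

print("naive std errors:    ", np.sqrt(np.diag(cov_white)))
print("corrected std errors:", np.sqrt(np.diag(cov_colored)))
```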
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2 × 10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
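A minimal sketch of the fixed-versus-adaptive comparison: epicenters are smoothed into a gridded rate model and scored by the joint Poisson log-likelihood of a later test catalog. The adaptive bandwidth below is the distance to the k-th nearest neighbor, a common choice; the paper's specific modifications for low-seismicity regions are not reproduced:

```python
# Sketch: fixed vs adaptive kernel smoothing of epicenters into a rate grid,
# scored by the joint Poisson log-likelihood of a later test catalog.
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(loc=[[0, 0]], scale=0.5, size=(200, 2))  # learning catalog
test = rng.normal(loc=[[0, 0]], scale=0.5, size=(50, 2))    # target catalog

gx, gy = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
cells = np.column_stack([gx.ravel(), gy.ravel()])

def rate_model(events, bandwidths, n_forecast):
    """Sum of Gaussian kernels, one per event, normalized to n_forecast events."""
    d2 = ((cells[:, None, :] - events[None, :, :]) ** 2).sum(axis=2)
    dens = (np.exp(-0.5 * d2 / bandwidths**2) / bandwidths**2).sum(axis=1)
    return n_forecast * dens / dens.sum()

def joint_loglik(rate):
    """Poisson log-likelihood of the test events given expected counts per cell."""
    ix = np.argmin(((cells[:, None, :] - test[None, :, :]) ** 2).sum(axis=2), axis=0)
    counts = np.bincount(ix, minlength=len(cells))
    return np.sum(counts * np.log(rate + 1e-300) - rate)

# Fixed smoothing: one bandwidth for all events
fixed = rate_model(train, np.full(len(train), 0.3), len(test))

# Adaptive smoothing: bandwidth = distance to the k-th nearest neighbor
d = np.sqrt(((train[:, None, :] - train[None, :, :]) ** 2).sum(axis=2))
h_adapt = np.sort(d, axis=1)[:, 5]            # k = 5, floored below for stability
adaptive = rate_model(train, np.maximum(h_adapt, 0.05), len(test))

print("fixed    L:", joint_loglik(fixed))
print("adaptive L:", joint_loglik(adaptive))
```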
Joint analysis of BICEP2/keck array and Planck Data.
Ade, P A R; Aghanim, N; Ahmed, Z; Aikin, R W; Alexander, K D; Arnaud, M; Aumont, J; Baccigalupi, C; Banday, A J; Barkats, D; Barreiro, R B; Bartlett, J G; Bartolo, N; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Benton, S J; Bernard, J-P; Bersanelli, M; Bielewicz, P; Bischoff, C A; Bock, J J; Bonaldi, A; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Brevik, J A; Bucher, M; Buder, I; Bullock, E; Burigana, C; Butler, R C; Buza, V; Calabrese, E; Cardoso, J-F; Catalano, A; Challinor, A; Chary, R-R; Chiang, H C; Christensen, P R; Colombo, L P L; Combet, C; Connors, J; Couchot, F; Coulais, A; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J-M; Désert, F-X; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dowell, C D; Duband, L; Ducout, A; Dunkley, J; Dupac, X; Dvorkin, C; Efstathiou, G; Elsner, F; Enßlin, T A; Eriksen, H K; Falgarone, E; Filippini, J P; Finelli, F; Fliescher, S; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Frejsel, A; Galeotta, S; Galli, S; Ganga, K; Ghosh, T; Giard, M; Gjerløw, E; Golwala, S R; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Halpern, M; Hansen, F K; Hanson, D; Harrison, D L; Hasselfield, M; Helou, G; Henrot-Versillé, S; Herranz, D; Hildebrandt, S R; Hilton, G C; Hivon, E; Hobson, M; Holmes, W A; Hovest, W; Hristov, V V; Huffenberger, K M; Hui, H; Hurier, G; Irwin, K D; Jaffe, A H; Jaffe, T R; Jewell, J; Jones, W C; Juvela, M; Karakci, A; Karkare, K S; Kaufman, J P; Keating, B G; Kefeli, S; Keihänen, E; Kernasovskiy, S A; Keskitalo, R; Kisner, T S; Kneissl, R; Knoche, J; Knox, L; Kovac, J M; Krachmalnicoff, N; Kunz, M; Kuo, C L; Kurki-Suonio, H; Lagache, G; Lähteenmäki, A; Lamarre, J-M; Lasenby, A; Lattanzi, M; Lawrence, C R; Leitch, E M; Leonardi, R; Levrier, F; Lewis, A; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Lueker, M; Macías-Pérez, J F; Maffei, B; Maino, D; Mandolesi, N; Mangilli, A; Maris, M; Martin, P G; Martínez-González, E; Masi, S; Mason, P; Matarrese, S; Megerian, K G; Meinhold, P R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschênes, M-A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; Nguyen, H T; Nørgaard-Nielsen, H U; Noviello, F; Novikov, D; Novikov, I; O'Brient, R; Ogburn, R W; Orlando, A; Pagano, L; Pajot, F; Paladini, R; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, T J; Perdereau, O; Perotto, L; Pettorino, V; Piacentini, F; Piat, M; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Pratt, G W; Prunet, S; Pryke, C; Puget, J-L; Rachen, J P; Reach, W T; Rebolo, R; Reinecke, M; Remazeilles, M; Renault, C; Renzi, A; Richter, S; Ristorcelli, I; Rocha, G; Rossetti, M; Roudier, G; Rowan-Robinson, M; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Savelainen, M; Savini, G; Schwarz, R; Scott, D; Seiffert, M D; Sheehy, C D; Spencer, L D; Staniszewski, Z K; Stolyarov, V; Sudiwala, R; Sunyaev, R; Sutton, D; Suur-Uski, A-S; Sygnet, J-F; Tauber, J A; Teply, G P; Terenzi, L; Thompson, K L; Toffolatti, L; Tolan, J E; Tomasi, M; Tristram, M; Tucci, M; Turner, A D; Valenziano, L; Valiviita, J; Van Tent, B; Vibert, L; Vielva, P; Vieregg, A G; Villa, F; Wade, L A; Wandelt, B D; Watson, R; Weber, A C; Wehus, I K; White, M; White, S D M; Willmert, J; Wong, C L; Yoon, K W; Yvon, D; Zacchei, A; 
Zonca, A
2015-03-13
We report the results of a joint analysis of data from BICEP2/Keck Array and Planck. BICEP2 and Keck Array have observed the same approximately 400 deg^{2} patch of sky centered on RA 0 h, Dec. -57.5°. The combined maps reach a depth of 57 nK deg in Stokes Q and U in a band centered at 150 GHz. Planck has observed the full sky in polarization at seven frequencies from 30 to 353 GHz, but much less deeply in any given region (1.2 μK deg in Q and U at 143 GHz). We detect 150×353 cross-correlation in B modes at high significance. We fit the single- and cross-frequency power spectra at frequencies ≥150 GHz to a lensed-ΛCDM model that includes dust and a possible contribution from inflationary gravitational waves (as parametrized by the tensor-to-scalar ratio r), using a prior on the frequency spectral behavior of polarized dust emission from previous Planck analysis of other regions of the sky. We find strong evidence for dust and no statistically significant evidence for tensor modes. We probe various model variations and extensions, including adding a synchrotron component in combination with lower frequency data, and find that these make little difference to the r constraint. Finally, we present an alternative analysis which is similar to a map-based cleaning of the dust contribution, and show that this gives similar constraints. The final result is expressed as a likelihood curve for r, and yields an upper limit r_{0.05}<0.12 at 95% confidence. Marginalizing over dust and r, lensing B modes are detected at 7.0σ significance.
Functional mapping of quantitative trait loci associated with rice tillering.
Liu, G F; Li, M; Wen, J; Du, Y; Zhang, Y-M
2010-10-01
Several biologically significant parameters that are related to rice tillering are closely associated with rice grain yield. Although identification of the genes that control rice tillering and therefore influence crop yield would be valuable for rice production management and genetic improvement, these genes remain largely unidentified. In this study, we carried out functional mapping of quantitative trait loci (QTLs) for rice tillering in 129 doubled haploid lines, which were derived from a cross between IR64 and Azucena. We measured the average number of tillers in each plot at seven developmental stages and fit the growth trajectory of rice tillering with the Wang-Lan-Ding mathematical model. Four biologically meaningful parameters in this model, namely the potential maximum tiller number (K), the optimum tillering time (t0), and the rate of increase (r) or the rate of decline (c) at the time of deviation from t0, were the defined variables for multi-marker joint analysis under the framework of penalized maximum likelihood, as well as for composite interval mapping. We detected a total of 27 QTLs that accounted for 2.49-8.54% of the total phenotypic variance. Nine common QTLs across multi-marker joint analysis and composite interval mapping showed high stability, while one QTL was environment-specific and three were epistatic. We also identified several genomic segments that are associated with multiple traits. Our results describe the genetic basis of rice tiller development, enable further marker-assisted selection in rice cultivar development, and provide useful information for rice production management.
Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy
Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.
2010-01-01
A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667
Collective-goal ascription increases cooperation in humans.
Mitkidis, Panagiotis; Sørensen, Jesper; Nielbo, Kristoffer L; Andersen, Marc; Lienard, Pierre
2013-01-01
Cooperation is necessary in many types of human joint activity and relations. Evidence suggests that cooperation has direct and indirect benefits for the cooperators. Given how beneficial cooperation is overall, it seems relevant to investigate the various ways of enhancing individuals' willingness to invest in cooperative endeavors. We studied whether ascription of a transparent collective goal in a joint action promotes cooperation in a group. A total of 48 participants were assigned in teams of 4 individuals to either a "transparent goal-ascription" or an "opaque goal-ascription" condition. After the manipulation, the participants played an anonymous public goods game with another member of their team. We measured the willingness of participants to cooperate and their expectations about the other player's contribution. Between subjects analyses showed that transparent goal ascription impacts participants' likelihood to cooperate with each other in the future, thereby greatly increasing the benefits from social interactions. Further analysis showed that this could be explained with a change in expectations about the partner's behavior and by an emotional alignment of the participants. The study found that a transparent goal ascription is associated with an increase of cooperation. We propose several high-level mechanisms that could explain the observed effect: general affect modulation, trust, expectation and perception of collective efficacy.
Prigerson, H G; Shear, M K; Bierhals, A J; Zonarich, D L; Reynolds, C F
1996-01-01
The purpose of this study was to examine the ways in which childhood adversity, attachment and personality styles influenced the likelihood of having an anxiety disorder among aged caregivers for terminally ill spouses. We also sought to determine how childhood adversity and attachment/personality styles jointly influenced the likelihood of developing an anxiety disorder among aged caregivers. Data were derived from semistructured interviews with 50 spouses (aged 60 and above) of terminally ill patients. The Childhood Experience of Care and Abuse (CECA) record provided retrospective, behaviorally based information on childhood adversity. Measures of attachment and personality styles were obtained from self-report questionnaires, and the Structured Clinical Interview for the DSM-III-R (SCID) was used to determine diagnoses for anxiety disorders. Logistic regression models estimated the effects of childhood adversity, attachment/personality disturbances, and the interaction between the two on the likelihood of having an anxiety disorder. Results indicated that childhood adversity and paranoid, histrionic and self-defeating styles all directly increase the odds of having an anxiety disorder as an elderly spousal caregiver. In addition, childhood adversity in conjunction with borderline, antisocial and excessively dependent styles increased the likelihood of having an anxiety disorder. The results indicate the need to investigate further the interaction between childhood experiences and current attachment/personality styles in their effects on the development of anxiety disorders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georges, M.; Nielsen, D.; Mackinnon, M.
1995-02-01
We have exploited "progeny testing" to map quantitative trait loci (QTL) underlying the genetic variation of milk production in a selected dairy cattle population. A total of 1,518 sires, with progeny tests based on the milking performances of >150,000 daughters jointly, was genotyped for 159 autosomal microsatellites bracketing 1,645 centimorgans, or approximately two thirds of the bovine genome. Using a maximum likelihood multilocus linkage analysis accounting for variance heterogeneity of the phenotypes, we identified five chromosomes giving very strong evidence (LOD score ≥ 3) for the presence of a QTL controlling milk production: chromosomes 1, 6, 9, 10 and 20. These findings demonstrate that loci with considerable effects on milk production are still segregating in highly selected populations and pave the way toward marker-assisted selection in dairy cattle breeding.
Bellamy, Gail R; Stone, Kendall; Richardson, Sally K; Goldsteen, Raymond L
2003-01-01
With funding from the 21st Century Challenge Fund, the West Virginia Rural Health Access Program created Transportation for Health, a demonstration project for rural nonemergency medical transportation. The project was implemented in 3 sites around the state, building on existing transportation systems--specifically, a multicounty transit authority, a joint senior center/transit system, and a senior services center. An evaluation of the project was undertaken to answer 3 major questions: (1) Did the project reach the population of people who need transportation assistance? (2) Are users of the transportation project satisfied with the service? (3) Is the program sustainable? Preliminary results from survey data indicate that the answers to questions 1 and 2 are affirmative. A break-even analysis of all 3 sites begins to identify programmatic and policy issues that challenge the likelihood of financial sustainability, including salary expenses, unreimbursed mileage, and reliance on Medicaid reimbursement.
Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco
2016-03-01
We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, provides great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Brady, David; Fullerton, Andrew S; Cross, Jennifer Moren
2010-01-01
Despite its centrality to contemporary inequality, working poverty is often popularly discussed but rarely studied by sociologists. Using the Luxembourg Income Study (2009), we analyze whether an individual is working poor across 18 affluent democracies circa 2000. We demonstrate that working poverty does not simply mirror overall poverty and that there is greater cross-national variation in working than overall poverty. We then examine four explanations for working poverty: demographic characteristics, economic performance, unified theory, and welfare generosity. We utilize Heckman probit models to jointly model the likelihood of employment and poverty among the employed. Our analyses provide the least support for the economic performance explanation. There is modest support for unified theory as unionization reduces working poverty in some models. However, most of these effects appear to be mediated by welfare generosity. More substantial evidence exists for the demographic characteristics and welfare generosity explanations. An individual's likelihood of being working poor can be explained by (a) a lack of multiple earners or other adults in one's household, low education, single motherhood, having children and youth; and (b) the generosity of the welfare state in which he or she resides. Also, welfare generosity does not undermine employment and reduces working poverty even among demographically vulnerable groups. Ultimately, we encourage a greater role for the welfare state in debates about working poverty.
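A sketch of the joint modeling idea: a bivariate probit with selection, in which poverty status is observed only for the employed and the two error terms are correlated. This is a generic stand-in for the Heckman probit models named in the abstract, fit to simulated data (the bivariate normal CDF loop is slow, so the sample is kept small):

```python
# Sketch: Heckman-style probit with selection. Employment S* = Z g + u;
# poverty Y* = X b + v is observed only when employed; (u, v) are standard
# bivariate normal with correlation rho, estimated jointly by ML.
import numpy as np
from scipy import optimize, stats

def biv_cdf(a, b, rho):
    """P(U <= a_i, V <= b_i) for a standard bivariate normal, elementwise."""
    return np.array([stats.multivariate_normal.cdf([ai, bi],
                                                   cov=[[1, rho], [rho, 1]])
                     for ai, bi in zip(a, b)])

def neg_loglik(theta, Z, X, S, Y):
    g, b, rho = theta[:2], theta[2:4], np.tanh(theta[4])
    zi, xi = Z @ g, X @ b
    ll = np.sum(stats.norm.logcdf(-zi[S == 0]))                  # not employed
    e1, e0 = (S == 1) & (Y == 1), (S == 1) & (Y == 0)
    p11 = biv_cdf(zi[e1], xi[e1], rho)                           # employed and poor
    p10 = stats.norm.cdf(zi[e0]) - biv_cdf(zi[e0], xi[e0], rho)  # employed, not poor
    return -(ll + np.sum(np.log(p11 + 1e-300)) + np.sum(np.log(p10 + 1e-300)))

rng = np.random.default_rng(3)
n = 400                                    # small sample keeps the demo fast
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u, v = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=n).T
S = (Z @ np.array([0.5, 1.0]) + u > 0).astype(int)
Y = np.where(S == 1, (X @ np.array([-0.8, 0.7]) + v > 0).astype(int), 0)

res = optimize.minimize(neg_loglik, np.zeros(5), args=(Z, X, S, Y),
                        method="Nelder-Mead", options={"maxiter": 3000})
print("error correlation rho:", np.tanh(res.x[4]))
```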
Graffelman, Jan; Weir, Bruce S
2018-02-01
Standard statistical tests for equality of allele frequencies in males and females and tests for Hardy-Weinberg equilibrium are tightly linked by their assumptions. Tests for equality of allele frequencies assume Hardy-Weinberg equilibrium, whereas the usual chi-square or exact test for Hardy-Weinberg equilibrium assumes equality of allele frequencies in the sexes. In this paper, we break this interdependence in assumptions by proposing an omnibus exact test that can test both hypotheses jointly, as well as a likelihood ratio approach that permits these phenomena to be tested both jointly and separately. The tests are illustrated with data from the 1000 Genomes project. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
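A likelihood-ratio version of the idea can be sketched for a biallelic autosomal marker: multinomial log-likelihoods for genotype counts by sex are compared under nested hypotheses, testing Hardy-Weinberg equilibrium and allele-frequency equality jointly or separately. The counts below are illustrative, not the 1000 Genomes data:

```python
# Sketch: likelihood-ratio tests linking Hardy-Weinberg equilibrium (HWE) and
# equality of allele frequencies across the sexes, for a biallelic marker.
import numpy as np
from scipy import stats

# Genotype counts [AA, AB, BB]; illustrative numbers only
males = np.array([120, 160, 70])
females = np.array([130, 190, 60])

def ll_multinomial(counts, probs):
    return np.sum(counts * np.log(probs))

def ll_saturated(counts):
    return ll_multinomial(counts, counts / counts.sum())

def hwe_probs(p):
    return np.array([p**2, 2 * p * (1 - p), (1 - p) ** 2])

def mle_allele_freq(counts):
    """Allele-count estimator, which is the MLE under HWE."""
    return (2 * counts[0] + counts[1]) / (2 * counts.sum())

ll_full = ll_saturated(males) + ll_saturated(females)              # 4 free params
pm, pf = mle_allele_freq(males), mle_allele_freq(females)
ll_hwe = (ll_multinomial(males, hwe_probs(pm))
          + ll_multinomial(females, hwe_probs(pf)))                # 2 free params
p_pool = mle_allele_freq(males + females)
ll_joint = (ll_multinomial(males, hwe_probs(p_pool))
            + ll_multinomial(females, hwe_probs(p_pool)))          # 1 free param

# Joint test: HWE and equal allele frequencies simultaneously (3 df)
G2 = 2 * (ll_full - ll_joint)
print("joint test  G2 = %.2f, p = %.3f" % (G2, stats.chi2.sf(G2, df=3)))
# Separate test of equal allele frequencies, assuming HWE holds (1 df)
G2_eq = 2 * (ll_hwe - ll_joint)
print("equality    G2 = %.2f, p = %.3f" % (G2_eq, stats.chi2.sf(G2_eq, df=1)))
```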
A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun
2014-11-01
In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple-access interference, struggle to balance detection performance against computational complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry, Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework, we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge-preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross-channel regularization term further lowers the RMS error for both methods, especially in the case of low SNR.

The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning".

Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework, a closed-form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramér-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint covariance matrix, the known-object Cramér-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, which incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
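The fixed-effects core of a hurdle model, a logistic part for zero versus nonzero and a zero-truncated Poisson for the positive counts, can be sketched as follows; the random effects and h-likelihood machinery of the paper are omitted here:

```python
# Sketch: a fixed-effects Poisson hurdle model, fit by maximum likelihood.
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
p_pos = 1 / (1 + np.exp(-(-0.3 + 0.8 * x)))       # P(count > 0)
lam_true = np.exp(0.5 + 0.6 * x)                  # mean of untruncated Poisson
y = np.where(rng.uniform(size=n) < p_pos,
             np.maximum(1, rng.poisson(lam_true)), 0)  # crude truncated draw

def neg_loglik(theta):
    a0, a1, b0, b1 = theta
    eta = a0 + a1 * x
    lam = np.exp(b0 + b1 * x)
    ll_zero = -np.logaddexp(0, eta)                      # log P(y = 0)
    ll_pos = (eta - np.logaddexp(0, eta)                 # log P(y > 0)
              + y * np.log(lam) - lam - special.gammaln(y + 1)
              - np.log1p(-np.exp(-lam)))                 # zero-truncation term
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = optimize.minimize(neg_loglik, np.zeros(4), method="BFGS")
print("estimates (a0, a1, b0, b1):", res.x)
```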
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
Zou, W; Ouyang, H
2016-02-01
We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrink them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
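The shrinkage idea behind MEA can be sketched with a simple normal-normal empirical-Bayes model that pulls per-variant MLEs toward a regional mean; the paper's actual hierarchical model is not reproduced here, and all numbers are simulated:

```python
# Sketch: normal-normal empirical-Bayes shrinkage of per-variant effect
# estimates toward a regional mean, in the spirit of the MEA idea.
import numpy as np

rng = np.random.default_rng(5)
true_effects = np.zeros(30)             # mostly null region
true_effects[:3] = 0.5                  # a few real signals
se = np.full(30, 0.25)                  # standard errors of the MLEs
mle = true_effects + rng.normal(0, se)  # per-variant MLE estimates

# Method-of-moments estimates of the regional mean and between-variant variance
mu = mle.mean()
tau2 = max(mle.var() - np.mean(se**2), 1e-6)

# Posterior mean shrinks each MLE toward the regional mean mu
w = tau2 / (tau2 + se**2)
shrunk = w * mle + (1 - w) * mu

mse = lambda est: np.mean((est - true_effects) ** 2)
print("MSE of raw MLE:   %.4f" % mse(mle))
print("MSE after shrink: %.4f" % mse(shrunk))
```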
Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements
Wang, Liming; Huang, Jiaji; Yuan, Xin; ...
2015-09-17
The measurement matrix employed in compressive sensing typically cannot be known precisely a priori and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest as well when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop this idea with application to Poisson systems. A collaborative maximum likelihood algorithm and alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced, to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.
Saver, Jeffrey L; Gornbein, Jeffrey; Grotta, James; Liebeskind, David; Lutsep, Helmi; Schwamm, Lee; Scott, Phillip; Starkman, Sidney
2009-07-01
Measures of a therapy's effect size are important guides to clinicians, patients, and policy-makers on treatment decisions in clinical practice. The ECASS 3 trial demonstrated a statistically significant benefit of intravenous tissue plasminogen activator for acute cerebral ischemia in the 3- to 4.5-hour window, but an effect size estimate incorporating benefit and harm across all levels of poststroke disability has not previously been derived. Joint outcome table specification was used to derive number needed to treat to benefit (NNTB) and number needed to treat to harm (NNTH) values summarizing treatment impact over the entire outcome range on the modified Rankin scale of global disability, including both expert-dependent and expert-independent (algorithmic and repeated random sampling) array generation. For the full 7-category modified Rankin scale, algorithmic analysis demonstrated that the NNTB for 1 additional patient to have a better outcome by ≥1 grade than with placebo must lie between 4.0 and 13.0. In bootstrap simulations, the mean NNTB was 7.1. Expert joint outcome table analyses indicated that the NNTB for improved final outcome was 6.1 (95% CI, 5.6-6.7) and the NNTH 37.5 (95% CI, 34.6-40.5). Benefit per 100 patients treated was 16.3 and harm per 100 was 2.7. The likelihood of help to harm ratio was 6.0. Treatment with tissue plasminogen activator in the 3- to 4.5-hour window confers benefit on approximately half as many patients as treatment <3 hours, with no increase in the conferral of harm. Approximately 1 in 6 patients has a better and 1 in 35 has a worse outcome as a result of therapy.
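The reported NNT values follow directly from the per-100-patients figures, since number-needed-to-treat is the reciprocal of the absolute risk difference, as this small check confirms:

```python
# Check: NNT values are reciprocals of absolute risk differences, using the
# per-100-patients figures quoted in the abstract.
benefit_per_100 = 16.3   # patients with improved outcome per 100 treated
harm_per_100 = 2.7       # patients with worsened outcome per 100 treated

nntb = 100 / benefit_per_100
nnth = 100 / harm_per_100
print(f"NNTB ~ {nntb:.1f}")                               # ~6.1, as reported
print(f"NNTH ~ {nnth:.1f}")                               # ~37, as reported
print(f"likelihood of help vs harm ~ {nnth / nntb:.1f}")  # ~6.0, as reported
```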
NASA Astrophysics Data System (ADS)
Mearns, L. O.; Sain, S. R.; McGinnis, S. A.; Steinschneider, S.; Brown, C. M.
2015-12-01
In this talk we present the development of a joint Bayesian Probabilistic Model for the climate change results of the North American Regional Climate Change Assessment Program (NARCCAP) that uses a unique prior in the model formulation. We use the climate change results (joint distribution of seasonal temperature and precipitation changes (future vs. current)) from the global climate models (GCMs) that provided boundary conditions for the six different regional climate models used in the program as informative priors for the bivariate Bayesian Model. The two variables involved are seasonal temperature and precipitation over sub-regions (i.e., Bukovsky Regions) of the full NARCCAP domain. The basic approach to the joint Bayesian hierarchical model follows the approach of Tebaldi and Sansó (2009). We compare model results using informative (i.e., GCM information) as well as uninformative priors. We apply these results to the Water Evaluation and Planning System (WEAP) model for the Colorado Springs Utility in Colorado. We investigate the layout of the joint pdfs in the context of the water model sensitivities to ranges of temperature and precipitation results to determine the likelihoods of future climate conditions that cannot be accommodated by possible adaptation options. Comparisons may also be made with joint pdfs formed from the CMIP5 collection of global climate models and empirically downscaled to the region of interest.
A stochastic Iwan-type model for joint behavior variability modeling
NASA Astrophysics Data System (ADS)
Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.
2015-08-01
This paper focuses on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data are available and exhibit a large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction differing from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e. stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts, and their corresponding parameters, i.e. the hyperparameters of the model, are identified using a maximum likelihood strategy. Proceeding with a Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fit well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions, which well supports the adequacy of the modeling effort.
NASA Astrophysics Data System (ADS)
Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò
2018-05-01
Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Bromaghin, Jeffrey F.; Gates, Kenneth S.; Palmer, Douglas E.
2010-01-01
Many fisheries for Pacific salmon Oncorhynchus spp. are actively managed to meet escapement goal objectives. In fisheries where the demand for surplus production is high, an extensive assessment program is needed to achieve the opposing objectives of allowing adequate escapement and fully exploiting the available surplus. Knowledge of abundance is a critical element of such assessment programs. Abundance estimation using mark-recapture experiments in combination with telemetry has become common in recent years, particularly within Alaskan river systems. Fish are typically captured and marked in the lower river while migrating in aggregations of individuals from multiple populations. Recapture data are obtained using telemetry receivers that are co-located with abundance assessment projects near spawning areas, which provide large sample sizes and information on population-specific mark rates. When recapture data are obtained from multiple populations, unequal mark rates may reflect a violation of the assumption of homogeneous capture probabilities. A common analytical strategy is to test the hypothesis that mark rates are homogeneous and combine all recapture data if the test is not significant. However, mark rates are often low, and a test of homogeneity may lack sufficient power to detect meaningful differences among populations. In addition, differences among mark rates may provide information that could be exploited during parameter estimation. We present a temporally stratified mark-recapture model that permits capture probabilities and migratory timing through the capture area to vary among strata. Abundance information obtained from a subset of populations after the populations have segregated for spawning is jointly modeled with telemetry distribution data by use of a likelihood function. Maximization of the likelihood produces estimates of the abundance and timing of individual populations migrating through the capture area, thus yielding substantially more information than the total abundance estimate provided by the conventional approach. The utility of the model is illustrated with data for coho salmon O. kisutch from the Kasilof River in south-central Alaska.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Berenji, B; Blandford, R D; Bloom, E D; Bonamente, E; Borgland, A W; Bregeon, J; Brigida, M; Bruel, P; Buehler, R; Burnett, T H; Buson, S; Caliandro, G A; Cameron, R A; Cañadas, B; Caraveo, P A; Casandjian, J M; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Ciprini, S; Claus, R; Cohen-Tanugi, J; Conrad, J; Cutini, S; de Angelis, A; de Palma, F; Dermer, C D; Digel, S W; do Couto e Silva, E; Drell, P S; Drlica-Wagner, A; Falletti, L; Favuzzi, C; Fegan, S J; Ferrara, E C; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Gehrels, N; Germani, S; Giglietto, N; Giordano, F; Giroletti, M; Glanzman, T; Godfrey, G; Grenier, I A; Guiriec, S; Gustafsson, M; Hadasch, D; Hayashida, M; Hays, E; Hughes, R E; Jeltema, T E; Jóhannesson, G; Johnson, R P; Johnson, A S; Kamae, T; Katagiri, H; Kataoka, J; Knödlseder, J; Kuss, M; Lande, J; Latronico, L; Lionetto, A M; Llena Garde, M; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Mazziotta, M N; McEnery, J E; Mehault, J; Michelson, P F; Mitthumsiri, W; Mizuno, T; Monte, C; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Naumann-Godo, M; Norris, J P; Nuss, E; Ohsugi, T; Okumura, A; Omodei, N; Orlando, E; Ormes, J F; Ozaki, M; Paneque, D; Parent, D; Pesce-Rollins, M; Pierbattista, M; Piron, F; Pivato, G; Porter, T A; Profumo, S; Rainò, S; Razzano, M; Reimer, A; Reimer, O; Ritz, S; Roth, M; Sadrozinski, H F-W; Sbarra, C; Scargle, J D; Schalk, T L; Sgrò, C; Siskind, E J; Spandre, G; Spinelli, P; Strigari, L; Suson, D J; Tajima, H; Takahashi, H; Tanaka, T; Thayer, J G; Thayer, J B; Thompson, D J; Tibaldo, L; Tinivella, M; Torres, D F; Troja, E; Uchiyama, Y; Vandenbroucke, J; Vasileiou, V; Vianello, G; Vitale, V; Waite, A P; Wang, P; Winer, B L; Wood, K S; Wood, M; Yang, Z; Zimmer, S; Kaplinghat, M; Martinez, G D
2011-12-09
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10⁻²⁶ cm³ s⁻¹ at 5 GeV to about 5 × 10⁻²³ cm³ s⁻¹ at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (∼3 × 10⁻²⁶ cm³ s⁻¹ for a purely s-wave cross section), without assuming additional boost factors.
Mikolai, Júlia; Kulu, Hill
2018-02-01
This study investigates the effect of marital and nonmarital separation on individuals' residential and housing trajectories. Using rich data from the British Household Panel Survey (BHPS) and applying multilevel competing-risks event history models, we analyze the risk of a move of single, married, cohabiting, and separated men and women to different housing types. We distinguish moves due to separation from moves of separated people and account for unobserved codeterminants of moving and separation risks. Our analysis shows that many individuals move due to separation, as expected, but that the likelihood of moving is also relatively high among separated individuals. We find that separation has a long-term effect on individuals' residential careers. Separated women exhibit high moving risks regardless of whether they moved out of the joint home upon separation, whereas separated men who did not move out upon separation are less likely to move. Interestingly, separated women are most likely to move to terraced houses, whereas separated men are equally likely to move to flats (apartments) and terraced (row) houses, suggesting that family structure shapes moving patterns of separated individuals.
Biomechanics of injury prediction for anthropomorphic manikins - preliminary design considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engin, A.E.
1996-12-31
Anthropomorphic manikins are used in automobile safety research as well as in aerospace-related applications. There is now a strong need to advance biomechanics knowledge in order to determine appropriate criteria for predicting injury likelihood as a function of manikin-measured responses. In this paper, three regions of a manikin, namely the head, knee joint, and lumbar spine, are taken as examples to introduce preliminary design considerations for injury prediction by means of responses of theoretical models and strategically placed sensing devices.
Remote detection of riverine traffic using an ad hoc wireless sensor network
NASA Astrophysics Data System (ADS)
Athan, Stephan P.
2005-05-01
Trafficking of illegal drugs on riverine and inland waterways continues to proliferate in South America. While there has been a successful joint effort to cut off overland and air trafficking routes, there exists a vast river network and Amazon region consisting of over 13,000 water miles that remains difficult to adequately monitor, increasing the likelihood of narcotics moving along this extensive river system. Hence, an effort is underway to provide remote unattended riverine detection in lieu of manned or attended detection measures.
Economic growth and carbon emission control
NASA Astrophysics Data System (ADS)
Zhang, Zhenyu
The question of whether environmental improvement is compatible with continued economic growth remains unclear and requires further study in a specific context. This study provides insight on the potential for carbon emissions control in the absence of international agreement, and connects the empirical analysis with a theoretical framework. The Chinese electricity generation sector is used as a case study to demonstrate the problem.

Both social planner and private problems are examined to derive the conditions that define the optimal level of production and pollution. The private problem is demonstrated under emission regulation using an emission tax, an input tax and an abatement subsidy, respectively. The socially optimal emission flow is imposed on the private problem. To provide tractable analytical results, a Cobb-Douglas type production function is used to describe the joint production process of the desired output and the undesired output (i.e., electricity and emissions). A modified Hamiltonian approach is employed to solve the system, and the steady state solutions are examined for policy implications. The theoretical analysis suggests that the ratio of emissions to desired output (referred to as the 'emission factor') is a function of productive capital and other parameters. The finding of a non-constant emission factor shows that reducing emissions without further cutting back the production of desired outputs is feasible under some circumstances.

Rather than an ad hoc specification, the optimal conditions derived from the theoretical framework are used to examine the relationship between desired output and emission level. Data come from the China Statistical Yearbook and China Electric Power Yearbook, and provincial information on electricity generation for the years 1993-2003 is used to estimate the Cobb-Douglas type joint production by the full information maximum likelihood (FIML) method. The empirical analysis sheds light on the optimal policies of emissions control required for achieving the social goal in a private context. The results suggest that the efficiency of abatement technology is crucial for the timing of executing the emission tax, and that an emission tax is preferred to an input tax as long as the detection of emissions is not costly and abatement technology is efficient. Keywords: Economic growth, Carbon emission, Power generation, Joint production, China
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function, we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters, we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds, in contrast to local approximation methods.
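The profile-likelihood construction can be sketched on a simpler stand-in, an exponential decay observed with log-normal noise, where the nuisance parameters are re-optimized at each fixed value of the parameter of interest; the PDE-constrained setting of the paper is not reproduced:

```python
# Sketch: profile likelihood for the decay rate k of y(t) = A exp(-k t)
# observed with log-normally distributed measurement noise.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(6)
t = np.linspace(0, 5, 40)
k_true, A_true, sigma = 0.7, 3.0, 0.1
y = A_true * np.exp(-k_true * t) * rng.lognormal(0, sigma, size=t.size)

def neg_loglik(params, k):
    log_A, log_sigma = params
    s = np.exp(log_sigma)
    resid = np.log(y) - (log_A - k * t)     # residuals on the log scale
    return 0.5 * np.sum(resid**2) / s**2 + t.size * log_sigma

def profile(k):
    """Minimize over the nuisance parameters (A, sigma) at fixed k."""
    res = optimize.minimize(neg_loglik, x0=[1.0, 0.0], args=(k,),
                            method="Nelder-Mead")
    return res.fun

ks = np.linspace(0.4, 1.0, 31)
prof = np.array([profile(k) for k in ks])
# 95% profile interval: points within chi2_1(0.95)/2 = 1.92 of the minimum
inside = ks[prof - prof.min() < 1.92]
print("k_hat ~ %.3f, 95%% CI ~ [%.3f, %.3f]"
      % (ks[np.argmin(prof)], inside[0], inside[-1]))
```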
NASA Astrophysics Data System (ADS)
Chen, Siyue; Leung, Henry; Dondo, Maxwell
2014-05-01
As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
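As a concrete reference point for the E-step/M-step cycle the abstract describes, here is a deliberately simplified sketch: a plain one-dimensional Gaussian-mixture EM on synthetic "alert feature" values. It omits the binary feature-saliency hidden variable that the proposed method adds inside this same loop.

```python
# Simplified EM for a two-component 1-D Gaussian mixture (no feature
# saliency). Synthetic data stand in for alert feature values.
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 200)])

K = 2
pi = np.full(K, 1.0 / K)                 # mixing weights
mu = np.array([-1.0, 1.0])               # initial means
var = np.array([1.0, 1.0])               # initial variances

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2 * np.pi * var)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, variances from responsibilities
    n_k = r.sum(axis=0)
    pi = n_k / x.size
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print(pi, mu, var)   # should recover weights ~(0.6, 0.4), means ~(0, 5)
```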
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that performs multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
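The thermodynamic (path-sampling) identity underlying the method is log p(D) = ∫₀¹ E_β[log L] dβ, where the expectation is taken over the power posterior at heating coefficient β. A minimal sketch on a toy conjugate problem, where each power posterior is Gaussian and can be sampled exactly rather than by MCMC; all values are synthetic.

```python
import numpy as np

y = 1.3                                    # single observation (synthetic)
betas = np.linspace(0.0, 1.0, 41)          # heating coefficients
rng = np.random.default_rng(3)

means = []
for b in betas:
    # Power posterior for y ~ N(theta, 1) with prior theta ~ N(0, 1):
    # Gaussian with precision 1 + b and mean b*y/(1 + b)
    theta = rng.normal(b * y / (1 + b), np.sqrt(1.0 / (1 + b)), 100_000)
    loglik = -0.5 * (y - theta) ** 2 - 0.5 * np.log(2 * np.pi)
    means.append(loglik.mean())            # E_beta[log L]

means = np.array(means)
# Trapezoid rule along the heating path gives the log marginal likelihood
log_ml_ti = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2)
log_ml_exact = -0.5 * y**2 / 2 - 0.5 * np.log(2 * np.pi * 2)  # y ~ N(0, 2)
print(log_ml_ti, log_ml_exact)             # the two should agree closely
```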
Cosmological Parameters and Hyper-Parameters: The Hubble Constant from Boomerang and Maxima
NASA Astrophysics Data System (ADS)
Lahav, Ofer
Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalise this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint likelihood function a set of `Hyper-Parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the Hyper-Parameters, is very simple to implement. We illustrate the method by estimating the Hubble constant H0 from different sets of recent CMB experiments (including Saskatoon, Python V, MSAM1, TOCO, Boomerang and Maxima). The approach can be generalised for a combination of cosmic probes, and for other priors on the Hyper-Parameters. Reference: Lahav, Bridle, Hobson, Lasenby & Sodre, 2000, MNRAS, in press (astro-ph/9912105)
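For the quoted case of uniform priors on the log of the Hyper-Parameters, the cited reference shows that marginalization replaces the usual χ² sum by Σ_j N_j ln χ²_j. A small synthetic sketch of the effect (all data and noise levels invented): a probe whose quoted errors are underestimated is automatically down-weighted relative to the standard joint fit.

```python
# Sketch: standard joint chi^2 fit vs. the hyper-parameter effective
# statistic sum_j N_j * ln(chi2_j), on two synthetic "probes" of H0.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
d1 = 70 + rng.normal(0, 2.0, 20)   # probe 1: honest errors (sigma = 2)
d2 = 74 + rng.normal(0, 8.0, 15)   # probe 2: true scatter is sigma = 8...
s1, s2 = 2.0, 2.0                  # ...but its *quoted* errors are 2

def chi2(d, s, h):
    return np.sum((d - h) ** 2 / s**2)

std = minimize_scalar(lambda h: chi2(d1, s1, h) + chi2(d2, s2, h),
                      bounds=(50, 90), method="bounded")
hyp = minimize_scalar(lambda h: d1.size * np.log(chi2(d1, s1, h))
                               + d2.size * np.log(chi2(d2, s2, h)),
                      bounds=(50, 90), method="bounded")
# The hyper-parameter estimate leans toward the reliable probe
print(std.x, hyp.x)
```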
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zielinski, J.M.; Krewski, D.
1992-12-31
In this paper, we describe application of the two-stage clonal expansion model to characterize the joint effect of exposure to two carcinogens. This biologically based model of carcinogenesis provides a useful framework for the quantitative description of carcinogenic risks and for defining agents that act as initiators, promoters, and completers. Depending on the mechanism of action, the agent-specific relative risk following exposure to two carcinogens can be additive, multiplicative, or supramultiplicative, with supra-additive relative risk indicating a synergistic effect between the two agents. Maximum-likelihood methods for fitting the two-stage clonal expansion model with intermittent exposure to two carcinogens are described and illustrated, using data on lung-cancer mortality among Colorado uranium miners exposed to both radon and tobacco smoke.
Factors associated with surgical management in an underinsured, safety net population.
Winton, Lisa M; Nodora, Jesse N; Martinez, Maria Elena; Hsu, Chiu-Hsieh; Djenic, Brano; Bouton, Marcia E; Aristizabal, Paula; Ferguson, Elizabeth M; Weiss, Barry D; Komenaka, Ian K
2016-02-01
Few studies include significant numbers of racial and ethnic minority patients. The current study was performed to examine factors that affect breast cancer operations in an underinsured population. We performed a retrospective review of all breast cancer patients from January 2010 to May 2012. Patients with American Joint Committee on Cancer clinical stage 0-IIIA breast cancer underwent evaluation for type of operation: breast conservation, mastectomy alone, and reconstruction after mastectomy. The population included 403 patients with mean age 53 years. Twelve of the 50 patients (24%) diagnosed at stage IIIB presented with synchronous metastatic disease. Of the remaining patients, only 2 presented with metastatic disease (0.6%). The initial operation was 65% breast conservation, 26% mastectomy alone, and 10% reconstruction after mastectomy. Multivariate analysis revealed that Hispanic ethnicity (odds ratio [OR], 0.38; 95% CI, 0.19-0.73; P = .004), presentation with a palpable mass (OR, 0.34; 95% CI, 0.13-0.90; P = .03), and preoperative chemotherapy (OR, 0.25; 95% CI, 0.10-0.62; P = .003) were associated with a lesser likelihood of mastectomy. Multivariate analysis of factors associated with reconstruction after mastectomy showed that operation with a breast surgical oncologist (OR, 18.4; 95% CI, 2.18-155.14; P < .001) and adequate health literacy (OR, 3.13; 95% CI, 0.95-10.30; P = .06) were associated with reconstruction. The majority of safety net patients can undergo breast conservation despite delayed presentation and poor use of screening mammography. Preoperative chemotherapy increased the likelihood of breast conservation. Routine systemic workup in patients with operable breast cancer is not indicated. Copyright © 2016 Elsevier Inc. All rights reserved.
Zhang, Jenny J; Wang, Molin
2010-09-30
Breast cancer is the leading cancer in women of reproductive age; more than a quarter of women diagnosed with breast cancer in the US are premenopausal. A common adjuvant treatment for this patient population is chemotherapy, which has been shown to cause premature menopause and infertility with serious consequences for quality of life. Luteinizing-hormone-releasing hormone (LHRH) agonists, which induce temporary ovarian function suppression (OFS), have been shown to be a useful alternative to chemotherapy in the adjuvant setting for estrogen-receptor-positive breast cancer patients. LHRH agonists have the potential to preserve fertility after treatment, thus reducing the negative effects on a patient's reproductive health. However, little is known about the association between a patient's underlying degree of OFS and disease-free survival (DFS) after receiving LHRH agonists. Specifically, we are interested in whether patients with lower underlying degrees of OFS (i.e. higher estrogen production) after taking LHRH agonists are at a higher risk for late breast cancer events. In this paper, we propose a latent class joint model (LCJM) to analyze a data set from International Breast Cancer Study Group (IBCSG) Trial VIII to investigate the association between OFS and DFS. Analysis of this data set is challenging due to the fact that the main outcome of interest, OFS, is unobservable and the available surrogates for this latent variable involve masked events and cured proportions. We employ a likelihood approach and the EM algorithm to obtain parameter estimates and present results from the IBCSG data analysis.
Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S
2015-01-01
Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies; and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
Lewis, Joseph M; Folb, Jonathan; Kalra, Sanjay; Squire, S Bertel; Taegtmeyer, Miriam; Beeching, Nick J
Brucella spp. prosthetic joint infections are infrequently reported in the literature, particularly in returning travellers, and optimal treatment is unknown. We describe a prosthetic joint infection (PJI) caused by Brucella melitensis in a traveller returning to the UK from Thailand, which we believe to be the first detailed report of brucellosis in a traveller returning from this area. The 23 patients with Brucella-related PJI reported in the literature are summarised, together with our case. The diagnosis of Brucella-related PJI is difficult to make; only 30% of blood cultures and 75% of joint aspiration cultures were positive in the reported cases. Culture of intraoperative samples provides the best diagnostic yield. In the absence of radiological evidence of joint loosening, combination antimicrobial therapy alone may be appropriate treatment in the first instance; this was successful in 6/7 (86%) of patients, though small numbers of patients and the likelihood of reporting bias warrant caution in drawing any firm conclusions about optimal treatment. Aerosolisation of synovial fluid during joint aspiration procedures and nosocomial infection have been described. Brucella-related PJI should be considered in the differential diagnosis for travellers returning from endemic areas with PJI, including Thailand. Personal protective equipment, including a fit-tested filtering face piece 3 (FFP3) mask or equivalent, is recommended for personnel carrying out joint aspiration when brucellosis is suspected. Travellers can reduce the risk of brucellosis by avoiding unpasteurised dairy products and animal contact (particularly on farms and in abattoirs) in endemic areas and should be counselled regarding these risks as part of their pre-travel assessment. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Idealized models of the joint probability distribution of wind speeds
NASA Astrophysics Data System (ADS)
Monahan, Adam H.
2018-05-01
The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
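The component-based construction can be illustrated directly. A sketch on synthetic values (the component correlation and the power-law exponent are arbitrary choices): speeds formed from isotropic mean-zero Gaussian components are Rayleigh distributed, and a power-law transform gives Weibull margins while preserving the dependence inherited from the components.

```python
# Sketch: correlated wind speeds with Weibull margins built from Gaussian
# velocity components. If S is Rayleigh, S**b is Weibull with shape 2/b.
import numpy as np

rng = np.random.default_rng(5)
n, rho, b = 50_000, 0.6, 1.4      # rho: component correlation (assumed)

cov = [[1, rho], [rho, 1]]
# Correlated, isotropic, mean-zero velocity components at two locations
u1, u2 = rng.multivariate_normal([0, 0], cov, n).T
v1, v2 = rng.multivariate_normal([0, 0], cov, n).T

s1 = np.hypot(u1, v1)             # Rayleigh speeds (zero-mean-wind limit)
s2 = np.hypot(u2, v2)
w1, w2 = s1**b, s2**b             # power transform -> Weibull margins

print(np.corrcoef(w1, w2)[0, 1])  # speed dependence inherited from rho
```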
Laloo, Frederiek; Herregods, N; Jaremko, J L; Verstraete, K; Jans, L
2018-05-01
To determine if intra-articular signal changes at the sacroiliac joint space on MRI have added diagnostic value for spondyloarthritis, when compared to bone marrow edema (BME). A retrospective study was performed on the MRIs of sacroiliac joints of 363 patients, aged 16-45 years, clinically suspected of sacroiliitis. BME of the sacroiliac joints was correlated to intra-articular sacroiliac joint MR signal changes: high T1 signal, fluid signal, ankylosis and vacuum phenomenon (VP). These MRI findings were correlated with final clinical diagnosis. Sensitivity (SN), specificity (SP), likelihood ratios (LR), predictive values and post-test probabilities were calculated. BME had SN of 68.9%, SP of 74.0% and LR+ of 2.6 for diagnosis of spondyloarthritis. BME in the absence of intra-articular signal changes had a lower SN and LR+ for spondyloarthritis (SN = 20.5%, LR+ = 1.4). Concomitant BME and high T1 signal (SP = 97.2%, LR+ = 10.5), BME and fluid signal (SP = 98.6%, LR+ = 10.3) or BME and ankylosis (SP = 100%) had higher SP and LR+ for spondyloarthritis. Concomitant BME and VP had a low LR+ for spondyloarthritis (SP = 91%, LR+ = 0.9). When BME was absent, intra-articular signal changes were less prevalent, but remained highly specific for spondyloarthritis. Our results suggest that both periarticular and intra-articular MR signal of the sacroiliac joint should be examined to determine whether an MRI is 'positive' or 'not positive' for sacroiliitis associated with spondyloarthritis.
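To see what such likelihood ratios imply in practice, post-test odds equal pre-test odds times LR. A small numeric illustration; the 30% pre-test probability is an assumed example value, not a figure from the study.

```python
# Post-test probability from a pre-test probability and a likelihood ratio
def post_test_probability(pretest_p, lr):
    odds = pretest_p / (1 - pretest_p)     # convert probability to odds
    post_odds = odds * lr                  # Bayes' theorem in odds form
    return post_odds / (1 + post_odds)     # back to probability

for name, lr in [("BME alone", 2.6),
                 ("BME + high T1 signal", 10.5),
                 ("BME + fluid signal", 10.3),
                 ("BME + vacuum phenomenon", 0.9)]:
    print(f"{name}: {post_test_probability(0.30, lr):.2f}")
```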
Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-10-18
In this study, the modified Cramér-Rao lower bounds (MCRLBs) on the joint estimation of target position and velocity are investigated for a universal mobile telecommunication system (UMTS)-based passive multistatic radar system with antenna arrays. First, we analyze the log-likelihood function of the received signal for a complex Gaussian extended target. Then, due to the non-deterministic transmitted data symbols, analytically closed-form expressions of the MCRLBs on the Cartesian coordinates of target position and velocity are derived for a multistatic radar system with N_t UMTS-based transmit stations of L_t antenna elements and N_r receive stations of L_r antenna elements. With the aid of numerical simulations, it is shown that increasing the number of receiving elements in each receive station can reduce the estimation errors. In addition, it is demonstrated that the MCRLB is not only a function of the signal-to-noise ratio (SNR), the number of receiving antenna elements and the properties of the transmitted UMTS signals, but also of the relative geometric configuration between the target and the multistatic radar system. The analytical expressions for the MCRLB open up a new dimension for passive multistatic radar systems by aiding the optimal placement of receive stations to improve target parameter estimation performance.
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
Design/Analysis of the JWST ISIM Bonded Joints for Survivability at Cryogenic Temperatures
NASA Technical Reports Server (NTRS)
Bartoszyk, Andrew; Johnston, John; Kaprielian, Charles; Kuhn, Jonathan; Kunt, Cengiz; Rodini, Benjamin; Young, Daniel
2005-01-01
Contents include the following: JWST/ISIM introduction. Design and analysis challenges for ISIM bonded joints. JWST/ISIM joint designs. Bonded joint analysis. Finite element modeling. Failure criteria and margin calculation. Analysis/test correlation procedure. Example of test data and analysis.
Weakly Supervised Dictionary Learning
NASA Astrophysics Data System (ADS)
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
Petersen, Tom; Laslett, Mark; Juhl, Carsten
2017-05-12
Clinical examination findings are used in primary care to give an initial diagnosis to patients with low back pain and related leg symptoms. The purpose of this study was to develop best evidence Clinical Diagnostic Rules (CDR) for the identification of the most common patho-anatomical disorders in the lumbar spine; i.e. intervertebral discs, sacroiliac joints, facet joints, bone, muscles, nerve roots, peripheral nerve tissue, and central nervous system sensitization. A sensitive electronic search strategy using MEDLINE, EMBASE and CINAHL databases was combined with hand searching and citation tracking to identify eligible studies. Criteria for inclusion were: persons with low back pain with or without related leg symptoms, history or physical examination findings suitable for use in primary care, comparison with acceptable reference standards, and statistical reporting permitting calculation of diagnostic value. Quality assessments were made independently by two reviewers using the Quality Assessment of Diagnostic Accuracy Studies tool. Clinical examination findings that were investigated by at least two studies were included, and results that met our predefined threshold of positive likelihood ratio ≥ 2 or negative likelihood ratio ≤ 0.5 were considered for the CDR. Sixty-four studies satisfied our eligibility criteria. We were able to construct promising CDRs for symptomatic intervertebral disc, sacroiliac joint, spondylolisthesis, disc herniation with nerve root involvement, and spinal stenosis. Single clinical tests appear not to be as useful as clusters of tests that are more closely in line with clinical decision making. This is the first comprehensive systematic review of diagnostic accuracy studies that evaluate clinical examination findings for their ability to identify the most common patho-anatomical disorders in the lumbar spine. In some diagnostic categories we have sufficient evidence to recommend a CDR. In others, we have only preliminary evidence that needs testing in future studies. Most findings were tested in secondary or tertiary care. Thus, the accuracy of the findings in a primary care setting has yet to be confirmed.
The Diagnostic Value of the Clarke Sign in Assessing Chondromalacia Patella
Doberstein, Scott T; Romeyn, Richard L; Reineke, David M
2008-01-01
Context: Various techniques have been described for assessing conditions that cause pain at the patellofemoral (PF) joint. The Clarke sign is one such test, but the diagnostic value of this test in assessing chondromalacia patella is unknown. Objective: To (1) investigate the diagnostic value of the Clarke sign in assessing the presence of chondromalacia patella using arthroscopic examination of the PF joint as the “gold standard,” and (2) provide a historical perspective of the Clarke sign as a clinical diagnostic test. Design: Validation study. Setting: All patients of one of the investigators who had knee pain or injuries unrelated to the patellofemoral joint and were scheduled for arthroscopic surgery were recruited for this study. Patients or Other Participants: A total of 106 otherwise healthy individuals with no history of patellofemoral pain or dysfunction volunteered. Main Outcome Measure(s): The Clarke sign was performed on the surgical knee by a single investigator in the clinic before surgery. A positive test was indicated by the presence of pain sufficient to prevent the patient from maintaining a quadriceps muscle contraction against manual resistance for longer than 2 seconds. The preoperative result was compared with visual evidence of chondromalacia patella during arthroscopy. Results: Sensitivity was 0.39, specificity was 0.67, likelihood ratio for a positive test was 1.18, likelihood ratio for a negative test was 0.91, positive predictive value was 0.25, and negative predictive value was 0.80. Conclusions: Diagnostic validity values for the use of the Clarke sign in assessing chondromalacia patella were unsatisfactory, supporting suggestions that it has poor diagnostic value as a clinical examination technique. Additionally, an extensive search of the available literature for the Clarke sign reveals multiple problems with the test, causing significant confusion for clinicians. Therefore, the use of the Clarke sign as a routine part of a knee examination is not beneficial, and its use should be discontinued. PMID:18345345
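For reference, all of the indices reported above derive from the standard 2×2 table of test result against arthroscopic finding. A small sketch with hypothetical counts, not the study's data:

```python
# Diagnostic indices from a 2x2 table (counts are invented examples)
def diagnostics(tp, fp, fn, tn):
    sn = tp / (tp + fn)                  # sensitivity
    sp = tn / (tn + fp)                  # specificity
    return {"SN": round(sn, 2), "SP": round(sp, 2),
            "LR+": round(sn / (1 - sp), 2),     # positive likelihood ratio
            "LR-": round((1 - sn) / sp, 2),     # negative likelihood ratio
            "PPV": round(tp / (tp + fp), 2),    # positive predictive value
            "NPV": round(tn / (tn + fn), 2)}    # negative predictive value

print(diagnostics(tp=12, fp=25, fn=19, tn=50))
```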
PAMLX: a graphical user interface for PAML.
Xu, Bo; Yang, Ziheng
2013-12-01
This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.
NASA Astrophysics Data System (ADS)
Bargaoui, Zoubeida Kebaili; Bardossy, Andràs
2015-10-01
The paper aims to develop research on the spatial variability of heavy rainfall events estimation using spatial copula analysis. To demonstrate the methodology, short time resolution rainfall time series from the Stuttgart region are analyzed. They consist of rainfall observations at a continuous 30-min time scale recorded over a network of 17 raingauges for the period July 1989-July 2004. The analysis is performed by aggregating the observations from 30 min up to 24 h. Two parametric bivariate extreme copula models, the Hüsler-Reiss model and the Gumbel model, are investigated. Both involve a single parameter to be estimated. Thus, model fitting is performed for every pair of stations for a given time resolution. A rainfall threshold value representing a fixed rainfall quantile is adopted for model inference. Generalized maximum pseudo-likelihood estimation is adopted with censoring, by analogy with methods of univariate estimation combining historical and paleoflood information with systematic data. Only pairs of observations greater than the threshold are treated as systematic data. Using the estimated copula parameter, a synthetic copula field is randomly generated, which helps evaluate model adequacy, assessed using the Kolmogorov-Smirnov distance test. In order to assess dependence or independence in the upper tail, the extremal coefficient, which characterises the tail of the joint bivariate distribution, is adopted. The extremal coefficient is reported as a function of the interdistance between stations; if it is less than 1.7, stations are interpreted as dependent in the extremes. The analysis of the fitted extremal coefficients with respect to station interdistance highlights two regimes with different dependence structures: a short spatial extent regime linked to short duration intervals (from 30 min to 6 h) with an extent of about 8 km, and a large spatial extent regime related to longer rainfall intervals (from 12 h to 24 h) with an extent of 34 to 38 km.
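For the Gumbel extreme-value copula the extremal coefficient has the closed form 2^(1/θ) for dependence parameter θ ≥ 1, so the 1.7 threshold translates directly into a per-pair dependence verdict. A sketch with invented fitted values, not the study's estimates:

```python
# Extremal coefficient of the Gumbel (logistic) extreme-value copula
def gumbel_extremal_coefficient(theta):
    return 2.0 ** (1.0 / theta)      # ranges from 1 (full dep.) to 2 (indep.)

for dist_km, theta in [(5, 2.4), (15, 1.6), (40, 1.1)]:
    ec = gumbel_extremal_coefficient(theta)
    verdict = "dependent" if ec < 1.7 else "independent"
    print(f"{dist_km} km: extremal coeff. {ec:.2f} -> {verdict} in extremes")
```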
Keiser, Carl N.; Pinter-Wollman, Noa; Augustine, David A.; Ziemba, Michael J.; Hao, Lingran; Lawrence, Jeffrey G.; Pruitt, Jonathan N.
2016-01-01
Despite the importance of host attributes for the likelihood of associated microbial transmission, individual variation is seldom considered in studies of wildlife disease. Here, we test the influence of host phenotypes on social network structure and the likelihood of cuticular bacterial transmission from exposed individuals to susceptible group-mates using female social spiders (Stegodyphus dumicola). Based on the interactions of resting individuals of known behavioural types, we assessed whether individuals assorted according to their behavioural traits. We found that individuals preferentially interacted with individuals of unlike behavioural phenotypes. We next applied a green fluorescent protein-transformed cuticular bacterium, Pantoea sp., to individuals and allowed them to interact with an unexposed colony-mate for 24 h. We found evidence for transmission of bacteria in 55% of cases. The likelihood of transmission was influenced jointly by the behavioural phenotypes of both the exposed and susceptible individuals: transmission was more likely when exposed spiders exhibited higher ‘boldness’ relative to their colony-mate, and when unexposed individuals were in better body condition. Indirect transmission via shared silk took place in only 15% of cases. Thus, bodily contact appears key to transmission in this system. These data represent a fundamental step towards understanding how individual traits influence larger-scale social and epidemiological dynamics. PMID:27097926
Escalante, A; Lichtenstein, M J; Hazuda, H P
1999-08-01
To gain knowledge of factors associated with impaired upper extremity range of motion (ROM) in order to understand pathways that lead to disability. Shoulder and elbow flexion range was measured in a cohort of 695 community-dwelling subjects aged 65 to 74 years. Associations between subjects' shoulder and elbow flexion ranges and their demographic and anthropometric characteristics, as well as the presence of diabetes mellitus or self-reported physician-diagnosed arthritis, were examined using multivariate regression models. The relationship between shoulder or elbow flexion range and subjects' functional reach was examined to explore the functional significance of ROM in these joints. The flexion range for the 4 joints studied was at least 120 degrees in nearly all subjects (≥99% of the subjects for each of the 4 joints). Multivariate models revealed significant associations between male sex, Mexican American ethnic background, the use of oral hypoglycemic drugs or insulin to treat diabetes mellitus, and a lower shoulder flexion range. A lower elbow flexion range was associated with male sex, increasing body mass index, and the use of oral hypoglycemic drugs or insulin. A higher shoulder or elbow flexion range was associated with a lower likelihood of having a short functional reach. The great majority of community-dwelling elderly have a flexion range of the shoulder and elbow joints that can be considered functional. Diabetes mellitus and obesity are two potentially treatable factors associated with reduced flexion range of these two functionally important joints.
Multilevel joint competing risk models
NASA Astrophysics Data System (ADS)
Karunarathna, G. H. S.; Sooriyarachchi, M. R.
2017-09-01
Joint modeling approaches for different outcomes, such as competing-risk time-to-event and count outcomes, are often encountered in biomedical and epidemiological studies in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization, serving as a benchmark for multiple terminations such as discharge, transfer, death, and patients who have not completed the event of interest by the end of follow-up (censored) during hospitalization. Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling is applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different outcomes of LOS and the platelet count of dengue patients, with a district-level cluster effect. Two key approaches are applied to build the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored, under a multilevel discrete time-to-event model, while platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of the Akaike Information Criterion (AIC).
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift-based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region-growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.
ERIC Educational Resources Information Center
Mayberry, Paul W.
A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…
Haile, Zewdu; Khatua, Sanjeeb
2010-12-01
About 15% of patients presenting in a primary care clinic have joint pain as their primary complaint (level B). Disseminated gonorrhea is the most common cause of infectious arthritis in sexually active, previously healthy patients (level B). Prompt arthrocentesis, microscopic examination, and the culture of any purulent material plus appropriate antibiotic therapy are the mainstay of treatment in infectious arthritis (level C). Detailed history, including family history and comprehensive examination, is more useful in accurate diagnosis than expensive laboratory and radiological investigations for noninfectious arthritis (level C). Regarding inflammatory noninfectious arthritis with the potential to cause destructive joint damage, early referral to a subspecialist, when indicated, increases the likelihood of optimal outcome (level C). Nonsteroidal antiinflammatory drugs are the first line of therapeutic agents to reduce pain and swelling in the management of most noninfectious inflammatory arthritis seen in the primary care office (level C). Copyright © 2010 Elsevier Inc. All rights reserved.
Transponder-aided joint calibration and synchronization compensation for distributed radar systems.
Wang, Wen-Qin
2015-01-01
High-precision radiometric calibration and synchronization compensation must be provided for distributed radar systems because their transmitters and receivers are separate. This paper proposes transponder-aided joint radiometric calibration, motion compensation and synchronization for distributed radar remote sensing. As the transponder signal can be separated from the normal radar returns, it is used to calibrate the distributed radar for radiometry. Meanwhile, distributed radar motion compensation and synchronization compensation algorithms are presented that utilize the transponder signals. This method requires no hardware modifications to either the normal radar transmitter or the receiver, and no change to the operating pulse repetition frequency (PRF). The distributed radar radiometric calibration and synchronization compensation require only one transponder, but the motion compensation requires six transponders because there are six independent variables in the distributed radar geometry. Furthermore, a maximum likelihood method is used to estimate the transponder signal parameters. The proposed methods are verified by simulation results.
New color-based tracking algorithm for joints of the upper extremities
NASA Astrophysics Data System (ADS)
Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang
2007-11-01
To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, this paper proposes a new tracking algorithm that combines a color-based particle filter with a novel strategy for handling occlusions. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probability framework. A Kalman filter, as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples to regions of high likelihood so that fewer samples are required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples, and hence the computational cost, and handles complete occlusion over a few frames more reliably.
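A skeleton of the core color-histogram particle filter may be useful as a reference. This sketch uses a Bhattacharyya-similarity likelihood and multinomial resampling; it omits the Kalman-steered sampling and the anatomic occlusion handling described above, and runs on a synthetic one-dimensional "video" rather than real frames.

```python
# Skeleton of a color-histogram particle filter on synthetic 1-D frames
import numpy as np

rng = np.random.default_rng(6)

def histogram(patch, bins=8):
    h, _ = np.histogram(patch, bins=bins, range=(0, 1))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    return np.sqrt(p * q).sum()           # similarity of two histograms

# Synthetic video: a bright target of width 10 drifting right at 2 px/frame
T, width, patch_w = 30, 200, 10
true_pos, frames = 40.0, []
for t in range(T):
    f = rng.uniform(0, 0.3, width)        # dim background
    i = int(true_pos)
    f[i:i + patch_w] = rng.uniform(0.7, 1.0, patch_w)  # bright target
    frames.append(f)
    true_pos += 2.0

ref = histogram(frames[0][40:50])         # reference color model

N = 200
particles = rng.normal(40, 3, N)          # initial particle positions
for f in frames:
    particles += 2.0 + rng.normal(0, 2, N)          # motion model + noise
    particles = np.clip(particles, 0, width - patch_w - 1)
    # Likelihood: histogram similarity between particle patch and reference
    like = np.array([np.exp(-20 * (1 - bhattacharyya(
        histogram(f[int(p):int(p) + patch_w]), ref))) for p in particles])
    weights = like / like.sum()
    est = np.sum(weights * particles)     # weighted-mean position estimate
    particles = particles[rng.choice(N, N, p=weights)]  # multinomial resample

print(est)    # estimated position after the last frame (~final true_pos)
```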
Analysis of a Preloaded Bolted Joint in a Ceramic Composite Combustor
NASA Technical Reports Server (NTRS)
Hissam, D. Andy; Bower, Mark V.
2003-01-01
This paper presents the detailed analysis of a preloaded bolted joint incorporating ceramic materials. The objective of this analysis is to determine the suitability of a joint design for a ceramic combustor. The analysis addresses critical factors in bolted joint design including preload, preload uncertainty, and load factor. The relationship between key joint variables is also investigated. The analysis is based on four key design criteria, each addressing an anticipated failure mode. The criteria are defined in terms of margin of safety, which must be greater than zero for the design criteria to be satisfied. Since the proposed joint has positive margins of safety, the design criteria are satisfied. Therefore, the joint design is acceptable.
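The bookkeeping behind such margins is compact enough to sketch. The following is a generic illustration of preload-uncertainty and margin-of-safety checks for a preloaded bolted joint; all loads, allowables and factors are invented numbers, not the JWST joint's values.

```python
# Generic bolted-joint margin bookkeeping (all numbers are invented)
def margin_of_safety(allowable, applied, factor_of_safety):
    # MS = allowable / (applied load * factor of safety) - 1; MS > 0 passes
    return allowable / (applied * factor_of_safety) - 1.0

nominal_preload = 4000.0        # N (assumed)
uncertainty = 0.25              # +/-25% torque-wrench scatter (assumed)
p_max = nominal_preload * (1 + uncertainty)
p_min = nominal_preload * (1 - uncertainty)

external_load = 1500.0          # N applied to the joint (assumed)
load_factor = 0.2               # fraction of external load seen by the bolt

# Bolt strength check at maximum preload (worst case for the bolt)
bolt_load = p_max + load_factor * external_load
ms_bolt = margin_of_safety(9000.0, bolt_load, 1.4)

# Joint separation check at minimum preload (worst case for the clamp)
ms_sep = margin_of_safety(p_min, (1 - load_factor) * external_load, 1.2)

print(f"bolt strength MS = {ms_bolt:.2f}, separation MS = {ms_sep:.2f}")
```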
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
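The invariance claim is easy to verify numerically: training and classifying a Gaussian maximum likelihood classifier on affinely transformed data y = Ax + b (A non-singular) reproduces the original labels exactly, because Mahalanobis distances are preserved and the log-determinant shift is common to all classes. A sketch with synthetic stand-ins for pixel spectra:

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 4, 100                                  # bands, samples per class
X1 = rng.normal(0.0, 1, (n, d))                # class 1 training "spectra"
X2 = rng.normal(1.5, 1, (n, d))                # class 2 training "spectra"
test = rng.normal(0.7, 1, (50, d))

A = rng.normal(0, 1, (d, d)) + 3 * np.eye(d)   # non-singular matrix
b = rng.normal(0, 1, d)
aff = lambda X: X @ A.T + b                    # affine "reflectance" map

def ml_labels(train1, train2, pts):
    scores = []
    for tr in (train1, train2):
        mu, cov = tr.mean(0), np.cov(tr.T)
        ic = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = pts - mu
        # Gaussian log-likelihood up to a class-independent constant
        maha = np.einsum('ij,jk,ik->i', diff, ic, diff)
        scores.append(-0.5 * (maha + logdet))
    return np.argmax(np.stack(scores), axis=0)

l_orig = ml_labels(X1, X2, test)
l_tran = ml_labels(aff(X1), aff(X2), aff(test))
print(np.array_equal(l_orig, l_tran))          # True: labels unchanged
```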
Impact of divorce on children: developmental considerations.
Kleinsorge, Christy; Covitz, Lynne M
2012-04-01
Although divorce can have a significant negative impact on children, a variety of protective factors can increase the likelihood of long-term positive psychological adjustment.
• Exposure to high levels of parental conflict is predictive of poor emotional adjustment by the child regardless of the parents' marital status.
• Epidemiologic data reveal that custody and parenting arrangements are evolving, with more emphasis on joint custody and access to both parents by the child.
• Pediatricians' knowledge of childhood development is essential in providing anticipatory guidance to parents throughout the divorce process and beyond.
Search for gamma-ray lines towards galaxy clusters with the Fermi-LAT
Anderson, B.; Zimmer, S.; Conrad, J.; ...
2016-02-09
We report on a search for monochromatic γ-ray features in the spectra of galaxy clusters observed by the Fermi Large Area Telescope. Galaxy clusters are the largest structures in the Universe that are bound by dark matter (DM), making them an important testing ground for possible self-interactions or decays of the DM particles. Monochromatic γ-ray lines provide a unique signature due to the absence of astrophysical backgrounds and are as such considered a smoking-gun signature for new physics. An unbinned joint likelihood analysis of the sixteen most promising clusters using five years of data at energies between 10 and 400 GeV revealed no significant features. For the case of self-annihilation, we set upper limits on the monochromatic velocity-averaged interaction cross section. These limits are compatible with those obtained from observations of the Galactic Center, albeit weaker due to the larger distance to the studied clusters.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
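The core step is available as a one-line call in common libraries. A one-dimensional sketch on a synthetic skewed sample (the paper's setting is multivariate, with generalized Box-Cox transformations):

```python
# Sketch: ML Box-Cox lambda that Gaussianizes a skewed 1-D sample
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
sample = rng.chisquare(df=3, size=5000)    # skewed "posterior" sample

transformed, lam = stats.boxcox(sample)    # ML estimate of lambda
print("lambda =", lam)
print("skew before:", stats.skew(sample),
      "after:", stats.skew(transformed))   # should drop to near zero
print("normality test p-value after:", stats.normaltest(transformed).pvalue)
```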
Combining evidence using likelihood ratios in writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory
2013-01-01
Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of input evidence items is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions, and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparison, shows the greater flexibility of the proposed method.
Agaliotis, Maria; Mackey, Martin G; Heard, Robert; Jan, Stephen; Fransen, Marlene
2017-04-01
The aim of this study was to explore personal and workplace environmental factors as predictors of reduced worker productivity among older workers with chronic knee pain. A questionnaire-based survey was conducted among 129 older workers who had participated in a randomized clinical trial evaluating dietary supplements. Multivariable analyses were used to explore predictors of reduced work productivity among older workers with chronic knee pain. The likelihood of presenteeism was higher in those reporting knee pain (≥3/10) or problems with other joints, and lower in those reporting job insecurity. The likelihood of work transitions was higher in people reporting knee pain (≥3/10), a high comorbidity score or low coworker support, and lower in those having an occupation involving sitting more than 30% of the day. Allowing access to sitting and promoting positive affiliations between coworkers are likely to provide an enabling workplace environment for older workers with chronic knee pain.
Likelihood-Based Confidence Intervals in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Oort, Frans J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…
Vocational Qualifications, Employment Status and Income: 2006 Census Analysis. Technical Paper
ERIC Educational Resources Information Center
Daly, Anne
2011-01-01
Two features of the labour market for vocationally qualified workers are explored in this technical paper: the likelihood of self-employment versus wage employment and the determinants of income. The analysis showed that demographic, occupational and local labour market characteristics all influence the likelihood of self-employment. Self-employed…
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
Near-Source Mechanism for Creating Shear Content from Buried Explosions
NASA Astrophysics Data System (ADS)
Steedman, D. W.; Bradley, C. R.
2017-12-01
The Source Physics Experiment (SPE) has the goal of developing a greater understanding of explosion phenomenology at various spatial scales, from near-source to the far field. SPE Phase I accomplished a series of six chemical explosive tests of varying scaled depth of burial within a borehole in granite. The testbed included an extensive array of triaxial accelerometers. Velocity traces derived from these accelerometers allow for detailed study of the shock environment close to the explosion. A specific goal of SPE is to identify various mechanisms for generating shear within the propagation environment, and to determine how these might inform the identification of explosive events that would otherwise evade historical compressional-to-shear wave energy (P/S) event discrimination. One of these sources was hypothesized to derive from slippage along joint sets near the source. Velocity traces from SPE Phase I events indicate that motions tangential to a theoretically spherical shock wave are initially quiescent after shock arrival. But this period of quiescence is followed by a sudden increase in amplitude that consistently occurs just after the peak of the radial velocity (i.e., onset of shock unloading). The likelihood of occurrence of this response is related to yield-scaled depth of burial (SDOB). We describe a mechanism in which unloading facilitates dilation of closed joints accompanied by a release of shear energy stored during compression. However, occurrence of this mechanism relies on the relative amplitudes of the shock loading caused at a point and the in situ stress: at too large an SDOB the stored energy is insufficient to overcome the combination of the overburden stress and traction on the joint. On the other hand, at too small an SDOB the in situ stress is insufficient to keep joints from storing stress, thus overriding the release mechanism and mitigating rupture-like slippage. We develop a notional relationship between SPE Phase I SDOB and the likelihood of shear release. We then compare this to the six recorded DPRK events in terms of where these events fall in relation to the accepted mb:MS discriminant, using estimated SDOB values for those events. To first order, SPE SDOBs resulting in shear release appear to map to estimated DPRK SDOBs that display excessive shear magnitude. LA-UR-17-29528.
Babiak, Ireneusz
2012-07-03
Deep infection of a joint endoprosthesis constitutes a threat to the stability of the implant and joint function. It requires a comprehensive and interdisciplinary approach involving joint revision and removal of the bacterial biofilm from all tissues; the endoprosthesis must often be removed and the bone stock infection treated. The paper presents the author's experience with the use of acrylic cement spacers, custom-made during surgery and containing a low dose of an antibiotic supplemented with 5% of a selected, targeted antibiotic, for infections of hip and knee endoprostheses. 33 two-stage revisions of knee and hip joints with the use of a spacer were performed, involving 24 knee joints and 9 hip joints. The infections were mostly caused by staphylococci, MRSA (18) and MSSA (8), and in some cases Enterococci (4), Salmonella (1), Pseudomonas (1) and Acinetobacter (1). The infection was successfully treated in 31 out of 33 cases (93.93%), including 8 patients with hip infection and 23 patients with knee infection. The endoprosthesis was reimplanted in 30 cases (7 hips and 23 knees); in the 3 remaining cases the endoprosthesis was not reimplanted. Mechanical complications due to the spacer occurred in 4 cases: 3 dislocations and 1 fracture (hip spacer). The patients with hip spacers were ambulatory with partial weight bearing of the operated extremity; those with knee spacers were also ambulatory with partial weight bearing, but the extremity was initially protected by an orthosis. The spacer makes it possible to maintain limb function, and making it by hand allows the addition of an antibiotic targeted at the specific bacteria, thus increasing the likelihood of effective antibacterial treatment.
On the Uncertainty in Single Molecule Fluorescent Lifetime and Energy Emission Measurements
NASA Technical Reports Server (NTRS)
Brown, Emery N.; Zhang, Zhenhua; McCollom, Alex D.
1996-01-01
Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large, however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67 percent of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML or Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML or Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
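To make the estimation problem concrete, the following is a minimal sketch (mine, not the authors' code) of maximum likelihood lifetime estimation under the stated model: photon arrival times in a fixed detection window drawn from a mixture of a truncated-exponential signal and a uniform background. The window length, true lifetime, and signal fraction are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def log_likelihood(params, t, T):
    """Per-photon density: exponential-decay signal (lifetime tau)
    truncated to the window [0, T], plus a uniform background that
    carries fraction 1 - f of the photons."""
    tau, f = params
    signal = np.exp(-t / tau) / (tau * (1.0 - np.exp(-T / tau)))
    background = 1.0 / T
    return np.sum(np.log(f * signal + (1.0 - f) * background))

def fit_lifetime(t, T):
    # Maximize the log-likelihood over (tau, f) within simple bounds.
    result = minimize(lambda p: -log_likelihood(p, t, T),
                      x0=[T / 4.0, 0.5],
                      bounds=[(1e-3 * T, 10 * T), (1e-3, 1.0)])
    return result.x  # estimated (tau, f)

# Simulated experiment: ~20 photons, ~1/3 background, 10 ns window.
rng = np.random.default_rng(0)
T, tau_true = 10.0, 2.0
sig = rng.exponential(tau_true, 14)
sig = sig[sig < T]                     # keep arrivals inside the window
bkg = rng.uniform(0.0, T, 6)
t = np.concatenate([sig, bkg])
print(fit_lifetime(t, T))
```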
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Free energies from dynamic weighted histogram analysis using unbiased Markov state model.
Rosta, Edina; Hummer, Gerhard
2015-01-13
The weighted histogram analysis method (WHAM) is widely used to obtain accurate free energies from biased molecular simulations. However, WHAM free energies can exhibit significant errors if some of the biasing windows are not fully equilibrated. To account for the lack of full equilibration, we develop the dynamic histogram analysis method (DHAM). DHAM uses a global Markov state model to obtain the free energy along the reaction coordinate. A maximum likelihood estimate of the Markov transition matrix is constructed by joint unbiasing of the transition counts from multiple umbrella-sampling simulations along discretized reaction coordinates. The free energy profile is the stationary distribution of the resulting Markov matrix. For this matrix, we derive an explicit approximation that does not require the usual iterative solution of WHAM. We apply DHAM to model systems, a chemical reaction in water treated using quantum-mechanics/molecular-mechanics (QM/MM) simulations, and the Na(+) ion passage through the membrane-embedded ion channel GLIC. We find that DHAM gives accurate free energies even in cases where WHAM fails. In addition, DHAM provides kinetic information, which we here use to assess the extent of convergence in each of the simulation windows. DHAM may also prove useful in the construction of Markov state models from biased simulations in phase-space regions with otherwise low population.
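As a rough illustration of the Markov-matrix step, the sketch below (mine, not the DHAM code, and omitting the unbiasing of counts from biased umbrella windows) builds the maximum likelihood transition matrix from transition counts of a single unbiased simulation and reads the free energy profile off its stationary distribution; the states and counts are invented.

```python
import numpy as np

def mle_transition_matrix(counts):
    """Row-normalized transition counts: the maximum likelihood
    estimate of a Markov transition matrix from one unbiased run."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def free_energy_profile(T, kT=1.0):
    # Stationary distribution = left eigenvector of T with eigenvalue 1;
    # the free energy of state i is -kT * ln(pi_i).
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    return -kT * np.log(pi)

# Toy 3-state transition counts along a discretized reaction coordinate.
C = np.array([[90, 10,  0],
              [ 5, 80, 15],
              [ 0, 20, 80]])
T = mle_transition_matrix(C)
print(free_energy_profile(T))
```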
Janeczek, Maciej; Chrószcz, Aleksander; Onar, Vedat; Henklewski, Radomir; Skalec, Aleksandra
2017-06-01
Animal remains that are unearthed during archaeological excavations often provide useful information about socio-cultural context, including human habits, beliefs, and ancestral relationships. In this report, we present pathologically altered equine first and second phalanges from an 11th century specimen that was excavated at Wrocław Cathedral Island, Poland. The results of gross examination, radiography, and computed tomography, indicate osteoarthritis of the proximal interphalangeal joint, with partial ankylosis. Based on comparison with living modern horses undergoing lameness examination, as well as with recent literature, we conclude that the horse likely was lame for at least several months prior to death. The ability of this horse to work probably was reduced, but the degree of compromise during life cannot be stated precisely. Present day medical knowledge indicates that there was little likelihood of successful treatment for this condition during the middle ages. However, modern horses with similar pathology can function reasonably well with appropriate treatment and management, particularly following joint ankylosis. Thus, we approach the cultural question of why such an individual would have been maintained with limitations, for a probably-significant period of time. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zeldovich Effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from the Chandra Observatory, which provides both spatial and spectral information, and interferometric radio measurements of the Sunyaev-Zeldovich Effect are available from the BIMA and OVRO arrays. We introduce a Markov chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zeldovich Effect data. The advantages of this method are its high computational efficiency and the ability to measure the full probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas and the cluster distance. We apply this technique to the Chandra X-ray data and the OVRO radio data for the galaxy cluster Abell 611. Comparisons with traditional likelihood-ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distance of a large sample of galaxy clusters for which high-resolution Chandra X-ray and BIMA/OVRO radio data are available.
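A minimal sketch of the core sampling idea follows, with a two-parameter correlated Gaussian standing in for the joint X-ray/SZ posterior; the proposal scale and parameter values are invented, not the paper's.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis: returns samples from the posterior,
    from which the full distribution of any parameter follows."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy joint posterior: two correlated parameters (e.g. a gas-density
# normalization and the distance enter both likelihood terms jointly).
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
icov = np.linalg.inv(cov)
log_post = lambda x: -0.5 * x @ icov @ x
rng = np.random.default_rng(1)
samples = metropolis(log_post, [0.0, 0.0], 0.5, 20000, rng)
print(samples.mean(axis=0), np.cov(samples.T))
```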
Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love
2014-01-01
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, limited availability of user-friendly programs that perform ABC analysis renders it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities making it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface and the implementation of Markov-chain Monte Carlo without likelihoods.
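For readers unfamiliar with ABC, a minimal rejection-sampling sketch follows; the one-line "simulator" is a stand-in for a coalescent simulation, and all names, priors, and tolerances are illustrative assumptions rather than BaySICS internals.

```python
import numpy as np

def abc_rejection(observed_stat, simulate, prior_sample, n_draws, eps, rng):
    """ABC rejection: keep prior draws whose simulated summary
    statistic lands within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        stat = simulate(theta, rng)
        if abs(stat - observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy demographic example: infer a population-size scale theta from the
# mean of a simulated sample (a crude stand-in for coalescent output).
simulate = lambda theta, rng: rng.poisson(theta, 50).mean()
prior_sample = lambda rng: rng.uniform(0.0, 20.0)
rng = np.random.default_rng(2)
posterior = abc_rejection(5.0, simulate, prior_sample, 100000, 0.2, rng)
print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))
```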
Barriers in the Physics Pipeline from K-12 to Tenure
NASA Astrophysics Data System (ADS)
Kilburn, Micha
2016-09-01
The lack of diversity in physics is a known problem, and yet efforts to change our demographics have only had minor effects during the last decade. I will explain some of the hidden barriers that dissuade underrepresented minorities from becoming physicists using a framework borrowed from sociology, Maslow's hierarchy of needs. I will draw from current research at the undergraduate to faculty levels over a variety of STEM fields that are also addressing a lack of diversity. I will also provide analysis from the Joint Institute for Nuclear Astrophysics Center for the Evolution of Elements (JINA-CEE) outreach programs to understand the likelihood of current K-12 students becoming physicists. Specifically, I will present results from the pre-surveys from our Art 2 Science Camps (ages 8-14) about their attitudes towards science, as well as results from analysis of teacher recommendations for our high school summer program. I will conclude with a positive outlook describing the pipeline created by JINA-CEE to retain students from middle school through college. This work was supported in part by the National Science Foundation under Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements).
Precipitation and floodiness: forecasts of flood hazard at the regional scale
NASA Astrophysics Data System (ADS)
Stephens, Liz; Day, Jonny; Pappenberger, Florian; Cloke, Hannah
2016-04-01
In 2008, a seasonal forecast of an increased likelihood of above-normal rainfall in West Africa led the Red Cross to take early humanitarian action (such as prepositioning of relief items) on the basis that this forecast implied heightened flood risk. However, there are a number of factors that lead to non-linearity between precipitation anomalies and flood hazard, so in this presentation we use a recently developed global-scale hydrological model driven by the ERA-Interim/Land precipitation reanalysis (1980-2010) to quantify this non-linearity. Using these data, we introduce the concept of floodiness to measure the incidence of floods over a large area, and quantify the link between monthly precipitation, river discharge and floodiness anomalies. Our analysis shows that floodiness is not well correlated with precipitation, demonstrating the problem of using seasonal precipitation forecasts as a proxy for forecasting flood hazard. This analysis demonstrates the value of developing hydrometeorological forecasts of floodiness for decision-makers. As a result, we are now working with the European Centre for Medium-Range Weather Forecasts and the Joint Research Centre, as partners of the operational Global Flood Awareness System (GloFAS), to implement floodiness forecasts in real-time.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
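The conventional least-squares baseline that THESEUS improves upon can be written compactly; the sketch below implements the standard weighted Kabsch superposition (my illustration, not THESEUS code). Moving from unit weights to weights that down-weight variable regions is the direction ML superpositioning takes.

```python
import numpy as np

def kabsch(P, Q, weights=None):
    """Optimal rotation superposing P onto Q. With unit weights this is
    ordinary least-squares superposition; down-weighting variable atoms
    (as ML superpositioning does) amounts to changing `weights`."""
    w = np.ones(len(P)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    Pc = P - (w[:, None] * P).sum(axis=0)   # weighted centering
    Qc = Q - (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * Pc).T @ Qc            # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    return Pc @ R.T, Qc

# Invented coordinates: a rotated, slightly noisy copy of a structure.
rng = np.random.default_rng(3)
Q = rng.standard_normal((10, 3))
a = 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
P = Q @ Rz.T + 0.01 * rng.standard_normal((10, 3))
P_fit, Q_c = kabsch(P, Q)
print(np.sqrt(((P_fit - Q_c) ** 2).sum(axis=1).mean()))  # RMSD
```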
Katz, Jeffrey N.; Smith, Savannah R.; Yang, Heidi Y.; Martin, Scott D.; Wright, John; Donnell-Fink, Laurel A.; Losina, Elena
2016-01-01
Objective: To evaluate the utility of clinical history, radiographic and physical exam findings in the diagnosis of symptomatic meniscal tear (SMT) in patients over age 45, in whom concomitant osteoarthritis is prevalent. Methods: In a cross-sectional study of patients from two orthopedic surgeons' clinics, we assessed clinical history, physical examination and radiographic findings in patients over 45 with knee pain. The orthopedic surgeons rated their confidence that subjects' symptoms were due to MT; we defined the diagnosis of SMT as at least 70% confidence. We used logistic regression to identify factors independently associated with diagnosis of SMT, and we used the regression results to construct an index of the likelihood of SMT. Results: In 174 participants, six findings were independently associated with the expert clinician having ≥70% confidence that symptoms were due to MT: localized pain, ability to fully bend the knee, pain duration <1 year, lack of varus alignment, lack of pes planus, and absence of joint space narrowing on radiographs. The index identified a low-risk group with 3% likelihood of SMT. Conclusion: While clinicians traditionally rely upon mechanical symptoms in this diagnostic setting, our findings did not support the conclusion that mechanical symptoms were associated with the expert's confidence that symptoms were due to MT. An index that includes history of localized pain, full flexion, duration <1 year, pes planus, varus alignment, and joint space narrowing can be used to stratify patients according to their risk of SMT, and it identifies a subgroup with very low risk. PMID:27390312
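As an illustration of how such a likelihood index can be built, here is a hedged sketch using simulated findings; the data, coefficients, and helper names are hypothetical, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary findings per patient (columns: localized pain,
# full flexion, duration < 1 year, no varus, no pes planus, no joint
# space narrowing), simulated for 174 patients.
rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(174, 6))
true_beta = np.array([0.9, 0.8, 0.7, 0.5, 0.4, 0.6])
p = 1 / (1 + np.exp(-(X @ true_beta - 2.0)))
y = rng.uniform(size=174) < p        # 1 = expert >= 70% confident of SMT

model = LogisticRegression().fit(X, y)

def smt_likelihood_index(findings):
    """Predicted probability of SMT; summing the fitted coefficients of
    the positive findings gives an equivalent point-score index."""
    return model.predict_proba(np.atleast_2d(findings))[0, 1]

print(smt_likelihood_index([1, 1, 1, 1, 1, 1]))  # many positive findings
print(smt_likelihood_index([0, 0, 0, 0, 0, 0]))  # low-risk subgroup
```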
Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan
2016-02-01
Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.
Improvements in Spectrum's fit to program data tool.
Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John
2017-04-01
The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
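A minimal sketch of fitting a double-logistic incidence curve to program data follows. The functional form is one plausible rise-and-decline parameterization, the case counts are simulated, and ordinary least squares stands in for the ML / minimum chi-squared fits described; all names are mine, not Spectrum's.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, peak, k1, t1, k2, t2):
    """Rise-and-decline incidence curve: the product of an increasing
    and a decreasing logistic (one hedged form of a double logistic)."""
    rise = 1.0 / (1.0 + np.exp(-k1 * (t - t1)))
    fall = 1.0 / (1.0 + np.exp(k2 * (t - t2)))
    return peak * rise * fall

# Hypothetical yearly counts of newly reported HIV cases.
years = np.arange(1990, 2016)
truth = double_logistic(years, 5000, 0.8, 1996, 0.3, 2006)
rng = np.random.default_rng(5)
cases = rng.poisson(truth)

popt, _ = curve_fit(double_logistic, years, cases,
                    p0=[4000, 0.5, 1995, 0.5, 2005], maxfev=20000)
print(popt)  # fitted (peak, k1, t1, k2, t2)
```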
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
Safakish, Ramin
2017-01-01
Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following statistic is the closest estimate for our patient population: sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared. In addition, we compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.
Uddin, Jalal; Pulok, Mohammad Habibullah; Sabah, Md Nasim-Us
2016-07-01
A large body of literature has highlighted that women's household decision-making power is associated with better reproductive health outcomes, while most studies tend to measure such power from only the woman's point of view. Using both the husband's and the wife's matched responses to decision-making questions, this study examined the association between couples' concordant and discordant decision making and the wife's unmet need for contraception in Bangladesh. This study used the couples' data set (n=3336) from the Bangladesh Demographic and Health Survey of 2007. Multivariate logistic regression was used to examine the likelihood of unmet need for contraception among married women of reproductive age. Study results suggested that couples who support an equalitarian power structure seemed to be more successful in meeting the unmet demand for contraception. Logistic regression analysis revealed that, compared to couples' concordant joint decision making, concordance in husband-only or others' involvement in decision making was associated with higher odds of unmet need for contraception. Wives exposed to family planning information discussed family planning more often with their husbands, and those from the richest households were less likely to have unmet need for contraception. Couples' concordant joint decision making, reflecting the concept of an equalitarian power structure, appeared to be a significant analytic category. Policy makers in the field of family planning may promote community-based outreach programs and communication campaigns for family planning focusing on egalitarian gender roles in the household. Copyright © 2016 Elsevier Inc. All rights reserved.
Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A
2015-04-01
Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.
Dai, Jiajuan; Wang, Xusheng; Chen, Ying; Wang, Xiaodong; Zhu, Jun; Lu, Lu
2009-11-01
Previous studies have revealed that the subunit alpha 2 (Gabra2) of the gamma-aminobutyric acid receptor plays a critical role in the stress response. However, little is known about the genetic regulatory network for Gabra2 and the stress response. We combined gene expression microarray analysis and quantitative trait loci (QTL) mapping to characterize the genetic regulatory network for Gabra2 expression in the hippocampus of BXD recombinant inbred (RI) mice. Our analysis found that the expression level of Gabra2 exhibited much variation in the hippocampus across the BXD RI strains and between the parental strains, C57BL/6J and DBA/2J. Expression QTL (eQTL) mapping showed three microarray probe sets of Gabra2 to have highly significant linkage likelihood ratio statistic (LRS) scores. Gene co-regulatory network analysis showed that 10 genes, including Gria3, Chka, Drd3, Homer1, Grik2, Odz4, Prkag2, Grm5, Gabrb1, and Nlgn1, are directly or indirectly associated with stress responses. Eleven genes were implicated as Gabra2 downstream genes through mapping joint modulation. The genetical genomics approach demonstrates the importance and the potential power of eQTL studies in identifying genetic regulatory networks that contribute to complex traits, such as stress responses.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
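To ground the idea, here is a minimal sketch of empirical likelihood for the simplest estimating equation, a scalar mean (one equation, one parameter, rather than the over-identified system studied in the paper); the profiling of the weights via a Lagrange multiplier is the standard construction, and the data are simulated.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """Empirical likelihood log-ratio for a candidate mean mu: profile
    the weights w_i maximizing sum(log(n * w_i)) subject to
    sum(w_i) = 1 and the estimating equation sum(w_i * (x_i - mu)) = 0."""
    z = x - mu
    # The multiplier solves sum(z_i / (1 + lam * z_i)) = 0 on the
    # admissible range (requires mu strictly inside the data range).
    lo = (-1.0 / z.max()) + 1e-10
    hi = (-1.0 / z.min()) - 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    w = 1.0 / (len(x) * (1.0 + lam * z))
    return np.sum(np.log(len(x) * w))

rng = np.random.default_rng(6)
x = rng.normal(2.0, 1.0, 100)
# -2 * log-ratio is asymptotically chi-squared(1): a 95% EL confidence
# interval collects mu values with -2 * el_log_ratio(x, mu) < 3.84.
for mu in (1.8, 2.0, 2.5):
    print(mu, -2.0 * el_log_ratio(x, mu))
```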
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (Probability Density Functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in essentially the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate such nuisance parameters by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
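A minimal sketch of the construction for a setting like the paper's example (a linear fit with a common additive Type B component): the extended -2 log likelihood appends a Gaussian PDF for the correction factor, which is then profiled out. All numbers and names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def extended_m2ll(b, c, x, y, sigma, sigma_b):
    """-2 log of the extended likelihood: the ordinary least-squares
    term plus a Gaussian PDF for a common additive correction c
    (the nuisance parameter carrying the Type B uncertainty)."""
    resid = y - (b * x + c)
    return np.sum((resid / sigma) ** 2) + (c / sigma_b) ** 2

def profile_m2ll(b, x, y, sigma, sigma_b):
    # Eliminate the nuisance parameter by minimizing over c at fixed b.
    res = minimize_scalar(lambda c: extended_m2ll(b, c, x, y, sigma, sigma_b))
    return res.fun

rng = np.random.default_rng(7)
x = np.linspace(1.0, 10.0, 20)
y = 0.8 * x + 0.3 + rng.normal(0.0, 0.2, x.size)  # 0.3 = unknown offset

bs = np.linspace(0.7, 0.9, 201)
prof = np.array([profile_m2ll(b, x, y, 0.2, sigma_b=0.5) for b in bs])
b_hat = bs[prof.argmin()]
ci = bs[prof < prof.min() + 1.0]   # approximate 68% profile interval
print(b_hat, ci.min(), ci.max())
```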
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1982-01-01
A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)
Finite Element Analysis of the Maximum Stress at the Joints of the Transmission Tower
NASA Astrophysics Data System (ADS)
Itam, Zarina; Beddu, Salmia; Liyana Mohd Kamal, Nur; Bamashmos, Khaled H.
2016-03-01
Transmission towers are tall structures, usually steel lattice towers, used to support an overhead power line. Usually, transmission towers are analyzed as frame-truss systems and the members are assumed to be pin-connected, without explicitly considering the effects of joints on the tower behavior. In this research, an engineering example of a joint is analyzed with consideration of the joint detailing to investigate how it affects the tower analysis. A static analysis using STAAD Pro was conducted to identify the joint with the maximum stress. This joint was then explicitly analyzed in ANSYS using the Finite Element Method. Three approaches were used in the software: a simple plate model, bonded contact with no bolts, and beam element bolts. Results from the joint analysis show that stress values increased when joint details were considered. This shows that joints and connections play an important role in the distribution of stress within the transmission tower.
Design/analysis of the JWST ISIM bonded joints for survivability at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Bartoszyk, Andrew; Johnston, John; Kaprielian, Charles; Kuhn, Jonathan; Kunt, Cengiz; Rodini, Benjamin; Young, Daniel
2005-08-01
A major design and analysis challenge for the JWST ISIM structure is thermal survivability of metal/composite adhesively bonded joints at the cryogenic temperature of 30K (-405°F). Current bonded joint concepts include internal invar plug fittings, external saddle titanium/invar fittings and composite gusset/clip joints all bonded to hybrid composite tubes (75mm square) made with M55J/954-6 and T300/954-6 prepregs. Analytical experience and design work done on metal/composite bonded joints at temperatures below that of liquid nitrogen are limited and important analysis tools, material properties, and failure criteria for composites at cryogenic temperatures are sparse in the literature. Increasing this challenge is the difficulty in testing for these required tools and properties at cryogenic temperatures. To gain confidence in analyzing and designing the ISIM joints, a comprehensive joint development test program has been planned and is currently running. The test program is designed to produce required analytical tools and develop a composite failure criterion for bonded joint strengths at cryogenic temperatures. Finite element analysis is used to design simple test coupons that simulate anticipated stress states in the flight joints; subsequently, the test results are used to correlate the analysis technique for the final design of the bonded joints. In this work, we present an overview of the analysis and test methodology, current results, and working joint designs based on developed techniques and properties.
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
Analysis of Bonded Joints Between the Facesheet and Flange of Corrugated Composite Panels
NASA Technical Reports Server (NTRS)
Yarrington, Phillip W.; Collier, Craig S.; Bednarcyk, Brett A.
2008-01-01
This paper outlines a method for the stress analysis of bonded composite corrugated panel facesheet to flange joints. The method relies on the existing HyperSizer Joints software, which analyzes the bonded joint, along with a beam analogy model that provides the necessary boundary loading conditions to the joint analysis. The method is capable of predicting the full multiaxial stress and strain fields within the flange to facesheet joint and thus can determine ply-level margins and evaluate delamination. Results comparing the method to NASTRAN finite element model stress fields are provided illustrating the accuracy of the method.
Dumas, R; Cheze, L
2008-08-01
Joint power is commonly used in orthopaedics, ergonomics and sports analysis, but its clinical interpretation remains controversial. Some basic principles on muscle actions and energy transfer have been proposed in 2D. The decomposition of power on 3 axes, although questionable, allows the same analysis in 3D. However, these basic principles have been widely criticized, mainly because bi-articular muscles must be considered. This requires a more complex computation in order to determine how the individual muscle forces contribute to drive the joint. Conversely, with simple 3D inverse dynamics, the analysis of both the joint moment and angular velocity directions is essential to clarify when the joint moment can contribute or not to drive the joint. The present study evaluates the 3D angle between the joint moment and the joint angular velocity and investigates when the hip, knee and ankle joints are predominantly driven (angle close to 0 degrees or 180 degrees) or stabilized (angle close to 90 degrees) during gait. The 3D angle curves show that the three joints are never fully but only partially driven, and that the hip and knee joints are mainly stabilized during the stance phase. The notion of stabilization should be further investigated, especially for subjects with motion disorders or prostheses.
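The quantity at the heart of the study is simply the angle between two 3D vectors; a minimal sketch (with invented joint data) follows.

```python
import numpy as np

def moment_velocity_angle(moment, omega):
    """3D angle (degrees) between the joint moment and the joint
    angular velocity: near 0 or 180 degrees the joint is predominantly
    driven, near 90 degrees predominantly stabilized."""
    c = np.dot(moment, omega) / (np.linalg.norm(moment) * np.linalg.norm(omega))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Invented instantaneous hip data (N*m and rad/s).
M = np.array([20.0, 5.0, 2.0])
w = np.array([0.1, 1.2, -0.2])
print(moment_velocity_angle(M, w))
print(np.dot(M, w))  # joint power; it vanishes as the angle nears 90 deg
```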
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches, which we combine via joint entropy for nonlinear dynamic complexity analysis. Two global static methods (the symbolic transformations of Wessel et al.'s symbolic entropy and of base-scale entropy) and two local dynamic ones (the symbolizations of permutation entropy and differential entropy) constitute four double symbolic joint entropies that accurately detect complexity in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of healthy young subjects have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by the double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results show that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
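A minimal sketch of one such double symbolic joint entropy follows, pairing a global static (quantile-bin) symbolization with a local dynamic (sign-of-difference) one and testing it on the logistic map; these symbolization choices are simplified stand-ins for the ones named in the abstract.

```python
import numpy as np
from collections import Counter

def static_symbols(x, n_bins=4):
    """Global static symbolization: quantile-bin each sample value."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def dynamic_symbols(x):
    """Local dynamic symbolization: sign of the first difference."""
    return (np.diff(x) > 0).astype(int)

def joint_entropy(a, b):
    """Shannon entropy (bits) of the joint symbol distribution."""
    counts = Counter(zip(a, b))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def logistic_map(n, r=4.0, x0=0.3):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

# Chaotic series vs white noise: joint entropy as a complexity measure.
for series in (logistic_map(5000),
               np.random.default_rng(8).uniform(size=5000)):
    s = static_symbols(series)[1:]   # align with the differenced series
    d = dynamic_symbols(series)
    print(joint_entropy(s, d))
```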
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
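In the Poisson photon-counting setting, the likelihood-ratio recipe reduces to comparing minimized Poisson -2 log likelihoods of nested models; the sketch below (invented binned data, not the paper's examples) illustrates testing for an added source component.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import chi2

def cash(m, n):
    """Poisson -2 log likelihood, up to a model-independent constant:
    2 * sum(m_i - n_i * ln m_i), the usual photon-counting form."""
    return 2.0 * np.sum(m - n * np.log(m))

# Invented binned counts: flat background plus a bump in one bin.
rng = np.random.default_rng(9)
data = rng.poisson(5.0, 100).astype(float)
data[50] += 12

# Null model: flat background with one free level b.
fit0 = minimize_scalar(lambda b: cash(np.full(100, b), data),
                       bounds=(0.1, 50.0), method='bounded')

# Alternative: background plus a free source amplitude s in bin 50.
def model1(p):
    m = np.full(100, p[0])
    m[50] += p[1]
    return m

fit1 = minimize(lambda p: cash(model1(p), data), x0=[5.0, 5.0],
                bounds=[(0.1, 50.0), (0.0, 50.0)])

delta = fit0.fun - fit1.fun         # ~ chi-squared(1) under the null
print(delta, chi2.sf(delta, df=1))  # significance of the extra component
```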
ATAC Autocuer Modeling Analysis.
1981-01-01
the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
Massive Halos in Millennium Gas Simulations: Multivariate Scaling Relations
NASA Astrophysics Data System (ADS)
Stanek, R.; Rasia, E.; Evrard, A. E.; Pearce, F.; Gazzola, L.
2010-06-01
The joint likelihood of observable cluster signals reflects the astrophysical evolution of the coupled baryonic and dark matter components in massive halos, and its knowledge will enhance cosmological parameter constraints in the coming era of large, multiwavelength cluster surveys. We present a computational study of intrinsic covariance in cluster properties using halo populations derived from Millennium Gas Simulations (MGS). The MGS are re-simulations of the original 500 h^-1 Mpc Millennium Simulation performed with gas dynamics under two different physical treatments: shock heating driven by gravity only (GO) and a second treatment with cooling and preheating (PH). We examine relationships among structural properties and observable X-ray and Sunyaev-Zel'dovich (SZ) signals for samples of thousands of halos with M_200 >= 5 × 10^13 h^-1 M_sun and z < 2. While the X-ray scaling behavior of PH model halos at low redshift offers a good match to local clusters, the model exhibits non-standard features testable with larger surveys, including weakly running slopes in hot gas observable-mass relations and ~10% departures from self-similar redshift evolution for 10^14 h^-1 M_sun halos at redshift z ~ 1. We find that the form of the joint likelihood of signal pairs is generally well described by a multivariate, log-normal distribution, especially in the PH case which exhibits less halo substructure than the GO model. At fixed mass and epoch, joint deviations of signal pairs display mainly positive correlations, especially the thermal SZ effect paired with either hot gas fraction (r = 0.88/0.69 for PH/GO at z = 0) or X-ray temperature (r = 0.62/0.83). The levels of variance in X-ray luminosity, temperature, and gas mass fraction are sensitive to the physical treatment, but offsetting shifts in the latter two measures maintain a fixed 12% scatter in the integrated SZ signal under both gas treatments. We discuss halo mass selection by signal pairs, and find a minimum mass scatter of 4% in the PH model by combining thermal SZ and gas fraction measurements.
Lu, Jianing; Li, Xiang; Fu, Songnian; Luo, Ming; Xiang, Meng; Zhou, Huibin; Tang, Ming; Liu, Deming
2017-03-06
We present dual-polarization complex-weighted, decision-aided, maximum-likelihood algorithm with superscalar parallelization (SSP-DP-CW-DA-ML) for joint carrier phase and frequency-offset estimation (FOE) in coherent optical receivers. By pre-compensation of the phase offset between signals in dual polarizations, the performance can be substantially improved. Meanwhile, with the help of modified SSP-based parallel implementation, the acquisition time of FO and the required number of training symbols are reduced by transferring the complex weights of the filters between adjacent buffers, where differential coding/decoding is not required. Simulation results show that the laser linewidth tolerance of our proposed algorithm is comparable to traditional blind phase search (BPS), while a complete FOE range of ± symbol rate/2 can be achieved. Finally, performance of our proposed algorithm is experimentally verified under the scenario of back-to-back (B2B) transmission using 10 Gbaud DP-16/32-QAM formats.
Lee, Jung Yeon; Brook, Judith S; Finch, Stephen J; De La Rosa, Mario; Brook, David W
2017-01-01
The current study examines longitudinal patterns of cigarette smoking and depressive symptoms as predictors of generalized anxiety disorder using data from the Harlem Longitudinal Development Study. There were 674 African American (53%) and Puerto Rican (47%) participants. Among the 674 participants, 60% were females. In the logistic regression analyses, the indicators of membership in each of the joint trajectories of cigarette smoking and depressive symptoms from the mid-20s to the mid-30s were used as the independent variables, and the diagnosis of generalized anxiety disorder in the mid-30s was used as the dependent variable. The high cigarette smoking with high depressive symptoms group and the low cigarette smoking with high depressive symptoms group were associated with an increased likelihood of having generalized anxiety disorder as compared to the no cigarette smoking with low depressive symptoms group. The findings shed light on the prevention and treatment of generalized anxiety disorder.
Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements
NASA Astrophysics Data System (ADS)
Mukherjee, Suvodip; Das, Santanu; Joy, Minu; Souradeep, Tarun
2015-01-01
Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. The higher-order derivatives of the Hubble parameter at leading order source a constant difference in the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we assess the viability of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r=0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result, we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r<0.1 or r<0.01) for a scalar spectral index of n_s=0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff^2/H^2. The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff^2/H^2 at 5.7σ and 8.1σ for r<0.1 and r<0.01, respectively, whereas with the BICEP-2 likelihood m_eff^2/H^2 = -0.0237 ± 0.0135, which is consistent with zero.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
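The prior-likelihood weighting described here has a closed form in the conjugate Beta-binomial case; a minimal sketch (with invented prior and sample numbers) shows the weight shifting toward the likelihood as sample size grows.

```python
import numpy as np

def posterior_mean(prior_a, prior_b, successes, n):
    """Beta prior + binomial likelihood -> Beta posterior. The posterior
    mean is a precision-weighted blend of the prior mean and the sample
    mean, with the likelihood weighted more heavily as n grows."""
    a, b = prior_a + successes, prior_b + (n - successes)
    return a / (a + b)

prior_a, prior_b = 6.0, 4.0             # prior belief: reward prob ~ 0.6
for n in (2, 10, 100):                  # increasing sample size
    successes = int(0.3 * n)            # current evidence points to 0.3
    w_lik = n / (n + prior_a + prior_b) # effective weight on the data
    print(n, posterior_mean(prior_a, prior_b, successes, n), round(w_lik, 2))
```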
Elasto-Plastic Analysis of Tee Joints Using HOT-SMAC
NASA Technical Reports Server (NTRS)
Arnold, Steve M. (Technical Monitor); Bednarcyk, Brett A.; Yarrington, Phillip W.
2004-01-01
The Higher Order Theory - Structural/Micro Analysis Code (HOT-SMAC) software package is applied to analyze the linearly elastic and elasto-plastic response of adhesively bonded tee joints. Joints of this type are finding an increasing number of applications with the increased use of composite materials within advanced aerospace vehicles, and improved tools for the design and analysis of these joints are needed. The linearly elastic results of the code are validated vs. finite element analysis results from the literature under different loading and boundary conditions, and new results are generated to investigate the inelastic behavior of the tee joint. The comparison with the finite element results indicates that HOT-SMAC is an efficient and accurate alternative to the finite element method and has a great deal of potential as an analysis tool for a wide range of bonded joints.
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
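The central device in this framework, marginalizing the likelihood over missing class membership, can be sketched in a deliberately simplified non-spatial form. The binomial detection model, the data, and the parameter names below are hypothetical stand-ins for the full SCR likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Toy data: y = detections out of K occasions; sex = 0/1, or -1 if unknown.
K = 10
y   = np.array([3, 1, 4, 2, 5, 0, 2, 6])
sex = np.array([0, 1, -1, -1, 0, -1, 1, -1])

def negloglik(theta):
    """Sex-specific detection rates p0, p1 (logit scale) and sex ratio psi.
    Unknown-sex individuals contribute a mixture over the two classes."""
    p0, p1 = 1/(1+np.exp(-theta[0])), 1/(1+np.exp(-theta[1]))
    psi    = 1/(1+np.exp(-theta[2]))  # Pr(sex = 1)
    ll = 0.0
    for yi, si in zip(y, sex):
        if si == 0:
            ll += np.log((1-psi) * binom.pmf(yi, K, p0))
        elif si == 1:
            ll += np.log(psi * binom.pmf(yi, K, p1))
        else:  # sex missing: marginalize over class membership
            ll += np.log((1-psi)*binom.pmf(yi, K, p0)
                         + psi  *binom.pmf(yi, K, p1))
    return -ll

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)  # MLEs on the logit scale
```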
Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper, the performance of repeat-accumulate codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.
Pain and pain management in haemophilia
Auerswald, Günter; Dolan, Gerry; Duffy, Anne; Hermans, Cedric; Jiménez-Yuste, Victor; Ljung, Rolf; Morfini, Massimo; Lambert, Thierry; Šalek, Silva Zupančić
2016-01-01
Joint pain is common in haemophilia and may be acute or chronic. Effective pain management in haemophilia is essential to reduce the burden that pain imposes on patients. However, the choice of appropriate pain-relieving measures is challenging, as there is a complex interplay of factors affecting pain perception. This can manifest as differences in patients’ experiences and response to pain, which require an individualized approach to pain management. Prophylaxis with factor replacement reduces the likelihood of bleeds and bleed-related pain, whereas on-demand therapy ensures rapid bleed resolution and pain relief. Although use of replacement or bypassing therapy is often the first intervention for pain, additional pain relief strategies may be required. There is an array of analgesic options, but consideration should be given to the adverse effects of each class. Nevertheless, a combination of medications that act at different points in the pain pathway may be beneficial. Nonpharmacological measures may also help patients and include active coping strategies; rest, ice, compression, and elevation; complementary therapies; and physiotherapy. Joint aspiration may also reduce acute joint pain, and joint steroid injections may alleviate chronic pain. In the longer term, increasing use of prophylaxis or performing surgery may be necessary to reduce the burden of pain caused by the degenerative effects of repeated bleeds. Whichever treatment option is chosen, it is important to monitor pain and adjust patient management accordingly. Beyond specific pain management approaches, ongoing collaboration between multidisciplinary teams, which should include physiotherapists and pain specialists, may improve outcomes for patients. PMID:27439216
Diagnosis, treatment, and prevention of gout.
Hainer, Barry L; Matheson, Eric; Wilkes, R Travis
2014-12-15
Gout is characterized by painful joint inflammation, most commonly in the first metatarsophalangeal joint, resulting from precipitation of monosodium urate crystals in a joint space. Gout is typically diagnosed using clinical criteria from the American College of Rheumatology. Diagnosis may be confirmed by identification of monosodium urate crystals in synovial fluid of the affected joint. Acute gout may be treated with nonsteroidal anti-inflammatory drugs, corticosteroids, or colchicine. To reduce the likelihood of recurrent flares, patients should limit their consumption of certain purine-rich foods (e.g., organ meats, shellfish) and avoid alcoholic drinks (especially beer) and beverages sweetened with high-fructose corn syrup. Consumption of vegetables and low-fat or nonfat dairy products should be encouraged. The use of loop and thiazide diuretics can increase uric acid levels, whereas the use of the angiotensin receptor blocker losartan increases urinary excretion of uric acid. Reduction of uric acid levels is key to avoiding gout flares. Allopurinol and febuxostat are first-line medications for the prevention of recurrent gout, and colchicine and/or probenecid are reserved for patients who cannot tolerate first-line agents or in whom first-line agents are ineffective. Patients receiving urate-lowering medications should be treated concurrently with nonsteroidal anti-inflammatory drugs, colchicine, or low-dose corticosteroids to prevent flares. Treatment should continue for at least three months after uric acid levels fall below the target goal in those without tophi, and for six months in those with a history of tophi.
Kloefkorn, Heidi E.; Allen, Kyle D.
2017-01-01
Aim of the Study The importance of the medial meniscus to knee health is demonstrated by studies which show meniscus injuries significantly increase the likelihood of developing osteoarthritis (OA), and knee OA can be modeled in rodents using simulated meniscus injuries. Traditionally, histological assessments of OA in these models have focused on damage to the articular cartilage; however, OA is now viewed as a disease of the entire joint as an organ system. The aim of this study was to develop quantitative histological measures of bone and synovial changes in a rat medial meniscus injury model of knee OA. Materials and Methods To initiate OA, a medial meniscus transection (MMT) and a medial collateral ligament transection (MCLT) were performed in 32 male Lewis rats (MMT group). MCLT alone served as the sham procedure in 32 additional rats (MCLT sham group). At weeks 1, 2, 4, and 6 post-surgery, histological assessment of subchondral bone and synovium was performed (n = 8 per group per time point). Results Trabecular bone area and the ossification width at the osteochondral interface increased in both the MMT and MCLT groups. Subintimal synovial cell morphology also changed in MMT and MCLT groups relative to naïve animals. Conclusions OA affects the joint as an organ system, and quantifying changes throughout an entire joint can improve our understanding of the relationship between joint destruction and painful OA symptoms following meniscus injury. PMID:27797605
Influence of solder joint length to the mechanical aspect during the thermal stress analysis
NASA Astrophysics Data System (ADS)
Tan, J. S.; Khor, C. Y.; Rahim, Wan Mohd Faizal Wan Abd; Ishak, Muhammad Ikman; Rosli, M. U.; Jamalludin, Mohd Riduan; Zakaria, M. S.; Nawi, M. A. M.; Aziz, M. S. Abdul; Ani, F. Che
2017-09-01
Solder joints are important interconnects in the surface mount technology (SMT) assembly process. The real-time stress, strain and displacement of a solder joint are difficult to observe and assess experimentally. To tackle this problem, simulation analysis was employed to study the von Mises stress, strain and displacement under thermal stress analysis using finite-element-based software. In this study, a model of a leadless electronic package was considered. The thermal stress analysis was performed to investigate the effect of solder length on these mechanical aspects. The simulation results revealed that solder length has a significant effect on the maximum von Mises stress in the solder joint. Besides, changes in solder length also influence the displacement of the solder joint in the thermal environment. Increasing the solder length significantly reduces the von Mises stress and strain on the solder joint. An understanding of this physical parameter of the solder joint is therefore important for engineers when designing the solder joints of electronic components.
Gutenkunst, Ryan N.; Hernandez, Ryan D.; Williamson, Scott H.; Bustamante, Carlos D.
2009-01-01
Demographic models built from genetic data play important roles in illuminating prehistorical events and serving as null models in genome scans for selection. We introduce an inference method based on the joint frequency spectrum of genetic variants within and between populations. For candidate models we numerically compute the expected spectrum using a diffusion approximation to the one-locus, two-allele Wright-Fisher process, involving up to three simultaneous populations. Our approach is a composite likelihood scheme, since linkage between neutral loci alters the variance but not the expectation of the frequency spectrum. We thus use bootstraps incorporating linkage to estimate uncertainties for parameters and significance values for hypothesis tests. Our method can also incorporate selection on single sites, predicting the joint distribution of selected alleles among populations experiencing a bevy of evolutionary forces, including expansions, contractions, migrations, and admixture. We model human expansion out of Africa and the settlement of the New World, using 5 Mb of noncoding DNA resequenced in 68 individuals from 4 populations (YRI, CHB, CEU, and MXL) by the Environmental Genome Project. We infer divergence between West African and Eurasian populations 140 thousand years ago (95% confidence interval: 40–270 kya). This is earlier than other genetic studies, in part because we incorporate migration. We estimate the European (CEU) and East Asian (CHB) divergence time to be 23 kya (95% c.i.: 17–43 kya), long after archeological evidence places modern humans in Europe. Finally, we estimate divergence between East Asians (CHB) and Mexican-Americans (MXL) of 22 kya (95% c.i.: 16.3–26.9 kya), and our analysis yields no evidence for subsequent migration. Furthermore, combining our demographic model with a previously estimated distribution of selective effects among newly arising amino acid mutations accurately predicts the frequency spectrum of nonsynonymous variants across three continental populations (YRI, CHB, CEU). PMID:19851460
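The composite likelihood idea here can be illustrated with a Poisson log-likelihood of an observed frequency spectrum against a model-expected spectrum. The neutral-like expected spectrum below is a placeholder, not the paper's diffusion computation.

```python
import numpy as np

def sfs_composite_loglik(observed, expected):
    """Poisson composite log-likelihood of an observed frequency spectrum
    given a model-expected spectrum. Entries are treated as independent,
    which is why bootstraps over linked data are needed for uncertainty."""
    observed = np.asarray(observed, float)
    expected = np.asarray(expected, float)
    mask = expected > 0
    return np.sum(observed[mask] * np.log(expected[mask]) - expected[mask])

# Toy example: neutral-like expected spectrum theta/i for a sample of 10.
theta = 50.0
i = np.arange(1, 10)
expected = theta / i
observed = np.random.default_rng(1).poisson(expected)
print(sfs_composite_loglik(observed, expected))
```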
Northwest Climate Risk Assessment
NASA Astrophysics Data System (ADS)
Mote, P.; Dalton, M. M.; Snover, A. K.
2012-12-01
As part of the US National Climate Assessment, the Northwest region undertook a process of climate risk assessment. This process included an expert evaluation of previously identified impacts, their likelihoods, and consequences, and engaged experts from both academia and natural resource management practice (federal, tribal, state, local, private, and non-profit) in a workshop setting. An important input was a list of 11 risks compiled by state agencies in Oregon and similar adaptation efforts in Washington. By considering jointly the likelihoods, consequences, and adaptive capacity, participants arrived at an approximately ranked list of risks which was further assessed and prioritized through a series of risk scoring exercises to arrive at the top three climate risks facing the Northwest: 1) changes in amount and timing of streamflow related to snowmelt, causing far-reaching ecological and socioeconomic consequences; 2) coastal erosion and inundation, and changing ocean acidity, combined with low adaptive capacity in the coastal zone to create large risks; and 3) the combined effects of wildfire, insect outbreaks, and diseases will cause large areas of forest mortality and long-term transformation of forest landscapes.
Balsa, Ana I.; Homer, Jenny F.; French, Michael T.; Weisner, Constance M.
2010-01-01
Although the primary outcome of interest in clinical evaluations of addiction treatment programs is usually abstinence, participation in these programs can have a wide range of consequences. This study evaluated the effects of treatment initiation on substance use, school attendance, employment, and involvement in criminal activity at 12 months post-admission for 419 adolescents (aged 12 to 18) enrolled in chemical dependency recovery programs in a large managed care health plan. Instrumental variables estimation methods were used to account for unobserved selection into treatment by jointly modeling the likelihood of participation in treatment and the odds of attaining a certain outcome or level of an outcome. Treatment initiation significantly increased the likelihood of attending school, promoted abstinence, and decreased the probability of adolescent employment, but it did not significantly affect participation in criminal activity at the 12-month follow-up. These findings highlight the need to address selection in a non-experimental study and demonstrate the importance of considering multiple outcomes when assessing the effectiveness of adolescent treatment. PMID:18064572
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are versatile optical transmitters by nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
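MLSE over a channel with memory is typically implemented with the Viterbi algorithm. The sketch below assumes a hypothetical linear memory model in place of the paper's chirp-based transition probabilities, so it shows the structure of the estimator rather than the reported system.

```python
import numpy as np

def mlse_viterbi(rx, n_sym, mean_fn, sigma):
    """Maximum likelihood sequence estimation via the Viterbi algorithm.
    mean_fn(prev, cur) gives the expected received sample when symbol
    `cur` follows symbol `prev` (a stand-in for a chirp-based transition
    model); noise is taken as Gaussian with standard deviation sigma."""
    T = len(rx)
    cost = np.zeros(n_sym)                # path metrics per state
    back = np.zeros((T, n_sym), dtype=int)
    for t in range(T):
        new_cost = np.empty(n_sym)
        for cur in range(n_sym):
            # branch metrics from every previous symbol to `cur`
            branch = np.array([(rx[t] - mean_fn(prev, cur))**2
                               for prev in range(n_sym)]) / (2*sigma**2)
            total = cost + branch
            back[t, cur] = int(np.argmin(total))
            new_cost[cur] = total[back[t, cur]]
        cost = new_cost
    # trace back the jointly most likely symbol sequence
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

mean_fn = lambda prev, cur: cur + 0.3*prev  # hypothetical memory model
rng = np.random.default_rng(0)
true = rng.integers(0, 4, 30)               # PAM-4 symbol indices
rx = np.array([mean_fn(true[t-1] if t > 0 else 0, true[t])
               for t in range(30)]) + 0.1*rng.normal(size=30)
est = mlse_viterbi(rx, 4, mean_fn, 0.1)
print("symbol errors:", sum(int(a != b) for a, b in zip(est, true)))
```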
MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES
Su, Yu-Ru; Wang, Jane-Ling
2018-01-01
There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case which involves baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right censored data, but this approach faces substantial additional challenges in the presence of left truncation. Here we thus propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
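The contrast between the normal approximation and the exact binomial likelihood can be seen even in a simple fixed-effect pooling of sensitivities. This sketch uses invented study counts and omits the bivariate and random-effects structure of the reanalysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

# (true positives, diseased) per study -- hypothetical small studies
tp = np.array([ 8, 18,  5, 45])
n  = np.array([10, 20,  6, 50])

# Normal approximation: inverse-variance pooling of logit sensitivities
# (a continuity correction is needed when tp == n).
tp_c, n_c = tp + 0.5, n + 1.0
y = logit(tp_c / n_c)
v = 1.0/tp_c + 1.0/(n_c - tp_c)
pooled_normal = expit(np.sum(y/v) / np.sum(1.0/v))

# Exact binomial likelihood: maximize the sum of binomial log-likelihoods.
def nll(mu):
    p = expit(mu)
    return -np.sum(tp*np.log(p) + (n-tp)*np.log(1-p))
pooled_binom = expit(minimize_scalar(nll, bounds=(-5, 5), method="bounded").x)

print(f"normal approx: {pooled_normal:.3f}  binomial ML: {pooled_binom:.3f}")
# The normal approximation is pulled toward 50%, as the reanalysis found.
```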
Hospital mergers and market overlap.
Brooks, G R; Jones, V G
1997-01-01
OBJECTIVE: To address two questions: What are the characteristics of hospitals that affect the likelihood of their being involved in a merger? What characteristics of particular pairs of hospitals affect the likelihood of the pair engaging in a merger? DATA SOURCES/STUDY SETTING: Hospitals in the 12 county region surrounding the San Francisco Bay during the period 1983 to 1992 were the focus of the study. Data were drawn from secondary sources, including the Lexis/Nexis database, the American Hospital Association, and the Office of Statewide Health Planning and Development of the State of California. STUDY DESIGN: Seventeen hospital mergers during the study period were identified. A random sample of pairs of hospitals that did not merge was drawn to establish a statistically efficient control set. Models constructed from hypotheses regarding hospital and market characteristics believed to be related to merger likelihood were tested using logistic regression analysis. DATA COLLECTION: See Data Sources/Study Setting. PRINCIPAL FINDINGS: The analysis shows that the likelihood of a merger between a particular pair of hospitals is positively related to the degree of market overlap that exists between them. Furthermore, market overlap and performance difference interact in their effect on merger likelihood. In an analysis of individual hospitals, conditions of rivalry, hospital market share, and hospital size were not found to influence the likelihood that a hospital will engage in a merger. CONCLUSIONS: Mergers between hospitals are not driven directly by considerations of market power or efficiency as much as by the existence of specific merger opportunities in the hospitals' local markets. Market overlap is a condition that enables a merger to occur, but other factors, such as the relative performance levels of the hospitals in question and their ownership and teaching status, also play a role in influencing the likelihood that a merger will in fact take place. PMID:9018212
Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System
Noy, Lior; Weiser, Netta; Friedman, Jason
2017-01-01
In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in leader’s motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confidence (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of co-confident (CC) motions: smooth, jitter-less and synchronized motions indicative of co-predicative control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability to perform CC with three sets of data of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. It also shows that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047
Sotos-Prieto, Mercedes; Baylin, Ana; Campos, Hannia; Qi, Lu; Mattei, Josiemer
2016-12-20
A lifestyle cardiovascular risk score (LCRS) and a genetic risk score (GRS) have been independently associated with myocardial infarction (MI) in Hispanics/Latinos. Interaction or joint association between these scores has not been examined. Thus, our aim was to assess interactive and joint associations between LCRS and GRS, and each individual lifestyle risk factor, on likelihood of MI. Data included 1534 Costa Rican adults with nonfatal acute MI and 1534 matched controls. The LCRS used estimated coefficients as weights for each factor: unhealthy diet, physical inactivity, smoking, elevated waist:hip ratio, low/high alcohol intake, low socioeconomic status. The GRS included 14 MI-associated risk alleles. Conditional logistic regressions were used to calculate adjusted odds ratios. The odds ratios for MI were 2.72 (2.33, 3.17) per LCRS unit and 1.13 (95% CI 1.06, 1.21) per GRS unit. A significant joint association for highest GRS tertile and highest LCRS tertile and odds of MI was detected (odds ratio=5.43 [3.71, 7.94]; P<1.00×10^-7), compared to both lowest tertiles. The odds ratios were 1.74 (1.22, 2.49) under optimal lifestyle and unfavorable genetic profile, and 5.02 (3.46, 7.29) under unhealthy lifestyle but advantageous genetic profile. Significant joint associations were observed for the highest GRS tertile and the highest of each lifestyle component risk category. The interaction term was nonsignificant (P=0.33). Lifestyle risk factors and genetics are jointly associated with higher odds of MI among Hispanics/Latinos. Individual and combined lifestyle risk factors showed stronger associations. Efforts to improve lifestyle behaviors could help prevent MI regardless of genetic susceptibility. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Katz, Jeffrey N; Smith, Savannah R; Yang, Heidi Y; Martin, Scott D; Wright, John; Donnell-Fink, Laurel A; Losina, Elena
2017-04-01
To evaluate the utility of clinical history, radiographic findings, and physical examination findings in the diagnosis of symptomatic meniscal tear (SMT) in patients over age 45 years, in whom concomitant osteoarthritis is prevalent. In a cross-sectional study of patients from 2 orthopedic surgeons' clinics, we assessed clinical history, physical examination findings, and radiographic findings in patients age >45 years with knee pain. The orthopedic surgeons rated their confidence that subjects' symptoms were due to meniscal tear; we defined the diagnosis of SMT as at least 70% confidence. We used logistic regression to identify factors independently associated with diagnosis of SMT, and we used the regression results to construct an index of the likelihood of SMT. In 174 participants, 6 findings were associated independently with the expert clinician having ≥70% confidence that symptoms were due to meniscal tear: localized pain, ability to fully bend the knee, pain duration <1 year, lack of varus alignment, lack of pes planus, and absence of joint space narrowing on radiographs. The index identified a low-risk group with 3% likelihood of SMT. While clinicians traditionally rely upon mechanical symptoms in this diagnostic setting, our findings did not support the conclusion that mechanical symptoms were associated with the expert's confidence that symptoms were due to meniscal tear. An index that includes history of localized pain, full flexion, duration <1 year, pes planus, varus alignment, and joint space narrowing can be used to stratify patients according to their risk of SMT, and it identifies a subgroup with very low risk. © 2016, American College of Rheumatology.
Influence of Movie Smoking Exposure and Team Sports Participation on Established Smoking
Adachi-Mejia, Anna M.; Primack, Brian A.; Beach, Michael L.; Titus-Ernstoff, Linda; Longacre, Meghan R.; Weiss, Julia E.; Dalton, Madeline A.
2010-01-01
Objective To examine the joint effects of movie smoking exposure and team sports participation on established smoking. Design Longitudinal study. Setting School- and telephone-based surveys in New Hampshire and Vermont between September 1999 through November 1999 and February 2006 through February 2007. Participants A total of 2048 youths aged 16 to 21 years at follow-up. Main Exposures Baseline movie smoking exposure categorized in quartiles assessed when respondents were aged 9 to 14 years and team sports participation assessed when respondents were aged 16 to 21 years. Main Outcome Measure Established smoking (having smoked ≥100 cigarettes in one’s lifetime) at follow-up. Results At follow-up, 353 respondents (17.2%) were established smokers. Exposure to the highest quartile of movie smoking compared with the lowest increased the likelihood of established smoking (odds ratio=1.63; 95% confidence interval, 1.03–2.57), and team sports nonparticipants compared with participants were twice as likely to be established smokers (odds ratio=2.01; 95% confidence interval, 1.47–2.74). The joint effects of movie smoking exposure and team sports participation revealed that at each quartile of movie smoking exposure, the odds of established smoking were greater for team sports nonparticipants than for participants. We saw a dose-response relationship of movie smoking exposure for established smoking only among team sports participants. Conclusions Team sports participation clearly plays a protective role against established smoking, even in the face of exposure to movie smoking. However, movie smoking exposure increases the risk of established smoking among both team sports participants and nonparticipants. Parents, teachers, coaches, and clinicians should be aware that encouraging team sports participation in tandem with minimizing early exposure to movie smoking may offer the greatest likelihood of preventing youth smoking. PMID:19581547
Conditional High-Order Boltzmann Machines for Supervised Relation Learning.
Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu
2017-09-01
Relation learning is a fundamental problem in many vision tasks. Recently, the high-order Boltzmann machine and its variants have shown great potential in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using the joint likelihood to provide good parameter initialization, and then finetuning them using the conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method consistently outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
A pilot study on the improvement of the lying area of finishing pigs by a soft lying mat.
Savary, Pascal; Gygax, Lorenz; Jungbluth, Thomas; Wechsler, Beat; Hauser, Rudolf
2011-01-01
In this pilot study, we tested whether a soft mat (foam covered with a heat-sealed thermoplastic) reduces alterations and injuries at the skin and the leg joints. The soft mat in the lying area of partly slatted pens was compared to a lying area consisting of either bare or slightly littered (100 g straw per pig and day) concrete flooring. In this study we focused on skin lesions on the legs of finishing pigs as indicators of impaired welfare. Pigs were kept in 19 groups of 8-10 individuals and were examined for skin lesions around the carpal and tarsal joints either at a weight of <35 kg, or at close to 100 kg. The likelihood of hairless patches and wounds at the tarsal joints was significantly lower in pens with the soft lying mat than in pens with a bare concrete floor. Pens with a littered concrete floor did not differ from pens with a bare concrete floor. The soft lying mat thus improved floor quality in the lying area in terms of preventing skin lesions compared to bare and slightly littered concrete flooring. Such soft lying mats thus have the potential to improve lying comfort and welfare of finishing pigs.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
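A small simulation makes the conditional-versus-marginal point concrete. The stopping rule, thresholds, and sample sizes below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, looks = 0.0, (10, 20, 40)   # true mean; candidate sample sizes n1<n2<n3
reps = 50_000
est, nobs = [], []

for _ in range(reps):
    x = rng.normal(mu, 1.0, looks[-1])
    for n in looks:             # stop at the first interim mean above 0.3
        if x[:n].mean() > 0.3 or n == looks[-1]:
            est.append(x[:n].mean()); nobs.append(n)
            break

est, nobs = np.array(est), np.array(nobs)
print("marginal bias of sample average:", est.mean() - mu)
for n in looks:                 # conditional means deviate visibly from mu
    print(f"E[mean | N={n}] =", est[nobs == n].mean())
# The conditional means differ markedly from mu, which is why simulations
# summarized conditionally on N can give a false impression of bias; the
# marginal bias is far smaller and shrinks as the trial grows.
```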
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and a likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in the previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequency, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model which describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of χ²_ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
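For a process with exponential waiting times, the likelihood ratio sensitivity estimator has a closed-form score, which the following sketch exploits. The one-rate Poisson counting process is a toy stand-in for a full KMC lattice simulation of surface chemistry.

```python
import numpy as np

rng = np.random.default_rng(0)
k, T, reps = 2.0, 5.0, 100_000  # rate constant, time horizon, trajectories

def simulate_counts(k):
    """Poisson process with rate k on [0, T]: return the event count and
    the score d/dk log p(trajectory) = n/k - T."""
    n = rng.poisson(k*T)
    return n, n/k - T

f, s = np.array([simulate_counts(k) for _ in range(reps)]).T
# Observable: mean number of events. The likelihood ratio estimator of its
# sensitivity to k is the sample average of observable times score.
sens_lr = np.mean(f * s)
print("LR estimate of d<n>/dk:", sens_lr, " exact value:", T)
```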
NASA Technical Reports Server (NTRS)
Gellman, A. J.; Price, J. P.
1978-01-01
A study to examine the question of technology transfer through international arrangements for production of commercial transport aircraft is presented. The likelihood of such transfer under various representative conditions was determined, and an understanding of the economic motivations for, and effects of, joint venture arrangements was developed. Relevant public policy implications were also assessed. Multinational consortia with U.S. participation were focused upon because they generate the full range of pertinent public issues (including especially technology transfer), and also because of recognized trends toward such arrangements. The study drew on an extensive search and analysis of existing literature to identify the key issues, together with in-person interviews with executives of U.S. and European commercial airframe producers. Distinctions were drawn among product-embodied, process, and management technologies in terms of their relative possibilities of transfer and the significance of such transfer. Also included are observations on related issues such as the implications of U.S. antitrust policy with respect to the formation of consortia and the competitive viability of the U.S. aircraft manufacturing industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M.G.; et al.
2015-11-06
We have conducted three searches for correlations between ultra-high energy cosmic rays detected by the Telescope Array and the Pierre Auger Observatory, and high-energy neutrino candidate events from IceCube. Two cross-correlation analyses with UHECRs are done: one with 39 cascades from the IceCube 'high-energy starting events' sample and the other with 16 high-energy 'track events'. The angular separation between the arrival directions of neutrinos and UHECRs is scanned over. The same events are also used in a separate search using a maximum likelihood approach, after the neutrino arrival directions are stacked. To estimate the significance we assume UHECR magnetic deflections to be inversely proportional to their energy, with values 3°, 6° and 9° at 100 EeV to allow for the uncertainties on the magnetic field strength and UHECR charge. A similar analysis is performed on stacked UHECR arrival directions and the IceCube sample of through-going muon track events which were optimized for neutrino point-source searches.
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
2017-03-02
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low-resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
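A minimal sketch of a KDE-built likelihood inside a Bayesian classifier follows, assuming hypothetical (peak area, uncertainty) training features in place of the MCNP6-derived ones.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical training features per isotope: (peak area, area uncertainty)
# pairs as would be extracted from simulated spectra.
train = {
    "Cs137": np.vstack([rng.normal(1000, 80, 500), rng.normal(60, 8, 500)]),
    "Co60":  np.vstack([rng.normal( 600, 60, 500), rng.normal(45, 6, 500)]),
}
# A kernel density estimate of the joint (area, uncertainty) distribution
# serves as the likelihood term of the Bayesian classifier.
kdes   = {iso: gaussian_kde(x) for iso, x in train.items()}
priors = {iso: 0.5 for iso in train}

def posterior(feature):
    """Posterior over isotopes for one observed (area, uncertainty) pair."""
    like = {iso: float(kdes[iso](feature)) * priors[iso] for iso in kdes}
    z = sum(like.values())
    return {iso: v/z for iso, v in like.items()}

print(posterior(np.array([950.0, 58.0])))  # should favor Cs137
```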
Women's Retirement Expectations: How Stable Are They?
Hardy, Melissa A.
2009-01-01
Objective Using the National Longitudinal Survey of Mature Women, we examine between- and within-person differences in expected retirement age as a key element of the retirement planning process. The expectation typologies of 1,626 women born between 1923 and 1937 were classified jointly on the basis of specificity and consistency. Methods Latent class analysis was used to determine retirement expectation patterns over a 7-year span. Multinomial logistic regression analyses were employed to estimate the effects of demographic and status characteristics on the likelihood of reporting 4 distinct longitudinal patterns of retirement expectations. Results Substantial heterogeneity in reports of expected retirement age between and within individuals over the 7-year span was found. Demographic and status characteristics, specifically age, race, marital status, job tenure, and recent job change, sorted respondents into different retirement expectation patterns. Conclusions The frequent within-person fluctuations and substantial between-person heterogeneity in retirement expectations indicate uncertainty and variability in both expectations and process of expectation formation. Variability in respondents' reports suggests that studying retirement expectations at multiple time points better captures the dynamics of preretirement planning. PMID:19176483
A Walk on the Wild Side: The Impact of Music on Risk-Taking Likelihood
Enström, Rickard; Schmaltz, Rodney
2017-01-01
From a marketing perspective, there has been substantial interest in the role of risk-perception on consumer behavior. Specific ‘problem music’ like rap and heavy metal has long been associated with delinquent behavior, including violence, drug use, and promiscuous sex. Although individuals’ risk preferences have been investigated across a range of decision-making situations, there has been little empirical work demonstrating the direct role music may have on the likelihood of engaging in risky activities. In the exploratory study reported here, we assessed the impact of listening to different styles of music while assessing risk-taking likelihood through a psychometric scale. Risk-taking likelihood was measured across ethical, financial, health and safety, recreational and social domains. Through the means of a canonical correlation analysis, the multivariate relationship between different music styles and individual risk-taking likelihood across the different domains is discussed. Our results indicate that listening to different types of music does influence risk-taking likelihood, though not in areas of health and safety. PMID:28539908
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
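The measurement-error-corrected tests themselves are not reproduced in this abstract, but the baseline they modify, a score test for Poisson overdispersion (Dean's statistic), can be sketched as follows with simulated covariate data.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
# Negative-binomial-like counts: overdispersed relative to Poisson
lam = np.exp(0.5 + 0.8*x) * rng.gamma(shape=2.0, scale=0.5, size=n)
y = rng.poisson(lam)

X = sm.add_constant(x)
mu = sm.GLM(y, X, family=sm.families.Poisson()).fit().mu  # fitted means

# Dean's score statistic for H0: no overdispersion (variance = mean);
# asymptotically standard normal under the null.
T = np.sum((y - mu)**2 - y) / np.sqrt(2.0*np.sum(mu**2))
print("score statistic:", T, " one-sided p-value:", 1 - norm.cdf(T))
```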
Porta, Alberto; Marchi, Andrea; Bari, Vlasta; Heusser, Karsten; Tank, Jens; Jordan, Jens; Barbic, Franca; Furlan, Raffaello
2015-01-01
We propose a symbolic analysis framework for the quantitative characterization of complex dynamical systems. It allows the description of the time course of a single variable, the assessment of joint interactions and an analysis triggered by a conditioning input. The framework was applied to spontaneous variability of heart period (HP), systolic arterial pressure (SAP) and integrated muscle sympathetic nerve activity (MSNA) with the aim of characterizing cardiovascular control and nonlinear influences of respiration at rest in supine position, during orthostatic challenge induced by 80° head-up tilt (TILT) and about 3 min before evoked pre-syncope signs (PRESY). The approach detected (i) the exaggerated sympathetic modulation and vagal withdrawal from HP variability and the increased presence of fast MSNA variability components during PRESY compared with TILT; (ii) the increase of the SAP–HP coordination occurring at slow temporal scales and a decrease of that occurring at faster time scales during PRESY compared with TILT; (iii) the reduction of the coordination between fast MSNA and SAP patterns during TILT and PRESY; (iv) the nonlinear influences of respiration leading to an increased likelihood of observing the abovementioned findings during expiration compared with inspiration. The framework provided simple, quantitative indexes able to distinguish experimental conditions characterized by different states of the autonomic nervous system and to detect the early signs of a life threatening situation such as postural syncope. PMID:25548269
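One standard symbolic scheme of this kind quantizes the beat-to-beat series into a small alphabet and classifies length-3 words by their number of variations. The sketch below follows that generic recipe under assumed parameters and does not reproduce the paper's joint and conditioned analyses.

```python
import numpy as np

def symbolize(series, levels=6):
    """Uniform quantization of a beat-to-beat series into `levels` symbols."""
    edges = np.linspace(series.min(), series.max(), levels + 1)[1:-1]
    return np.digitize(series, edges)

def pattern_fractions(symbols):
    """Classify length-3 words by number of variations: 0V (no change),
    1V (one change), 2V (two changes), a coarse-grained description of
    short-term dynamics."""
    counts = {"0V": 0, "1V": 0, "2V": 0}
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        changes = int(a != b) + int(b != c)
        counts[f"{changes}V"] += 1
    total = sum(counts.values())
    return {k: v/total for k, v in counts.items()}

rng = np.random.default_rng(5)
hp = 800 + 30*np.sin(np.arange(300)/10) + 10*rng.normal(size=300)  # HP in ms
print(pattern_fractions(symbolize(hp)))
```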
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
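For the one-age-class case with constant survival S and recovery rate f, the cell probability for a band released in year i and recovered in year j is f·S^(j−i), and the maximum likelihood fit reduces to numerical optimization of a multinomial log-likelihood. The data below are invented, and the constant-parameter model is the simplest member of the family the paper treats.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# recoveries[i, j] = bands released in year i recovered in year j (j >= i);
# released[i] = number banded in year i. Hypothetical counts.
released   = np.array([1000, 1200, 900])
recoveries = np.array([[60, 25, 10],
                       [ 0, 70, 30],
                       [ 0,  0, 55]])

def negloglik(theta):
    """Constant survival S and recovery rate f on the logit scale."""
    S, f = expit(theta)
    ll = 0.0
    for i, R in enumerate(released):
        probs = np.array([f * S**(j - i) for j in range(i, len(released))])
        rec = recoveries[i, i:]
        never = R - rec.sum()  # bands never recovered
        ll += rec @ np.log(probs) + never * np.log(1.0 - probs.sum())
    return -ll

fit = minimize(negloglik, x0=np.zeros(2), method="Nelder-Mead")
print("S, f =", expit(fit.x))
```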
Havelin, Joshua; Imbert, Ian; Cormier, Jennifer; Allen, Joshua; Porreca, Frank; King, Tamara
2016-03-01
Osteoarthritis (OA) pain is most commonly characterized by movement-triggered joint pain. However, in advanced disease, OA pain becomes persistent, ongoing and resistant to treatment with nonsteroidal anti-inflammatory drugs (NSAIDs). The mechanisms underlying ongoing pain in advanced OA are poorly understood. We recently showed that intra-articular (i.a.) injection of monosodium iodoacetate (MIA) into the rat knee joint produces concentration-dependent outcomes. Thus, a low dose of i.a. MIA produces NSAID-sensitive weight asymmetry without evidence of ongoing pain and a high i.a. MIA dose produces weight asymmetry and NSAID-resistant ongoing pain. In the present study, palpation of the ipsilateral hind limb of rats treated 14 days previously with high, but not low, doses of i.a. MIA produced expression of the early oncogene, FOS, in the spinal dorsal horn. Inactivation of descending pain facilitatory pathways using a microinjection of lidocaine within the rostral ventromedial medulla induced conditioned place preference selectively in rats treated with the high dose of MIA. Conditioned place preference to intra-articular lidocaine was blocked by pretreatment with duloxetine (30 mg/kg, intraperitoneally at -30 minutes). These observations are consistent with the likelihood of a neuropathic component of OA that elicits ongoing, NSAID-resistant pain and central sensitization that is mediated, in part, by descending modulatory mechanisms. This model provides a basis for exploration of underlying mechanisms promoting neuropathic components of OA pain and for the identification of mechanisms that might guide drug discovery for treatment of advanced OA pain without the need for joint replacement. Difficulty in managing advanced OA pain often results in joint replacement therapy in these patients. Improved understanding of mechanisms driving NSAID-resistant ongoing OA pain might facilitate development of alternatives to joint replacement therapy. Our findings suggest that central sensitization and neuropathic features contribute to NSAID-resistant ongoing OA joint pain. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
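The per-event likelihood structure described, with each event weighted across competing processes, reduces in a two-component caricature to a standard mixture fit. The Gaussian PDFs and all numbers below are placeholders for the experiment's Monte Carlo verified distributions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(11)
# Toy energies (arbitrary units): a narrow signal-like peak plus a broad
# background-like spectrum standing in for the competing processes.
true_frac = 0.2
x = np.concatenate([rng.normal(70, 2, int(10000*true_frac)),
                    rng.normal(50, 8, int(10000*(1-true_frac)))])

pdf_sig = norm(70, 2).pdf  # stand-in PDF for pi -> e nu events
pdf_bkg = norm(50, 8).pdf  # stand-in PDF for all other processes

def nll(frac):
    """Negative log-likelihood of the signal fraction: each event
    contributes a weighted sum of per-process probabilities."""
    return -np.sum(np.log(frac*pdf_sig(x) + (1-frac)*pdf_bkg(x)))

fit = minimize_scalar(nll, bounds=(1e-4, 1-1e-4), method="bounded")
print("fitted signal fraction:", fit.x)  # close to 0.2
```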
Robbins, L G
2000-01-01
Graduate school programs in genetics have become so full that courses in statistics have often been eliminated. In addition, typical introductory statistics courses for the "statistics user" rather than the nascent statistician are laden with methods for analysis of measured variables while genetic data are most often discrete numbers. These courses are often seen by students and genetics professors alike as largely irrelevant cookbook courses. The powerful methods of likelihood analysis, although commonly employed in human genetics, are much less often used in other areas of genetics, even though current computational tools make this approach readily accessible. This article introduces the MLIKELY.PAS computer program and the logic of do-it-yourself maximum-likelihood statistics. The program itself, course materials, and expanded discussions of some examples that are only summarized here are available at http://www.unisi.it/ricerca/dip/bio_evol/sitomlikely/mlikely.html. PMID:10628965
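In the do-it-yourself spirit of the article, a likelihood-ratio (G) test of a Mendelian 3:1 segregation ratio is about the smallest useful example of likelihood analysis on discrete genetic counts; the F2 counts below are invented.

```python
import numpy as np
from scipy.stats import chi2

def g_test(counts, expected_props):
    """Likelihood-ratio (G) test: G = 2 * sum(obs * ln(obs/exp)),
    asymptotically chi-squared with (k - 1) degrees of freedom."""
    counts = np.asarray(counts, float)
    expected = counts.sum() * np.asarray(expected_props, float)
    g = 2.0 * np.sum(counts * np.log(counts / expected))
    p = chi2.sf(g, df=len(counts) - 1)
    return g, p

# F2 cross: 290 dominant, 110 recessive, tested against the 3:1 ratio
g, p = g_test([290, 110], [0.75, 0.25])
print(f"G = {g:.3f}, p = {p:.3f}")
```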
Prioritizing conservation investments for mammal species globally
Wilson, Kerrie A.; Evans, Megan C.; Di Marco, Moreno; Green, David C.; Boitani, Luigi; Possingham, Hugh P.; Chiozza, Federica; Rondinini, Carlo
2011-01-01
We need to set priorities for conservation because we cannot do everything, everywhere, at the same time. We determined priority areas for investment in threat abatement actions, in both a cost-effective and spatially and temporally explicit way, for the threatened mammals of the world. Our analysis presents the first fine-resolution prioritization analysis for mammals at a global scale that accounts for the risk of habitat loss, the actions required to abate this risk, the costs of these actions and the likelihood of investment success. We evaluated the likelihood of success of investments using information on the past frequency and duration of legislative effectiveness at a country scale. The establishment of new protected areas was the action receiving the greatest investment, while restoration was never chosen. The resolution of the analysis and the incorporation of likelihood of success made little difference to this result, but affected the spatial location of these investments. PMID:21844046
Gandhi, Rikesh; Silverman, Edward; Courtney, Paul M; Lee, Gwo-Chin
2017-09-01
Identification of the infecting organism is critical to the successful management of deep prosthetic joint infections about the hip and the knee. However, the number of culture specimens, and which culture specimens are best to identify these organisms, are unknown. We evaluated 113 consecutive patients with infected total hip and total knee arthroplasties and correlated the type of culture specimen and number of specimens taken during surgery to the likelihood of a positive culture result. From these data, we subsequently developed a model to maximize culture yield at the time of surgical intervention. After exclusions, 74 patients meeting the Musculoskeletal Infection Society criteria were left for final analysis. From this cohort, 63 of 74 patients had a positive culture result (85%). The likelihood of a fluid culture being positive was 35 of 47 (0.75), whereas the likelihood of tissue cultures yielding a positive result was 164 of 245 (0.67; P = .313). The sample designated the "best culture" specimen was the only culture with a positive result in 1 of 48 cases in which a best culture was identified. The optimal number of cultures needed to yield a positive test result was 4 (specificity = 0.61 and sensitivity = 0.63). Increasing the number of samples increases sensitivity but reduces specificity. A minimum of 4 tissue cultures from representative areas is necessary to maximize the chance of identifying the infecting organism during management of infected total hip and total knee arthroplasties. The designation of a best culture specimen for additional testing is arbitrary and may not be clinically efficacious. Copyright © 2017 Elsevier Inc. All rights reserved.
The effects of velocities and lensing on moments of the Hubble diagram
NASA Astrophysics Data System (ADS)
Macaulay, E.; Davis, T. M.; Scovacricchi, D.; Bacon, D.; Collett, T.; Nichol, R. C.
2017-05-01
We consider the dispersion on the supernova distance-redshift relation due to peculiar velocities and gravitational lensing, and the sensitivity of these effects to the amplitude of the matter power spectrum. We use the Method-of-the-Moments (MeMo) lensing likelihood developed by Quartin et al., which accounts for the characteristic non-Gaussian distribution caused by lensing magnification with measurements of the first four central moments of the distribution of magnitudes. We build on the MeMo likelihood by including the effects of peculiar velocities directly into the model for the moments. In order to measure the moments from sparse numbers of supernovae, we take a new approach using kernel density estimation to estimate the underlying probability density function of the magnitude residuals. We also describe a bootstrap re-sampling approach to estimate the data covariance matrix. We then apply the method to the joint light-curve analysis (JLA) supernova catalogue. When we impose only that the intrinsic dispersion in magnitudes is independent of redshift, we find σ_8 = 0.44^{+0.63}_{-0.44} at the one-standard-deviation level, although we note that in tests on simulations, this model tends to overestimate the magnitude of the intrinsic dispersion and underestimate σ_8. We note that the degeneracy between intrinsic dispersion and the effects of σ_8 is more pronounced when lensing and velocity effects are considered simultaneously, due to a cancellation of redshift dependence when both effects are included. Keeping the model of the intrinsic dispersion fixed as a Gaussian distribution of width 0.14 mag, we find σ_8 = 1.07^{+0.50}_{-0.76}.
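A minimal sketch of the moment-measurement step, with synthetic residuals standing in for the JLA data: a kernel density estimate smooths the sparse magnitude residuals before the first four central moments are read off, and a bootstrap supplies their covariance.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    residuals = rng.normal(0.0, 0.14, 120)   # stand-in for residuals in one z-bin

    def central_moments(sample):
        smoothed = gaussian_kde(sample).resample(20000).ravel()
        mu = smoothed.mean()
        return np.array([((smoothed - mu) ** k).mean() for k in (1, 2, 3, 4)])

    moments = central_moments(residuals)

    # Bootstrap re-sampling for the covariance of the moment estimates.
    boot = np.array([central_moments(rng.choice(residuals, residuals.size, replace=True))
                     for _ in range(200)])
    cov = np.cov(boot.T)
    print(moments.round(5))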
Non-ignorable missingness in logistic regression.
Wang, Joanna J J; Bartlett, Mark; Ryan, Louise
2017-08-30
Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show that the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.
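A small simulation of the paper's starting point, with invented parameter values: when nonresponse depends on the outcome, a complete-case logistic fit is biased (characteristically in the intercept, by roughly the log of the ratio of response probabilities, while the slope is much less affected):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 20000
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(-0.5 + 1.0 * x)))        # true model: b0 = -0.5, b1 = 1.0
    y = rng.binomial(1, p)

    # Non-ignorable missingness: subjects with y = 1 respond less often.
    observed = rng.uniform(size=n) < np.where(y == 1, 0.5, 0.9)

    fit = sm.Logit(y[observed], sm.add_constant(x[observed])).fit(disp=0)
    print(fit.params)   # intercept shifted by about log(0.5/0.9) = -0.59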
Search For Cosmic-Ray-Induced Gamma-Ray Emission In Galaxy Clusters
Ackermann, M.
2014-04-30
Current theories predict relativistic hadronic particle populations in clusters of galaxies in addition to the already observed relativistic leptons. In these scenarios hadronic interactions give rise to neutral pions which decay into γ rays that are potentially observable with the Large Area Telescope (LAT) on board the Fermi space telescope. We present a joint likelihood analysis searching for spatially extended γ-ray emission at the locations of 50 galaxy clusters in 4 years of Fermi-LAT data under the assumption of the universal cosmic-ray model proposed by Pinzke & Pfrommer (2010). We find an excess at a significance of 2.7 σ which, upon closer inspection, is however correlated to individual excess emission towards three galaxy clusters: Abell 400, Abell 1367 and Abell 3112. We discuss these cases in detail and conservatively attribute the emission to unmodeled background (for example, radio galaxies within the clusters). Through the combined analysis of 50 clusters we exclude hadronic injection efficiencies in simple hadronic models above 21% and establish limits on the cosmic-ray to thermal pressure ratio within the virial radius, R200, to be below 1.2-1.4% depending on the morphological classification. In addition we derive new limits on the γ-ray flux from individual clusters in our sample.
Search for Cosmic-Ray-Induced Gamma-Ray Emission in Galaxy Clusters
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.;
2014-01-01
Current theories predict relativistic hadronic particle populations in clusters of galaxies in addition to the already observed relativistic leptons. In these scenarios hadronic interactions give rise to neutral pions which decay into gamma rays that are potentially observable with the Large Area Telescope (LAT) on board the Fermi space telescope. We present a joint likelihood analysis searching for spatially extended gamma-ray emission at the locations of 50 galaxy clusters in four years of Fermi-LAT data under the assumption of the universal cosmic-ray (CR) model proposed by Pinzke & Pfrommer. We find an excess at a significance of 2.7 σ which, upon closer inspection, is however correlated to individual excess emission toward three galaxy clusters: A400, A1367, and A3112. We discuss these cases in detail and conservatively attribute the emission to unmodeled background systems (for example, radio galaxies within the clusters). Through the combined analysis of 50 clusters, we exclude hadronic injection efficiencies in simple hadronic models above 21% and establish limits on the CR to thermal pressure ratio within the virial radius, R200, to be below 1.25%-1.4% depending on the morphological classification. In addition, we derive new limits on the gamma-ray flux from individual clusters in our sample.
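A hedged sketch of the stacked-likelihood idea behind such limits, with invented counts in place of Fermi-LAT data: each cluster contributes a Poisson term, a single injection-efficiency parameter is shared across all clusters, and a profile-likelihood upper limit is read off.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import poisson

    rng = np.random.default_rng(4)
    n_clusters = 50
    bkg = rng.uniform(20, 60, n_clusters)    # expected background counts per cluster
    model = rng.uniform(2, 8, n_clusters)    # predicted CR-induced counts at efficiency 1
    counts = rng.poisson(bkg)                # observed counts (no injected signal)

    def logL(eff):
        # Joint likelihood: product of per-cluster Poisson terms.
        return poisson.logpmf(counts, bkg + eff * model).sum()

    grid = np.linspace(0.0, 1.0, 1001)
    vals = np.array([logL(e) for e in grid])
    i_hat = vals.argmax()
    # One-sided 95% upper limit where 2*(logL_max - logL) = 2.71.
    ul = brentq(lambda e: 2 * (vals[i_hat] - logL(e)) - 2.71, grid[i_hat], 1.0)
    print(f"95% CL upper limit on injection efficiency: {ul:.3f}")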
On analyzing ordinal data when responses and covariates are both missing at random.
Rana, Subrata; Roy, Surupa; Das, Kalyan
2016-08-01
On many occasions, particularly in biomedical studies, data are unavailable for some responses and covariates. This leads to biased inference in the analysis when a substantial proportion of responses, or a covariate, or both are missing. Except in a few situations, methods for missing data have earlier been considered either for missing responses or for missing covariates, but comparatively little attention has been directed to accounting for both missing responses and missing covariates, which is partly attributable to complexity in modeling and computation. This seems to be important, as the precise impact of substantial missing data depends on the association between the two missing data processes as well. The real difficulty arises when the responses are ordinal by nature. We develop a joint model to take into account simultaneously the association between the ordinal response variable and covariates and also that between the missing data indicators. Such a complex model has been analyzed here by using the Markov chain Monte Carlo approach and also by the Monte Carlo relative likelihood approach. Their performance in estimating the model parameters in finite samples has been looked into. We illustrate the application of these two methods using data from an orthodontic study. Analysis of such data provides some interesting information on human habits. © The Author(s) 2013.
Arbelle, Assaf; Reyes, Jose; Chen, Jia-Yun; Lahav, Galit; Riklin Raviv, Tammy
2018-04-22
We present a novel computational framework for the analysis of high-throughput microscopy videos of living cells. The proposed framework is generally useful and can be applied to different datasets acquired in a variety of laboratory settings. This is accomplished by tying together two fundamental aspects of cell lineage construction, namely cell segmentation and tracking, via a Bayesian inference of dynamic models. In contrast to most existing approaches, which aim to be general, no assumption of cell shape is made. Spatial, temporal, and cross-sectional variation of the analysed data are accommodated by two key contributions. First, time series analysis is exploited to estimate the temporal cell shape uncertainty in addition to the cell trajectory. Second, a fast marching (FM) algorithm is used to integrate the inferred cell properties with the observed image measurements in order to obtain the image likelihood for cell segmentation and association. The proposed approach has been tested on eight different time-lapse microscopy data sets, some of which are high-throughput, demonstrating promising results for the detection, segmentation and association of planar cells. Our results surpass the state of the art for the Fluo-C2DL-MSC data set of the Cell Tracking Challenge (Maška et al., 2014). Copyright © 2018 Elsevier B.V. All rights reserved.
An analysis of crash likelihood : age versus driving experience
DOT National Transportation Integrated Search
1995-05-01
The study was designed to determine the crash likelihood of drivers in Michigan as a function of two independent variables: driver age and driving experience. The age variable had eight levels (18, 19, 20, 21, 22, 23, 24, and 25 years old) and the ex...
Wide step width reduces knee abduction moment of obese adults during stair negotiation.
Yocum, Derek; Weinhandl, Joshua T; Fairbrother, Jeffrey T; Zhang, Songning
2018-05-15
An increased likelihood of developing obesity-related knee osteoarthritis may be associated with increased peak internal knee abduction moments (KAbM). Increases in step width (SW) may act to reduce this moment. The purpose of this study was to determine the effects of increased SW on knee biomechanics during stair negotiation of healthy-weight and obese participants. Participants (24: 10 obese and 14 healthy-weight) used stairs and walked over level ground while walking at their preferred speed in two different SW conditions - preferred and wide (200% preferred). A 2 × 2 (group × condition) mixed model analysis of variance was performed to analyze differences between groups and conditions (p < 0.05). Increased SW increased the loading-response peak knee extension moment during descent and level gait, decreased loading-response KAbMs, knee extension and abduction range of motion (ROM) during ascent, and knee adduction ROM during descent. Increased SW increased loading-response peak mediolateral ground reaction force (GRF), increased peak knee abduction angle during ascent, and decreased peak knee adduction angle during descent and level gait. Obese participants experienced disproportionate changes in loading-response mediolateral GRF, KAbM and peak adduction angle during level walking, and peak knee abduction angle and ROM during ascent. Increased SW successfully decreased loading-response peak KAbM. Implications of this finding are that increased SW may decrease medial compartment knee joint loading, decreasing pain and reducing joint deterioration. Increased SW influenced obese and healthy-weight participants differently and should be investigated further. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Schaan, Emmanuel; Krause, Elisabeth; Eifler, Tim; Doré, Olivier; Miyatake, Hironao; Rhodes, Jason; Spergel, David N.
2017-06-01
The next-generation weak lensing surveys (i.e., LSST, Euclid, and WFIRST) will require exquisite control over systematic effects. In this paper, we address shear calibration and present the most realistic forecast to date for LSST/Euclid/WFIRST and CMB lensing from a stage 4 CMB experiment ("CMB S4"). We use the cosmolike code to simulate a joint analysis of all the two-point functions of galaxy density, galaxy shear, and CMB lensing convergence. We include the full Gaussian and non-Gaussian covariances and explore the resulting joint likelihood with Monte Carlo Markov chains. We constrain shear calibration biases while simultaneously varying cosmological parameters, galaxy biases, and photometric redshift uncertainties. We find that CMB lensing from CMB S4 enables the calibration of the shear biases down to 0.2%-3% in ten tomographic bins for LSST (below the ˜0.5 % requirements in most tomographic bins), down to 0.4%-2.4% in ten bins for Euclid, and 0.6%-3.2% in ten bins for WFIRST. For a given lensing survey, the method works best at high redshift where shear calibration is otherwise most challenging. This self-calibration is robust to Gaussian photometric redshift uncertainties and to a reasonable level of intrinsic alignment. It is also robust to changes in the beam and the effectiveness of the component separation of the CMB experiment, and slowly dependent on its depth, making it possible with third-generation CMB experiments such as AdvACT and SPT-3G, as well as the Simons Observatory.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
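For reference, Jeffreys's rule, which the abstract says C-MaxEnt recovers, obtains the prior from the Fisher information of the likelihood:

    % Jeffreys's rule (standard form; shown here for context):
    \pi(\theta) \propto \sqrt{\det I(\theta)}, \qquad
    I_{ij}(\theta) = -\,\mathbb{E}\left[ \frac{\partial^2 \log p(x \mid \theta)}{\partial \theta_i \, \partial \theta_j} \right].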
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.
Common Bolted Joint Analysis Tool
NASA Technical Reports Server (NTRS)
Imtiaz, Kauser
2011-01-01
Common Bolted Joint Analysis Tool (comBAT) is an Excel/VB-based bolted joint analysis/optimization program that lays out a systematic foundation for an inexperienced or seasoned analyst to determine fastener size, material, and assembly torque for a given design. Analysts are able to perform numerous what-if scenarios within minutes to arrive at an optimal solution. The program evaluates input design parameters, performs joint assembly checks, and steps through numerous calculations to arrive at several key margins of safety for each member in a joint. It also checks for joint gapping, provides fatigue calculations, and generates joint diagrams for a visual reference. Optimum fastener size and material, as well as correct torque, can then be provided. Analysis methodology, equations, and guidelines are provided throughout the solution sequence so that this program does not become a "black box" for the analyst. There are built-in databases that reduce the legwork required by the analyst. Each step is clearly identified and results are provided in number format, as well as color-coded spelled-out words to draw user attention. The three key features of the software are robust technical content, innovative and user-friendly I/O, and a large database. The program addresses every aspect of bolted joint analysis and proves to be an instructional tool at the same time. It saves analysis time, has intelligent messaging features, and catches operator errors in real time.
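To make the solution sequence concrete, a toy version of the kind of calculation such a tool automates; this is a generic sketch using the short-form torque equation with illustrative values, not comBAT's internal method:

    # Preload from torque via T = K * F * d, then a simple tensile margin of safety.
    def preload_from_torque(torque_Nm, nut_factor, diameter_m):
        """Short-form torque equation: T = K * F * d  ->  F = T / (K * d)."""
        return torque_Nm / (nut_factor * diameter_m)

    def tensile_margin(preload_N, ext_load_N, load_factor, allowable_N, factor_safety):
        """MS = allowable / (FS * bolt load) - 1, with bolt load = preload + C * P_ext."""
        bolt_load = preload_N + load_factor * ext_load_N
        return allowable_N / (factor_safety * bolt_load) - 1.0

    F = preload_from_torque(torque_Nm=30.0, nut_factor=0.2, diameter_m=0.006)  # M6 bolt
    ms = tensile_margin(F, ext_load_N=4000.0, load_factor=0.3,
                        allowable_N=45000.0, factor_safety=1.4)
    print(f"preload = {F:.0f} N, tensile margin of safety = {ms:+.2f}")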
Silverman, Merav H.; Jedd, Kelly; Luciana, Monica
2015-01-01
Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587
Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein
2012-01-01
Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.
On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro
2005-01-01
Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
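A minimal sketch of the cancellation trick, assuming the standard marginal Fisher density for colatitude and using the exponentially scaled Bessel function so that no term grows exponentially with the precision parameter; this illustrates the idea and is not the authors' published code:

    import numpy as np
    from scipy.special import i0e          # exponentially scaled Bessel I0
    from scipy.optimize import minimize

    def loglik(params, theta):
        # Marginal Fisher log-likelihood in colatitude theta = 90 deg - inclination.
        theta0, kappa = params
        if kappa <= 0 or not 0 < theta0 < np.pi:
            return -np.inf
        s = kappa * np.sin(theta) * np.sin(theta0)
        # Uses log I0(s) = s + log(i0e(s)) and log(2 sinh k) = k + log(1 - exp(-2k)),
        # so the exponentials cancel into the bounded term kappa*(cos(theta-theta0)-1).
        return np.sum(np.log(kappa) - np.log1p(-np.exp(-2 * kappa))
                      + kappa * (np.cos(theta - theta0) - 1.0)
                      + np.log(i0e(s)) + np.log(np.sin(theta)))

    rng = np.random.default_rng(5)
    theta = np.deg2rad(90 - rng.normal(60, 8, 40))   # stand-in inclination sample

    res = minimize(lambda p: -loglik(p, theta), x0=[np.deg2rad(30), 10.0],
                   method="Nelder-Mead")
    theta0_hat, kappa_hat = res.x
    print(f"mean inclination ~ {90 - np.rad2deg(theta0_hat):.1f} deg, kappa ~ {kappa_hat:.1f}")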
Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi
2017-01-01
Background Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies. Methods We propose a 2‐stage method for meta‐analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes. Conclusions Where evidence of association between longitudinal and time‐to‐event outcomes exists, results from joint models over standalone analyses should be pooled in 2‐stage meta‐analyses. PMID:29250814
Guetterman, Timothy C; Fetters, Michael D; Creswell, John W
2015-11-01
Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.
Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.
2015-01-01
PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895
A Primer on Risks, Issues and Opportunities
2016-08-01
likelihood or consequence. A risk has three main parts: a future root cause, a likelihood and a consequence. The future root cause is determined...through root cause analysis, which is the most important part of any risk management effort. Root cause analysis gets to the heart of the risk. Why does the risk exist? What is its nature? How will the risk occur? What should be
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
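A compact sketch of the GLUE procedure as described, with a toy one-parameter forward model and invented head observations standing in for the groundwater-flow model:

    import numpy as np

    rng = np.random.default_rng(6)
    obs_heads = np.array([12.3, 11.8, 10.9, 10.1])     # invented observations

    def model_heads(log10_K):
        # Toy forward model standing in for the flow model.
        return 13.0 + 1.1 * (log10_K + 6.5) - np.arange(4) * 0.75

    samples = rng.uniform(-8.0, -5.0, 10000)           # prior on log10 conductivity
    sse = np.array([np.sum((model_heads(k) - obs_heads) ** 2) for k in samples])

    # Informal likelihood measure: inverse error variance; keep "behavioral" runs.
    L = 1.0 / sse
    behavioral = L > np.quantile(L, 0.90)
    w = L[behavioral] / L[behavioral].sum()

    # Likelihood-weighted 5-95% bounds on the predicted head at the first location.
    pred = np.array([model_heads(k)[0] for k in samples[behavioral]])
    order = np.argsort(pred)
    cdf = np.cumsum(w[order])
    lo, hi = pred[order][np.searchsorted(cdf, [0.05, 0.95])]
    print(f"GLUE 5-95% bounds on predicted head: [{lo:.2f}, {hi:.2f}]")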
Design, Static Analysis And Fabrication Of Composite Joints
NASA Astrophysics Data System (ADS)
Mathiselvan, G.; Gobinath, R.; Yuvaraja, S.; Raja, T.
2017-05-01
Bonded joints present one of the important issues in composite technology: the repair of aging aircraft. In these applications, and for joining various composite parts together, the composite materials are fastened either with adhesives or with mechanical fasteners. In this paper, we carried out the design, static analysis of 3-D models, and fabrication of composite joints (bonded, riveted and hybrid). The 3-D model of the composite structure was fabricated using materials such as epoxy resin, glass fibre and aluminium rivets for preparing the joints. The static analysis was carried out for the different joints using ANSYS software. After fabrication, a parametric study was conducted to compare the performance of the hybrid joint with varying adherend width, adhesive thickness and overlap length. Tensile test results for the different joints and their materials were compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henager, Charles H.; Nguyen, Ba Nghiep; Kurtz, Richard J.
2016-03-31
Finite element continuum damage models (FE-CDM) have been developed to simulate and model dual-phase joints and cracked joints for improved analysis of SiC materials in nuclear environments. This report extends the analysis from the last reporting cycle by including results from dual-phase models and from cracked joint models.
2016-06-01
US Army Research Laboratory report TR-7696, June 2016: Multivariate Analysis of High Through-Put Adhesively Bonded Single Lap Joints: Experimental and Workflow Protocols, by Robert E Jensen, Daniel C DeSchepper, and David P Flanagan; approved for public release, distribution unlimited. [Front-matter residue; the list of tables includes Table 1, "Single-lap-joint experimental parameters," and Table 2, "Survey ..."]
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
Design and Performance Analysis of a new Rotary Hydraulic Joint
NASA Astrophysics Data System (ADS)
Feng, Yong; Yang, Junhong; Shang, Jianzhong; Wang, Zhuo; Fang, Delei
2017-07-01
To improve the driving torque of robot joints, a wobble-plate hydraulic joint is proposed, and its structure and working principle are described. Mathematical models of its kinematics and dynamics were then established. On this basis, dynamic simulation and characteristic analysis were carried out. Results show that the motion curve of the joint is continuous and the impact is small. Moreover, the joint, characterized by a simple structure and easy processing, delivers a large output torque and can rotate continuously.
Results and Analysis from Space Suit Joint Torque Testing
NASA Technical Reports Server (NTRS)
Matty, Jennifer
2010-01-01
This joint mobility KC lecture included information from two papers, "A Method for and Issues Associated with the Determination of Space Suit Joint Requirements" and "Results and Analysis from Space Suit Joint Torque Testing," as presented for the International Conference on Environmental Systems in 2009 and 2010, respectively. The first paper discusses historical joint torque testing methodologies and approaches that were tested in 2008 and 2009. The second paper discusses the testing that was completed in 2009 and 2010.
Lentz, Trevor A; Zeppieri, Giorgio; Tillman, Susan M; Indelicato, Peter A; Moser, Michael W; George, Steven Z; Chmielewski, Terese L
2012-11-01
Cross-sectional cohort. (1) To examine differences in clinical variables (demographics, knee impairments, and self-report measures) between those who return to preinjury level of sports participation and those who do not at 1 year following anterior cruciate ligament reconstruction, (2) to determine the factors most strongly associated with return-to-sport status in a multivariate model, and (3) to explore the discriminatory value of clinical variables associated with return to sport at 1 year postsurgery. Demographic, physical impairment, and psychosocial factors individually prohibit return to preinjury levels of sports participation. However, it is unknown which combination of factors contributes to sports participation status. Ninety-four patients (60 men; mean age, 22.4 years) 1 year post-anterior cruciate ligament reconstruction were included. Clinical variables were collected and included demographics, knee impairment measures, and self-report questionnaire responses. Patients were divided into "yes return to sports" or "no return to sports" groups based on their answer to the question, "Have you returned to the same level of sports as before your injury?" Group differences in demographics, knee impairments, and self-report questionnaire responses were analyzed. Discriminant function analysis determined the strongest predictors of group classification. Receiver-operating-characteristic curves determined the discriminatory accuracy of the identified clinical variables. Fifty-two of 94 patients (55%) reported yes return to sports. Patients reporting return to preinjury levels of sports participation were more likely to have had less knee joint effusion, fewer episodes of knee instability, lower knee pain intensity, higher quadriceps peak torque-body weight ratio, higher score on the International Knee Documentation Committee Subjective Knee Evaluation Form, and lower levels of kinesiophobia. Knee joint effusion, episodes of knee instability, and score on the International Knee Documentation Committee Subjective Knee Evaluation Form were identified as the factors most strongly associated with self-reported return-to-sport status. The highest positive likelihood ratio for the yes-return-to-sports group classification (14.54) was achieved when patients met all of the following criteria: no knee effusion, no episodes of instability, and International Knee Documentation Committee Subjective Knee Evaluation Form score greater than 93. In multivariate analysis, the factors most strongly associated with return-to-sport status included only self-reported knee function, episodes of knee instability, and knee joint effusion.
Experimental measurement and modeling analysis on mechanical properties of incudostapedial joint
Zhang, Xiangming
2011-01-01
The incudostapedial (IS) joint between the incus and stapes is a synovial joint consisting of joint capsule, cartilage, and synovial fluid. The mechanical properties of the IS joint directly affect the middle ear transfer function for sound transmission. However, due to the complexity and small size of the joint, the mechanical properties of the IS joint have not been reported in the literature. In this paper, we report our current study on mechanical properties of human IS joint using both experimental measurement and finite element (FE) modeling analysis. Eight IS joint samples with the incus and stapes attached were harvested from human cadaver temporal bones. Tension, compression, stress relaxation and failure tests were performed on those samples in a micro-material testing system. An analytical approach with the hyperelastic Ogden model and a 3D FE model of the IS joint including the cartilage, joint capsule, and synovial fluid were employed to derive mechanical parameters of the IS joint. The comparison of measurements and modeling results reveals the relationship between the mechanical properties and structure of the IS joint. PMID:21061141
Experimental measurement and modeling analysis on mechanical properties of incudostapedial joint.
Zhang, Xiangming; Gan, Rong Z
2011-10-01
The incudostapedial (IS) joint between the incus and stapes is a synovial joint consisting of joint capsule, cartilage, and synovial fluid. The mechanical properties of the IS joint directly affect the middle ear transfer function for sound transmission. However, due to the complexity and small size of the joint, the mechanical properties of the IS joint have not been reported in the literature. In this paper, we report our current study on mechanical properties of human IS joint using both experimental measurement and finite element (FE) modeling analysis. Eight IS joint samples with the incus and stapes attached were harvested from human cadaver temporal bones. Tension, compression, stress relaxation and failure tests were performed on those samples in a micro-material testing system. An analytical approach with the hyperelastic Ogden model and a 3D FE model of the IS joint including the cartilage, joint capsule, and synovial fluid were employed to derive mechanical parameters of the IS joint. The comparison of measurements and modeling results reveals the relationship between the mechanical properties and structure of the IS joint.
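For reference, the hyperelastic Ogden model named above is commonly written as the incompressible strain-energy function below; the parameter values fitted in the paper are not reproduced here:

    % N-term incompressible Ogden strain-energy function in principal stretches:
    W(\lambda_1, \lambda_2, \lambda_3)
      = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
        \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3 \right),
    \qquad \lambda_1 \lambda_2 \lambda_3 = 1.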
Modeling Compound Flood Hazards in Coastal Embayments
NASA Astrophysics Data System (ADS)
Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.
2017-12-01
Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100 and 500 year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of the correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in the unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), the preferred marginal scenario, and reproduced time series of ensembles based on Monte Carlo sampling of the bivariate hazard domain. The comparison between the resulting extreme water dynamics under the compound hazard scenarios explained above provides insight into the strengths/weaknesses of each approach and helps modelers choose the appropriate scenario that best fits the needs of their project. The proposed risk assessment approach can help flood hazard modeling practitioners achieve a more reliable estimate of risk by cautiously reducing the dimensionality of the hazard analysis.
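A minimal sketch of the copula step on synthetic driver data, using a Gaussian copula as one simple choice of family: pseudo-observations from ranks, a copula parameter from normal scores, and the joint exceedance probability of the two marginal 100-year levels.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Synthetic, correlated annual maxima of river discharge and coastal water level.
    z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], 5000)
    discharge = np.exp(0.4 * z[:, 0] + 5)
    sea_level = 1.0 + 0.3 * z[:, 1]

    # Map each margin to the unit domain via empirical ranks (pseudo-observations).
    u = stats.rankdata(discharge) / (len(discharge) + 1)
    v = stats.rankdata(sea_level) / (len(sea_level) + 1)

    # Gaussian copula parameter from the correlation of normal scores.
    rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

    # Joint exceedance P(U > 0.99, V > 0.99) by copula simulation.
    sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], 200000)
    joint = np.mean((stats.norm.cdf(sim[:, 0]) > 0.99) & (stats.norm.cdf(sim[:, 1]) > 0.99))
    print(f"rho = {rho:.2f}; joint 100-yr exceedance = {joint:.5f} vs {0.01**2:.5f} if independent")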
Jiang, Wei; Yu, Weichuan
2017-02-15
In genome-wide association studies (GWASs) of common diseases/traits, we often analyze multiple GWASs with the same phenotype together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular to jointly analyze datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method based on controlling the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods. Also, our method discovers more associations than meta-analysis methods from empirical datasets of four phenotypes. The R-package is available at http://bioinformatics.ust.hk/Jlfdr.html. eeyu@ust.hk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai
2016-02-01
The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis to assess the performance of BPHR for the assessment of hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically. The extent of heterogeneity for it was assessed. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio values of BPHR, for assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had higher diagnostic accuracies for identifying hypertension in children and adolescents.
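As a quick consistency check, the summary measures are simple transformations of sensitivity and specificity; the values below need not match the abstract's pooled ratios exactly, because those were pooled across studies rather than derived from the pooled sensitivity and specificity:

    sens, spec = 0.96, 0.90
    lr_pos = sens / (1 - spec)        # positive likelihood ratio: 9.6
    lr_neg = (1 - sens) / spec        # negative likelihood ratio: 0.044
    dor = lr_pos / lr_neg             # diagnostic odds ratio: ~216
    print(lr_pos, lr_neg, dor)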
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
Intelligence Fusion for Combined Operations
1994-06-03
Database; ISE - Intelligence Support Element; JASMIN - Joint Analysis System for Military Intelligence; JIC - Joint Intelligence Center; JDISS - Joint Defense...has made accessible otherwise inaccessible networks such as connectivity to the German Joint Analysis System for Military Intelligence (JASMIN) and the...successfully any mission in the Battlespace is the essence of the C4I for the Warrior concept. It recognizes that the current C4I systems do not
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data
ERIC Educational Resources Information Center
Xi, Nuo; Browne, Michael W.
2014-01-01
A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…
ERIC Educational Resources Information Center
Adank, Patti
2012-01-01
The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…
John Hogland; Nedret Billor; Nathaniel Anderson
2013-01-01
Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi
2018-04-15
Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models over standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
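A minimal sketch of the second stage, with invented study-level estimates: the joint-model association estimates produced per study in stage 1 are pooled by inverse-variance weighting, here with a DerSimonian-Laird random-effects variant.

    import numpy as np

    # Invented stage-1 estimates (e.g., log hazard ratios for the association
    # between the longitudinal trajectory and the event) and standard errors.
    est = np.array([0.21, 0.35, 0.18, 0.40, 0.27])
    se = np.array([0.08, 0.12, 0.10, 0.15, 0.09])

    w = 1 / se**2                                  # fixed-effect weights
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))

    # DerSimonian-Laird between-study variance for a random-effects pooling.
    Q = np.sum(w * (est - pooled) ** 2)
    tau2 = max(0.0, (Q - (len(est) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    pooled_re = np.sum(w_re * est) / np.sum(w_re)
    print(f"fixed: {pooled:.3f} (SE {pooled_se:.3f}); random: {pooled_re:.3f}; tau2 = {tau2:.4f}")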
Ankle joint function during walking in tophaceous gout: A biomechanical gait analysis study.
Carroll, Matthew; Boocock, Mark; Dalbeth, Nicola; Stewart, Sarah; Frampton, Christopher; Rome, Keith
2018-04-17
The foot and ankle are frequently affected in tophaceous gout, yet kinematic and kinetic changes in this region during gait are unknown. The aim of the study was to evaluate ankle biomechanical characteristics in people with tophaceous gout using three-dimensional gait analysis. Twenty-four participants with tophaceous gout were compared with 24 age- and sex-matched control participants. A 9-camera motion analysis system and two floor-mounted force plates were used to calculate kinematic and kinetic parameters. Peak ankle joint angular velocity was significantly decreased in participants with gout (P < 0.01). No differences were found for ankle ROM in either the sagittal (P = 0.43) or frontal planes (P = 0.08). No differences were observed between groups for peak ankle joint power (P = 0.41), peak ankle joint force (P = 0.25), peak ankle joint moment (P = 0.16), timing of peak ankle joint force (P = 0.81), or timing of peak ankle joint moment (P = 0.16). Three-dimensional gait analysis demonstrated that ankle joint function is largely unchanged in people with gout, apart from a reduced peak ankle joint angular velocity, which may reflect gait-limiting factors and adaptations arising from the high levels of foot pain, impairment, and disability experienced by this population. Copyright © 2018 Elsevier B.V. All rights reserved.
Acoustic emission analysis: A test method for metal joints bonded by adhesives
NASA Technical Reports Server (NTRS)
Brockmann, W.; Fischer, T.
1978-01-01
Acoustic emission analysis is applied to study adhesive joints subjected to mechanical and climatic stresses, under conditions chosen so that the results are applicable to adhesive joints used in aerospace technology. Specimens consisting of the alloy AlMgSi0.5 were used together with a phenolic resin adhesive, an epoxy resin modified with a polyamide, and an epoxy resin modified with a nitrile. Results show that acoustic emission analysis provides valuable information concerning the behavior of adhesive joints under load and climatic stresses.
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Under current models of sequence evolution the posterior is analytically intractable, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously violated, in which case the approximate method should not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
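The core device is a second-order Taylor expansion of the log-likelihood around its maximum, computed once and then reused at every MCMC iteration, optionally after transforming the parameters (e.g., arcsine) so the surface is closer to quadratic. A minimal generic sketch with finite-difference derivatives (the one-parameter binomial log-likelihood is illustrative only, not the phylogenetic likelihood):

```python
import numpy as np

def taylor_loglik(loglik, t_hat, h=1e-5):
    """Second-order Taylor approximation of `loglik` around t_hat."""
    f0 = loglik(t_hat)
    g = (loglik(t_hat + h) - loglik(t_hat - h)) / (2 * h)           # gradient
    H = (loglik(t_hat + h) - 2 * f0 + loglik(t_hat - h)) / h ** 2   # curvature
    return lambda t: f0 + g * (t - t_hat) + 0.5 * H * (t - t_hat) ** 2

# Illustrative: binomial log-likelihood in an arcsine-transformed parameter
n, x = 100, 37
def loglik(t):                        # t = arcsin(sqrt(p)), so p = sin(t)^2
    p = np.sin(t) ** 2
    return x * np.log(p) + (n - x) * np.log(1 - p)

t_hat = np.arcsin(np.sqrt(x / n))     # MLE in the transformed space
approx = taylor_loglik(loglik, t_hat)
for t in (t_hat, t_hat + 0.05, t_hat + 0.2):
    print(loglik(t), approx(t))       # approximation degrades away from the peak
```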
Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation
NASA Astrophysics Data System (ADS)
Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee
2018-05-01
In this article, a framework is presented for the joint optimization of the analog transmit and receive filters with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter $B$ to the rate of the analog-to-digital converter $f_s$ to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm $B \leq f_s$. To this end, at the receiver, we allow for a higher pre-filter bandwidth $B > f_s$ and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filters are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte Carlo simulations of a likelihood-based estimator.
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aim at robust estimation of the parameters and states of the hemodynamic model using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods able to jointly estimate the states and parameters of the hemodynamic model. In this paper, we implement a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Earlier sequential Monte Carlo methods were reliable only for the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was in turn shown to be a better estimator than the EKF, LL, and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
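PSEM wraps an expectation-maximization loop around a particle smoother; its basic building block is a particle filter over the hidden states. A minimal bootstrap particle filter sketch for a generic nonlinear state-space model (the toy dynamics, observation function, and noise levels are placeholders standing in for the hemodynamic model, not the paper's implementation):

```python
import numpy as np

def bootstrap_pf(y, n_particles, f, h, q_std, r_std, seed=0):
    """Bootstrap particle filter: returns filtered state means."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                # initial particles
    means = []
    for obs in y:
        x = f(x) + rng.normal(0.0, q_std, n_particles)   # propagate dynamics
        logw = -0.5 * ((obs - h(x)) / r_std) ** 2        # Gaussian obs. weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                      # weighted state estimate
        x = x[rng.choice(n_particles, n_particles, p=w)] # multinomial resampling
    return np.array(means)

# Toy nonlinear model standing in for the hemodynamic states
f = lambda x: 0.9 * x + 0.1 * np.tanh(x)
h = lambda x: x ** 2 / 2.0
rng = np.random.default_rng(1)
x_true, y = 0.5, []
for _ in range(50):
    x_true = f(x_true) + rng.normal(0, 0.1)
    y.append(h(x_true) + rng.normal(0, 0.2))
print(bootstrap_pf(np.array(y), 500, f, h, 0.1, 0.2)[:5])
```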
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2016-12-06
This paper investigates the joint target parameter (delay and Doppler) estimation performance of linear frequency modulation (LFM)-based radar networks in a Rice fading environment. The active radar networks are composed of multiple radar transmitters and multichannel receivers placed on moving platforms. First, the log-likelihood function of the received signal for a Rician target is derived, where the received signal scattered off the target comprises a dominant scatterer (DS) component and weak isotropic scatterers (WIS) components. Then, analytically closed-form expressions of the Cramer-Rao lower bounds (CRLBs) on the Cartesian coordinates of target position and velocity are calculated, which can be adopted as a performance metric to assess the target parameter estimation accuracy for LFM-based radar network systems in a Rice fading environment. It is found that the cumulative Fisher information matrix (FIM) is a linear combination of the DS and WIS components, and it is also demonstrated that the joint CRLB is a function of the signal-to-noise ratio (SNR), the target's radar cross section (RCS), and the transmitted waveform parameters, as well as the relative geometry between the target and the radar network architecture. Finally, numerical results indicate that the joint target parameter estimation performance of active radar networks can be significantly improved by exploiting the DS component.
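The structural result above, a cumulative FIM formed as a linear combination of DS and WIS contributions with the CRLB obtained by inversion, is easy to illustrate numerically. A minimal sketch (the 2x2 matrices and SNR weights are hypothetical, not derived from the paper's signal model):

```python
import numpy as np

# Hypothetical per-component Fisher information matrices (e.g., position x, y)
J_ds = np.array([[8.0, 1.0],
                 [1.0, 6.0]])                   # dominant scatterer contribution
J_wis = [np.array([[0.9, 0.1], [0.1, 0.7]]),    # weak isotropic scatterers
         np.array([[0.5, 0.0], [0.0, 0.4]])]

snr_ds, snr_wis = 10.0, 2.0                     # illustrative SNR weights
J_total = snr_ds * J_ds + snr_wis * sum(J_wis)  # linear combination of components

crlb = np.linalg.inv(J_total)                   # Cramer-Rao lower bound matrix
print("CRLB variances:", np.diag(crlb))         # bounds for any unbiased estimator
```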
Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C
2016-05-01
The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports. © 2015 Society for Risk Analysis.
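The reported skew (one-tenth of ports carrying 80% or more of the establishment likelihood) is a cumulative-share calculation once port-level likelihoods have been estimated. A minimal sketch with hypothetical, heavy-tailed port likelihoods (not the Australian estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
port_likelihood = rng.pareto(a=1.2, size=60)   # hypothetical, heavy-tailed ports

shares = np.sort(port_likelihood)[::-1]        # largest ports first
shares = shares / shares.sum()
cum = np.cumsum(shares)

top10pct = int(np.ceil(0.10 * len(shares)))
print(f"top 10% of ports carry {cum[top10pct - 1]:.0%} of total likelihood")
```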
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Henager, Charles H.; Kurtz, Richard J.
2016-09-30
Finite element (FE) continuum damage mechanics (CDM) models have been developed to simulate and model dual-phase joints and cracked joints for improved analysis of SiC materials in nuclear environments. This report extends the analysis from the last reporting cycle by including preliminary thermomechanical analyses of cracked joints and implementation of dual-phase damage models.
CLASH: Weak-lensing shear-and-magnification analysis of 20 galaxy clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umetsu, Keiichi; Czakon, Nicole; Medezinski, Elinor
2014-11-10
We present a joint shear-and-magnification weak-lensing analysis of a sample of 16 X-ray-regular and 4 high-magnification galaxy clusters at 0.19 ≲ z ≲ 0.69 selected from the Cluster Lensing And Supernova survey with Hubble (CLASH). Our analysis uses wide-field multi-color imaging, taken primarily with Suprime-Cam on the Subaru Telescope. From a stacked-shear-only analysis of the X-ray-selected subsample, we detect the ensemble-averaged lensing signal with a total signal-to-noise ratio of ≅25 in the radial range of 200-3500 kpc h⁻¹, providing integrated constraints on the halo profile shape and concentration-mass relation. The stacked tangential-shear signal is well described by a family of standard density profiles predicted for dark-matter-dominated halos in gravitational equilibrium, namely, the Navarro-Frenk-White (NFW), truncated variants of NFW, and Einasto models. For the NFW model, we measure a mean concentration of c_200c = 4.01 (+0.35/-0.32) at an effective halo mass of M_200c = 1.34 (+0.10/-0.09) × 10^15 M_⊙. We show that this is in excellent agreement with Λ cold dark matter (ΛCDM) predictions when the CLASH X-ray selection function and projection effects are taken into account. The best-fit Einasto shape parameter is α_E = 0.191 (+0.071/-0.068), which is consistent with the NFW-equivalent Einasto parameter of ∼0.18. We reconstruct projected mass density profiles of all CLASH clusters from a joint likelihood analysis of shear-and-magnification data and measure cluster masses at several characteristic radii assuming an NFW density profile. We also derive an ensemble-averaged total projected mass profile of the X-ray-selected subsample by stacking their individual mass profiles. The stacked total mass profile, constrained by the shear+magnification data, is shown to be consistent with our shear-based halo-model predictions, including the effects of surrounding large-scale structure as a two-halo term, establishing further consistency in the context of the ΛCDM model.
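For reference, the two density profile families fitted above have simple closed forms. A minimal sketch of their standard parameterizations (the parameter values are illustrative; only the Einasto shape parameter echoes the best fit quoted above):

```python
import numpy as np

def rho_nfw(r, rho_s, r_s):
    """Navarro-Frenk-White density profile."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_einasto(r, rho_s, r_s, alpha):
    """Einasto profile; alpha controls the curvature of the log slope."""
    return rho_s * np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

r = np.logspace(-2, 1, 50)            # radii in units of the scale radius
print(rho_nfw(r, rho_s=1.0, r_s=1.0)[:3])
print(rho_einasto(r, rho_s=1.0, r_s=1.0, alpha=0.191)[:3])
```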
Chaikriangkrai, Kongkiat; Jhun, Hye Yeon; Shantha, Ghanshyam Palamaner Subash; Abdulhak, Aref Bin; Tandon, Rudhir; Alqasrawi, Musab; Klappa, Anthony; Pancholy, Samir; Deshmukh, Abhishek; Bhama, Jay; Sigurdsson, Gardar
2018-07-01
In aortic stenosis patients referred for surgical and transcatheter aortic valve replacement (AVR), the evidence on the diagnostic accuracy of coronary computed tomography angiography (CCTA) has been limited. The objective of this study was to investigate the diagnostic accuracy of CCTA for significant coronary artery disease (CAD) in patients referred for AVR, using invasive coronary angiography (ICA) as the gold standard. We searched databases for all diagnostic studies of CCTA in patients referred for AVR that reported the diagnostic testing characteristics on patient-based analysis required to pool summary sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Significant CAD on both CCTA and ICA was defined by >50% stenosis in any coronary artery, coronary stent, or bypass graft. Thirteen studies evaluated 1498 patients (mean age, 74 y; 47% men; 76% transcatheter AVR). The pooled prevalence of significant stenosis determined by ICA was 43%. Hierarchical summary receiver-operating characteristic analysis demonstrated a summary area under the curve of 0.96. The pooled sensitivity, specificity, and positive and negative likelihood ratios of CCTA in identifying significant stenosis determined by ICA were 95%, 79%, 4.48, and 0.06, respectively. In subgroup analysis, the diagnostic profiles of CCTA were comparable between surgical and transcatheter AVR. Despite the higher prevalence of significant CAD in patients with aortic stenosis than with other valvular heart diseases, our meta-analysis has shown that CCTA has a suitable diagnostic accuracy profile as a gatekeeper test for ICA. Our study illustrates the need for further study of the potential role of CCTA in preoperative planning for AVR.
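The pooled likelihood ratios follow, to a close approximation, from the pooled sensitivity and specificity; a quick arithmetic check of the values quoted above (small discrepancies are expected because the published ratios are pooled hierarchically rather than computed from the summary operating point):

```python
sens, spec = 0.95, 0.79             # pooled sensitivity and specificity above

lr_pos = sens / (1.0 - spec)        # positive likelihood ratio
lr_neg = (1.0 - sens) / spec        # negative likelihood ratio

print(round(lr_pos, 2))             # ~4.52, consistent with the pooled 4.48
print(round(lr_neg, 2))             # ~0.06, matching the pooled estimate
```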
Lung nodule malignancy prediction using multi-task convolutional neural network
NASA Astrophysics Data System (ADS)
Li, Xiuli; Kao, Yueying; Shen, Wei; Li, Xiang; Xie, Guotong
2017-03-01
In this paper, we investigated the problem of diagnostic lung nodule malignancy prediction using thoracic computed tomography (CT) screening. Unlike most existing studies, which classify nodules into two types (benign and malignant), we interpreted nodule malignancy prediction as a regression problem and predicted a continuous malignancy level. We proposed a joint multi-task learning algorithm using a convolutional neural network (CNN) to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. We trained a CNN regression model to predict nodule malignancy and designed a multi-task learning mechanism to simultaneously share knowledge among 9 different nodule characteristics (subtlety, calcification, sphericity, margin, lobulation, spiculation, texture, diameter, and malignancy), improving the final prediction result. Each CNN generates characteristic-specific feature representations, and multi-task learning is then applied to the features to predict the corresponding likelihood for each characteristic. We evaluated the proposed method on 2620 nodule CT scans from the LIDC-IDRI dataset with a 5-fold cross-validation strategy. The multi-task CNN regression achieved an RMSE of 0.830 and a mapped classification accuracy of 83.03%, compared with an RMSE of 0.894 and a mapped classification accuracy of 74.9% for single-task regression. Experiments show that the proposed method predicts the lung nodule malignancy likelihood effectively and outperforms state-of-the-art methods. The learning framework could easily be applied to other anomaly likelihood prediction problems, such as skin cancer and breast cancer. It demonstrates the potential of our method to assist radiologists in nodule staging assessment and individual therapeutic planning.
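A minimal sketch of the shared-trunk, multi-head idea behind multi-task nodule regression, written in PyTorch (the layer sizes, 2D input shape, and head structure are illustrative assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

CHARACTERISTICS = ["subtlety", "calcification", "sphericity", "margin",
                   "lobulation", "spiculation", "texture", "diameter",
                   "malignancy"]

class MultiTaskNoduleNet(nn.Module):
    def __init__(self, n_tasks=len(CHARACTERISTICS)):
        super().__init__()
        self.trunk = nn.Sequential(               # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # One continuous-rating regression head per nodule characteristic
        self.heads = nn.ModuleList([nn.Linear(128, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.trunk(x)
        return torch.cat([head(z) for head in self.heads], dim=1)

model = MultiTaskNoduleNet()
patch = torch.randn(4, 1, 64, 64)                 # batch of 64x64 CT patches
out = model(patch)                                # (4, 9) continuous ratings
loss = nn.MSELoss()(out, torch.rand(4, 9) * 5)    # joint multi-task loss
loss.backward()
```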
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
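The percentile bootstrap that performed well here is simple to state: resample cases, re-estimate the a and b paths, and take empirical quantiles of their product. A minimal observed-variable sketch (latent-variable estimation via OpenMx is replaced by OLS for brevity, and the data are simulated with paths echoing the .39 design cell):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)                       # a path
y = 0.39 * m + rng.normal(size=n)                       # b path

def ab_estimate(x, m, y):
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]),
                        m, rcond=None)[0][1]             # slope of m on x
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]),
                        y, rcond=None)[0][2]             # slope of y on m | x
    return a * b                                         # indirect effect a*b

boot = np.empty(2000)                                    # 2,000 resamples
for i in range(2000):
    idx = rng.integers(0, n, n)                          # resample cases
    boot[i] = ab_estimate(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])                # percentile (PC) interval
print(ab_estimate(x, m, y), (lo, hi))
```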
Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter
2015-09-01
Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se ). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.
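The inheritance-pattern Markov chain makes the likelihood over linked markers computable with a forward (hidden Markov model) recursion: sum over hidden states, marker by marker. A minimal generic forward-algorithm sketch (the two states, transition matrix, and emission probabilities are toy values, not FamLinkX's model):

```python
import numpy as np

def forward_likelihood(init, trans, emit):
    """HMM forward recursion: P(observed markers) summed over hidden states.

    init:  (S,)   initial state probabilities
    trans: (S, S) transitions between adjacent markers (linkage)
    emit:  (M, S) P(observed genotype at marker m | hidden state s)
    """
    alpha = init * emit[0]
    for m in range(1, emit.shape[0]):
        alpha = (alpha @ trans) * emit[m]   # propagate, then condition on data
    return alpha.sum()

init = np.array([0.5, 0.5])                 # two toy inheritance states
theta = 0.1                                 # recombination fraction between markers
trans = np.array([[1 - theta, theta],
                  [theta, 1 - theta]])
emit = np.array([[0.9, 0.2],                # P(data_m | state) for 3 markers
                 [0.6, 0.3],
                 [0.8, 0.1]])
print(forward_likelihood(init, trans, emit))
```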
Pu, Jia; Zhang, Xiao
2017-11-01
US adolescents are exposed to high levels of advertisements for electronic cigarettes (e-cigarettes). This study aimed to examine the associations between exposure to e-cigarette advertisements and perception, interest, and use of e-cigarettes among US middle school and high school students. Data from the 2014 cross-sectional National Youth Tobacco Survey were used. Logistic regressions were conducted to model four outcomes: perception of reduced harmfulness compared to regular cigarettes, perception of reduced addictiveness, intention to use, and current use of e-cigarettes. The main predictors were exposure to e-cigarette advertisements via four sources: Internet, newspapers/magazines, retail stores, and TV. When all four sources of e-cigarette advertisement exposure were evaluated jointly, exposure via the Internet was associated with an elevated likelihood of reporting all four outcomes related to e-cigarettes, while exposure via retail stores was associated with a higher likelihood of current e-cigarette use and of perceiving e-cigarettes as less harmful than regular cigarettes (p < .05). However, exposure via newspapers/magazines and TV was associated with a lower likelihood of perceiving e-cigarettes to be less harmful or addictive (p < .05). Exposure to e-cigarette advertisements via the Internet and retail stores may play a significant role in adolescents' use and perception of e-cigarettes. The results call for more research on the influence of different sources of advertising exposure on e-cigarette use to help public health programmes curtail the fast-growing use of e-cigarette products among youth.
Multivariate Meta-Analysis Using Individual Participant Data
ERIC Educational Resources Information Center
Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.
2015-01-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is…
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to 'cleaner'-looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
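In the Likelihood paradigm, evidence at each voxel is measured by a likelihood ratio comparing two hypothesized effect sizes against a fixed evidential benchmark, rather than by a p-value threshold. A minimal per-voxel sketch under Gaussian noise (the effect sizes, noise level, and the benchmark of 8 are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.stats import norm

def voxel_lr(samples, mu1=1.0, mu0=0.0, sigma=1.0):
    """Likelihood ratio L(mu1)/L(mu0) for one voxel's samples."""
    ll1 = norm.logpdf(samples, loc=mu1, scale=sigma).sum()
    ll0 = norm.logpdf(samples, loc=mu0, scale=sigma).sum()
    return np.exp(ll1 - ll0)

rng = np.random.default_rng(0)
active = rng.normal(0.8, 1.0, size=20)   # voxel with a real effect
null = rng.normal(0.0, 1.0, size=20)     # voxel with no effect

for v in (active, null):
    lr = voxel_lr(v)
    print(lr, "evidence for mu1" if lr >= 8 else "not strong evidence")
```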
Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian
2017-01-01
To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data of eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I² = 81.1%) and specificity (I² = 89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
ERIC Educational Resources Information Center
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
ERIC Educational Resources Information Center
Petty, Richard E.; And Others
1987-01-01
Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
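Iterative proportional fitting, one of the two algorithms mentioned, rescales a table's rows and columns in turn until its margins match the observed margins. A minimal two-way sketch (the start values and target margins are toy numbers, not LOGIMO's loglinear setup):

```python
import numpy as np

def ipf(table, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Fit table margins to targets by alternating row/column scaling."""
    t = table.astype(float).copy()
    for _ in range(max_iter):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match row margins
        t *= col_targets / t.sum(axis=0)              # match column margins
        if np.allclose(t.sum(axis=1), row_targets, atol=tol):
            break
    return t

start = np.ones((2, 3))                 # independence start values
fitted = ipf(start,
             row_targets=np.array([40.0, 60.0]),
             col_targets=np.array([30.0, 30.0, 40.0]))
print(fitted, fitted.sum(axis=1), fitted.sum(axis=0))
```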
Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data
ERIC Educational Resources Information Center
Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.
2010-01-01
Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about at what levels of depression severity the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…
Is Immigrant Status Relevant in School Violence Research? An Analysis with Latino Students
ERIC Educational Resources Information Center
Peguero, Anthony A.
2008-01-01
Background: The role of race and ethnicity is consistently found to be linked to the likelihood of students experiencing school violence-related outcomes; however, the findings are not always consistent. The variation of likelihood, as well as the type, of student-related school violence outcome among the Latino student population may be…
A kinematic analysis of the modified flight telerobotic servicer manipulator system
NASA Technical Reports Server (NTRS)
Crane, Carl; Carnahan, Tim; Duffy, Joseph
1992-01-01
A reverse kinematic analysis is presented of a six-DOF subchain of a modified seven-DOF flight telerobotic servicer manipulator system. The six-DOF subchain is designated as a TR-RT chain, which describes the sequence of manipulator joints beginning with the first grounded hook joint (universal joint) T, where the sequence R-R designates a pair of revolute joints with parallel axes. At the outset, it had been thought that the reverse kinematic analysis would be similar to that of a previously analyzed TTT manipulator, in which the third and fourth joints intersected at a finite point. However, this is shown not to be the case, and a 16th-degree tan-half-angle polynomial is derived for the TR-RT manipulator.
Design of simplified maximum-likelihood receivers for multiuser CPM systems.
Bing, Li; Bai, Baoming
2014-01-01
A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
Analysis of continuous beams with joint slip
L. A. Soltis
1981-01-01
A computer analysis with user guidelines to analyze partially continuous multi-span beams is presented. Partial continuity is due to rotational slip which occurs at spliced joints at the supports of continuous beams such as floor joists. Beam properties, loads, and joint slip are input; internal forces, reactions, and deflections are output.
Dynamic analysis of clamp band joint system subjected to axial vibration
NASA Astrophysics Data System (ADS)
Qin, Z. Y.; Yan, S. Z.; Chu, F. L.
2010-10-01
Clamp band joints are commonly used for connecting circular components in industry. Some systems joined by clamp bands are subjected to dynamic loads. However, very little research on the dynamic characteristics of this kind of joint can be found in the literature. In this paper, a dynamic model for a clamp band joint system is developed. Contact and frictional slip between the components are accommodated in this model. Nonlinear finite element analysis is conducted to identify the model parameters. Static experiments are then carried out on a scaled model of the clamp band joint to validate the joint model. Finally, the model is adopted to study the dynamic characteristics of the clamp band joint system subjected to axial harmonic excitation, as well as the effects of the wedge angle of the clamp band joint and the preload on the response. The model proposed in this paper can represent the nonlinearity of the clamp band joint and can be used conveniently to investigate the effects of structural and loading parameters on the dynamic characteristics of this type of joint system.
DEMES rotary joint: theories and applications
NASA Astrophysics Data System (ADS)
Wang, Shu; Hao, Zhaogang; Li, Mingyu; Huang, Bo; Sun, Lining; Zhao, Jianwen
2017-04-01
As a kind of dielectric elastomer actuator, the dielectric elastomer minimum energy structure (DEMES) can realize large angular deformations from small voltage-induced strains, which makes it an attractive candidate for biomimetic robotics. Because the rotary joint is a basic and common component of many biomimetic robots, we have fabricated rotary joints based on DEMES and developed their performance over the past two years. In this paper, we discuss the static analysis, dynamic analysis, and some characteristics of the DEMES rotary joint. Based on the theoretical analysis, several applications of the DEMES rotary joint are presented, such as a flapping wing, a biomimetic fish, and a two-legged walker. All of the robots are built from DEMES rotary joints and can realize basic biomimetic motions. Compared with traditional rigid robots, DEMES-based robots are soft and lightweight, giving them an advantage in collision resistance.
FE analysis of SMA-based bio-inspired bone-joint system
NASA Astrophysics Data System (ADS)
Yang, S.; Seelecke, S.
2009-10-01
This paper presents the finite element (FE) analysis of a bio-inspired bone-joint system. Motivated by the BATMAV project, which aims at the development of a micro-air-vehicle platform that implements bat-like flapping flight capabilities, we study the actuation of a typical elbow joint, using shape memory alloy (SMA) in a dual manner. Micro-scale martensitic SMA wires are used as 'metal muscles' to actuate a system of humerus, elbow joint, and radius, in concert with austenitic wires, which operate as flexible joints due to their superelastic character. For the FE analysis, the humerus and radius are modeled as standard elastic beams, while the elbow joint and muscle wires use the Achenbach-Muller-Seelecke SMA model as beam and cable elements, respectively. The particular focus of the paper is on the implementation of this SMA model in COMSOL.
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.
Sea urchin puncture resulting in PIP joint synovial arthritis: case report and MRI study.
Liram, N; Gomori, M; Perouansky, M
2000-01-01
Of the 600 species of sea urchins, approximately 80 may be venomous to humans. The long-spined or black sea urchin, Diadema setosum, may cause damage when its brittle spines break off after penetrating the skin. Synovitis followed by arthritis may be an unusual but apparently not rare sequel to such injury when implantation occurs near a joint. In this case report, osseous changes were not seen on plain x-rays. Magnetic resonance imaging (MRI) was used to expose the more salient features of both the soft tissue and bone changes of a black sea urchin puncture injury 30 months after penetration. In all likelihood, this type of injury is more common than the existing literature suggests. This is believed to be the first reported case in this part of the world, as well as the first MRI study describing this type of joint pathology. Local and systemic reactions to puncture injuries from sea urchin spines have been described previously. These may range from mild local irritation lasting a few days to granuloma formation, infection, and, on occasion, systemic illness. Sea urchin spines are composed of calcium carbonate with a proteinaceous covering, which tends to cause immune reactions of variable presentation. There are only a handful of reported cases of sea urchin stings on record, none of them from the Red Sea. However, this condition is probably more common than is thought and can present difficulty in diagnosis. In this case, the inflammation responded well to heat treatment, mobilization, and manipulation of the joint in its post-acute and chronic stages. As some subtle changes in the soft tissues and the changes in bone were not seen on either plain x-rays or ultrasound, gadolinium-enhanced MRI was used to unveil the marked changes in the joint.
Peel, Trisha N; Cole, Nicolynn C; Dylla, Brenda L; Patel, Robin
2015-03-01
Identification of the pathogen(s) associated with prosthetic joint infection (PJI) is critical for patient management. Historically, many laboratories have not routinely identified organisms such as coagulase-negative staphylococci to the species level. The advent of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) has enhanced clinical laboratory capacity for accurate species-level identification. The aim of this study was to describe the species-level identification of microorganisms isolated from periprosthetic tissue and fluid specimens using MALDI-TOF MS alongside other rapid identification tests in a clinical microbiology laboratory. Results of rapid identification of bacteria isolated from periprosthetic joint fluid and/or tissue specimens were correlated with clinical findings at Mayo Clinic, Rochester, Minnesota, between May 2012 and May 2013. There were 178 PJI and 82 aseptic failure (AF) cases analyzed, yielding 770 organisms (median, 3/subject; range, 1-19/subject). MALDI-TOF MS was employed for the identification of 455 organisms (59%) in 197 subjects (123 PJIs and 74 AFs), with 89% identified to the species level using this technique. Gram-positive bacteria accounted for 68% and 93% of isolates in PJI and AF, respectively. However, the profile of species associated with infection compared to specimen contamination differed. Staphylococcus aureus and Staphylococcus caprae were always associated with infection, Staphylococcus epidermidis and Staphylococcus lugdunensis were equally likely to be a pathogen or a contaminant, whereas the other coagulase-negative staphylococci were more frequently contaminants. Most streptococcal and Corynebacterium isolates were pathogens. The likelihood that an organism was a pathogen or contaminant differed with the prosthetic joint location, particularly in the case of Propionibacterium acnes. MALDI-TOF MS is a valuable tool for the identification of bacteria isolated from patients with prosthetic joints, providing species-level identification that may inform culture interpretation of pathogens versus contaminants. Copyright © 2015 Elsevier Inc. All rights reserved.
Geometrically nonlinear analysis of adhesively bonded joints
NASA Technical Reports Server (NTRS)
Dattaguru, B.; Everett, R. A., Jr.; Whitcomb, J. D.; Johnson, W. S.
1982-01-01
A geometrically nonlinear finite element analysis of cohesive failure in typical joints is presented. Cracked-lap-shear joints were chosen for analysis. Results obtained from linear and nonlinear analyses show that nonlinear effects, due to large rotations, significantly affect the calculated mode 1 (crack opening) and mode 2 (in-plane shear) strain-energy-release rates. The ratio of the mode 1 to mode 2 strain-energy-release rates (G1/G2) was found to be strongly affected by the adhesive modulus and the adherend thickness. Ratios between 0.2 and 0.8 can be obtained by varying the adherend thickness and using either a single or double cracked-lap-shear specimen configuration. Debond growth rate data, together with the analysis, indicate that the mode 1 strain-energy-release rate governs debond growth. Results from the present analysis agree well with experimentally measured joint opening displacements.
Parameter Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2015-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method is presented. Sensitivity analysis and parameter estimation were performed using an evolutionary algorithm. Computer simulations for the mechanism with the estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning displacement and load analysis in the knee joint.
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differs, implying that the observation lengths of the modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, we show the inter-modal relationships across the different modalities revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
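A minimal sketch of the Wald test logic: accumulate the log likelihood ratio over successive conjunction predictions and stop when it crosses thresholds set by the desired error rates (the error rates are illustrative choices and the log-likelihood-ratio stream is simulated; the thresholds follow Wald's classical bounds):

```python
import numpy as np

alpha, beta = 0.01, 0.01                  # false-alarm and missed-detection rates
upper = np.log((1 - beta) / alpha)        # cross upward: accept H1 (collision-likely)
lower = np.log(beta / (1 - alpha))        # cross downward: accept H0 (safe)

def sprt(log_lr_stream):
    """Wald SPRT: returns decision and number of observations consumed."""
    s = 0.0
    for k, llr in enumerate(log_lr_stream, start=1):
        s += llr
        if s >= upper:
            return "accept H1", k
        if s <= lower:
            return "accept H0", k
    return "continue tracking", k

# Simulated per-update log likelihood ratios, drifting toward H1 on average
rng = np.random.default_rng(2)
print(sprt(rng.normal(0.4, 1.0, size=100)))
```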
Design and Evaluation of a Prosthetic Knee Joint Using the Geared Five-Bar Mechanism.
Sun, Yuanxi; Ge, Wenjie; Zheng, Jia; Dong, Dianbiao
2015-11-01
This paper presents the mechanical design, dynamics analysis, and ankle trajectory analysis of a prosthetic knee joint using the geared five-bar mechanism. Compared with traditional four-bar or six-bar mechanisms, the geared five-bar mechanism is better at performing diverse movements and is easier to control. This prosthetic knee joint is capable of fine-tuning its relative instantaneous center of rotation and ankle trajectory. The centrode of this prosthetic knee joint, mechanically optimized against the centrode of the human knee joint, gives better bionic performance than that of a prosthetic knee joint using the four-bar mechanism. Additionally, the stability control of this prosthetic knee joint during the swing and stance phases is achieved by a motor. By adjusting the gear ratio, the ankle trajectories of both unilateral and bilateral amputees show smaller deviations from the expected trajectory than those of the four-bar knee joint.
Kibsgård, Thomas J; Røise, Olav; Sturesson, Bengt; Röhrl, Stephan M; Stuge, Britt
2014-04-01
Chamberlain's projections (anterior-posterior X-rays of the pubic symphysis) have been used to diagnose sacroiliac joint mobility during the single-leg stance test. This study examined movement in the sacroiliac joint during the single-leg stance test with precise radiostereometric analysis. Under general anesthesia, tantalum markers were inserted into the dorsal sacrum and the ilium of 11 patients with long-lasting and severe pelvic girdle pain. After two to three weeks, radiostereometric analysis was conducted while the subjects performed a single-leg stance. Small movements were detected in the sacroiliac joint during the single-leg stance. In both the standing- and hanging-leg sacroiliac joints, a total rotation of 0.5 degrees was observed; however, no translations were detected. There were no differences in total movement between the standing- and hanging-leg sacroiliac joints. The movement in the sacroiliac joint during the single-leg stance is thus small and almost undetectable even by precise radiostereometric analysis. A complex movement pattern was seen during the test, with a combination of movements in the two joints. The interpretation of these results is that the Chamberlain examination is likely inadequate for examining sacroiliac joint movement in patients with pelvic girdle pain. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hasler, Nicole; Bulbul, Esra; Bonamente, Massimiliano; Carlstrom, John E.; Culverhouse, Thomas L.; Gralla, Megan; Greer, Christopher; Lamb, James W.; Hawkins, David; Hennessy, Ryan;
2012-01-01
We perform a joint analysis of X-ray and Sunyaev-Zel'dovich effect data using an analytic model that describes the gas properties of galaxy clusters. The joint analysis allows the measurement of the cluster gas mass fraction profile and Hubble constant independent of cosmological parameters. Weak cosmological priors are used to calculate the overdensity radius within which the gas mass fractions are reported. Such an analysis can provide direct constraints on the evolution of the cluster gas mass fraction with redshift. We validate the model and the joint analysis on high signal-to-noise data from the Chandra X-ray Observatory and the Sunyaev-Zel'dovich Array for two clusters, A2631 and A2204.
Tian, Xian-Liang; Guan, Xian
2015-01-01
Objective: The objective of this paper is to examine the impact of Hurricane Katrina on displaced students’ behavioral disorder. Methods: First, we determine displaced students’ likelihood of discipline infraction each year relative to non-evacuees using all K12 student records of the U.S. state of Louisiana during the period of 2000–2008. Second, we investigate the impact of hurricane on evacuee students’ in-school behavior in a difference-in-difference framework. The quasi-experimental nature of the hurricane makes this framework appropriate with the advantage that the problem of endogeneity is of least concern and the causal effect of interest can be reasonably identified. Results: Preliminary analysis demonstrates a sharp increase in displaced students’ relative likelihood of discipline infraction around 2005 when the hurricane occurred. Further, formal difference-in-difference analysis confirms the results. To be specific, post Katrina, displaced students’ relative likelihood of any discipline infraction has increased by 7.3% whereas the increase in the relative likelihood for status offense, offense against person, offense against property and serious crime is 4%, 1.5%, 3.8% and 2.1%, respectively. Conclusion: When disasters occur, as was the case with Hurricane Katrina, in addition to assistance for adult evacuees, governments, in cooperation with schools, should also provide aid and assistance to displaced children to support their mental health and in-school behavior. PMID:26006127
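The 2x2 difference-in-difference contrast underlying this design is simple arithmetic; a minimal sketch with hypothetical infraction rates chosen to echo the 7.3-point estimate (these are not the actual Louisiana figures):

```python
# Hypothetical mean infraction rates (per student-year)
treat_pre, treat_post = 0.120, 0.205   # displaced students, before/after Katrina
ctrl_pre, ctrl_post = 0.100, 0.112     # non-evacuees, before/after

# DiD: change for the displaced minus the contemporaneous change for others,
# which nets out shared time trends under the parallel-trends assumption
did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(round(did, 3))                   # 0.073, i.e., a 7.3-point relative increase
```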
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
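A minimal sketch of the optimization loop: Gaussian-basis coefficients are searched with CMA-ES to maximize the minimum detectability across sample locations. The d' surrogate and dose-budget penalty below are toy stand-ins (the real objective requires the PL reconstruction model), and the sketch assumes the third-party `cma` package is available:

```python
import numpy as np
import cma  # pip install cma; assumed available

n_basis = 6
locations = np.linspace(0.0, 1.0, 9)            # sample locations in the volume

def detectability(coeffs, loc):
    """Toy stand-in for d'(location) given FFM Gaussian-basis coefficients."""
    basis = np.exp(-0.5 * ((loc - np.linspace(0, 1, n_basis)) / 0.2) ** 2)
    fluence = np.clip(basis @ coeffs, 1e-3, None)
    return np.log1p(fluence) / (1.0 + loc)       # arbitrary illustrative form

def neg_maximin(coeffs):
    # CMA-ES minimizes, so negate the minimum detectability (maxi-min objective)
    d_min = min(detectability(coeffs, loc) for loc in locations)
    budget_penalty = max(0.0, np.sum(coeffs) - 5.0) ** 2   # crude dose budget
    return -d_min + budget_penalty

es = cma.CMAEvolutionStrategy(n_basis * [1.0], 0.3)
while not es.stop():
    pop = es.ask()
    es.tell(pop, [neg_maximin(np.asarray(c)) for c in pop])
print(es.result.xbest)                           # optimized basis coefficients
```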
Zavaleta-Muñiz, S A; Gonzalez-Lopez, L; Murillo-Vazquez, J D; Saldaña-Cruz, A M; Vazquez-Villegas, M L; Martín-Márquez, B T; Vasquez-Jimenez, J C; Sandoval-Garcia, F; Ruiz-Padilla, A J; Fajardo-Robledo, N S; Ponce-Guarneros, J M; Rocha-Muñoz, A D; Alcaraz-Lopez, M F; Cardona-Müller, D; Totsuka-Sutto, S E; Rubio-Arellano, E D; Gamez-Nava, J I
2016-12-19
Several interleukin 6 gene (IL6) polymorphisms are implicated in susceptibility to rheumatoid arthritis (RA). It has not yet been established with certainty if these polymorphisms are associated with the severe radiographic damage observed in some RA patients, particularly those with the development of joint bone ankylosis (JBA). The objective of the present study was to evaluate the association between severe radiographic damage in hands and the -174G/C and -572G/C IL6 polymorphisms in Mexican Mestizo people with RA. Mestizo adults with RA and long disease duration (>5 years) were classified into two groups according to the radiographic damage in their hands: a) severe radiographic damage (JBA and/or joint bone subluxations) and b) mild or moderate radiographic damage. We compared the differences in genotype and allele frequencies of -174G/C and -572G/C IL6 polymorphisms (genotyped using polymerase chain reaction-restriction fragment length polymorphism) between these two groups. Our findings indicated that the -174G/C polymorphism of IL6 is associated with severe joint radiographic damage [maximum likelihood odds ratios (MLE_OR): 8.03; 95%CI 1.22-187.06; P = 0.03], whereas the -572G/C polymorphism of IL6 exhibited no such association (MLE_OR: 1.5; 95%CI 0.52-4.5; P = 0.44). Higher anti-cyclic citrullinated peptide antibody levels were associated with more severe joint radiographic damage (P = 0.04). We conclude that there is a relevant association between the -174G/C IL6 polymorphism and severe radiographic damage. Future studies in other populations are required to confirm our findings.
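For readers unfamiliar with the reported effect sizes, the snippet below computes an odds ratio with a Woolf 95% confidence interval from a 2x2 genotype-by-damage table. The counts are invented for illustration, and Woolf's log-OR approximation differs from the exact maximum-likelihood ORs quoted in the abstract.

```python
import math

# hypothetical counts (illustrative, not the study's data):
# a = carriers with severe damage, b = carriers without,
# c = non-carriers with severe damage, d = non-carriers without
a, b, c, d = 18, 6, 22, 30

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```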
JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.
Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B
2013-03-01
Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.
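A minimal single-pass sketch of the JIVE idea, assuming two synthetic data blocks sharing a common sample set: extract a joint low-rank term from the stacked blocks, then block-specific low-rank terms from the residuals. The published algorithm additionally iterates and enforces orthogonality between joint and individual subspaces, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                  # common samples
X1 = rng.standard_normal((100, n))      # e.g. expression-like block
X2 = rng.standard_normal((40, n))       # e.g. miRNA-like block

def low_rank(M, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

r_joint, r1, r2 = 2, 3, 3
J = low_rank(np.vstack([X1, X2]), r_joint)   # joint structure across blocks
J1, J2 = J[:100], J[100:]
A1 = low_rank(X1 - J1, r1)                   # individual structure per block
A2 = low_rank(X2 - J2, r2)
R1, R2 = X1 - J1 - A1, X2 - J2 - A2          # residual noise
print("residual norms:", np.linalg.norm(R1), np.linalg.norm(R2))
```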
Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan
2015-07-01
This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. The receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was evaluated in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. Increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and the correlation of the thyrotropin:thyroglobulin ratio to malignancy is higher than that for serum thyroid-stimulating hormone.
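The diagnostic summary statistics used in this abstract all derive from a 2x2 table at a chosen cutoff. A small worked example with invented counts:

```python
# hypothetical counts at a chosen cutoff (illustrative only)
tp, fn, fp, tn = 150, 40, 78, 120

sens = tp / (tp + fn)               # sensitivity
spec = tn / (tn + fp)               # specificity
ppv = tp / (tp + fp)                # positive predictive value
lr_pos = sens / (1 - spec)          # positive likelihood ratio
lr_neg = (1 - sens) / spec          # negative likelihood ratio
dor = lr_pos / lr_neg               # diagnostic odds ratio
print(f"sens={sens:.3f} spec={spec:.3f} PPV={ppv:.3f} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.2f} DOR={dor:.2f}")
```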
Theofilatos, Athanasios
2017-06-01
The effective treatment of road accidents and thus the enhancement of road safety is a major concern to societies due to the losses in human lives and the economic and social costs. The investigation of road accident likelihood and severity by utilizing real-time traffic and weather data has recently received significant attention from researchers. However, collected data mainly stem from freeways and expressways. Consequently, the aim of the present paper is to add to the current knowledge by investigating accident likelihood and severity by exploiting real-time traffic and weather data collected from urban arterials in Athens, Greece. Random Forests (RF) are firstly applied for preliminary analysis purposes. More specifically, the aim is to rank candidate variables according to their relative importance and provide a first insight into the potentially significant variables. Then, Bayesian logistic regression as well as finite mixture and mixed-effects logit models are applied to further explore factors associated with accident likelihood and severity, respectively. Regarding accident likelihood, the Bayesian logistic regression showed that variations in traffic significantly influence accident occurrence. On the other hand, accident severity analysis revealed a generally mixed influence of traffic variations on accident severity, although the international literature states that traffic variations increase severity. Lastly, weather parameters were not found to have a direct influence on accident likelihood or severity. The study adds to the current knowledge by incorporating real-time traffic and weather data from urban arterials to investigate accident occurrence and accident severity mechanisms. The identification of risk factors can lead to the development of effective traffic management strategies to reduce accident occurrence and severity of injuries in urban arterials.
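A hedged sketch of the preliminary Random Forest ranking step described above, using scikit-learn on synthetic stand-ins for the real-time traffic and weather variables (the feature names and data are illustrative, not the study's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
names = ["speed_var", "flow", "occupancy", "rain", "temp"]   # hypothetical
X = rng.standard_normal((500, len(names)))
# synthetic accident indicator driven mostly by the first and third features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(500) > 1).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")     # impurity-based importance ranking
```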
Estimation of joint stiffness with a compliant load.
Ludvig, Daniel; Kearney, Robert E
2009-01-01
Joint stiffness defines the dynamic relationship between the position of the joint and the torque acting about it. It consists of two components: intrinsic and reflex stiffness. Many previous studies have investigated joint stiffness in an open-loop environment, because the current algorithm in use is an open-loop algorithm. This paper explores issues related to the estimation of joint stiffness when subjects interact with compliant loads. First, we show analytically how the bias in closed-loop estimates of joint stiffness depends on the properties of the load, the noise power, and length of the estimated impulse response functions (IRF). We then demonstrate with simulations that the open-loop analysis will fail completely for an elastic load but may succeed for an inertial load. We further show that the open-loop analysis can yield unbiased results with an inertial load and document IRF length, signal-to-noise ratio needed, and minimum inertia needed for the analysis to succeed. Thus, by using a load with a properly selected inertia, open-loop analysis can be used under closed-loop conditions.
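The open-loop estimator discussed here is, in essence, least-squares identification of an impulse response function from input-output data. A minimal sketch on synthetic data follows; note that in a true closed loop the input would be correlated with the noise, producing exactly the bias the paper analyzes. The IRF shape and noise level are invented.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n, m = 2000, 40                        # samples, IRF length
u = rng.standard_normal(n)             # position perturbation input
h_true = np.exp(-np.arange(m) / 8.0)   # hypothetical "stiffness" IRF
y = np.convolve(u, h_true)[:n] + 0.1 * rng.standard_normal(n)  # torque

# regression matrix of lagged inputs: y[k] ~ sum_j h[j] * u[k-j]
U = toeplitz(u, np.zeros(m))
h_hat, *_ = np.linalg.lstsq(U, y, rcond=None)
print("relative IRF error:",
      np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```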
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
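The logistic form behind such likelihood equations is compact enough to state directly. The coefficients and predictor values below are invented placeholders, not the published model:

```python
import math

def debris_flow_likelihood(intercept, coefs, x):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * v for b, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical coefficients and predictors (burn severity fraction,
# 15-min rainfall intensity in mm/h, soil erodibility) -- illustrative only
p = debris_flow_likelihood(-3.6, [4.2, 0.06, 2.1], [0.45, 32.0, 0.3])
print(f"predicted debris-flow likelihood: {p:.2f}")
```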
The effects of gender, family status, and race on sentencing decisions.
Freiburger, Tina L
2010-01-01
This study sought to determine the effects of family role, gender, and race on judges' sentencing decisions. To assess these effects, factorial surveys were sent to 360 Court of Common Pleas judges who presided over criminal court cases in the state. Survey administration resulted in a 51% response rate. The findings indicate that defendants who were depicted as performing caretaker roles had a significantly decreased likelihood of incarceration. Further analysis found that the reduction in likelihood of incarceration for being a caretaker was larger for males than for females. Examination of the interaction of familial role with race found that familial role equally reduced the likelihood of incarceration for black and white females. Familial responsibility, however, resulted in a significantly greater decrease in likelihood of incarceration for black men than for white men.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
Seniors, health information, and the Internet: motivation, ability, and Internet knowledge.
Sheng, Xiaojing; Simpson, Penny M
2013-10-01
Providing health information to older adults is crucial to empowering them to better control their health, and the information is readily available on the Internet. Yet, little is known about the factors that are important in affecting seniors' Internet search for health information behavior. This work addresses this research deficit by examining the role of health information orientation (HIO), eHealth literacy, and Internet knowledge (IK) in affecting the likelihood of using the Internet as a source for health information. The analysis reveals that each variable in the study is significant in affecting Internet search likelihood. Results from the analysis also demonstrate the partial mediating role of eHealth literacy and the interaction between eHealth literacy and HIO. The findings suggest that improving seniors' IK and eHealth literacy would increase their likelihood of searching for and finding health information on the Internet that might encourage better health behaviors.
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [GARLI, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
Phylogenetic Analyses of Meloidogyne Small Subunit rDNA
De Ley, Irma Tandingan; De Ley, Paul; Vierstraete, Andy; Karssen, Gerrit; Moens, Maurice; Vanfleteren, Jacques
2002-01-01
Phylogenies were inferred from nearly complete small subunit (SSU) 18S rDNA sequences of 12 species of Meloidogyne and 4 outgroup taxa (Globodera pallida, Nacobbus abberans, Subanguina radicicola, and Zygotylenchus guevarai). Alignments were generated manually from a secondary structure model, and computationally using ClustalX and Treealign. Trees were constructed using distance, parsimony, and likelihood algorithms in PAUP* 4.0b4a. Obtained tree topologies were stable across algorithms and alignments, supporting 3 clades: clade I = [M. incognita (M. javanica, M. arenaria)]; clade II = M. duytsi and M. maritima in an unresolved trichotomy with (M. hapla, M. microtyla); and clade III = (M. exigua (M. graminicola, M. chitwoodi)). Monophyly of [(clade I, clade II) clade III] was given maximal bootstrap support (mbs). M. artiellia was always a sister taxon to this joint clade, while M. ichinohei was consistently placed with mbs as a basal taxon within the genus. Affinities with the outgroup taxa remain unclear, although G. pallida and S. radicicola were never placed as closest relatives of Meloidogyne. Our results show that SSU sequence data are useful in addressing deeper phylogeny within Meloidogyne, and that both M. ichinohei and M. artiellia are credible outgroups for phylogenetic analysis of speciations among the major species. PMID:19265950
Application of decision science to resilience management in Jamaica Bay
Eaton, Mitchell; Fuller, Angela K.; Johnson, Fred A.; Hare, M. P.; Stedman, Richard C.; Sanderson, E.W.; Solecki, W. D.; Waldman, J.R.; Paris, A. S.
2016-01-01
This book highlights the growing interest in management interventions designed to enhance the resilience of the Jamaica Bay socio-ecological system. Effective management, whether the focus is on managing biological processes or human behavior or (most likely) both, requires decision makers to anticipate how the managed system will respond to interventions (i.e., via predictions or projections). In systems characterized by many interacting components and high uncertainty, making probabilistic predictions is often difficult and requires careful thinking not only about system dynamics, but also about how management objectives are specified and the analytic method used to select the preferred action(s). Developing a clear statement of the problem(s) and articulation of management objectives is often best achieved by including input from managers, scientists and other stakeholders affected by the decision through a process of joint problem framing (Marcot and others 2012; Keeney and others 1990). Using a deliberate, coherent and transparent framework for deciding among management alternatives to best meet these objectives then ensures a greater likelihood for successful intervention. Decision science provides the theoretical and practical basis for developing this framework and applying decision analysis methods for making complex decisions under uncertainty and risk.
Christensen, Ole F
2012-12-03
Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.
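A sketch of the marker-based relationship matrix under the paper's "all allele frequencies equal to 0.5" assumption, following the usual VanRaden-style construction (Z = M - 1 for 0/1/2 genotypes; with p = 0.5 the scaling constant 2*sum(p(1-p)) reduces to m/2). The genotypes are simulated, and the companion gamma-adjusted pedigree matrix is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)
n_animals, n_markers = 8, 500
M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # 0/1/2

# marker-based relationship matrix with all allele frequencies fixed at 0.5
Z = M - 1.0
G = (Z @ Z.T) / (n_markers / 2.0)
print(np.round(G[:3, :3], 2))
```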
ERIC Educational Resources Information Center
Yuan, Ke-Hai
2008-01-01
In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight which represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from the low-likelihood area to the high-likelihood area gradually, and this evolution is performed iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAM(ZS), into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
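A toy nested sampling loop may clarify the mechanics being described: live points climb the likelihood surface while the prior volume shrinks geometrically, and the evidence accumulates as likelihood times volume shell. The Gaussian likelihood and rejection-sampling replacement step below are deliberate simplifications; in practice the replacement uses a local sampler such as Metropolis-Hastings or DREAM(ZS).

```python
import numpy as np

rng = np.random.default_rng(5)

def log_like(theta):
    """Toy 2-D Gaussian likelihood standing in for a groundwater model."""
    return -0.5 * np.sum((theta / 0.1) ** 2)

n_live, n_iter = 200, 1000
live = rng.uniform(-1, 1, size=(n_live, 2))     # uniform prior on [-1,1]^2
live_ll = np.array([log_like(t) for t in live])

log_z = -np.inf
for i in range(1, n_iter + 1):
    worst = np.argmin(live_ll)
    # shell weight: X_{i-1} - X_i with X_i = exp(-i / n_live)
    log_w = live_ll[worst] - (i - 1) / n_live + np.log(1 - np.exp(-1 / n_live))
    log_z = np.logaddexp(log_z, log_w)
    # replace the worst point with a prior draw above the likelihood floor
    while True:
        cand = rng.uniform(-1, 1, 2)
        cand_ll = log_like(cand)
        if cand_ll > live_ll[worst]:
            live[worst], live_ll[worst] = cand, cand_ll
            break

# remaining prior volume assigned to the surviving live points
log_w_live = live_ll + np.log(np.exp(-n_iter / n_live) / n_live)
log_z = np.logaddexp(log_z, np.logaddexp.reduce(log_w_live))
print("log evidence estimate:", round(float(log_z), 3))
```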
A Global Drought Observatory for Emergency Response
NASA Astrophysics Data System (ADS)
Vogt, Jürgen; de Jager, Alfred; Carrão, Hugo; Magni, Diego; Mazzeschi, Marco; Barbosa, Paulo
2016-04-01
Droughts are occurring on all continents and across all climates. While in developed countries they cause significant economic and environmental damages, in less developed countries they may cause major humanitarian catastrophes. The magnitude of the problem and the expected increase in drought frequency, extent and severity in many, often highly vulnerable regions of the world demand a change from the current reactive, crisis-management approach towards a more pro-active, risk-management approach. Such an approach needs adequate and timely information from global to local scales as well as adequate drought management plans. Drought information systems are important for continuous monitoring and forecasting of the situation in order to provide timely information on developing drought events and their potential impacts. Against this background, the Joint Research Centre (JRC) is developing a Global Drought Observatory (GDO) for the European Commission's humanitarian services, providing up-to-date information on droughts world-wide and their potential impacts. Drought monitoring is achieved by a combination of meteorological and biophysical indicators, while the societal vulnerability to droughts is assessed through the targeted analysis of a series of social, economic and infrastructural indicators. The combination of the information on the occurrence and severity of a drought, on the assets at risk, and on the societal vulnerability in the drought-affected areas results in a likelihood of impact, which is expressed by a Likelihood of Drought Impact (LDI) indicator. The location, extent and magnitude of the LDI are then further analyzed against the number of people and the land use/land cover types affected in order to provide decision bodies with information on the potential humanitarian and economic consequences in the affected countries or regions. All information is presented through web-mapping interfaces based on OGC standards, and customized reports can be drawn up by the user. The system will be further developed by increasing the number of sectorial impact indicators and will be validated against known and documented cases around the world. The poster provides an overview of the system, the LDI and first analysis results.
Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun
2015-01-01
Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn leads to a non-zero effective mass m_eff for the inflaton field. The higher-order derivatives of the Hubble parameter at leading order source a constant difference in the spectral index for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass of the inflaton field m²_eff/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m²_eff/H² at 5.7σ and 8.1σ, respectively, for r < 0.1 and r < 0.01; whereas with the BICEP-2 likelihood, m²_eff/H² = -0.0237 ± 0.0135, which is consistent with zero.
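One plausible reading of the spectra implied by the parameters named above, written against the standard power-law forms; this is a hedged reconstruction from the abstract's definitions, not the authors' exact equations:

```latex
% Standard power-law (PL) primordial spectra and tensor-to-scalar ratio:
P_s(k) = A_s \left(\frac{k}{k_0}\right)^{n_s - 1}, \qquad
P_t(k) = A_t \left(\frac{k}{k_0}\right)^{n_t}, \qquad
r \equiv \frac{A_t}{A_s}.
% PPL, as described in the abstract: \nu_{st} shifts the scalar tilt and
% \nu_t replaces the tensor tilt as an independent observable:
P_s(k) = A_s \left(\frac{k}{k_0}\right)^{\,n_s - 1 + \nu_{st}}, \qquad
P_t(k) = A_t \left(\frac{k}{k_0}\right)^{\,\nu_t}.
```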
Analysis of Hand and Wrist Postural Synergies in Tolerance Grasping of Various Objects
Liu, Yuan; Jiang, Li; Yang, Dapeng; Liu, Hong
2016-01-01
Humans can successfully grasp various objects at different acceptable relative positions between the hand and the object. This grasp functionality can be described as the grasp tolerance of the human hand, which is a significant functionality of human grasp. To understand the motor control of the human hand completely, an analysis of hand and wrist postural synergies in tolerance grasping of various objects is needed. Ten healthy right-handed subjects were asked to perform tolerance grasping with the right hand using 6 objects of different shapes, sizes, and relative positions between the hand and the object. Subjects wore a CyberGlove with a motion tracker attached to the right hand, allowing measurement of hand and wrist postures. Correlation analysis of joints and inter-joint/inter-finger modules was carried out to explore the coordination between joints or modules. As the correlation between the hand and wrist modules is not obvious in tolerance grasping, individual analysis of wrist synergies is more practical. In this case, postural synergies of the hand and wrist were presented separately through principal component analysis (PCA), expressed through the principal component (PC) information transmitted ratio, the PC elements distribution, and the reconstructed angle error of the joints. Results on the correlation comparison of different module movements can be well explained by the factors influencing joint movement correlation. Moreover, correlation analysis of joints and modules showed that the wrist module had the lowest correlation among all inter-finger and inter-joint modules. Hand and wrist postures were both sufficiently described by a few principal components. In terms of the PC elements distribution of hand postures, compared with previous investigations, there was a greater proportion of movement in the thumb joints, especially the interphalangeal (IP) and opposition rotation (ROT) joints. This research could contribute to a complete understanding of human grasp, and to the design and control of anthropomorphic hands and wrists. PMID:27580298
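The synergy extraction described here is standard PCA on joint-angle data. A compact sketch on synthetic angles, reporting variance explained per component and the reconstruction error from a few retained synergies (the data and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
# synthetic stand-in for CyberGlove data: 200 grasp frames x 20 joint angles
angles = rng.standard_normal((200, 20)) @ rng.standard_normal((20, 20))

X = angles - angles.mean(axis=0)          # center each joint angle
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)           # PC "information transmitted" ratio

k = 3                                     # keep the first k synergies
recon = U[:, :k] @ np.diag(s[:k]) @ Vt[:k] + angles.mean(axis=0)
rmse = np.sqrt(np.mean((recon - angles) ** 2))
print("variance explained:", np.round(var_ratio[:k], 3),
      "reconstruction RMSE:", round(float(rmse), 3))
```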
Corrêa, A M; Pereira, M I S; de Abreu, H K A; Sharon, T; de Melo, C L P; Ito, M A; Teodoro, P E; Bhering, L L
2016-10-17
The common bean, Phaseolus vulgaris, is predominantly grown on small farms and lacks accurate genotype recommendations for specific micro-regions in Brazil. This contributes to a low national average yield. The aim of this study was to use the methods of the harmonic mean of the relative performance of genetic values (HMRPGV) and the centroid, for selecting common bean genotypes with high yield, adaptability, and stability for the Cerrado/Pantanal ecotone region in Brazil. We evaluated 11 common bean genotypes in three trials carried out in the dry season in Aquidauana in 2013, 2014, and 2015. A likelihood ratio test detected a significant interaction between genotype x year, contributing 54% to the total phenotypic variation in grain yield. The three genotypes selected by the joint analysis of genotypic values in all years (Carioca Precoce, BRS Notável, and CNFC 15875) were the same as those recommended by the HMRPGV method. Using the centroid method, genotypes BRS Notável and CNFC 15875 were considered ideal genotypes based on their high stability to unfavorable environments and high responsiveness to environmental improvement. We identified a high association between the methods of adaptability and stability used in this study. However, the use of centroid method provided a more accurate and precise recommendation of the behavior of the evaluated genotypes.
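The HMRPGV criterion is, at its core, a harmonic mean of each genotype's performance relative to the environment mean, which rewards both high and stable values. A small illustration with invented genotypic values (assuming the usual definition; the study's BLUP-derived values are not reproduced here):

```python
import numpy as np

# hypothetical genotypic values: 4 genotypes x 3 years (illustrative)
gv = np.array([[2.1, 1.8, 2.4],
               [2.5, 1.2, 2.9],
               [2.2, 2.0, 2.3],
               [1.7, 1.6, 1.9]])

rpgv = gv / gv.mean(axis=0)                         # relative to each year
hmrpgv = rpgv.shape[1] / (1.0 / rpgv).sum(axis=1)   # harmonic mean per genotype
print(np.round(hmrpgv, 3))   # higher = high, stable performance across years
```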
Leg Strength Comparison between Younger and Middle-age Adults
Kim, Sukwon; Lockhart, Thurmon; Nam, Chang S.
2009-01-01
Although a risk of occupational musculoskeletal diseases has been identified with age-related strength degradation, strength measures from working groups are somewhat sparse. This is especially true for lower extremity strength measures in dynamic conditions (i.e., isokinetic). The objective of this study was to quantify the lower extremity muscle strength characteristics of three age groups (young, middle, and elderly). A total of 42 subjects participated in the study: 14 subjects for each age group. A commercial dynamometer was used to evaluate isokinetic and isometric strength at the ankle and knee joints. A 3 × 2 (Age Group (younger, middle-age, and older adult groups) × Gender (male and female)) between-subject design and post-hoc analysis were performed to evaluate strength differences among the three age groups. Post-hoc analysis indicated that, overall, middle-age workers' leg strengths (i.e., ankle and knee muscles) were significantly different from those of younger adults, while middle-age workers' leg strengths were virtually identical to older adults' leg strengths. These results suggested that, overall, the 14 middle-age workers in the present study could be at a higher risk of musculoskeletal injuries. Future studies looking at the likelihood of musculoskeletal injuries at different workplaces and from different working postures at various age levels are required to validate the current findings. Such a future study would be a valuable asset in finding intervention strategies so that middle-age workers could stay healthier longer. PMID:20436934
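The statistical design is a two-factor ANOVA. A hedged sketch with synthetic torques (the group means, sample layout, and effect sizes are invented) using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(12)
# synthetic peak torques (N*m) for a 3 (age) x 2 (gender) design
age = np.repeat(["young", "middle", "older"], 14)
gender = np.tile(["M", "F"], 21)
base = {"young": 120.0, "middle": 95.0, "older": 90.0}
torque = [base[a] + (15 if g == "M" else 0) + rng.normal(0, 10)
          for a, g in zip(age, gender)]
df = pd.DataFrame({"age": age, "gender": gender, "torque": torque})

model = smf.ols("torque ~ C(age) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))    # main effects and interaction
# Tukey-style post-hoc contrasts would follow the omnibus test
```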
A novel hybrid joining methodology for composite to steel joints
NASA Astrophysics Data System (ADS)
Sarh, Bastian
This research has established a novel approach for designing, analyzing, and fabricating load-bearing structural connections between resin-infused composite materials and components made of steel or other metals or alloys. A design philosophy is proposed wherein overlapping joint sections comprised of fiber reinforced plastics (FRPs) and steel members are connected via a combination of adhesive bonding and integrally placed composite pins. A film adhesive is utilized, placed into the dry stack prior to resin infusion, and is cured after infusion through either local heating elements or by placing the structure into an oven. The novel manner in which the composite pins are introduced consists of perforating the steel member with holes and placing pre-formed composite pins through them, also prior to resin infusion of the composite section. In this manner joints are co-molded structures such that secondary processing is eliminated. It is shown that such joints blend the structural benefits of adhesive and mechanically connected joints, and that the fabrication process is feasible for low-cost, large-scale production as applicable to the shipbuilding industry. Analysis procedures used for designing such joints are presented, consisting of an adhesive joint design theory and a pin placement theory. These analysis tools are used in the design of specimens; specific designs are fabricated and evaluated through structural tests. Structural tests include quasi-static loading and low-cycle fatigue evaluation. This research has thereby introduced a novel philosophy on joints, created the manufacturing technique for fabricating such joints, established simple-to-apply analysis procedures used in the design of such joints (consisting of both an adhesive and a pin placement analysis), and has validated the methodology through specimen fabrication and testing.
Matsushita, Isao; Motomura, Hiraku; Seki, Eiko; Kimura, Tomoatsu
2017-07-01
The long-term effects of tumor necrosis factor (TNF)-blocking therapies on weight-bearing joints in patients with rheumatoid arthritis (RA) have not been fully characterized. The purpose of this study was to assess the radiographic changes of weight-bearing joints in patients with RA during 3 years of TNF-blocking therapy and to identify factors related to the progression of joint damage. Changes in clinical variables and radiological findings in 243 weight-bearing joints (63 hips, 54 knees, 71 ankles, and 55 subtalar joints) in 38 consecutive patients were investigated during three years of treatment with TNF-blocking agents. Multivariate logistic regression analysis was used to identify risk factors for the progression of weight-bearing joint damage. Seventeen (14.5%) of the proximal weight-bearing joints (hips and knees) showed apparent radiographic progression during the three years of treatment, whereas none of the proximal weight-bearing joints showed radiographic evidence of improvement or repair. In contrast, the distal weight-bearing joints (ankle and subtalar joints) displayed radiographic progression and improvement in 20 (15.9%) and 8 (6.3%) joints, respectively. Multivariate logistic analysis for the proximal weight-bearing joints identified the baseline Larsen grade (p < 0.001, OR: 24.85, 95%CI: 5.07-121.79) and disease activity at one year after treatment (p = 0.003, OR: 3.34, 95%CI: 1.50-7.46) as independent factors associated with the progression of joint damage. On the other hand, multivariate analysis for the distal weight-bearing joints identified disease activity at one year after treatment (p < 0.001, OR: 2.13, 95%CI: 1.43-3.18) as an independent factor related to the progression of damage. The baseline Larsen grade was strongly associated with the progression of damage in the proximal weight-bearing joints. Disease activity after treatment was an independent factor for progression of damage in both proximal and distal weight-bearing joints. Early treatment with TNF-blocking agents and tight control of disease activity are necessary to prevent the progression of damage to the weight-bearing joints.
ERIC Educational Resources Information Center
Floyd, Randy G.; Bergeron, Renee; Hamilton, Gloria; Parra, Gilbert R.
2010-01-01
This study investigated the relations among executive functions and cognitive abilities through a joint exploratory factor analysis and joint confirmatory factor analysis of 25 test scores from the Delis-Kaplan Executive Function System and the Woodcock-Johnson III Tests of Cognitive Abilities. Participants were 100 children and adolescents…
2012-02-01
AFRL-RX-TY-TR-2012-0022: Analysis of Commercially Available Firefighting Helmet and Boot Options for the Joint Firefighter Integrated Response Ensemble. Interim Technical Report, 01-SEP-2010 to 31-JAN-2011. A requirements correlation matrix was generated and sent to industry detailing objective and threshold measurements for both the helmet and the boot.
Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo
2017-03-01
The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deeks' funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
Reliability analysis of different structure parameters of PCBA under drop impact
NASA Astrophysics Data System (ADS)
Liu, P. S.; Fan, G. M.; Liu, Y. H.
2018-03-01
A finite element model of the PCBA (printed circuit board assembly) is established in the analysis software ABAQUS. First, the Input-G method and fatigue life under drop impact are introduced, and the mechanism of solder joint failure during a drop is analysed. The main cause of solder joint failure is that the PCB component undergoes repeated tension and compression stress during the drop impact. Finally, the equivalent stress and peel stress of different solder joints and board-level components under different impact accelerations are analysed. The results show that the reliability of tin-silver-copper solder joints is better than that of tin-lead solder joints, and that the expected fatigue life of the solder joints decreases as the impact pulse amplitude increases.
2016-01-01
Microarray gene expression data sets are jointly analyzed to increase statistical power. They could either be merged together or analyzed by meta-analysis. For a given ensemble of data sets, it cannot be foreseen which of these paradigms, merging or meta-analysis, works better. In this article, three joint analysis methods, Z-score normalization, ComBat and the inverse normal method (meta-analysis) were selected for survival prognosis and risk assessment of breast cancer patients. The methods were applied to eight microarray gene expression data sets, totaling 1324 patients with two clinical endpoints, overall survival and relapse-free survival. The performance derived from the joint analysis methods was evaluated using Cox regression for survival analysis and independent validation used as bias estimation. Overall, Z-score normalization had a better performance than ComBat and meta-analysis. Higher Area Under the Receiver Operating Characteristic curve and hazard ratio were also obtained when independent validation was used as bias estimation. With a lower time and memory complexity, Z-score normalization is a simple method for joint analysis of microarray gene expression data sets. The derived findings suggest further assessment of this method in future survival prediction and cancer classification applications. PMID:26504096
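A minimal sketch of the best-performing method, Z-score normalization, assuming each dataset is a genes-by-samples matrix over the same genes; real pipelines add gene matching and quality filtering:

```python
import numpy as np

def zscore_merge(datasets):
    """Z-score each gene within each dataset, then merge on common genes.

    `datasets` is a list of (genes x samples) arrays measuring the same
    genes; per-dataset standardization removes platform scale differences
    before joint survival modelling (a sketch, not the full pipeline).
    """
    standardized = []
    for X in datasets:
        mu = X.mean(axis=1, keepdims=True)
        sd = X.std(axis=1, keepdims=True) + 1e-12
        standardized.append((X - mu) / sd)
    return np.hstack(standardized)

rng = np.random.default_rng(7)
merged = zscore_merge([rng.normal(5, 2, (1000, 80)),
                       rng.normal(9, 4, (1000, 120))])
print(merged.shape)   # (1000, 200): one matrix for downstream Cox regression
```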
Gu, Lijuan; Rosenberg, Mark W; Zeng, Juxin
2017-10-01
China's rapid socioeconomic growth in recent years and the simultaneous increase in many forms of pollution are generating contradictory pictures of residents' well-being. This paper applies multilevel analysis to the 2013 China General Social Survey data on social development and health to understand this twofold phenomenon. Multilevel models are developed to investigate the impact of socioeconomic development and environmental degradation on self-reported health (SRH) and self-reported happiness (SRHP), differentiating among lower, middle, and higher income groups. The results of the logit multilevel analysis demonstrate that income, jobs, and education increased the likelihood of rating SRH and SRHP positively for the lower and middle groups but had little or no effect on the higher income group. Having basic health insurance had an insignificant effect on health but increased the likelihood of happiness among the lower income group. Provincial-level pollutants were associated with a higher likelihood of good health for all income groups, and community-level industrial pollutants increased the likelihood of good health for the lower and middle income groups. Measures of community-level pollution were robust predictors of the likelihood of unhappiness among the lower and middle income groups. Environmental hazards had a mediating effect on the relationship between socioeconomic development and health, and socioeconomic development strengthened the association between environmental hazards and happiness. These outcomes indicate that the complex interconnections among socioeconomic development and environmental degradation have differential effects on well-being among different income groups in China.
Draborg, Eva; Andersen, Christian Kronborg
2006-01-01
Health technology assessment (HTA) has been used as input in decision making worldwide for more than 25 years. However, no uniform definition of HTA or agreement on assessment methods exists, leaving open the question of what influences the choice of assessment methods in HTAs. The objective of this study is to analyze statistically a possible relationship between the methods of assessment used in practical HTAs, the type of assessed technology, the type of assessors, and the year of publication. A sample of 433 HTAs published by eleven leading institutions or agencies in nine countries was reviewed and analyzed by multiple logistic regression. The study shows that outsourcing of HTA reports to external partners is associated with a higher likelihood of using assessment methods such as meta-analysis, surveys, economic evaluations, and randomized controlled trials, and with a lower likelihood of using assessment methods such as literature reviews and "other methods". The year of publication was statistically related to the inclusion of economic evaluations, with a decreasing likelihood over the year span. The type of assessed technology was also related to the methods used: when pharmaceuticals were the assessed technology, there was a decreasing likelihood of economic evaluations, surveys, and "other methods". During the period from 1989 to 2002, no major developments in assessment methods used in practical HTAs were shown statistically in a sample of 433 HTAs worldwide. Outsourcing to external assessors has a statistically significant influence on the choice of assessment methods.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
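The empirical likelihood ratio model amounts to ratios of class-conditional frequency distributions, multiplied across predictors under the conditional independence assumption. A one-predictor sketch on synthetic slope angles:

```python
import numpy as np

rng = np.random.default_rng(8)
# synthetic slope angles (degrees) in landslide vs. stable cells (invented)
slope_ls = rng.normal(25, 6, 400)
slope_ok = rng.normal(12, 7, 4000)

bins = np.linspace(0, 45, 16)
f_ls, _ = np.histogram(slope_ls, bins=bins, density=True)
f_ok, _ = np.histogram(slope_ok, bins=bins, density=True)
lr = f_ls / np.maximum(f_ok, 1e-9)   # empirical likelihood ratio per bin
# under conditional independence, LRs from other predictors would multiply

cell_slope = 28.0
print("LR at 28 deg:", lr[np.digitize(cell_slope, bins) - 1])
```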
Joint source based analysis of multiple brain structures in studying major depressive disorder
NASA Astrophysics Data System (ADS)
Ramezani, Mahdi; Rasoulian, Abtin; Hollenstein, Tom; Harkness, Kate; Johnsrude, Ingrid; Abolmaesumi, Purang
2014-03-01
We propose a joint Source-Based Analysis (jSBA) framework to identify brain structural variations in patients with Major Depressive Disorder (MDD). In this framework, features representing position, orientation and size (i.e. pose), shape, and local tissue composition are extracted. Subsequently, simultaneous analysis of these features within a joint analysis method is performed to generate the basis sources that show significant differences between subjects with MDD and healthy controls. Moreover, in a cross-validation leave-one-out experiment, we use a Fisher Linear Discriminant (FLD) classifier to identify individuals within the MDD group. Results show that we can classify the MDD subjects with an accuracy of 76% solely based on the information gathered from the joint analysis of pose, shape, and tissue composition in multiple brain structures.
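The classification step is a standard Fisher/linear discriminant with leave-one-out cross-validation. A sketch with synthetic feature vectors standing in for the joint pose/shape/tissue sources (the dimensions and group shift are invented):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(9)
# synthetic stand-ins for joint pose/shape/tissue source features
X = np.vstack([rng.normal(0.0, 1, (20, 6)),     # healthy controls
               rng.normal(0.8, 1, (20, 6))])    # MDD-like shift
y = np.array([0] * 20 + [1] * 20)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```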
Camomilla, Valentina; Cereatti, Andrea; Cutti, Andrea Giovanni; Fantozzi, Silvia; Stagni, Rita; Vannozzi, Giuseppe
2017-08-18
Quantitative gait analysis can provide a description of joint kinematics and dynamics, and it is recognized as a clinically useful tool for functional assessment, diagnosis and intervention planning. Clinically interpretable parameters are estimated from quantitative measures (i.e. ground reaction forces, skin marker trajectories, etc.) through biomechanical modelling. In particular, the estimation of joint moments during motion is grounded on several modelling assumptions: (1) body segmental and joint kinematics is derived from the trajectories of markers and by modelling the human body as a kinematic chain; (2) joint resultant (net) loads are, usually, derived from force plate measurements through a model of segmental dynamics. Therefore, both measurement errors and modelling assumptions can affect the results, to an extent that also depends on the characteristics of the motor task analysed (i.e. gait speed). Errors affecting the trajectories of joint centres, the orientation of joint functional axes, the joint angular velocities, the accuracy of inertial parameters and force measurements (concurring to the definition of the dynamic model), can weigh differently in the estimation of clinically interpretable joint moments. Numerous studies addressed all these methodological aspects separately, but a critical analysis of how these aspects may affect the clinical interpretation of joint dynamics is still missing. This article aims at filling this gap through a systematic review of the literature, conducted on Web of Science, Scopus and PubMed. The final objective is hence to provide clear take-home messages to guide laboratories in the estimation of joint moments for the clinical practice.
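As a reminder of the quantities involved, the toy calculation below recovers a net 2-D joint moment from a force-plate reading in the quasi-static limit, ignoring the segmental inertial terms (m*a, I*alpha) that full inverse dynamics includes; all numbers are illustrative:

```python
import numpy as np

# 2-D quasi-static sketch: net ankle moment from a force-plate reading
grf = np.array([30.0, 750.0])    # ground reaction force (N), horizontal/vertical
cop = np.array([0.12, 0.0])      # centre of pressure (m)
ankle = np.array([0.05, 0.10])   # ankle joint centre (m)

r = cop - ankle                           # moment arm from joint to CoP
m_ankle = r[0] * grf[1] - r[1] * grf[0]   # 2-D cross product, N*m
print(f"net ankle moment: {m_ankle:.1f} N*m")
```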
Joint Blind Source Separation by Multi-set Canonical Correlation Analysis
Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D
2009-01-01
In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
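The two-dataset special case of M-CCA is classical CCA, solvable by an SVD of the whitened cross-covariance. A self-contained sketch on synthetic data sharing one latent source (the regularization constants and dimensions are arbitrary):

```python
import numpy as np

def cca(X, Y, k=2):
    """Canonical correlation via SVD of the whitened cross-covariance.

    X, Y: (samples x features) arrays with the same number of samples.
    Returns canonical correlations and projection matrices, i.e. the
    pairwise step that M-CCA generalizes to many datasets.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + 1e-6 * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + 1e-6 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    iLx = np.linalg.inv(np.linalg.cholesky(Cxx))
    iLy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(iLx @ Cxy @ iLy.T)
    return s[:k], iLx.T @ U[:, :k], iLy.T @ Vt[:k].T

rng = np.random.default_rng(10)
z = rng.standard_normal((300, 1))                  # shared latent source
X = z @ rng.standard_normal((1, 5)) + 0.5 * rng.standard_normal((300, 5))
Y = z @ rng.standard_normal((1, 4)) + 0.5 * rng.standard_normal((300, 4))
corrs, Wx, Wy = cca(X, Y)
print("canonical correlations:", np.round(corrs, 3))
```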
Yoon, Jung-Ro; Yang, Se-Hyun; Shin, Young-Soo
2018-06-01
Many studies have found associations between laboratory biomarkers and periprosthetic joint infection (PJI), but it remains unclear whether these biomarkers are clinically useful in ruling out PJI. This meta-analysis compared the performance of interleukin-6 (IL-6) versus procalcitonin (PCT) for the diagnosis of PJI. In this meta-analysis, we reviewed studies that evaluated IL-6 or/and PCT as a diagnostic biomarker for PJI and provided sufficient data to permit sensitivity and specificity analyses for each test. The major databases MEDLINE, EMBASE, the Cochrane Library, Web of Science, and SCOPUS were searched for appropriate studies from the earliest available date of indexing through February 28, 2017. No restrictions were placed on language of publication. We identified 18 studies encompassing a total of 1,835 subjects; 16 studies reported on IL-6 and 6 studies reported on PCT. The area under the curve (AUC) was 0.93 (95% CI, 0.91-0.95) for IL-6 and 0.83 (95% CI, 0.79-0.86) for PCT. The pooled sensitivity was 0.83 (95% CI, 0.74-0.89) for IL-6 and 0.58 (95% CI, 0.31-0.81) for PCT. The pooled specificity was 0.91 (95% CI, 0.84-0.95) for IL-6 and 0.95 (95% CI, 0.63-1.00) for PCT. Both the IL-6 and PCT tests had a high positive likelihood ratio (LR); 9.3 (95% CI, 5.3-16.2) and 12.4 (95% CI, 1.7-89.8), respectively, making them excellent rule-in tests for the diagnosis of PJI. The pooled negative LR for IL-6 was 0.19 (95% CI, 0.12-0.29), making it suitable as a rule-out test, whereas the pooled negative LR for PCT was 0.44 (95% CI, 0.25-0.78), making it unsuitable as a rule-out diagnostic tool. Based on the results of the present meta-analysis, IL-6 has higher diagnostic value than PCT for the diagnosis of PJI. Moreover, the specificity of the IL-6 test is higher than its sensitivity. Conversely, PCT is not recommended for use as a rule-out diagnostic tool.
NASA Astrophysics Data System (ADS)
Weerathunga, Thilina Shihan
2017-08-01
Gravitational waves are a fundamental prediction of Einstein's General Theory of Relativity. The first experimental proof of their existence was provided by the Nobel Prize winning discovery by Taylor and Hulse of orbital decay in a binary pulsar system. The first detection of gravitational waves incident on earth from an astrophysical source was announced in 2016 by the LIGO Scientific Collaboration, launching the new era of gravitational wave (GW) astronomy. The signal detected was from the merger of two black holes, which is an example of sources called Compact Binary Coalescences (CBCs). Data analysis strategies used in the search for CBC signals are derivatives of the Maximum-Likelihood (ML) method. The ML method applied to data from a network of geographically distributed GW detectors--called fully coherent network analysis--is currently the best approach for estimating source location and GW polarization waveforms. However, in the case of CBCs, especially for lower mass systems (O(1) solar masses) such as double neutron star binaries, fully coherent network analysis is computationally expensive. The ML method requires locating the global maximum of the likelihood function over a nine-dimensional parameter space, where the computation of the likelihood at each point requires correlations involving O(10^4) to O(10^6) samples between the data and the corresponding candidate signal waveform template. Approximations, such as semi-coherent coincidence searches, are currently used to circumvent the computational barrier but incur a concomitant loss in sensitivity. We explored the effectiveness of Particle Swarm Optimization (PSO), a well-known algorithm in the field of swarm intelligence, in addressing the fully coherent network analysis problem. As an example, we used a four-detector network consisting of the two LIGO detectors at Hanford and Livingston, Virgo and KAGRA, all having initial LIGO noise power spectral densities, and show that PSO can locate the global maximum with less than 240,000 likelihood evaluations for a component mass range of 1.0 to 10.0 solar masses at a realistic coherent network signal-to-noise ratio of 9.0. Our results show that PSO can successfully deliver a fully coherent all-sky search with less than 1/10 the number of likelihood evaluations needed for a grid-based search. Used as a follow-up step, the savings in the number of likelihood evaluations may also reduce latency in obtaining ML estimates of source parameters in semi-coherent searches.
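A bare-bones PSO loop may make the evaluation-count argument concrete: a swarm of particles shares its best-known position while each remembers its own, and the total number of likelihood evaluations is simply particles times iterations. The 9-D surrogate objective below is invented; it is not a coherent network likelihood.

```python
import numpy as np

rng = np.random.default_rng(11)

def neg_log_like(theta):
    """Toy multimodal surrogate for a likelihood surface (minimized)."""
    return np.sum(theta**2, axis=-1) + 3 * np.sin(3 * theta).sum(axis=-1)

dim, n_particles, iters = 9, 40, 300        # 9-D, as in the CBC problem above
x = rng.uniform(-4, 4, (n_particles, dim))  # positions
v = np.zeros_like(x)                        # velocities
pbest, pbest_val = x.copy(), neg_log_like(x)
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration weights
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = neg_log_like(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]
print("best value:", pbest_val.min(),
      "evaluations:", n_particles * (iters + 1))
```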
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large: the shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found, with low growth rates in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling in historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
[Range of Hip Joint Motion and Weight of Lower Limb Function under 3D Dynamic Marker].
Xia, Q; Zhang, M; Gao, D; Xia, W T
2017-12-01
To explore the range of the reasonable weight coefficient of the hip joint in lower limb function. With the hip joints of healthy volunteers either under normal conditions or fixed at three different positions (functional, flexed and extended), the movements of the lower limbs were recorded by the LUKOtronic motion capture and analysis system. The degree of lower limb function loss was calculated using the Fugl-Meyer lower limb function assessment form when the hip joints were fixed at the aforementioned positions. One-way analysis of variance and Tamhane's T2 method were used to perform the statistical analysis and to calculate the range of the reasonable weight coefficient of the hip joint. There were significant differences in the degree of lower limb function loss between hip joints fixed at the flexed or extended positions and at the functional position, whereas the difference between the flexed and extended positions was not statistically significant. Within the 95% confidence interval, the reasonable weight coefficient of the hip joint in lower limb function was between 61.05% and 73.34%. Besides confirming the reasonable weight coefficient, the effects of functional and non-functional positions on the degree of lower limb function loss should also be considered in the assessment of hip joint function loss. Copyright© by the Editorial Department of Journal of Forensic Medicine
NASA Astrophysics Data System (ADS)
Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus
2014-05-01
Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventories in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycles of terrestrial ecosystems such as arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on, e.g., microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty originates not solely from input uncertainty but is also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty in nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained in a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. The average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 kg N/ha, with a median value of 1.05 kg N/ha. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 kg N/ha due to fertilizer use. In the regional uncertainty quantification, the 20% likelihood range for N2O emissions is 0.79-1.37 kg N/ha (50% likelihood: 0.46-2.05 kg N/ha; 90% likelihood: 0.11-4.03 kg N/ha). Respective quantities were calculated for nitrate leaching. The method has proven its applicability for quantifying the parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process-based biogeochemical models.
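The uncertainty-propagation recipe described above (sample parameter vectors from a calibrated joint distribution, rerun the inventory for each, summarize percentiles) can be sketched as follows; run_inventory is a cheap synthetic stand-in for LandscapeDNDC and all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a LandscapeDNDC run: maps one parameter vector
# to a regional mean N2O emission in kg N/ha (a cheap synthetic surrogate).
def run_inventory(theta):
    return float(np.exp(0.3 * theta[0] - 0.1 * theta[1] + rng.normal(0.0, 0.05)))

# Discrete joint posterior from a (hypothetical) Bayesian calibration.
posterior_draws = rng.normal(size=(400_000, 2))

# Representatively subsample 400 parameter vectors; rerun the inventory per draw.
idx = rng.choice(len(posterior_draws), size=400, replace=False)
emissions = np.array([run_inventory(th) for th in posterior_draws[idx]])

# Central likelihood ranges analogous to those reported above.
for cover in (20, 50, 90):
    lo, hi = np.percentile(emissions, [50 - cover / 2, 50 + cover / 2])
    print(f"{cover}% likelihood range: {lo:.2f}-{hi:.2f} kg N/ha")
print(f"mean {emissions.mean():.2f} +/- {emissions.std():.2f}, median {np.median(emissions):.2f}")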
Augmented reality environment for temporomandibular joint motion analysis.
Wagner, A; Ploder, O; Zuniga, J; Undt, G; Ewers, R
1996-01-01
The principles of interventional video tomography were applied for the real-time visualization of temporomandibular joint movements in an augmented reality environment. Anatomic structures were extracted in three dimensions from planar cephalometric radiographic images. The live-image fusion of these graphic anatomic structures with real-time position data of the mandible and the articular fossa was performed with a see-through, head-mounted display and an electromagnetic tracking system. The dynamic fusion of radiographic images of the temporomandibular joint to anatomic temporomandibular joint structures in motion created a new modality for temporomandibular joint motion analysis. The advantages of the method are its ability to accurately examine the motion of the temporomandibular joint in three dimensions without restraining the subject and its ability to simultaneously determine the relationship of the bony temporomandibular joint and supporting structures (i.e., occlusion, muscle function, etc.) during movement before and after treatment.
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
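For context, the slow recursive mutant-count distribution that motivates MLE-BD is the classical Ma-Sandri-Sarkar recursion. The sketch below implements that conventional likelihood and a one-parameter MLE for m, the expected number of mutations per culture (the mutation rate is m divided by the final cell count); MLE-BD itself, based on the birth-death model, is not reproduced here, and the mutant counts are invented.

import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    # Ma-Sandri-Sarkar recursion for the Luria-Delbruck mutant-count pmf.
    # Cost grows quadratically in n_max, which is what slows the classical MLE.
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        k = np.arange(n)
        p[n] = (m / n) * np.sum(p[k] / ((n - k) * (n - k + 1.0)))
    return p

def neg_loglike(m, counts):
    p = ld_pmf(m, int(max(counts)))
    return -np.sum(np.log(np.maximum(p[counts], 1e-300)))

# Toy fluctuation-experiment data: mutant counts observed in parallel cultures.
counts = np.array([0, 1, 0, 3, 12, 0, 2, 5, 0, 1])
res = minimize_scalar(lambda m: neg_loglike(m, counts),
                      bounds=(1e-4, 20.0), method="bounded")
print("MLE of m (expected mutations per culture):", res.x)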
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
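A minimal sketch of the profile-likelihood construction for a T-year flood under the Gumbel distribution, using Nelder-Mead with the constraint incorporated by substitution: for a fixed quantile q, the location parameter is eliminated via mu = q - beta*y, the scale is maximized over, and the 95% limits are the q values where the profile deviance reaches the chi-squared cutoff. The synthetic annual-maxima data and starting values are assumptions, not the Itajai-Acu records.

import numpy as np
from scipy.optimize import minimize, brentq
from scipy.stats import gumbel_r, chi2

T = 100.0
y = -np.log(-np.log(1.0 - 1.0 / T))      # Gumbel reduced variate for the T-year event

rng = np.random.default_rng(1)
x = gumbel_r.rvs(loc=1000.0, scale=300.0, size=40, random_state=rng)  # toy annual maxima

def loglike(mu, beta):
    return np.sum(gumbel_r.logpdf(x, loc=mu, scale=beta))

# Unconstrained MLE (log-scale parameterization keeps beta positive).
fit = minimize(lambda p: -loglike(p[0], np.exp(p[1])),
               x0=[x.mean(), np.log(x.std())], method="Nelder-Mead")
mu_hat, beta_hat = fit.x[0], np.exp(fit.x[1])
lmax = -fit.fun
q_hat = mu_hat + beta_hat * y            # MLE of the T-year flood

def profile(q):
    # Constrained maximization: mu is eliminated via mu = q - beta*y.
    inner = minimize(lambda p: -loglike(q - np.exp(p[0]) * y, np.exp(p[0])),
                     x0=[np.log(beta_hat)], method="Nelder-Mead")
    return -inner.fun

crit = chi2.ppf(0.95, df=1) / 2.0
g = lambda q: lmax - profile(q) - crit   # zero at the profile-likelihood limits
lo = brentq(g, q_hat - 10 * beta_hat, q_hat)
hi = brentq(g, q_hat, q_hat + 10 * beta_hat)
print(f"{T:.0f}-year flood: {q_hat:.0f}, 95% CI ({lo:.0f}, {hi:.0f})")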
ERIC Educational Resources Information Center
Lee, Woong-Kyu
2012-01-01
The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…
Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen
2007-01-01
Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, available occurrence data for landscape-scale modeling is often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...
ERIC Educational Resources Information Center
Raley, R. Kelly; Bratter, Jenifer
2004-01-01
Using the 1987-1988 and 1992-1994 waves of the National Survey of Families and Households, the authors measure the association between Wave 1 responses to 12 questions on whom respondents would be "most willing to marry" and the likelihood of marriage by Wave 2. Preliminary analysis indicated that some questions about partner preferences…
Li, Chunming; Liu, Yajun; Zhang, Weiyuan
2015-01-01
Objective: To explore the joint and independent effects of gestational weight gain (GWG) and pre-pregnancy body mass index (BMI) on pregnancy outcomes in a population of Chinese Han women and to evaluate pregnant women’s adherence to the 2009 Institute of Medicine (IOM) gestational weight gain guidelines. Methods: This was a multicenter, retrospective cohort study of 48,867 primiparous women from mainland China who had a full-term singleton birth between January 1, 2011 and December 30, 2011. The independent associations of pre-pregnancy BMI, GWG and categories of combined pre-pregnancy BMI and GWG with outcomes of interest were examined using an adjusted multivariate regression model. In addition, women with pre-pregnancy hypertension were excluded from the analysis of the relationship between GWG and delivery of small-for-gestational-age (SGA) infants, and women with gestational diabetes (GDM) were excluded from the analysis of the relationship between GWG and delivery of large-for-gestational-age (LGA) infants. Results: Only 36.8% of the women had a weight gain that was within the recommended range; 25% and 38.2% had weight gains that were below and above the recommended range, respectively. The contribution of GWG to the risk of adverse maternal and fetal outcomes was modest. Women with excessive GWG had an increased likelihood of gestational hypertension (adjusted OR 2.55; 95% CI = 1.92–2.80), postpartum hemorrhage (adjusted OR 1.30; 95% CI = 1.17–1.45), cesarean section (adjusted OR 1.31; 95% CI = 1.18–1.36) and delivery of an LGA infant (adjusted OR 2.1; 95% CI = 1.76–2.26) compared with women with normal weight gain. Conversely, the incidence of GDM (adjusted OR 1.64; 95% CI = 1.20–1.85) and SGA infants (adjusted OR 1.51; 95% CI = 1.32–1.72) was increased in the group of women with inadequate GWG. Moreover, in the obese women, excessive GWG was associated with an apparent increased risk of delivering an LGA infant. In the women who were underweight, poor weight gain was associated with an increased likelihood of delivering an SGA infant. After excluding the mothers with GDM or gestational hypertension, the ORs for delivery of LGA and SGA infants decreased for women with high GWG and increased for women with low GWG. Conclusions: GWG above the recommended range is common in this population and is associated with multiple unfavorable outcomes independent of pre-pregnancy BMI. Obese women may benefit from avoiding weight gain above the range recommended by the 2009 IOM. Underweight women should avoid low GWG to prevent delivering an SGA infant. Pregnant women should therefore be monitored to comply with the IOM recommendations and should have a balanced weight gain that is within a range based on their pre-pregnancy BMI. PMID:26313941
Uncertainty analysis of signal deconvolution using a measured instrument response function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.
2016-10-05
A common analysis procedure minimizes the negative ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. We apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
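A schematic of the likelihood structure described above: the forward model convolves a parameterized physics shape with the IRF, and the finite precision of the measured IRF is accounted for by averaging the likelihood over plausible IRF realizations. The toy Gaussian burn shape, exponential IRF, and noise levels are placeholders, not the NIF nTOF instrument model.

import numpy as np

t = np.linspace(0.0, 100.0, 512)

def physics_model(t, amp, t0, width):
    # Toy parameterized physics shape (not the NIF burn model).
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

def forward(theta, irf):
    # Observable = physics shape convolved with the instrument response.
    return np.convolve(physics_model(t, *theta), irf, mode="same")

def lnlike(theta, data, irf, sigma):
    r = data - forward(theta, irf)
    return -0.5 * np.sum((r / sigma) ** 2)

def lnlike_marginal(theta, data, irf_meas, irf_err, sigma, n_draws=64, seed=0):
    # Average the likelihood over plausible IRF realizations so the finite
    # precision of the measured IRF propagates into the parameter uncertainty.
    rng = np.random.default_rng(seed)
    draws = irf_meas + irf_err * rng.standard_normal((n_draws, irf_meas.size))
    lls = np.array([lnlike(theta, data, np.clip(d, 0.0, None), sigma) for d in draws])
    return np.logaddexp.reduce(lls) - np.log(n_draws)   # log of the mean likelihood

irf_meas = np.exp(-np.linspace(0.0, 5.0, 31))
irf_meas /= irf_meas.sum()
theta_true = (1.0, 50.0, 3.0)
data = forward(theta_true, irf_meas) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print(lnlike_marginal(theta_true, data, irf_meas, 0.02 * irf_meas, sigma=0.01))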
Meta-analysis genomewide association of pork quality traits: ultimate pH and shear force
USDA-ARS?s Scientific Manuscript database
It is common practice to perform genome-wide association analysis (GWA) using a genomic evaluation model of a single population. Joint analysis of several populations is more difficult. An alternative to joint analysis could be the meta-analysis (MA) of several GWA from independent genomic evaluatio...
Reusable Solid Rocket Motor Nozzle Joint-4 Thermal Analysis
NASA Technical Reports Server (NTRS)
Clayton, J. Louie
2001-01-01
This study provides for development and test verification of a thermal model used for prediction of joint heating environments, structural temperatures and seal erosions in the Space Shuttle Reusable Solid Rocket Motor (RSRM) Nozzle Joint-4. The heating environments are a result of rapid pressurization of the joint free volume assuming a leak path has occurred in the filler material used for assembly gap close out. Combustion gases flow along the leak path from nozzle environment to joint O-ring gland resulting in local heating to the metal housing and erosion of seal materials. Analysis of this condition was based on usage of the NASA Joint Pressurization Routine (JPR) for environment determination and the Systems Improved Numerical Differencing Analyzer (SINDA) for structural temperature prediction. Model generated temperatures, pressures and seal erosions are compared to hot fire test data for several different leak path situations. Investigated in the hot fire test program were nozzle joint-4 O-ring erosion sensitivities to leak path width in both open and confined joint geometries. Model predictions were in generally good agreement with the test data for the confined leak path cases. Worst case flight predictions are provided using the test-calibrated model. Analysis issues are discussed based on model calibration procedures.
NASA Astrophysics Data System (ADS)
Paranjpe, Nikhil; Alamir, Mohammed; Alonayni, Abdullah; Asmatulu, Eylem; Rahman, Muhammad M.; Asmatulu, Ramazan
2018-03-01
Adhesives are widely utilized materials in the aviation, automotive, energy, defense, and marine industries. Adhesive joints are gradually supplanting mechanical fasteners because they are lightweight, making assemblies lighter and easier to build. They also act as sealants, protecting a structural joint from galvanic corrosion and leakage. Adhesive bonds provide high joint strength because the load is distributed uniformly over the joint surface, whereas in mechanical joints the load is concentrated at one point, stressing that point and in turn causing joint failures. This research concentrated on the analysis of bond strength and failure loads in adhesive joints between composite surfaces. Plasma treatments of different durations, along with detergent cleaning, were applied to the composite surfaces prior to adhesive application and curing. The joint strength of the composites increased by about 34% when the surface was plasma treated for 12 minutes. It is concluded that a combination of different surface preparations, rather than a single type of surface treatment, provides an ideal joint quality for the composites.
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
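One simple instantiation of this idea, offered as an illustration rather than the patented method itself, scores each inferred correctness rule by how consistently it is observed versus violated and then reports violations of the most credible rules first; the baseline ratio of 0.9 and the example rules are invented.

from dataclasses import dataclass
from math import sqrt

@dataclass
class RuleStats:
    name: str
    observances: int   # code instances that follow the inferred rule
    violations: int    # code instances that break it

def validity_score(r):
    # z-like statistic on the observance ratio: rules followed often and
    # violated rarely are most likely to be real. 0.9 is an assumed baseline
    # ratio that a spurious "rule" would hit by chance.
    n = r.observances + r.violations
    return (r.observances / n - 0.9) * sqrt(n)

rules = [RuleStats("unlock(a) must follow lock(a)", 120, 2),
         RuleStats("check the return value of foo()", 8, 5),
         RuleStats("call bar() before baz()", 30, 1)]

# Report violations of the most credible rules first.
for r in sorted(rules, key=validity_score, reverse=True):
    print(f"{validity_score(r):6.2f}  {r.violations} violation(s) of: {r.name}")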
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The Increased Risk of Joint Venture Promotes Social Cooperation
Wu, Te; Fu, Feng; Zhang, Yanling; Wang, Long
2013-01-01
The joint venture of many members is common in both the animal world and human society. In these public enterprises, highly cooperative groups are more likely to succeed, while groups with little cooperation may still succeed, though improbably. The existing literature mostly focuses on the traditional public goods game, in which cooperators create public wealth unconditionally and benefit all group members without bias. Here we introduce a model that addresses this public goods dilemma by incorporating the risk that foraging for the public resource fails. Risk-averse individuals tend to lead an autarkic life, while risk-preferring ones tend to participate in the risky public goods game. For participants, the group's success relies on its cooperativeness, with increasing contributions leading to an increasing likelihood of success. We introduce a function with one tunable parameter to describe the risk-removal pattern and study three representative classes in detail. Analytical results show that the widely replicated population dynamics of cyclical dominance among loners, cooperators and defectors disappear; loners act as saviors for most of the time, but eventually they too disappear. Depending on how the group's success relies on its cooperativeness, either cooperators pervade the entire population or they coexist with defectors. Even in the latter case, cooperators still hold a salient numerical superiority, as some defectors survive only by parasitizing. The harder it is for the joint venture to succeed, the higher the level of cooperation once cooperators win the evolutionary race. Our work may enrich the literature concerning risky public goods games. PMID:23750204
Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong
2018-04-30
Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new Gaussian graphical model for joint reconstruction of multiple gene regulatory networks (JRmGRN), which highlights hub genes, using gene expression data from several tissues/conditions. Under the framework of the Gaussian graphical model, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated it as a convex optimization problem and solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data, and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods, and both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from https://github.com/wenpingd (contact: hairong@mtu.edu). Proof of theorem, derivation of algorithm and supplementary data are available at Bioinformatics online.
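Schematically, the penalized log-likelihood being maximized combines a per-condition Gaussian graphical model fit with penalties that induce sparsity and tie the conditions together. The sketch below only evaluates such an objective, with a simple fused-lasso-style similarity penalty; the actual JRmGRN penalty, which decomposes each network to expose shared hub genes, is more elaborate, and the ADMM solver is not reproduced here.

import numpy as np

def joint_penalized_loglike(thetas, covs, n_obs, lam1=0.1, lam2=0.1):
    # Per-condition Gaussian graphical model fit plus an L1 sparsity penalty
    # and a similarity penalty tying the conditions together (simplified).
    ll = 0.0
    for Theta, S, n in zip(thetas, covs, n_obs):
        sign, logdet = np.linalg.slogdet(Theta)
        if sign <= 0:
            return -np.inf            # precision matrix must be positive definite
        ll += n * (logdet - np.trace(S @ Theta))
    sparsity = sum(np.abs(T).sum() for T in thetas)
    fusion = sum(np.abs(thetas[i] - thetas[j]).sum()
                 for i in range(len(thetas)) for j in range(i + 1, len(thetas)))
    return ll - lam1 * sparsity - lam2 * fusion

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 5)), rng.normal(size=(50, 5))   # two conditions
covs = [np.cov(X.T) for X in (X1, X2)]
thetas = [np.eye(5), np.eye(5)]                               # candidate precisions
print(joint_penalized_loglike(thetas, covs, n_obs=[50, 50]))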
Likelihoods for fixed rank nomination networks
HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE
2014-01-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Evaluation of joint findings with gait analysis in children with hemophilia.
Cayir, Atilla; Yavuzer, Gunes; Sayli, Revide Tülin; Gurcay, Eda; Culha, Vildan; Bozkurt, Murat
2014-01-01
Hemophilic arthropathy due to recurrent joint bleeding leads to physical, psychological and socioeconomic problems in children with hemophilia and reduces their quality of life. The purpose of this study was to evaluate joint damage through various parameters and to determine functional deterioration in the musculoskeletal system during walking using kinetic and kinematic gait analysis. Physical examination and kinetic and kinematic gait analysis findings of 19 hemophilic patients aged 7-20 years were compared with those of age, sex and leg length matched controls. Stride time was longer in the hemophilia group (p=0.001) compared to the age matched healthy control group, while hip, knee and ankle joint rotation angles were more limited (p=0.001, p=0.035 and p=0.001, respectively). In the hemophilia group, the extensor moment of the knee joint in the stance phase was less than that in the control group (p=0.001). Stride time was longer in the severe hemophilia group compared to the mild-moderate hemophilia and control groups (p=0.011 and p=0.001, respectively). Rotation angle of the ankle was wider in the control group compared to the other two groups (p=0.001 for both). Rotation angle of the ankle joint was narrower in the severe hemophilia group compared to the others (p=0.001 for each). Extensor moment of the knee joint was greater in the control group compared to the other two groups (p=0.003 and p=0.001, respectively). Walking velocity was higher in the control group compared to the severe hemophilia group. Kinetic and kinematic gait analysis has the sensitivity to detect minimal changes in biomechanical parameters. Gait analysis can be used as a reliable method to detect early joint damage.
Analyzing Personalized Policies for Online Biometric Verification
Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M.
2014-01-01
Motivated by India’s nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident’s biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India’s program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India’s biometric program. The mean delay is sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32–41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident. PMID:24787752
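The decision rule at the heart of the policy (accept when the likelihood ratio of genuine versus imposter clears a threshold) can be sketched as below. For brevity the per-modality scores are modeled as independent Gaussians, a simplification of the paper's joint distribution over all 12 scores; all score-model parameters are invented.

import numpy as np
from scipy.stats import norm

# Invented per-modality score models; independence is a deliberate simplification.
gen_mu, gen_sd = 0.8, 0.10    # similarity scores when the resident is genuine
imp_mu, imp_sd = 0.3, 0.15    # similarity scores for an imposter

def log_lr(scores):
    s = np.asarray(scores, dtype=float)
    return float(np.sum(norm.logpdf(s, gen_mu, gen_sd)
                        - norm.logpdf(s, imp_mu, imp_sd)))

def decide(scores, tau=0.0):
    # Accept as genuine iff the log likelihood ratio clears the threshold tau;
    # moving tau trades false rejects (FRR) against false accepts (FAR).
    return log_lr(scores) >= tau

print(decide([0.75, 0.82, 0.70]))   # subset of acquired modalities -> likely genuine
print(decide([0.20, 0.35]))         # -> likely imposter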
Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor
2017-04-01
Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger impact of pre-exposure prophylaxis. Offering pre-exposure prophylaxis to serodiscordant couples resulted in the largest reductions in HIV incidence (2% reduction), and the ring-based strategy had little impact (0% reduction). The joint effect was larger than either individual effect with reductions in the 7-year incidence ranging from 4.5% to 8.8%. Targeted maximum likelihood estimation, data-adaptively adjusting for baseline covariates, substantially improved power over the unadjusted analysis, while maintaining nominal confidence interval coverage. Our simulation study suggests that nesting a pre-exposure prophylaxis study within an ongoing trial can lead to combined intervention effects greater than those of universal test-and-treat alone and can provide information about the efficacy of pre-exposure prophylaxis in the presence of high coverage of treatment for HIV+ persons.
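The network-generation step can be sketched with a bipartite stochastic block model: each man and each woman is assigned a latent block, and a tie forms with a probability determined by the block pair. The two risk strata and tie probabilities below are illustrative, not the simulation study's calibrated values.

import numpy as np

def bipartite_sbm(n_m, n_f, mix_m, mix_f, P, seed=0):
    # Bipartite (male-female) stochastic block model: P[a, b] is the tie
    # probability between male block a and female block b.
    rng = np.random.default_rng(seed)
    zm = rng.choice(len(mix_m), size=n_m, p=mix_m)   # latent male blocks
    zf = rng.choice(len(mix_f), size=n_f, p=mix_f)   # latent female blocks
    A = (rng.random((n_m, n_f)) < P[np.ix_(zm, zf)]).astype(int)
    return A, zm, zf

# Two illustrative risk strata per sex; high-risk pairs tie most often.
P = np.array([[0.002, 0.010],
              [0.010, 0.050]])
A, zm, zf = bipartite_sbm(100, 100, [0.8, 0.2], [0.8, 0.2], P)
print("edges:", A.sum(), "mean male degree:", A.sum(axis=1).mean())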
ERIC Educational Resources Information Center
Tek, Saime
2010-01-01
Joint attention (JA), which occurs when two individuals focus on the same object or event, plays a critical role in social and language development. Two major kinds of joint attention have been observed: response to joint attention (RJA), in which children follow the attentional focus of their social partners, and initiation of joint attention…
Joint attention in Down syndrome: A meta-analysis.
Hahn, Laura J; Loveall, Susan J; Savoy, Madison T; Neumann, Allie M; Ikuta, Toshikazu
2018-07-01
Some studies have indicated that joint attention may be a relative strength in Down syndrome (DS), but other studies have not. To conduct a meta-analysis of joint attention in DS to more conclusively determine if this is a relative strength or weakness when compared to children with typical development (TD), developmental disabilities (DD), and autism spectrum disorder (ASD). Journal articles published before September 13, 2016, were identified by using the search terms "Down syndrome" and "joint attention" or "coordinating attention". Identified studies were reviewed and coded for inclusion criteria, descriptive information, and outcome variables. Eleven studies (553 participants) met inclusion criteria. Children with DS showed similar joint attention as TD children and higher joint attention than children with DD and ASD. Meta-regression revealed a significant association between age and joint attention effect sizes in the DS vs. TD contrast. Joint attention appears to not be a weakness for children with DS, but may be commensurate with developmental level. Joint attention may be a relative strength in comparison to other skills associated with the DS behavioral phenotype. Early interventions for children with DS may benefit from leveraging joint attention skills. Copyright © 2018 Elsevier Ltd. All rights reserved.
Structural analysis of Aircraft fuselage splice joint
NASA Astrophysics Data System (ADS)
Udaya Prakash, R.; Kumar, G. Raj; Vijayanandh, R.; Senthil Kumar, M.; Ramganesh, T.
2016-09-01
In the aviation sector, composite materials and their application to individual components are a prime consideration due to their high strength-to-weight ratio, design flexibility and resistance to corrosion, so composite materials are widely used in low-weight constructions and can be treated as a suitable alternative to metals. The objective of this paper is to estimate and compare the suitability of a composite skin joint in an aircraft fuselage with different joints by simulating the displacement, normal stress, von Mises stress and shear stress with the help of numerical solution methods. The reference Z-stringer component of this paper is modeled in CATIA, and numerical simulation is carried out in ANSYS for the splice joint present in the aircraft fuselage with three combinations of joints: riveted, bonded and hybrid. Stringers are nowadays used to avoid buckling of the fuselage skin; they are joined together by rivets and connected end to end by splice joints. Design and static analysis of three-dimensional models of the bonded, riveted and hybrid joints are carried out and the results compared.
Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis
Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel
2012-01-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
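Mechanically, discrete-time survival analysis with a logit link amounts to expanding each respondent into one record per wave at risk and fitting a logistic regression for the event indicator. The following sketch shows the person-period expansion on a toy four-person data set; the variable names and the parent-communication indicator are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy person-level records: wave of initiation (NaN = never observed) and a
# hypothetical baseline indicator for parent-child drug communication.
df = pd.DataFrame({"id": [1, 2, 3, 4],
                   "init_wave": [3, np.nan, 2, np.nan],
                   "talk": [1, 1, 0, 0]})
max_wave = 4

# Person-period expansion: one row per person per wave still at risk.
rows = []
for _, r in df.iterrows():
    last = int(r.init_wave) if pd.notna(r.init_wave) else max_wave
    for w in range(1, last + 1):
        rows.append({"wave": w, "talk": r.talk,
                     "event": int(pd.notna(r.init_wave) and w == r.init_wave)})
pp = pd.DataFrame(rows)

# Discrete-time hazard: logit of initiating in wave w given survival to w.
X = sm.add_constant(pp[["wave", "talk"]].astype(float))
fit = sm.Logit(pp["event"], X).fit(disp=0)
print(np.exp(fit.params["talk"]))   # odds ratio for the communication indicator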
NASA Technical Reports Server (NTRS)
Verhage, Joseph M.; Bower, Mark V.; Gilbert, Paul A. (Technical Monitor)
2001-01-01
The focus of this study is the suitability of classical laminate theory analysis tools for filament-wound pressure vessels with adhesive laminated joints, in particular: pressure vessel wall performance, joint stiffness and failure prediction. Two 18-inch diameter, 12-ply filament-wound pressure vessels were fabricated. One vessel was fabricated with a 24-ply pyramid laminated adhesive double-strap butt joint; the second with the same number of plies in an inverted pyramid joint. Results from hydrostatic tests are presented. Experimental results were used as input to the computer programs GENLAM and Laminate, and the output compared to test. Using the axial stress resultant, the classical laminate theory results agree within 1% of the experimental results in predicting pressure vessel wall performance. The predicted joint stiffness for the two adhesive joints in the axial direction is within 1% of the experimental results. The calculated hoop-direction joint stress resultant is 25% less than the measured resultant for both joint configurations. A correction factor, derived from the hoop stress resultant obtained in the tank wall performance investigation, is used in the joint analysis. The vessel with the pyramid joint failed in the joint area at a hydrostatic pressure 33% below the predicted failure pressure. The vessel with the inverted pyramid joint failed in the wall acreage at a hydrostatic pressure within 10% of the predicted value. Analysis issues are discussed based on model calibration procedures.
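For reference, the classical-laminate-theory membrane calculation used by tools such as GENLAM reduces to assembling ply stiffnesses into the laminate A matrix and solving N = A*eps for mid-plane strains under the pressure-vessel resultants N_axial = p*r/2 and N_hoop = p*r. The sketch below follows the textbook formulas; the glass/epoxy ply properties, +/-55 degree layup, ply thickness, and pressure are assumed values, not those of the tested vessels.

import numpy as np

def qbar(E1, E2, G12, nu12, theta_deg):
    # Transformed reduced stiffness of one ply (classical laminate theory).
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    Q11, Q22, Q12, Q66 = E1 / d, E2 / d, nu12 * E2 / d, G12
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Qb11 = Q11*m**4 + 2*(Q12 + 2*Q66)*m**2*n**2 + Q22*n**4
    Qb22 = Q11*n**4 + 2*(Q12 + 2*Q66)*m**2*n**2 + Q22*m**4
    Qb12 = (Q11 + Q22 - 4*Q66)*m**2*n**2 + Q12*(m**4 + n**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*m**3*n + (Q12 - Q22 + 2*Q66)*m*n**3
    Qb26 = (Q11 - Q12 - 2*Q66)*m*n**3 + (Q12 - Q22 + 2*Q66)*m**3*n
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*m**2*n**2 + Q66*(m**4 + n**4)
    return np.array([[Qb11, Qb12, Qb16],
                     [Qb12, Qb22, Qb26],
                     [Qb16, Qb26, Qb66]])

# Membrane stiffness of a symmetric laminate: A = sum of Qbar * t over plies.
props = (39e9, 8.6e9, 3.8e9, 0.28)   # E1, E2, G12 [Pa], nu12 (assumed glass/epoxy)
layup = [55, -55] * 6                 # 12 plies, assumed +/-55 deg winding angles
t_ply = 0.25e-3                       # assumed ply thickness [m]
A = sum(qbar(*props, th) for th in layup) * t_ply

# Closed-end pressure vessel resultants: N_axial = p*r/2, N_hoop = p*r.
p, r = 2.0e6, 0.2286                  # assumed pressure [Pa]; 18-in dia -> r [m]
N = np.array([p * r / 2.0, p * r, 0.0])
eps = np.linalg.solve(A, N)           # mid-plane strains [axial, hoop, shear]
print(eps)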
Analysis of Contraction Joint Width Influence on Load Stress of Pavement Panels
NASA Astrophysics Data System (ADS)
Gao, Wei; Cui, Wei; Sun, Wei
2018-05-01
The width of the transverse contraction joints of a cement road varies with temperature, which changes load transmission among the plates of the road surface and affects the load stress of the road plates. The three-dimensional finite-element analysis software EverFE is used to address the relation between contraction joint width and road surface load stress, revealing the impact of reducing the contraction joint width. The results could be of critical value in maintaining road functions and extending the service life of cement road surfaces.
The likelihood ratio as a random variable for linked markers in kinship analysis.
Egeland, Thore; Slooten, Klaas
2016-11-01
The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function of the recombination rate between 0 and 0.5 when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as those obtained here can be used for software validation, as they allow verification of the correctness up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.
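Two of the themes above (that the likelihood ratio and its logarithm behave very differently as random variables, and that the expected LR equals 1 when the evidence is generated under the hypothesis in the denominator) are easy to see by simulation with a toy discrete evidence model; the four-outcome distributions below are invented, not genetic.

import numpy as np

rng = np.random.default_rng(0)

# Toy discrete evidence with two hypotheses about its distribution.
p = np.array([0.4, 0.3, 0.2, 0.1])       # P(evidence | H_p), e.g. "related"
q = np.array([0.25, 0.25, 0.25, 0.25])   # P(evidence | H_d), e.g. "unrelated"
lr = p / q                                # likelihood ratio per outcome

x_d = rng.choice(4, size=200_000, p=q)    # evidence generated under H_d
x_p = rng.choice(4, size=200_000, p=p)    # evidence generated under H_p

print(lr[x_d].mean())          # ~1.0: E[LR | H_d] = 1 for any model
print(lr[x_p].mean())          # ~1.2: E[LR | H_p] = sum(p^2 / q) here
print(np.log(lr[x_p]).mean())  # non-negative but much smaller: the log-LR
                               # (weight of evidence) behaves differently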
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
ANA testing in the presence of acute and chronic infections.
Litwin, Christine M; Binder, Steven R
2016-01-01
Autoantibody testing is performed to help diagnose patients who have clinical symptoms suggestive of possible autoimmune diseases. Antinuclear antibodies (ANA) are present in many systemic autoimmune conditions such as systemic lupus erythematosus (SLE). However, a positive ANA test may also be seen with non-autoimmune inflammatory diseases, including both acute and chronic infections. When the ANA test is used as an initial screen in patients with non-specific clinical symptoms, such as fever, joint pain, myalgias, fatigue, rash, or anemia, the likelihood of a positive result due to infection will increase, especially in children. This article identifies acute and chronic infectious diseases that are likely to produce a positive ANA result and summarizes recent literature addressing both the causes and consequences of these findings.
MCMC multilocus lod scores: application of a new approach.
George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A
2005-01-01
On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).
Inference from Samples of DNA Sequences Using a Two-Locus Model
Griffiths, Robert C.
2011-01-01
Performing inference on contemporary samples of DNA sequence data is an important and challenging task. Computationally intensive methods such as importance sampling (IS) are attractive because they make full use of the available data, but in the presence of recombination the large state space of genealogies can be prohibitive. In this article, we make progress by developing an efficient IS proposal distribution for a two-locus model of sequence data. We show that the proposal developed here leads to much greater efficiency, outperforming existing IS methods that could be adapted to this model. Among several possible applications, the algorithm can be used to find maximum likelihood estimates for mutation and crossover rates, and to perform ancestral inference. We illustrate the method on previously reported sequence data covering two loci either side of the well-studied TAP2 recombination hotspot. The two loci are themselves largely non-recombining, so we obtain a gene tree at each locus and are able to infer in detail the effect of the hotspot on their joint ancestry. We summarize this joint ancestry by introducing the gene graph, a summary of the well-known ancestral recombination graph. PMID:21210733
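The importance-sampling idea underlying the method is generic: to estimate an expectation (here, a likelihood over latent genealogies), draw from a proposal q concentrated where the integrand is large and average the weighted samples. A one-dimensional toy with invented numbers illustrates why the proposal choice drives efficiency.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Estimate mu = E_p[h(X)] with draws from a proposal q that puts its mass
# where h(x) p(x) is large. In coalescent inference x would be a latent
# genealogy; here a toy: a far-tail probability under N(0,1).
h = lambda x: (x > 5.0).astype(float)
proposal = norm(loc=5.5, scale=1.0)

x = proposal.rvs(size=100_000, random_state=rng)
w = norm.pdf(x) / proposal.pdf(x)          # importance weights p(x)/q(x)
print(np.mean(h(x) * w), norm.sf(5.0))     # IS estimate vs exact tail probability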
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component of the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image, based on the assumption that the data follow a Gaussian distribution. This paper explores the potential detection performance advantages of operating in the Fourier domain of long-exposure images of small and/or dim space objects taken by ground-based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image contains only background noise. The detection algorithm tests each pixel point of the Fourier-transformed images to determine whether an object is present, based on the threshold criterion derived from the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the algorithm currently used in space situational awareness applications to evaluate its value.
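A schematic of a frequency-domain test for a known-shape signal in white Gaussian noise: correlate the image spectrum with the template spectrum, normalize to unit variance under the noise-only hypothesis, and threshold at a level set by the desired per-pixel false-alarm probability. This is a stand-in illustrating the idea, not the paper's exact per-pixel Fourier statistic; the blob scene and all numbers are synthetic.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, sigma = 64, 1.0

# Toy long-exposure frame: a faint Gaussian blob in white Gaussian noise.
yy, xx = np.mgrid[0:N, 0:N]
blob = 5.0 * np.exp(-(((xx - 40) ** 2 + (yy - 25) ** 2) / (2.0 * 2.0 ** 2)))
image = blob + sigma * rng.standard_normal((N, N))

# Known-shape template, shifted so its peak sits at the origin for the FFT.
template = np.exp(-(((xx - N // 2) ** 2 + (yy - N // 2) ** 2) / (2.0 * 2.0 ** 2)))

# Frequency-domain correlation statistic, normalized to unit variance per
# pixel under the noise-only hypothesis H0.
D = np.fft.fft2(image)
S = np.fft.fft2(np.fft.ifftshift(template))
stat = np.real(np.fft.ifft2(D * np.conj(S))) / (sigma * np.sqrt(np.sum(template ** 2)))

# Threshold set by the desired per-pixel false-alarm probability under H0.
tau = norm.isf(1e-6)
detections = np.argwhere(stat > tau)
print("peak statistic:", stat.max(), "detections at (y, x):", detections[:5])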