Sample records for likelihood method mlm

  1. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.

    2017-10-01

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
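
    A minimal Python sketch of an unbinned maximum likelihood fit of the azimuthal scattering-angle distribution for an idealized polarimeter. The modulated density, the modulation factor mu100, and the toy data are illustrative assumptions, not COSI's actual response model.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      mu100 = 0.3                       # assumed instrument modulation factor
      true_pol, true_ang = 0.5, 0.8     # toy "true" polarization and angle (rad)

      def pdf(eta, a, eta0):            # modulated azimuthal-angle density
          return (1.0 + a * np.cos(2.0 * (eta - eta0))) / (2.0 * np.pi)

      # Draw toy azimuthal angles by rejection sampling from the modulated density.
      eta = rng.uniform(0.0, 2.0 * np.pi, 200000)
      bound = 2.0 / (2.0 * np.pi)       # upper bound of the density for any a <= 1
      keep = rng.uniform(0.0, bound, eta.size) < pdf(eta, mu100 * true_pol, true_ang)
      eta = eta[keep]

      def nll(params):                  # unbinned negative log-likelihood
          a, eta0 = params
          return -np.sum(np.log(pdf(eta, a, eta0)))

      fit = minimize(nll, x0=[0.1, 0.0], bounds=[(0.0, 1.0), (-np.pi, np.pi)])
      a_hat, ang_hat = fit.x
      print("polarization estimate:", a_hat / mu100, "angle estimate (rad):", ang_hat)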

  2. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.

  3. Diagnostic accuracy of clinical examination features for identifying large rotator cuff tears in primary health care

    PubMed Central

    Cadogan, Angela; McNair, Peter; Laslett, Mark; Hing, Wayne; Taylor, Stephen

    2013-01-01

    Objectives: Rotator cuff tears are a common and disabling complaint. The early diagnosis of medium and large size rotator cuff tears can enhance the prognosis of the patient. The aim of this study was to identify clinical features with the strongest ability to accurately predict the presence of a medium, large or multitendon (MLM) rotator cuff tear in a primary care cohort. Methods: Participants were consecutively recruited from primary health care practices (n = 203). All participants underwent a standardized history and physical examination, followed by a standardized X-ray series and diagnostic ultrasound scan. Clinical features associated with the presence of a MLM rotator cuff tear were identified (P<0.200), a logistic multiple regression model was derived for identifying a MLM rotator cuff tear and thereafter diagnostic accuracy was calculated. Results: A MLM rotator cuff tear was identified in 24 participants (11.8%). Constant pain and a painful arc in abduction were the strongest predictors of a MLM tear (adjusted odds ratio 3.04 and 13.97 respectively). Combinations of ten history and physical examination variables demonstrated highest levels of sensitivity when five or fewer were positive [100%, 95% confidence interval (CI): 0.86–1.00; negative likelihood ratio: 0.00, 95% CI: 0.00–0.28], and highest specificity when eight or more were positive (0.91, 95% CI: 0.86–0.95; positive likelihood ratio 4.66, 95% CI: 2.34–8.74). Discussion: Combinations of patient history and physical examination findings were able to accurately detect the presence of a MLM rotator cuff tear. These findings may aid the primary care clinician in more efficient and accurate identification of rotator cuff tears that may require further investigation or orthopedic consultation. PMID:24421626
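
    For reference, the positive and negative likelihood ratios reported above follow directly from sensitivity and specificity; a minimal sketch with hypothetical counts (not the study's data):

      # Likelihood ratios from a 2x2 diagnostic table (counts are made up for illustration).
      tp, fn, fp, tn = 20, 4, 16, 163           # hypothetical true/false positives and negatives
      sens = tp / (tp + fn)
      spec = tn / (tn + fp)
      lr_pos = sens / (1 - spec)                # how much a positive finding raises the odds
      lr_neg = (1 - sens) / spec                # how much a negative finding lowers the odds
      print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")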

  4. Polarimetric Analysis of the Long Duration Gamma-Ray Burst GRB 160530A With the Balloon Borne Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.

    2017-10-01

    A long duration gamma-ray burst, GRB 160530A, was detected by the Compton Spectrometer and Imager (COSI) during the 2016 COSI Super Pressure Balloon campaign. As a Compton telescope, COSI is inherently sensitive to the polarization of gamma-ray sources in the energy range 0.2-5.0 MeV. We measured the polarization of GRB 160530A using (1) a standard method (SM) based on fitting the distribution of azimuthal scattering angles with a modulation curve and (2) an unbinned, maximum likelihood method (MLM). In both cases, the measured polarization level was below the 99% confidence minimum detectable polarization levels of 72.3% ± 0.8% (SM) and 57.5% ± 0.8% (MLM). Therefore, COSI did not detect polarized gamma-ray emission from this burst. Our most constraining 90% confidence upper limit on the polarization level was 46% (MLM).
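
    A minimal sketch of the commonly quoted 99%-confidence minimum detectable polarization formula; the modulation factor, count rates, and exposure below are hypothetical, not the GRB 160530A values.

      import math

      mu100 = 0.3      # assumed modulation factor for a 100% polarized source
      r_src = 50.0     # source count rate [counts/s] (hypothetical)
      r_bkg = 20.0     # background count rate [counts/s] (hypothetical)
      t_obs = 30.0     # observation time [s] (hypothetical)

      mdp99 = 4.29 / (mu100 * r_src) * math.sqrt((r_src + r_bkg) / t_obs)
      print(f"MDP (99% confidence): {100 * mdp99:.1f}%")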

  5. The weighted function method: A handy tool for flood frequency analysis or just a curiosity?

    NASA Astrophysics Data System (ADS)

    Bogdanowicz, Ewa; Kochanek, Krzysztof; Strupczewski, Witold G.

    2018-04-01

    The idea of the Weighted Function (WF) method for estimation of the Pearson type 3 (Pe3) distribution, introduced by Ma in 1984, has been revised and successfully applied to the shifted inverse Gaussian (IGa3) distribution. The conditions of WF applicability to a shifted distribution have also been formulated. The accuracy of WF flood quantiles for both the Pe3 and IGa3 distributions was assessed by Monte Carlo simulations under true and false distribution assumptions against the maximum likelihood (MLM), moments (MOM) and L-moments (LMM) methods. Three datasets of annual peak flows from Polish catchments serve as case studies to compare the performance of WF, MOM, MLM and LMM on real flood data. For the hundred-year flood, the WF method revealed explicit superiority only over the MLM, surpassing the MOM and especially the LMM both for the true and false distributional assumptions with respect to relative bias and relative root mean square error values. Generally, the WF method performs well for hydrological sample sizes and constitutes a good alternative for the estimation of upper flood quantiles.
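
    A minimal sketch (using scipy) of the maximum likelihood step that the WF method is compared against: fitting a Pearson type 3 distribution to annual peak flows and reading off the hundred-year quantile. The toy flow series and parameter values are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      # Toy annual maximum flows; the Pe3 parameters below are illustrative only.
      flows = stats.pearson3.rvs(1.2, loc=300.0, scale=120.0, size=60, random_state=rng)

      skew, loc, scale = stats.pearson3.fit(flows)                 # maximum likelihood estimates
      q100 = stats.pearson3.ppf(1.0 - 1.0 / 100.0, skew, loc=loc, scale=scale)
      print(f"estimated 100-year flood quantile: {q100:.1f} m^3/s")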

  6. Polarimetric Analysis of the Long Duration Gamma-Ray Burst GRB 160530A With the Balloon Borne Compton Spectrometer and Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.

    2017-10-20

    A long duration gamma-ray burst, GRB 160530A, was detected by the Compton Spectrometer and Imager (COSI) during the 2016 COSI Super Pressure Balloon campaign. As a Compton telescope, COSI is inherently sensitive to the polarization of gamma-ray sources in the energy range 0.2–5.0 MeV. We measured the polarization of GRB 160530A using (1) a standard method (SM) based on fitting the distribution of azimuthal scattering angles with a modulation curve and (2) an unbinned, maximum likelihood method (MLM). In both cases, the measured polarization level was below the 99% confidence minimum detectable polarization levels of 72.3% ± 0.8% (SM) and 57.5% ± 0.8% (MLM). Therefore, COSI did not detect polarized gamma-ray emission from this burst. Our most constraining 90% confidence upper limit on the polarization level was 46% (MLM).

  7. Leadership and management in UK medical school curricula.

    PubMed

    Jefferies, Richard; Sheriff, Ibrahim H N; Matthews, Jacob H; Jagger, Olivia; Curtis, Sarah; Lees, Peter; Spurgeon, Peter C; Fountain, Daniel Mark; Oldman, Alex; Habib, Ali; Saied, Azam; Court, Jessica; Giannoudi, Marilena; Sayma, Meelad; Ward, Nicholas; Cork, Nick; Olatokun, Olamide; Devine, Oliver; O'Connell, Paul; Carr, Phoebe; Kotronias, Rafail Angelos; Gardiner, Rebecca; Buckle, Rory T; Thomson, Ross J; Williams, Sarah; Nicholson, Simon J; Goga, Usman

    2016-10-10

    Purpose Although medical leadership and management (MLM) is increasingly being recognised as important to improving healthcare outcomes, little is understood about current training of medical students in MLM skills and behaviours in the UK. The paper aims to discuss these issues. Design/methodology/approach This qualitative study used validated structured interviews with expert faculty members from medical schools across the UK to ascertain MLM framework integration, teaching methods employed, evaluation methods and barriers to improvement. Findings Data were collected from 25 of the 33 UK medical schools (76 per cent response rate), with 23/25 reporting that MLM content is included in their curriculum. More medical schools assessed MLM competencies on admission than at any other time of the curriculum. Only 12 schools had evaluated MLM teaching at the time of data collection. The majority of medical schools reported barriers, including overfilled curricula and reluctance of staff to teach. Whilst 88 per cent of schools planned to increase MLM content over the next two years, there was a lack of consensus on proposed teaching content and methods. Research limitations/implications There is widespread inclusion of MLM in UK medical schools' curricula, despite the existence of barriers. This study identified substantial heterogeneity in MLM teaching and assessment methods which does not meet students' desired modes of delivery. Examples of national undergraduate MLM teaching exist worldwide, and lessons can be taken from these. Originality/value This is the first national evaluation of MLM in undergraduate medical school curricula in the UK, highlighting continuing challenges with executing MLM content despite numerous frameworks and international examples of successful execution.

  8. Multilevel modeling: overview and applications to research in counseling psychology.

    PubMed

    Kahn, Jeffrey H

    2011-04-01

    Multilevel modeling (MLM) is rapidly becoming the standard method of analyzing nested data, for example, data from students within multiple schools, data on multiple clients seen by a smaller number of therapists, and even longitudinal data. Although MLM analyses are likely to increase in frequency in counseling psychology research, many readers of counseling psychology journals have had only limited exposure to MLM concepts. This paper provides an overview of MLM that blends mathematical concepts with examples drawn from counseling psychology. This tutorial is intended to be a first step in learning about MLM; readers are referred to other sources for more advanced explorations of MLM. In addition to being a tutorial for understanding and perhaps even conducting MLM analyses, this paper reviews recent research in counseling psychology that has adopted a multilevel framework, and it provides ideas for MLM approaches to future research in counseling psychology. 2011 APA, all rights reserved
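
    A minimal sketch of a two-level random-intercept model of the kind described above, using statsmodels MixedLM on a hypothetical clients-within-therapists data set (all variable names and values are made up).

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_ther, n_per = 30, 10
      therapist = np.repeat(np.arange(n_ther), n_per)
      ther_effect = rng.normal(0, 0.5, n_ther)[therapist]      # level-2 (therapist) variation
      alliance = rng.normal(0, 1, n_ther * n_per)               # level-1 predictor
      outcome = 2.0 + 0.4 * alliance + ther_effect + rng.normal(0, 1, n_ther * n_per)

      df = pd.DataFrame({"outcome": outcome, "alliance": alliance, "therapist": therapist})
      model = smf.mixedlm("outcome ~ alliance", df, groups=df["therapist"])
      print(model.fit().summary())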

  9. Best method for right atrial volume assessment by two-dimensional echocardiography: validation with magnetic resonance imaging.

    PubMed

    Ebtia, Mahasti; Murphy, Darra; Gin, Kenneth; Lee, Pui K; Jue, John; Nair, Parvathy; Mayo, John; Barnes, Marion E; Thompson, Darby J S; Tsang, Teresa S M

    2015-05-01

    Echocardiographic methods for estimating right atrial (RA) volume have not been standardized. Our aim was to evaluate two-dimensional (2D) echocardiographic methods of RA volume assessment, using RA volume by magnetic resonance imaging (MRI) as the reference. Right atrial volume was assessed in 51 patients (mean age 63 ± 14 years, 33 female) who underwent comprehensive 2D echocardiography and cardiac MRI for clinically indicated reasons. Echocardiographic RA volume methods included (1) biplane area length, using four-chamber view twice (biplane 4C-4C); (2) biplane area length, using four-chamber and subcostal views (biplane 4C-subcostal); and (3) single plane Simpson's method of disks (Simpson's). Echocardiographic RA volumes as well as linear RA major and minor dimensions were compared to RA volume by MRI using correlation and Bland-Altman methods, and evaluated for inter-observer reproducibility and accuracy in discriminating RA enlargement. All echocardiography volumetric methods performed well compared to MRI, with Pearson's correlation of 0.98 and concordance correlation ≥0.91 for each. For bias and limits of agreement, biplane 4C-4C (bias -4.81 mL/m², limits of agreement ±9.8 mL/m²) and Simpson's (bias -5.15 mL/m², limits of agreement ±10.1 mL/m²) outperformed biplane 4C-subcostal (bias -8.36 mL/m², limits of agreement ±12.5 mL/m²). Accuracy for discriminating RA enlargement was higher for all volumetric methods than for linear measurements. Inter-observer variability was satisfactory across all methods. Compared to MRI, biplane 4C-4C and single plane Simpson's are highly accurate and reproducible 2D echocardiography methods for estimating RA volume. Linear dimensions are inaccurate and should be abandoned. © 2014, Wiley Periodicals, Inc.
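
    A minimal sketch of the Bland-Altman agreement statistics (bias and limits of agreement) used in the comparison above, computed on made-up paired volumes rather than the study's measurements.

      import numpy as np

      echo = np.array([55.0, 62.0, 48.0, 71.0, 39.0, 66.0])   # mL/m^2, hypothetical echo volumes
      mri  = np.array([60.0, 65.0, 54.0, 77.0, 45.0, 70.0])   # mL/m^2, hypothetical MRI volumes

      diff = echo - mri
      bias = diff.mean()                           # systematic under/over-estimation
      loa = 1.96 * diff.std(ddof=1)                # half-width of the limits of agreement
      print(f"bias = {bias:.2f} mL/m^2, limits of agreement = bias +/- {loa:.2f} mL/m^2")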

  10. Classification of cassava genotypes based on qualitative and quantitative data.

    PubMed

    Oliveira, E J; Oliveira Filho, O S; Santos, V S

    2015-02-02

    We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative traits (continuous). We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura; we evaluated these accessions for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t2, and maximum likelihood criteria, we identified five and four groups based on quantitative trait and joint analysis, respectively. The smaller number of groups identified based on joint analysis may be related to the nature of the data. On the other hand, quantitative data are more subject to environmental effects in the phenotype expression; this results in the absence of genetic differences, thereby contributing to greater differentiation among accessions. For most of the accessions, the maximum probability of classification was >0.90, independent of the trait analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data implied that analysis of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. On the other hand, when joint analysis was used, the means and ranges of genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when there are several phenotypic traits, such as in the case of genetic resources and breeding programs.
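
    A simplified stand-in for the Ward step of the Ward-MLM procedure (the MLM likelihood stage is not reproduced): scale the quantitative traits, one-hot encode the qualitative traits, and apply Ward's hierarchical clustering. The traits and values below are hypothetical.

      import numpy as np
      import pandas as pd
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(3)
      df = pd.DataFrame({
          "root_yield": rng.normal(20, 5, 95),                   # quantitative (hypothetical)
          "plant_height": rng.normal(2.0, 0.3, 95),              # quantitative (hypothetical)
          "stem_color": rng.choice(["green", "purple"], 95),     # qualitative
          "leaf_shape": rng.choice(["lobed", "entire"], 95),     # qualitative
      })

      quant = df[["root_yield", "plant_height"]]
      quant = (quant - quant.mean()) / quant.std()               # z-score the continuous traits
      qual = pd.get_dummies(df[["stem_color", "leaf_shape"]])    # one-hot encode categories
      features = np.hstack([quant.to_numpy(), qual.to_numpy(dtype=float)])

      tree = linkage(features, method="ward")                    # Ward's hierarchical clustering
      groups = fcluster(tree, t=5, criterion="maxclust")         # cut the tree into 5 groups
      print(np.bincount(groups)[1:])                             # group sizes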

  11. Comparative study of Misgav-Ladach and Pfannenstiel-Kerr cesarean techniques: a randomized controlled trial.

    PubMed

    Naki, Mehmet Murat; Api, Oluş; Celik, Hasniye; Kars, Bülent; Yaşar, Esra; Unal, Orhan

    2011-02-01

    To compare Pfannenstiel-Kerr (PKM) and Misgav-Ladach (MLM) methods in terms of operation-related features and neonatal outcome in primary cesarean deliveries. A total of 180 pregnant women randomized into PKM (n = 90) or MLM (n = 90) groups were included in this study. Primary outcome measures were total operative and extraction times, Apgar score, blood loss, wound complications, and suture use. Secondary outcome measures were wound seroma and infection incidence, time of bowel restitution, and perceived pain. Total operation and extraction times were significantly shorter and less suture material was used in the MLM group than in the PKM group (p < 0.001). Initially higher 6-h VAS scores in the MLM group (p < 0.05) normalized by 24 h after the operation. PKM and MLM were similar in terms of preoperative and postoperative levels of hemoglobin and hematocrit, wound complication, bowel restitution, fever, seroma, infection, wound dehiscence, and the need for transfusion, antibiotics, and analgesics. The operation-related morbidity of MLM and PKM for primary C/S seems to be comparable; however, MLM seems to be superior in terms of operation time and the amount of suture used but inferior in pain scores in the early postoperative period.

  12. Multilevel Modeling: Overview and Applications to Research in Counseling Psychology

    ERIC Educational Resources Information Center

    Kahn, Jeffrey H.

    2011-01-01

    Multilevel modeling (MLM) is rapidly becoming the standard method of analyzing nested data, for example, data from students within multiple schools, data on multiple clients seen by a smaller number of therapists, and even longitudinal data. Although MLM analyses are likely to increase in frequency in counseling psychology research, many readers…

  13. A General Multilevel SEM Framework for Assessing Multilevel Mediation

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Zyphur, Michael J.; Zhang, Zhen

    2010-01-01

    Several methods for testing mediation hypotheses with 2-level nested data have been proposed by researchers using a multilevel modeling (MLM) paradigm. However, these MLM approaches do not accommodate mediation pathways with Level-2 outcomes and may produce conflated estimates of between- and within-level components of indirect effects. Moreover,…

  14. Networking health: multi-level marketing of health products in Ghana.

    PubMed

    Droney, Damien

    2016-01-01

    Multi-level marketing (MLM), a business model in which product distributors are compensated for enrolling further distributors as well as for selling products, has experienced dramatic growth in recent decades, especially in the so-called global South. This paper argues that the global success of MLM is due to its involvement in local health markets. While MLM has been subject to a number of critiques, few have analyzed the explicit health claims of MLM distributors. The majority of the products distributed through MLM are health products, which are presented as offering transformative health benefits. Based on interviews with MLM distributors in Ghana, but focusing on the experiences of one woman, this paper shows that MLM companies become intimately entwined with Ghanaian quests for health by providing their distributors with the materials to become informal health experts, allowing their distributors to present their products as medicines, and presenting MLM as an avenue to middle class cosmopolitanism. Ghanaian distributors promote MLM products as medically powerful, and the distribution of these products as an avenue to status and profit. As a result, individuals seeking health become a part of ethically questionable forms of medical provision based on the exploitation of personal relationships. The success of MLM therefore suggests that the health industry is at the forefront of transnational corporations' extraction of value from informal economies, drawing on features of health markets to monetize personal relationships.

  15. Computed tomography guided percutaneous injection of a mixture of lipiodol and methylene blue in rabbit lungs: evaluation of localization ability for video-assisted thoracoscopic surgery.

    PubMed

    Jin, Kwang Nam; Lee, Kyung Won; Kim, Tae Jung; Song, Yong Sub; Kim, Dong Il

    2014-01-01

    Preoperative localization is necessary prior to video assisted thoracoscopic surgery for the detection of small or deeply located lung nodules. We compared the localization ability of a mixture of lipiodol and methylene blue (MLM) (0.6 mL, 1:5) to methylene blue (0.5 mL) in rabbit lungs. CT-guided percutaneous injections were performed in 21 subjects with MLM and methylene blue. We measured the extent of staining on freshly excised lung and evaluated the subjective localization ability with a 4-point scale at 6 and 24 hr after injections. For MLM, radio-opacity was evaluated on fluoroscopy. We considered a score of 2 (acceptable) or 3 (excellent) as appropriate for localization. The staining extent of MLM was significantly smaller than that of methylene blue (0.6 vs 1.0 cm, P<0.001). MLM showed superior staining ability over methylene blue (2.8 vs 2.2, P=0.010). Excellent staining was achieved in 17 subjects (81%) with MLM and 8 (38%) with methylene blue (P=0.011). An acceptable or excellent radio-opacity of MLM was found in 13 subjects (62%). Using both the direct visibility and the radio-opacity of MLM, the appropriate localization rate for MLM was 100%. MLM provides superior pulmonary localization ability compared with methylene blue.

  16. Computed Tomography Guided Percutaneous Injection of a Mixture of Lipiodol and Methylene Blue in Rabbit Lungs: Evaluation of Localization Ability for Video-Assisted Thoracoscopic Surgery

    PubMed Central

    Jin, Kwang Nam; Kim, Tae Jung; Song, Yong Sub; Kim, Dong Il

    2014-01-01

    Preoperative localization is necessary prior to video assisted thoracoscopic surgery for the detection of small or deeply located lung nodules. We compared the localization ability of a mixture of lipiodol and methylene blue (MLM) (0.6 mL, 1:5) to methylene blue (0.5 mL) in rabbit lungs. CT-guided percutaneous injections were performed in 21 subjects with MLM and methylene blue. We measured the extent of staining on freshly excised lung and evaluated the subjective localization ability with a 4-point scale at 6 and 24 hr after injections. For MLM, radio-opacity was evaluated on fluoroscopy. We considered a score of 2 (acceptable) or 3 (excellent) as appropriate for localization. The staining extent of MLM was significantly smaller than that of methylene blue (0.6 vs 1.0 cm, P<0.001). MLM showed superior staining ability over methylene blue (2.8 vs 2.2, P=0.010). Excellent staining was achieved in 17 subjects (81%) with MLM and 8 (38%) with methylene blue (P=0.011). An acceptable or excellent radio-opacity of MLM was found in 13 subjects (62%). Using both the direct visibility and the radio-opacity of MLM, the appropriate localization rate for MLM was 100%. MLM provides superior pulmonary localization ability compared with methylene blue. PMID:24431917

  17. Impact of Atrial Fibrillation Ablation on Left Ventricular Filling Pressure and Left Atrial Remodeling

    PubMed Central

    dos Santos, Simone Nascimento; Henz, Benhur Davi; Zanatta, André Rodrigues; Barreto, José Roberto; Loureiro, Kelly Bianca; Novakoski, Clarissa; dos Santos, Marcus Vinícius Nascimento; Giuseppin, Fabio F.; Oliveira, Edna Maria; Leite, Luiz Roberto

    2014-01-01

    Background Left ventricular (LV) diastolic dysfunction is associated with new-onset atrial fibrillation (AF), and the estimation of elevated LV filling pressures by E/e' ratio is related to worse outcomes in patients with AF. However, it is unknown if restoring sinus rhythm reverses this process. Objective To evaluate the impact of AF ablation on estimated LV filling pressure. Methods A total of 141 patients underwent radiofrequency (RF) ablation to treat drug-refractory AF. Transthoracic echocardiography was performed 30 days before and 12 months after ablation. LV functional parameters, left atrial volume index (LAVind), and transmitral pulsed and mitral annulus tissue Doppler (e' and E/e') were assessed. Paroxysmal AF was present in 18 patients, persistent AF in 102 patients, and long-standing persistent AF in 21 patients. Follow-up included electrocardiographic examination and 24-h Holter monitoring at 3, 6, and 12 months after ablation. Results One hundred seventeen patients (82.9%) were free of AF during the follow-up (average, 18 ± 5 months). LAVind was reduced in the successful group (30.2 ± 10.6 mL/m2 to 22.6 ± 1.1 mL/m2, p < 0.001) but not in the non-successful group (37.7 ± 14.3 mL/m2 to 37.5 ± 14.5 mL/m2, p = ns). Improvement of LV filling pressure, assessed by a reduction in the E/e' ratio, was observed only after successful ablation (11.5 ± 4.5 vs. 7.1 ± 3.7, p < 0.001) and not in patients with recurrent AF (12.7 ± 4.4 vs. 12 ± 3.3, p = ns). The success rate was lower in the long-standing persistent AF patient group (57% vs. 87%, p = 0.001). Conclusion Successful AF ablation is associated with LA reverse remodeling and an improvement in LV filling pressure. PMID:25590928

  18. Effect of lipiodol and methylene blue on the thoracoscopic preoperative positioning.

    PubMed

    Zhang, Chuan-Yu; Yu, Hua-Long; Liu, Shi-He; Jiang, Gang; Wang, Yong-Jie

    2015-01-01

    The aim of this study was to compare and analyze the site-specific accuracy of a mixture of lipiodol and methylene blue (MLM) (0.6 ml, 1:5) and pure methylene blue (0.5 ml) in rabbit lungs. MLM and methylene blue were injected percutaneously under CT guidance, and the degree of staining was compared by biopsy of lung tissue. A 4-point system was used to evaluate site-specific accuracy at 6 h and 24 h after injection; for MLM, radiopacity was also evaluated radiographically. For positioning, a score of 2 was considered acceptable and a score of 3 excellent. The results indicated that the staining range of MLM was clearly smaller than that of methylene blue (0.6 vs. 1.0 cm, P<0.01), whereas the staining capacity of MLM was higher than that of methylene blue (2.8 vs. 2.2, P = 0.01). Staining rated as excellent was achieved in 81% of the MLM group versus 38% of the methylene blue group (P = 0.011), and radiopacity rated as acceptable or excellent was achieved in 62% of the MLM group. With good direct visibility, the suitable positioning rate of MLM reached 100%, better than that of methylene blue. In conclusion, percutaneous injection of MLM can be used for lung positioning and performs better than methylene blue alone, but investigation in humans is necessary to confirm the feasibility of clinical application.

  19. Effect of lipiodol and methylene blue on the thoracoscopic preoperative positioning

    PubMed Central

    Zhang, Chuan-Yu; Yu, Hua-Long; Liu, Shi-He; Jiang, Gang; Wang, Yong-Jie

    2015-01-01

    The aim of this study was to compare and analyze the site-specific accuracy of a mixture of lipiodol and methylene blue (MLM) (0.6 ml, 1:5) and pure methylene blue (0.5 ml) in rabbit lungs. MLM and methylene blue were injected percutaneously under CT guidance, and the degree of staining was compared by biopsy of lung tissue. A 4-point system was used to evaluate site-specific accuracy at 6 h and 24 h after injection; for MLM, radiopacity was also evaluated radiographically. For positioning, a score of 2 was considered acceptable and a score of 3 excellent. The results indicated that the staining range of MLM was clearly smaller than that of methylene blue (0.6 vs. 1.0 cm, P<0.01), whereas the staining capacity of MLM was higher than that of methylene blue (2.8 vs. 2.2, P = 0.01). Staining rated as excellent was achieved in 81% of the MLM group versus 38% of the methylene blue group (P = 0.011), and radiopacity rated as acceptable or excellent was achieved in 62% of the MLM group. With good direct visibility, the suitable positioning rate of MLM reached 100%, better than that of methylene blue. In conclusion, percutaneous injection of MLM can be used for lung positioning and performs better than methylene blue alone, but investigation in humans is necessary to confirm the feasibility of clinical application. PMID:26221301

  20. MLM Builder: An Integrated Suite for Development and Maintenance of Arden Syntax Medical Logic Modules

    PubMed Central

    Sailors, R. Matthew

    1997-01-01

    The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.

  1. Multivariate approach in popcorn genotypes using the Ward-MLM strategy: morpho-agronomic analysis and incidence of Fusarium spp.

    PubMed

    Kurosawa, R N F; do Amaral Junior, A T; Silva, F H L; Dos Santos, A; Vivas, M; Kamphorst, S H; Pena, G F

    2017-02-08

    Multivariate analyses are useful tools to estimate the genetic variability between accessions. In breeding programs, the Ward-Modified Location Model (MLM) multivariate method has been a powerful strategy to quantify variability using quantitative and qualitative variables simultaneously. The present study was proposed in view of the dearth of information about popcorn breeding programs under a multivariate approach using the Ward-MLM methodology. The objective of this study was thus to estimate the genetic diversity among 37 popcorn genotypes, aiming to identify divergent groups associated with morpho-agronomic traits and traits related to resistance to Fusarium spp. To this end, 7 qualitative and 17 quantitative variables were analyzed. The experiment was conducted in 2014, at Universidade Estadual do Norte Fluminense, located in Campos dos Goytacazes, RJ, Brazil. The Ward-MLM strategy allowed the identification of four groups as follows: Group I with 10 genotypes, Group II with 11 genotypes, Group III with 9 genotypes, and Group IV with 7 genotypes. Group IV was distant from the other groups, while groups I, II, and III were close to one another. Crosses between genotypes from group IV and those from the other groups allow the exploitation of heterosis. The Ward-MLM strategy provided an appropriate grouping of genotypes; ear weight, ear diameter, and grain yield were the traits that most contributed to the analysis of genetic diversity.

  2. Predicting Mouse Liver Microsomal Stability with “Pruned” Machine Learning Models and Public Data

    PubMed Central

    Perryman, Alexander L.; Stratton, Thomas P.; Ekins, Sean; Freundlich, Joel S.

    2015-01-01

    Purpose Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Methods Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). Results “Pruning” out the moderately unstable/moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 hour. Conclusions Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources. PMID:26415647
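
    A minimal sketch of the pruning idea with a simple Bayesian classifier (Gaussian naive Bayes stands in for the published Bayesian models); the surrogate half-life data, descriptors, and thresholds are made up, not the curated PubChem set.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      half_life = rng.exponential(1.0, 894)            # hours, toy surrogate data
      descriptors = np.column_stack([half_life + rng.normal(0, 0.5, 894),
                                     rng.normal(0, 1, 894)])

      # "Prune" the moderately stable/unstable middle band, keeping only clear cases.
      keep = (half_life < 0.5) | (half_life >= 1.0)
      X, y = descriptors[keep], (half_life[keep] >= 1.0).astype(int)   # 1 = half-life >= 1 h

      clf = GaussianNB()
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())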

  3. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, which contained no between-group or within-subject effects, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated for by large sample sizes. Accordingly, MLM-UN can be recommended for small sample sizes with about three measurement occasions, and for large sample sizes with about nine measurement occasions.
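
    A minimal simulation sketch of how the Type I error rate of an uncorrected repeated-measures ANOVA can be estimated under a non-spherical covariance; the covariance structure, sample size, and replication count are illustrative, not the paper's design.

      import numpy as np
      from scipy.stats import f

      rng = np.random.default_rng(5)
      n, m, reps, alpha = 20, 6, 2000, 0.05

      def ranova_p(x):                       # x: n subjects by m occasions, uncorrected F test
          grand = x.mean()
          occ = x.mean(axis=0)
          subj = x.mean(axis=1, keepdims=True)
          ss_occ = n * ((occ - grand) ** 2).sum()
          resid = x - occ - subj + grand
          ss_err = (resid ** 2).sum()
          F = (ss_occ / (m - 1)) / (ss_err / ((n - 1) * (m - 1)))
          return f.sf(F, m - 1, (n - 1) * (m - 1))

      # Non-spherical covariance: a mix of low and high inter-occasion correlations.
      r = np.full((m, m), 0.2)
      r[3:, 3:] = 0.8
      np.fill_diagonal(r, 1.0)

      rejections = sum(ranova_p(rng.multivariate_normal(np.zeros(m), r, size=n)) < alpha
                       for _ in range(reps))
      print("empirical Type I error rate:", rejections / reps)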

  4. Using multiple group modeling to test moderators in meta-analysis.

    PubMed

    Schoemann, Alexander M

    2016-12-01

    Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis a model is fit to each level of the moderator simultaneously. By constraining parameters across groups any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis where both the mean and the between studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
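
    A simplified analogue of the moderator test (not the SEM/MLM multiple-group machinery itself): a DerSimonian-Laird random-effects meta-analysis within each level of the moderator, followed by a Wald comparison of the pooled effects, on made-up effect sizes and variances.

      import numpy as np
      from scipy.stats import norm

      def dersimonian_laird(yi, vi):
          w = 1.0 / vi
          mu_fixed = (w * yi).sum() / w.sum()
          q = (w * (yi - mu_fixed) ** 2).sum()
          c = w.sum() - (w ** 2).sum() / w.sum()
          tau2 = max(0.0, (q - (len(yi) - 1)) / c)       # between-study variance
          w_star = 1.0 / (vi + tau2)
          mu = (w_star * yi).sum() / w_star.sum()
          return mu, 1.0 / w_star.sum()                   # pooled effect and its variance

      yi_a = np.array([0.30, 0.45, 0.25, 0.50]); vi_a = np.array([0.02, 0.03, 0.04, 0.02])
      yi_b = np.array([0.05, 0.15, 0.00, 0.10]); vi_b = np.array([0.03, 0.02, 0.05, 0.03])

      mu_a, var_a = dersimonian_laird(yi_a, vi_a)
      mu_b, var_b = dersimonian_laird(yi_b, vi_b)
      z = (mu_a - mu_b) / np.sqrt(var_a + var_b)          # Wald test of equal pooled effects
      print(f"group A: {mu_a:.3f}, group B: {mu_b:.3f}, z = {z:.2f}, p = {2 * norm.sf(abs(z)):.4f}")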

  5. Defining the structure of undergraduate medical leadership and management teaching and assessment in the UK.

    PubMed

    Stringfellow, Thomas D; Rohrer, Rebecca M; Loewenthal, Lola; Gorrard-Smith, Connor; Sheriff, Ibrahim H N; Armit, Kirsten; Lees, Peter D; Spurgeon, Peter C

    2014-10-10

    Abstract Medical leadership and management (MLM) skills are essential in preventing failings of healthcare; it is unknown how these attitudes can be developed during undergraduate medical education. This paper aims to quantify interest in MLM and recommends preferred methods of teaching and assessment at UK medical schools. Two questionnaires were developed, one sent to all UK medical school faculties, to assess executed and planned curriculum changes, and the other sent to medical students nationally to assess their preferences for teaching and assessment. Forty-eight percent of UK medical schools and 260 individual student responses were recorded. Student responses represented 60% of UK medical schools. 65% of schools valued or highly valued the importance of teaching MLM topics, compared with 93.2% of students. Students' favoured teaching methods were seminars or lectures (89.4%) and audit and quality improvement (QI) projects (77.8%). Medical schools preferred portfolio entries (55%) and presentations (35%) as assessment methods, whilst simulation exercises (76%) and audit reports (61%) were preferred by students. Preferred methods encompass experiential learning or simulation and a greater emphasis should be placed on encouraging student audit and QI projects. The curriculum changes necessary could be achieved via further integration into future editions of Tomorrow's Doctors.

  6. [Reliability and validity of Meaningful Life Measure-Chinese Revised in Chinese college students].

    PubMed

    Xiao, Rong; Lai, Qiao-Zhen; Yang, Jia-Ping

    2016-04-20

    To test the reliability and validity of Meaningful Life Measure-Chinese Revised (MLM-CR) in Chinese college students. A total of 1035 college students were evaluated with MLM-CR, Satisfaction with Life Scale (SWLS), Purpose in Life (PIL) and Patient Health Questionnaire-2 (PHQ-2), and 120 of the students were examined with PIL-SF twice. All the items in MLM-CR had good discrimination indexes (r=0.753-0.838, P<0.001). Confirmatory factor analysis confirmed the hypothesized five-factor model of MLM-CR (χ²/df = 3.4, GFI=0.946, AGFI=0.924, RMR=0.069, NFI=0.953, CFI=0.966, RMSEA=0.048). The total internal consistency reliability of MLM-CR was 0.942, and the alpha coefficients of the 5 dimensions ranged from 0.782 to 0.877; the total split-half reliability was 0.920, and the split-half reliability of the 5 dimensions ranged from 0.752 to 0.830; the total test-retest reliability was 0.871, and the test-retest reliability of the 5 dimensions ranged from 0.783 to 0.805. The criterion validity of MLM-CR in correlation with SWLS, PIL and PHQ-2 was 0.66, 0.755 and -0.388, respectively (P<0.01). The average score of MLM-CR of the college students was 5.20 ± 0.90, and the scores were significantly higher in female students than in the male students (P<0.001). MLM-CR has good psychometric properties for application in comprehensive evaluation of personal meaning in life.
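
    A minimal sketch of the internal-consistency and split-half statistics reported above (Cronbach's alpha and Spearman-Brown corrected split-half reliability), computed on toy item scores rather than the MLM-CR data.

      import numpy as np

      rng = np.random.default_rng(6)
      latent = rng.normal(0, 1, (1035, 1))
      items = latent + rng.normal(0, 0.8, (1035, 15))          # 15 correlated toy items

      k = items.shape[1]
      item_var = items.var(axis=0, ddof=1).sum()
      total_var = items.sum(axis=1).var(ddof=1)
      alpha = k / (k - 1) * (1 - item_var / total_var)         # Cronbach's alpha

      odd, even = items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1)
      r = np.corrcoef(odd, even)[0, 1]
      split_half = 2 * r / (1 + r)                             # Spearman-Brown correction
      print(f"alpha = {alpha:.3f}, split-half reliability = {split_half:.3f}")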

  7. Using Multilevel Modeling in Counseling Research

    ERIC Educational Resources Information Center

    Lynch, Martin F.

    2012-01-01

    This conceptual and practical overview of multilevel modeling (MLM) for researchers in counseling and development provides guidelines on setting up SPSS to perform MLM and an example of how to present the findings. It also provides a discussion on how counseling and developmental researchers can use MLM to address their own research questions.…

  8. Robust tests for multivariate factorial designs under heteroscedasticity.

    PubMed

    Vallejo, Guillermo; Ato, Manuel

    2012-06-01

    The question of how to analyze several multivariate normal mean vectors when normality and covariance homogeneity assumptions are violated is considered in this article. For the two-way MANOVA layout, we address this problem adapting results presented by Brunner, Dette, and Munk (BDM; 1997) and Vallejo and Ato (modified Brown-Forsythe [MBF]; 2006) in the context of univariate factorial and split-plot designs and a multivariate version of the linear model (MLM) to accommodate heterogeneous data. Furthermore, we compare these procedures with the Welch-James (WJ) approximate degrees of freedom multivariate statistics based on ordinary least squares via Monte Carlo simulation. Our numerical studies show that of the methods evaluated, only the modified versions of the BDM and MBF procedures were robust to violations of underlying assumptions. The MLM approach was only occasionally liberal, and then by only a small amount, whereas the WJ procedure was often liberal if the interactive effects were involved in the design, particularly when the number of dependent variables increased and total sample size was small. On the other hand, it was also found that the MLM procedure was uniformly more powerful than its most direct competitors. The overall success rate was 22.4% for the BDM, 36.3% for the MBF, and 45.0% for the MLM.

  9. The Dubious Benefits of Multi-Level Modeling

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2007-01-01

    This paper presents an argument against the wider adoption of complex forms of data analysis, using multi-level modeling (MLM) as an extended case study. MLM was devised to overcome some deficiencies in existing datasets, such as the bias caused by clustering. The paper suggests that MLM has an unclear theoretical and empirical basis, has not led…

  10. The recovery of 13C-labeled oleic acid in rat lymph after administration of long chain triacylglycerols or specific structured triacylglycerols.

    PubMed

    Vistisen, Bodil; Mu, Huiling; Høy, Carl-Erik

    2006-09-01

    Consumption of specific structured triacylglycerols, MLM (M = medium chain fatty acid, L = long chain fatty acid), delivers fast energy and long chain fatty acids to the organism. The purpose of the present study was to compare lymphatic absorption of ¹³C-labeled MLM and ¹³C-labeled LLL in rats. Stable isotope labeling enables the separation of the endogenous and exogenous fatty acids. Lymph was collected during 24 h following administration of MLM or LLL. Lymph fatty acid composition and ¹³C-enrichment were determined and quantified by gas chromatography combustion isotope ratio mass spectrometry. The recovery of 18:1n-9 was higher after administration of LLL compared with MLM (58.1% ± 7.4% and 29.1% ± 3.9%, respectively, P < 0.001). This may be due to a higher chylomicron formation stimulated by a higher amount of long chain fatty acids in the intestine after LLL compared with MLM administration. This was confirmed by the tendencies of higher lymphatic transport of endogenous fatty acids. The study revealed a higher lymphatic recovery of the administered long chain fatty acids after LLL compared with MLM consumption.

  11. Use of Cardiac Computed Tomography for Ventricular Volumetry in Late Postoperative Patients with Tetralogy of Fallot.

    PubMed

    Kim, Ho Jin; Mun, Da Na; Goo, Hyun Woo; Yun, Tae-Jin

    2017-04-01

    Cardiac computed tomography (CT) has emerged as an alternative to magnetic resonance imaging (MRI) for ventricular volumetry. However, the clinical use of cardiac CT requires external validation. Both cardiac CT and MRI were performed prior to pulmonary valve implantation (PVI) in 11 patients (median age, 19 years) who had undergone total correction of tetralogy of Fallot during infancy. The simplified contouring method (MRI) and semiautomatic 3-dimensional region-growing method (CT) were used to measure ventricular volumes. The volumetric indices measured by CT and MRI generally correlated well with each other, with the exception of the left ventricular end-systolic volume index (LV-ESVI): the right ventricular end-diastolic volume index (RV-EDVI) (r=0.88, p<0.001), the right ventricular end-systolic volume index (RV-ESVI) (r=0.84, p=0.001), the left ventricular end-diastolic volume index (LV-EDVI) (r=0.90, p=0.001), and the LV-ESVI (r=0.55, p=0.079). While the EDVIs measured by CT were significantly larger than those measured by MRI (median RV-EDVI: 197 mL/m² vs. 175 mL/m², p=0.008; median LV-EDVI: 94 mL/m² vs. 92 mL/m², p=0.026), no significant differences were found for the RV-ESVI or LV-ESVI. The EDVIs measured by cardiac CT were greater than those measured by MRI, whereas the ESVIs measured by CT and MRI were comparable. The volumetric characteristics of these 2 diagnostic modalities should be taken into account when indications for late PVI after tetralogy of Fallot repair are assessed.

  12. Induction and treatment of anergy in murine leprosy

    PubMed Central

    Juarez-Ortega, Mario; Hernandez, Víctor G; Arce-Paredes, Patricia; Villanueva, Enrique B; Aguilar-Santelises, Miguel; Rojas-Espinosa, Oscar

    2015-01-01

    Leprosy is a disease consisting of a spectrum of clinical, bacteriological, histopathological and immunological manifestations. Tuberculoid leprosy is frequently recognized as the benign polar form of the disease, while lepromatous leprosy is regarded as the malignant form. The different forms of leprosy depend on the genetic and immunological characteristics of the patient and on the characteristics of the leprosy bacillus. The malignant manifestations of lepromatous leprosy result from the mycobacterial-specific anergy that develops in this form of the disease. Using murine leprosy as a model of anergy in this study, we first induced the development of anergy to Mycobacterium lepraemurium (MLM) in mice and then attempted to reverse it by the administration of dialysable leucocyte extracts (DLE) prepared from healthy (HLT), BCG-inoculated and MLM-inoculated mice. Mice inoculated with either MLM or BCG developed a robust cell-mediated immune response (CMI) that was temporary in the MLM-inoculated group and long-lasting in the BCG-inoculated group. DLE were prepared from the spleens of MLM- and BCG-inoculated mice at the peak of CMI. Independent MLM intradermally-inoculated groups were treated every other day with HLT-DLE, BCG-DLE or MLM-DLE, and the effect was documented for 98 days. DLE administered at a dose of 1.0 U (1 × 106 splenocytes) did not affect the evolution of leprosy, while DLE given at a dose of 0.1 U showed beneficial effects regardless of the DLE source. The dose but not the specificity of DLE was the determining factor for reversing anergy. PMID:25529580

  13. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective approach to example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
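
    A minimal sketch of the multiple-linear-mappings idea: cluster training features, fit one linear regressor per cluster, and pick a regressor at test time. Selecting by nearest cluster centroid is a simplification of the paper's nearest-neighbor reconstruction-error rule, and the synthetic arrays stand in for low/high-resolution patch pairs.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(7)
      X = rng.normal(0, 1, (2000, 8))                       # toy "low-resolution" features
      W = rng.normal(0, 1, (8, 16))
      Y = np.tanh(X) @ W                                     # toy nonlinear "high-resolution" target

      km = KMeans(n_clusters=16, n_init=4, random_state=0).fit(X)
      regs = [Ridge(alpha=1e-2).fit(X[km.labels_ == c], Y[km.labels_ == c])
              for c in range(16)]                            # one linear mapping per cluster

      x_test = rng.normal(0, 1, (1, 8))
      # Choose the mapping whose cluster centroid lies closest to the input.
      dists = [np.linalg.norm(x_test - km.cluster_centers_[c]) for c in range(16)]
      best = int(np.argmin(dists))
      y_hat = regs[best].predict(x_test)
      print("selected regressor:", best, "prediction shape:", y_hat.shape)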

  14. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. A dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793

  15. Microbial Load Monitor

    NASA Technical Reports Server (NTRS)

    Gibson, S. F.; Royer, E. R.

    1979-01-01

    The Microbial Load Monitor (MLM) is an automated and computerized system for detection and identification of microorganisms. Additionally, the system is designed to enumerate and provide antimicrobic susceptibility profiles for medically significant bacteria. The system is designed to accomplish these tasks in a time of 13 hours or less versus the traditional time of 24 hours for negatives and 72 hours or more for positives usually required for standard microbiological analysis. The MLM concept differs from other methods of microbial detection in that the system is designed to accept raw untreated clinical samples and to selectively identify each group or species that may be present in a polymicrobic sample.

  16. Applications of Genomic Selection in Breeding Wheat for Rust Resistance.

    PubMed

    Ornella, Leonardo; González-Camacho, Juan Manuel; Dreisigacker, Susanne; Crossa, Jose

    2017-01-01

    Many methods have been developed to predict untested phenotypes in schemes commonly used in genomic selection (GS) breeding. The use of GS for predicting disease resistance has its own particularities: (a) most populations show additivity in quantitative adult plant resistance (APR); (b) resistance needs effective combinations of major and minor genes; and (c) the phenotype is commonly expressed in ordinal categorical traits, whereas most parametric applications assume that the response variable is continuous and normally distributed. Machine learning methods (MLM) can take advantage of examples (data) that capture characteristics of interest from an unknown underlying probability distribution (i.e., they are data-driven). We introduce some state-of-the-art MLM capable of predicting rust resistance in wheat. We also present two parametric R packages for the reader to compare.

  17. Controlling contamination in Mo/Si multilayer mirrors by Si surface capping modifications

    NASA Astrophysics Data System (ADS)

    Malinowski, Michael E.; Steinhaus, Chip; Clift, W. Miles; Klebanoff, Leonard E.; Mrowka, Stanley; Soufli, Regina

    2002-07-01

    The performance of Mo/Si multilayer mirrors (MLMs) used to reflect extreme ultraviolet (EUV) radiation in an EUV + hydrocarbon (HC) vapor environment can be improved by optimizing the silicon capping layer thickness on the MLM in order to minimize the initial buildup of carbon on MLMs. Carbon buildup is undesirable since it can absorb EUV radiation and reduce MLM reflectivity. A set of Mo/Si MLMs deposited on Si wafers was fabricated such that each MLM had a different Si capping layer thickness, ranging from 2 nm to 7 nm. Samples from each MLM wafer were exposed to a combination of EUV light and HC vapors at the Advanced Light Source (ALS) synchrotron in order to determine whether the Si capping layer thickness affected the carbon buildup on the MLMs. It was found that the capping layer thickness had a major influence on this 'carbonizing' tendency, with the 3 nm layer thickness providing the best initial resistance to carbonizing and the accompanying EUV reflectivity loss in the MLM. The Si capping layer thickness deposited on a typical EUV optic is 4.3 nm. Measurements of the absolute reflectivities performed on the Calibration and Standards beamline at the ALS indicated that the EUV reflectivity of the 3 nm-capped MLM was actually slightly higher than that of the normal, 4 nm Si-capped sample. These results show that the use of a 3 nm capping layer represents an improvement over the 4 nm layer, since the 3 nm layer has both a higher absolute reflectivity and better initial resistance to carbon buildup. The results also support the general concept of minimizing the electric field intensity at the MLM surface to minimize photoelectron production and, correspondingly, carbon buildup in an EUV + HC vapor environment.

  18. Use of molecular oxygen to reduce EUV-induced carbon contamination of optics

    NASA Astrophysics Data System (ADS)

    Malinowski, Michael E.; Grunow, Philip A.; Steinhaus, Chip; Clift, W. Miles; Klebanoff, Leonard E.

    2001-08-01

    Carbon deposition and removal experiments on Mo/Si multilayer mirror (MLM) samples were performed using extreme ultraviolet (EUV) light on Beamline 12.0.1.2 of the Advanced Light Source, Lawrence Berkeley National Laboratory (LBNL). Carbon (C) was deposited onto Mo/Si MLM samples when hydrocarbon vapors were intentionally introduced into the MLM test chamber in the presence of EUV at 13.44 nm (92.3 eV). The carbon deposits so formed were removed by molecular oxygen + EUV. The MLM reflectivities and photoemission were measured in situ during these carbon deposition and cleaning procedures. Auger Electron Spectroscopy (AES) sputter-through profiling of the samples was performed after the experimental runs to help determine the C layer thickness and the near-surface compositional depth profiles of all samples studied. EUV powers were varied from ~0.2 mW/mm² to 3 mW/mm² (at 13.44 nm) during both deposition and cleaning experiments, and the oxygen pressure ranged from ~5×10⁻⁵ to 5×10⁻⁴ Torr during the cleaning experiments. C deposition rates as high as ~8 nm/hr were observed, while cleaning rates as high as ~5 nm/hr could be achieved when the highest oxygen pressures were used. A limited set of experiments involving intentional oxygen-only exposure of the MLM samples showed that slow oxidation of the MLM surface could occur.

  19. Generation of development environments for the Arden Syntax.

    PubMed Central

    Bång, M.; Eriksson, H.

    1997-01-01

    Providing appropriate development environments for specialized languages requires a significant development and maintenance effort. Specialized environments are therefore expensive when compared to their general-language counterparts. The Arden Syntax for Medical Logic Modules (MLM) is a standardized language for representing medical knowledge. We have used PROTEGE-II, a knowledge-engineering environment, to generate a number of experimental development environments for the Arden Syntax. MEDAILLE is the resulting MLM editor, which provides a user-friendly environment that allows users to create and modify MLM definitions. Although MEDAILLE is a generated editor, it offers functionality similar to that of other MLM editors developed using traditional programming techniques, while reducing the programming effort. We discuss how developers can use PROTEGE-II to generate development environments for other standardized languages and for general programming languages. PMID:9357639

  20. A Corrected Formulation of the Multilayer Model (MLM) for Inferring Gaseous Dry Deposition to Vegetated Surfaces

    NASA Technical Reports Server (NTRS)

    Saylor, Rick D.; Wolfe, Glenn M.; Meyers, Tilden P.; Hicks, Bruce B.

    2014-01-01

    The Multilayer Model (MLM) has been used for many years to infer dry deposition fluxes from measured trace species concentrations and standard meteorological measurements for national networks in the U.S., including the U.S. Environmental Protection Agency's Clean Air Status and Trends Network (CASTNet). MLM utilizes a resistance analogy to calculate deposition velocities appropriate for whole vegetative canopies, while employing a multilayer integration to account for vertically varying meteorology, canopy morphology and radiative transfer within the canopy. However, the MLM formulation, as it was originally presented and as it has been subsequently employed, contains a non-physical representation related to the leaf-level quasi-laminar boundary layer resistance that affects the calculation of the total canopy resistance. In this note, the non-physical representation of the canopy resistance as originally formulated in MLM is discussed and a revised, physically consistent, formulation is suggested as a replacement. The revised canopy resistance formulation reduces estimates of HNO3 deposition velocities by as much as 38% during mid-day as compared to values generated by the original formulation. Inferred deposition velocities for SO2 and O3 are not significantly altered by the change in formulation (less than 3%). Inferred deposition loadings of oxidized and total nitrogen from CASTNet data may be reduced by 10-20% and 5-10%, respectively, for the Eastern U. S. when employing the revised formulation of MLM as compared to the original formulation.
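
    For readers unfamiliar with the resistance analogy mentioned above, the sketch below (Python) shows the textbook single-layer form of a resistance-network deposition velocity, v_d = 1/(R_a + R_b + R_c). The resistance values and the simple series sum are illustrative assumptions only; they are not the MLM's actual multilayer canopy integration or its revised boundary layer formulation.

      # Generic resistance-analogy deposition velocity, v_d = 1 / (R_a + R_b + R_c).
      # Illustrative only: the numbers and the single-layer series sum below are NOT
      # the MLM's multilayer canopy integration, just the textbook analogue.
      def deposition_velocity(r_a, r_b, r_c):
          """All resistances in s/m; returns v_d in m/s."""
          return 1.0 / (r_a + r_b + r_c)

      # Hypothetical mid-day values for a reactive gas such as HNO3 (near-zero canopy resistance)
      v_hno3 = deposition_velocity(r_a=20.0, r_b=10.0, r_c=1.0)
      # ... and for O3, where the canopy (stomatal + cuticular) resistance dominates
      v_o3 = deposition_velocity(r_a=20.0, r_b=15.0, r_c=100.0)
      print(f"v_d(HNO3) ~ {v_hno3:.3f} m/s, v_d(O3) ~ {v_o3:.3f} m/s")

    In the MLM itself this kind of calculation is repeated through the canopy layers and integrated, which is where the leaf-level quasi-laminar boundary layer resistance discussed in the note enters.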

  1. [Extracellular fluid, plasma and interstitial volume in cirrhotic patients without clinical edema or ascites].

    PubMed

    Noguera Viñas, E C; Hames, W; Mothe, G; Barrionuevo, M P

    1989-01-01

    Extracellular fluid volume (E.C.F.) and plasma volume (P.V.) were measured with sodium sulfate labeled with 35S and 131I human serum albumin, respectively, by the dilution technique in control subjects and in cirrhotic patients without clinical ascites or edema, renal or hepatic failure, gastrointestinal bleeding or diuretics. Results are expressed as mean +/- SD in both ml/m2 and ml/kg. In normal subjects E.C.F. (n = 8) was 7,533 +/- 817 ml/m2 (201.3 +/- 182 ml/kg), P.V. (n = 11) 1,767 +/- 337 ml/m2 (47.2 +/- 9.3 ml/kg), and interstitial fluid (I.S.F.) (n = 7) 5,758 +/- 851 ml/m2 (Table 2). In cirrhotic patients E.C.F. (n = 11) was 10,318 +/- 2,980 ml/m2 (261.7 +/- 76.8 ml/kg), P.V. (n = 12) 2,649 +/- 558 ml/m2 (67.7 +/- 15.6 ml/kg) and I.S.F. (n = 11) 7,866 +/- 2,987 ml/m2 (Table 3). Cirrhotic patients compared with normal subjects have hypervolemia due to a significant E.C.F. and P.V. expansion (p less than 0.02 and less than 0.001 respectively) (Fig. 1). Reasons for E.C.F. and P.V. abnormalities in cirrhotic patients may reflect urinary sodium retention related to portal hypertension, which stimulates aldosterone release, or enhanced renal tubular sensitivity to the hormone. However, it is also possible that these patients, in the presence of hypoalbuminemia (Table 1), have no clinical edema or ascites due to increased glomerular filtration, suppressed release of vasopressin, increased natriuretic factor, and urinary prostaglandin excretion, in response to the intravascular expansion, all of which increased solute and water delivery to the distal nephron and improved renal water excretion. We conclude that in our clinical experience cirrhotic patients without ascites or edema have hypervolemia because of a disturbance in E.C.F.

  2. Simplified single plane echocardiography is comparable to conventional biplane two-dimensional echocardiography in the evaluation of left atrial volume: a study validated by three-dimensional echocardiography in 143 individuals.

    PubMed

    Vieira-Filho, Normando G; Mancuso, Frederico J N; Oliveira, Wercules A A; Gil, Manuel A; Fischer, Cláudio H; Moises, Valdir A; Campos, Orlando

    2014-03-01

    The left atrial volume index (LAVI) is a biomarker of diastolic dysfunction and a predictor of cardiovascular events. Three-dimensional echocardiography (3DE) is highly accurate for LAVI measurements but is not widely available. Furthermore, biplane two-dimensional echocardiography (B2DE) may occasionally not be feasible due to a suboptimal two-chamber apical view. Simplified single plane two-dimensional echocardiography (S2DE) could overcome these limitations. We aimed to compare the reliability of S2DE with other validated echocardiographic methods in the measurement of the LAVI. We examined 143 individuals (54 ± 13 years old; 112 with heart disease and 31 healthy volunteers; all with sinus rhythm, with a wide range of LAVI). The results for all the individuals were compared with B2DE-derived LAVIs and validated using 3DE. The LAVIs, as determined using S2DE (32.7 ± 13.1 mL/m(2)), B2DE (31.9 ± 12.7 mL/m(2)), and 3DE (33.1 ± 13.4 mL/m(2)), were not significantly different from each other (P = 0.85). The S2DE-derived LAVIs correlated significantly with those obtained using both B2DE (r = 0.98; P < 0.001) and 3DE (r = 0.93; P < 0.001). The mean difference between the S2DE and B2DE measurements was <1.0 mL/m(2). Using the American Society of Echocardiography criteria for grading LAVI enlargement (normal, mild, moderate, severe), we observed an excellent agreement between the S2DE- and B2DE-derived classifications (κ = 0.89; P < 0.001). S2DE is a simple, rapid, and reliable method for LAVI measurement that may expand the use of this important biomarker in routine echocardiographic practice. © 2013, Wiley Periodicals, Inc.

  3. Regularization of Mars Reconnaissance Orbiter CRISM along-track oversampled hyperspectral imaging observations of Mars

    NASA Astrophysics Data System (ADS)

    Kreisch, C. D.; O'Sullivan, J. A.; Arvidson, R. E.; Politte, D. V.; He, L.; Stein, N. T.; Finkel, J.; Guinness, E. A.; Wolff, M. J.; Lapôtre, M. G. A.

    2017-01-01

    Mars Reconnaissance Orbiter Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) hyperspectral image data have been acquired in an along-track oversampled (ATO) mode with the intent of processing the data to better than the nominal ∼18 m/pixel ground resolution. We have implemented an iterative maximum log-likelihood method (MLM) that utilizes the instrument spectral and spatial transfer functions and includes a penalty function to regularize the data. Products are produced both in sensor space and as projected hyperspectral image cubes at 12 m/pixel. Preprocessing steps include retrieval of surface single scattering albedos (SSA) using the Hapke Function and DISORT-based radiative modeling of atmospheric gases and aerosols. Resultant SSA cubes are despiked to remove extrema and tested to ensure that the remaining data are Poisson-distributed, an underlying assumption for the MLM algorithm implementation. Two examples of processed ATO data sets are presented. ATO0002EC79 covers the route taken by the Curiosity rover during its initial ascent of Mount Sharp in Gale Crater. SSA data are used to model mineral abundances and grain sizes predicted to be present in the Namib barchan sand dune sampled and analyzed by Curiosity. CRISM based results compare favorably to in situ results derived from Curiosity's measurement campaign. ATO0002DDF9 covers Marathon Valley on the Cape Tribulation rim segment of Endeavour Crater. SSA spectra indicate the presence of a minor component of Fe3+ and Mg2+ smectites on the valley floor and walls. Localization to 12 m/pixel provided the detailed spatial information needed for the Opportunity rover to traverse to and characterize those outcrops that have the deepest absorptions. The combination of orbital and rover-based data show that the smectite-bearing outcrops in Marathon Valley are impact breccias that are basaltic in composition and that have been isochemically altered in a low water to rock environment.
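
    As a rough illustration of the Poisson-likelihood machinery referred to above (and only that: the actual pipeline folds in the CRISM spectral and spatial transfer functions and a regularizing penalty, none of which are reproduced here), a generic unpenalized maximum-likelihood update for a linear forward model with Poisson-distributed data can be written as a multiplicative, Richardson-Lucy-type iteration. All dimensions and the forward matrix below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical forward model: m oversampled measurements of n scene pixels.
      n, m = 50, 120
      A = np.abs(rng.normal(size=(m, n)))        # stand-in for spatial/spectral transfer functions
      A /= A.sum(axis=1, keepdims=True)
      x_true = rng.uniform(0.5, 2.0, size=n)
      y = rng.poisson(A @ x_true * 200) / 200.0   # Poisson-distributed data (scaled)

      # Unpenalized Poisson maximum likelihood via multiplicative (Richardson-Lucy-type) updates.
      x = np.ones(n)
      ones_backproj = A.T @ np.ones(m)
      for _ in range(200):
          ratio = y / np.clip(A @ x, 1e-12, None)
          x *= (A.T @ ratio) / ones_backproj

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))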

  4. Lymphatic recovery of exogenous oleic acid in rats on long chain or specific structured triacylglycerol diets.

    PubMed

    Vistisen, Bodil; Mu, Huiling; Høy, Carl-Erik

    2006-09-01

    Specific structured triacylglycerols, MLM (M = medium-chain fatty acid, L = long-chain fatty acid), rapidly deliver energy and long-chain fatty acids to the body and are used for longer periods in human enteral feeding. In the present study rats were fed diets of 10 wt% MLM or LLL (L = oleic acid [18:1 n-9], M = caprylic acid [8:0]) for 2 wk. Then lymph was collected 24 h following administration of a single bolus of 13C-labeled MLM or LLL. The total lymphatic recovery of exogenous 18:1 n-9 24 h after administration of a single bolus of MLM or LLL was similar in rats on the LLL diet (43% and 45%, respectively). However, the recovery of exogenous 18:1 n-9 was higher after a single bolus of MLM compared with a bolus of LLL in rats on the MLM diet (40% and 24%, respectively, P = 0.009). The recovery of lymphatic 18:1 n-9 of the LLL bolus tended to depend on the diet triacylglycerol structure and composition (P = 0.07). This study demonstrated that with a diet containing specific structured triacylglycerol, the lymphatic recovery of 18:1 n-9 after a single bolus of fat was dependent on the triacylglycerol structure of the bolus. This indicates that the lymphatic recovery of long-chain fatty acids from a single meal depends on the overall long-chain fatty acid composition of the habitual diet. This could have implications for enteral feeding for longer periods.

  5. Detecting Intervention Effects in a Cluster-Randomized Design Using Multilevel Structural Equation Modeling for Binary Responses

    PubMed Central

    Cho, Sun-Joo; Preacher, Kristopher J.; Bottge, Brian A.

    2015-01-01

    Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test–post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test scores that are most often used in MLM are summed item responses (or total scores). In prior research, there have been concerns regarding measurement error in the use of total scores in using MLM. To correct for measurement error in the covariate and outcome, a theoretical justification for the use of multilevel structural equation modeling (MSEM) has been established. However, MSEM for binary responses has not been widely applied to detect intervention effects (group differences) in intervention studies. In this article, the use of MSEM for intervention studies is demonstrated and the performance of MSEM is evaluated via a simulation study. Furthermore, the consequences of using MLM instead of MSEM are shown in detecting group differences. Results of the simulation study showed that MSEM performed adequately as the number of clusters, cluster size, and intraclass correlation increased and outperformed MLM for the detection of group differences. PMID:29881032

  6. Detecting Intervention Effects in a Cluster-Randomized Design Using Multilevel Structural Equation Modeling for Binary Responses.

    PubMed

    Cho, Sun-Joo; Preacher, Kristopher J; Bottge, Brian A

    2015-11-01

    Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test-post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test scores that are most often used in MLM are summed item responses (or total scores). In prior research, there have been concerns regarding measurement error in the use of total scores in using MLM. To correct for measurement error in the covariate and outcome, a theoretical justification for the use of multilevel structural equation modeling (MSEM) has been established. However, MSEM for binary responses has not been widely applied to detect intervention effects (group differences) in intervention studies. In this article, the use of MSEM for intervention studies is demonstrated and the performance of MSEM is evaluated via a simulation study. Furthermore, the consequences of using MLM instead of MSEM are shown in detecting group differences. Results of the simulation study showed that MSEM performed adequately as the number of clusters, cluster size, and intraclass correlation increased and outperformed MLM for the detection of group differences.

  7. Left Atrial Size, Chemosensitivity, and Central Sleep Apnea in Heart Failure

    PubMed Central

    Calvin, Andrew D.; Somers, Virend K.; Johnson, Bruce D.; Scott, Christopher G.

    2014-01-01

    BACKGROUND: Central sleep apnea (CSA) is common among patients with heart failure (HF) and is promoted by elevated CO2 chemosensitivity. Left atrial size is a marker of the hemodynamic severity of HF. The aim of this study was to determine if left atrial size predicts chemosensitivity to CO2 and CSA in patients with HF. METHODS: Patients with HF with left ventricular ejection fraction ≤ 35% underwent polysomnography for detection of CSA, echocardiography, and measurement of CO2 chemosensitivity. CSA was defined as an apnea-hypopnea index (AHI) ≥ 15/h with ≥ 50% central apneic events. The relation of clinical and echocardiographic parameters to chemosensitivity and CSA was evaluated by linear regression, estimation of ORs, and receiver operator characteristics. RESULTS: Of 46 subjects without OSA who had complete data for analysis, 25 had CSA. The only parameter that significantly correlated with chemosensitivity was left atrial volume index (LAVI) (r = 0.40, P < .01). LAVI was greater in those with CSA than those without CSA (59.2 mL/m2 vs 36.4 mL/m2, P < .001) and significantly correlated with log-transformed AHI (r = 0.46, P = .001). LAVI was the best predictor of CSA (area under the curve = 0.83). A LAVI ≤ 33 mL/m2 was associated with 22% risk for CSA, while LAVI ≥ 53 mL/m2 was associated with 92% risk for CSA. CONCLUSIONS: Increased LAVI is associated with heightened CO2 chemosensitivity and greater frequency of CSA. LAVI may be useful to guide referral for polysomnography for detection of CSA in patients with HF. PMID:24522490

  8. The effect of n-hexane on the gonad toxicity of female mice.

    PubMed

    Liu, Jin; Huang, Hui Ling; Pang, Fen; Zhang, Wen Chang

    2012-04-01

    To investigate the toxic effects of n-hexane on the gonads of female mice, n-hexane was administered to four groups of mice by inhalation at doses of 0, 3.0, 15.1, and 75.8 mL/m3, respectively, for five weeks. Each group consisted of 10 mice, of which half were injected first with 10 IU of pregnant mare serum gonadotrophin (PMSG) on the 33rd day, and then with 10 IU of human chorionic gonadotrophin (HCG) 48 hrs later. After the treatment, mouse sera were sampled and luteinizing hormone (LH), follicle-stimulating hormone (FSH), estradiol (E2), and progesterone (P4) levels were measured by electrochemiluminescence immunoassays (ECLIA). In each group, the right ovaries of the non-super-ovulated mice were stained with hematoxylin and eosin, while ovaries on the left side were prepared with the TUNEL method in order to detect apoptotic cells. The duration of the diestrus stage decreased significantly (P < 0.05) in the 75.8 mL/m3 group. All super-ovulated mice in each treatment group produced fewer eggs than those in the control group (P < 0.05). The number of follicles in ovaries in the 75.8 mL/m3 group was smaller compared with the control group (P < 0.05). The serum P4 levels in each treatment group were lower than those in the control group (F = 6.196, P < 0.01). The cell apoptotic rate in the 75.8 mL/m3 group was higher (P < 0.05). n-Hexane may act directly via alterations in hormone secretion and promotion of granulosa cell apoptosis, which may be one of the important mechanisms of n-hexane-induced ovary impairment in mice.

  9. Single-solute and bisolute sorption of phenol and trichloroethylene from aqueous solution onto modified montmorillonite and application of sorption models.

    PubMed

    Wu, C D; Wang, L; Hu, C X; He, M H

    2013-01-01

    The single-solute and bisolute sorption behaviour of phenol and trichloroethylene, two organic compounds with different structures, onto cetyltrimethylammonium bromide (CTAB)-montmorillonite was studied. The monolayer Langmuir model (MLM) and empirical Freundlich model (EFM) were applied to the single-solute sorption of phenol or trichloroethylene from water onto monolayer or multilayer CTAB-montmorillonite. The parameters contained in the MLM and EFM were determined for each solute by fitting to the single-solute isotherm data, and subsequently utilized in binary sorption. The extended Langmuir model (ELM) coupled with the single-solute MLM and the ideal adsorbed solution theory (IAST) coupled with the single-solute EFM were used to predict the binary sorption of phenol and trichloroethylene onto CTAB-montmorillonite. It was found that the EFM was better than the MLM at describing single-solute sorption from water onto CTAB-montmorillonite, and the IAST was better than the ELM at describing the binary sorption from water onto CTAB-montmorillonite.
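
    A minimal sketch of the single-solute fitting step described above is given below in Python, using hypothetical isotherm data rather than the paper's measurements. It fits the monolayer Langmuir form q = q_max K C / (1 + K C) and the empirical Freundlich form q = K_f C^(1/n) with scipy; the ELM and IAST binary-sorption extensions are not reproduced.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, q_max, k_l):          # monolayer Langmuir model (MLM)
          return q_max * k_l * c / (1.0 + k_l * c)

      def freundlich(c, k_f, n):            # empirical Freundlich model (EFM)
          return k_f * c ** (1.0 / n)

      # Hypothetical single-solute isotherm data (equilibrium conc. in mg/L, sorbed amount in mg/g).
      c_eq = np.array([1, 5, 10, 25, 50, 100, 200], dtype=float)
      q_obs = np.array([4.8, 18.0, 29.5, 47.0, 58.5, 66.0, 70.5])

      (qm, kl), _ = curve_fit(langmuir, c_eq, q_obs, p0=[80, 0.05])
      (kf, nf), _ = curve_fit(freundlich, c_eq, q_obs, p0=[5, 2])

      for name, model, pars in [("Langmuir", langmuir, (qm, kl)), ("Freundlich", freundlich, (kf, nf))]:
          resid = q_obs - model(c_eq, *pars)
          print(name, "params:", np.round(pars, 3), "RMSE:", round(float(np.sqrt(np.mean(resid**2))), 2))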

  10. Dendritic growth model of multilevel marketing

    NASA Astrophysics Data System (ADS)

    Pang, James Christopher S.; Monterola, Christopher P.

    2017-02-01

    Biologically inspired dendritic network growth is utilized to model the evolving connections of a multilevel marketing (MLM) enterprise. Starting from agents at random spatial locations, a network is formed by minimizing a distance cost function controlled by a parameter, termed the balancing factor bf, that weighs the wiring and the path length costs of connection. The paradigm is compared to actual MLM membership data and is shown to be successful in statistically capturing the membership distribution, better than the previously reported agent-based preferential attachment or analytic branching process models. Moreover, it recovers the known empirical statistics of previously studied MLM, specifically: (i) a membership distribution characterized by the existence of peak levels indicating limited growth, and (ii) an income distribution obeying the 80-20 Pareto principle. Extensive types of income distributions from uniform to Pareto to a "winner-take-all" kind are also modeled by varying bf. Finally, the robustness of our dendritic growth paradigm to random agent removals is explored and its implications for MLM income distributions are discussed.
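
    The wiring-versus-path-length trade-off controlled by bf can be sketched as a greedy growth rule: each newly added agent attaches to the existing member that minimizes a weighted sum of direct wiring distance and path length back to the root. The cost function below is one plausible reading of that idea, not the paper's exact formulation, and all parameter values are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def grow_network(points, bf):
          """Greedy dendritic-style growth: attach each new agent to the existing node
          that minimizes (wiring cost) + bf * (path length to the root).
          A sketch of the idea only; the published cost function may differ."""
          parents = {0: None}
          path_len = {0: 0.0}
          for i in range(1, len(points)):
              best, best_cost = None, np.inf
              for j in parents:                          # candidate attachment points
                  wire = np.linalg.norm(points[i] - points[j])
                  cost = wire + bf * (path_len[j] + wire)
                  if cost < best_cost:
                      best, best_cost = j, cost
              parents[i] = best
              path_len[i] = path_len[best] + np.linalg.norm(points[i] - points[best])
          return parents

      agents = rng.uniform(0, 1, size=(300, 2))          # agents at random spatial locations
      tree = grow_network(agents, bf=0.5)
      downlines = np.bincount([p for p in tree.values() if p is not None], minlength=len(agents))
      print("largest direct downline size:", downlines.max())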

  11. Estimation and Partitioning of Heritability in Human Populations using Whole Genome Analysis Methods

    PubMed Central

    Vinkhuyzen, Anna AE; Wray, Naomi R; Yang, Jian; Goddard, Michael E; Visscher, Peter M

    2014-01-01

    Understanding genetic variation of complex traits in human populations has moved from the quantification of the resemblance between close relatives to the dissection of genetic variation into the contributions of individual genomic loci. But major questions remain unanswered: how much phenotypic variation is genetic, how much of the genetic variation is additive and what is the joint distribution of effect size and allele frequency at causal variants? We review and compare three whole-genome analysis methods that use mixed linear models (MLM) to estimate genetic variation, using the relationship between close or distant relatives based on pedigree or SNPs. We discuss theory, estimation procedures, bias and precision of each method and review recent advances in the dissection of additive genetic variation of complex traits in human populations that are based upon the application of MLM. Using genome wide data, SNPs account for far more of the genetic variation than the highly significant SNPs associated with a trait, but they do not account for all of the genetic variance estimated by pedigree based methods. We explain possible reasons for this ‘missing’ heritability. PMID:23988118
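
    For orientation, the mixed linear model underlying these whole-genome methods can be written schematically as below (a generic form with assumed notation; the reviewed methods differ mainly in how the relationship matrix A is built from pedigree or SNP data and in how the variance components are estimated, typically by REML).

      \[
        \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{g} + \boldsymbol{\varepsilon},
        \qquad
        \mathbf{g} \sim N\!\left(\mathbf{0},\, \mathbf{A}\,\sigma_g^{2}\right),
        \qquad
        \boldsymbol{\varepsilon} \sim N\!\left(\mathbf{0},\, \mathbf{I}\,\sigma_e^{2}\right),
      \]
      \[
        \hat{h}^{2} = \frac{\hat{\sigma}_g^{2}}{\hat{\sigma}_g^{2} + \hat{\sigma}_e^{2}} .
      \]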

  12. USING METEOROLOGICAL MODEL OUTPUT AS A SURROGATE FOR ON-SITE OBSERVATIONS TO PREDICT DEPOSITION VELOCITY

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration's Multi-Layer Model (NOAA-MLM) is used by several operational dry deposition networks for estimating the deposition velocity of O3, SO2, HNO3, and particles. The NOAA-MLM requires hourly values of meteorological variables and...

  13. Predicting Mouse Liver Microsomal Stability with "Pruned" Machine Learning Models and Public Data.

    PubMed

    Perryman, Alexander L; Stratton, Thomas P; Ekins, Sean; Freundlich, Joel S

    2016-02-01

    Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). "Pruning" out the moderately unstable / moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 h. Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources.
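
    The pruning idea can be illustrated with a toy Python sketch: discard the moderately unstable / moderately stable middle of the half-life range, label the remainder as unstable versus stable (here, half-life >= 1 h), and fit a naive Bayes classifier. The fingerprints, half-lives, and the 30/60-minute cut-offs below are stand-ins, not the curated PubChem data or the descriptors used in the study.

      import numpy as np
      from sklearn.naive_bayes import BernoulliNB
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Hypothetical stand-ins: 894 compounds x 1024-bit fingerprints and MLM half-lives (minutes).
      X = rng.integers(0, 2, size=(894, 1024))
      half_life = rng.gamma(shape=2.0, scale=30.0, size=894)

      # "Pruning": drop the moderately unstable / moderately stable middle band and keep
      # clearly unstable (<30 min) vs clearly stable (>=60 min, i.e. >=1 h) compounds.
      keep = (half_life < 30) | (half_life >= 60)
      y = (half_life[keep] >= 60).astype(int)
      X_pruned = X[keep]

      clf = BernoulliNB()
      scores = cross_val_score(clf, X_pruned, y, cv=5, scoring="roc_auc")
      print("5-fold ROC AUC on the pruned (toy) set:", scores.round(2))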

  14. Enrichment of statistical power for genome-wide association studies

    USDA-ARS?s Scientific Manuscript database

    The inheritance of most human diseases and agriculturally important traits is controlled by many genes with small effects. Identifying these genes, while simultaneously controlling false positives, is challenging. Among available statistical methods, the mixed linear model (MLM) has been the most fl...

  15. Microbial load monitor

    NASA Technical Reports Server (NTRS)

    Caplin, R. S.; Royer, E. R.

    1978-01-01

    Attempts are made to provide a total design of a Microbial Load Monitor (MLM) system flight engineering model. Activities include assembly and testing of Sample Receiving and Card Loading Devices (SRCLDs), operator-related software, and testing of biological samples in the MLM. Progress was made in assembling SRCLDs that have minimal leaks and operate reliably in the Sample Loading System. Seven operator commands are used to control various aspects of the MLM, such as calibrating and reading the incubating reading head, setting the clock and reading the time, and reading the status of the Card. Testing of the instrument, both in hardware and biologically, was performed. Hardware testing concentrated on SRCLDs. Biological testing covered 66 clinical and seeded samples. Tentative thresholds were set and media performance listed.

  16. Relationship between CHA2DS2-VASc score and atrial electromechanical function in patients with paroxysmal atrial fibrillation: A pilot study.

    PubMed

    Vatan, Mehmet Bülent; Yılmaz, Sabiye; Ağaç, Mustafa Tarık; Çakar, Mehmet Akif; Erkan, Hakan; Aksoy, Murat; Demirtas, Saadet; Varım, Ceyhun; Akdemir, Ramazan; Gündüz, Hüseyin

    2015-11-01

    The CHA2DS2-VASc score is the most widely preferred method for prediction of stroke risk in patients with atrial fibrillation. We hypothesized that the CHA2DS2-VASc score may represent atrial remodeling status, and therefore echocardiographic evaluation of left atrial electromechanical remodeling can be used to identify patients at high risk. A total of 65 patients who had a documented diagnosis of paroxysmal atrial fibrillation (PAF) were divided into three risk groups according to the CHA2DS2-VASc score: patients with low risk (score=0, group 1), with moderate risk (score=1, group 2), and with high risk score (score ≥2, group 3). We compared groups according to atrial electromechanical intervals and left atrium mechanical functions. Atrial electromechanical intervals including inter-atrial and intra-atrial electromechanical delay were not different between groups. However, parameters reflecting atrial mechanical functions including LA phasic volumes (Vmax, Vmin and Vp) were significantly higher in groups 2 and 3 compared with group 1. Likewise, LA passive emptying volume (LATEV) in groups 2 and 3 was significantly higher than in the low-risk group (14.12±8.13 ml/m(2), 22.36±8.78 ml/m(2), 22.89±7.23 ml/m(2), p: 0.031). Univariate analysis demonstrated that Vmax, Vmin and Vp were significantly correlated with CHA2DS2-VASc score (r=0.428, r=0.456, r=0.451 and p<0.001). Also, LATEV (r=0.397, p=0.016) and LA active emptying volume (LAAEV) (r=0.281, p=0.023) were positively correlated with CHA2DS2-VASc score. In the ROC analysis, Vmin ≥ 11 ml/m(2) has the highest predictive value for CHA2DS2-VASc score ≥2 (88% sensitivity and 89% specificity; ROC area 0.88, p<0.001, CI [0.76-0.99]). Echocardiographic evaluation of left atrial electromechanical function might represent a useful method to identify patients at high risk. Copyright © 2015 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  17. Machine Learning Model Analysis and Data Visualization with Small Molecules Tested in a Mouse Model of Mycobacterium tuberculosis Infection (2014–2015)

    PubMed Central

    2016-01-01

    The renewed urgency to develop new treatments for Mycobacterium tuberculosis (Mtb) infection has resulted in large-scale phenotypic screening and thousands of new active compounds in vitro. The next challenge is to identify candidates to pursue in a mouse in vivo efficacy model as a step to predicting clinical efficacy. We previously analyzed over 70 years of this mouse in vivo efficacy data, which we used to generate and validate machine learning models. Curation of 60 additional small molecules with in vivo data published in 2014 and 2015 was undertaken to further test these models. This represents a much larger test set than for the previous models. Several computational approaches have now been applied to analyze these molecules and compare their molecular properties beyond those attempted previously. Our previous machine learning models have been updated, and a novel aspect has been added in the form of mouse liver microsomal half-life (MLM t1/2) and in vitro-based Mtb models incorporating cytotoxicity data that were used to predict in vivo activity for comparison. Our best Mtb in vivo models possess fivefold ROC values > 0.7, sensitivity > 80%, and concordance > 60%, while the best specificity value is >40%. Use of an MLM t1/2 Bayesian model affords comparable results for scoring the 60 compounds tested. Combining MLM stability and in vitro Mtb models in a novel consensus workflow in the best cases has a positive predictive value (hit rate) > 77%. Our results indicate that Bayesian models constructed with literature in vivo Mtb data generated by different laboratories in various mouse models can have predictive value and may be used alongside MLM t1/2 and in vitro-based Mtb models to assist in selecting antitubercular compounds with desirable in vivo efficacy. We demonstrate for the first time that consensus models of any kind can be used to predict in vivo activity for Mtb. In addition, we describe a new clustering method for data visualization and apply this to the in vivo training and test data, ultimately making the method accessible in a mobile app. PMID:27335215

  18. Analyzing Longitudinal Data with Multilevel Models: An Example with Individuals Living with Lower Extremity Intra-articular Fractures

    PubMed Central

    Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun

    2008-01-01

    The use and quality of longitudinal research designs have increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center are analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis, such as online resources, is provided at the end of the paper. PMID:19649151
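
    By way of illustration, the random-intercept, random-slope linear growth model discussed in the paper can also be fitted outside SAS and SPSS; the Python sketch below uses statsmodels on simulated long-format data (all values are synthetic, and the variable names are placeholders).

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(42)

      # Simulated long-format longitudinal data: 100 individuals, 4 annual follow-ups.
      n_id, n_time = 100, 4
      ids = np.repeat(np.arange(n_id), n_time)
      time = np.tile(np.arange(n_time), n_id)
      u0 = rng.normal(0, 5, n_id)[ids]          # random intercepts
      u1 = rng.normal(0, 1, n_id)[ids]          # random slopes
      outcome = 50 + u0 + (2 + u1) * time + rng.normal(0, 3, n_id * n_time)
      df = pd.DataFrame({"id": ids, "time": time, "outcome": outcome})

      # Linear growth model with random intercept and slope (analogue of PROC MIXED / SPSS MIXED).
      model = smf.mixedlm("outcome ~ time", df, groups=df["id"], re_formula="~time")
      fit = model.fit(reml=True)
      print(fit.summary())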

  19. Is multi-level marketing of nutrition supplements a legal and an ethical practice?

    PubMed

    Cardenas, Diana; Fuchs-Tarlovsky, Vanessa

    2018-06-01

    Multi-level marketing (MLM) of nutrition products has experienced dramatic growth in recent decades. 'Wellness' is the second most popular niche in the MLM industry and represents 35% of sales among all the products in 2016. This category includes dietary supplements, weight management and sports nutrition products. The aim of this paper is to analyse whether this practice is legal and ethical. An analysis of available documentary information about the legal aspects of the multi-level marketing business was performed. Ethical reflection was based on the "principlism" approach. We argue that, while being a controversial business model, MLM is not fraudulent from a legal point of view. However, it is an unethical strategy that contravenes the principles of beneficence, nonmaleficence and autonomy. What is at stake is the possible economic scam and the potential harm those products could cause due to unproven efficacy, exceeding daily nutrient requirements and potential toxicity. The sale of dietary and nutrition supplement products by physicians and dieticians presents a conflict of interests that can undermine the primary obligation of physicians to serve the interests of their patients before their own. While considering that MLM of dietary supplements and other nutrition products is a legal business strategy, we affirm that it is an unethical practice. MLM products that have nutritional value or are promoted as remedies may be unnecessary and intended for conditions that are unsuitable for self-prescription as well. Copyright © 2018 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  20. Matrix metalloproteinases and their tissue inhibitor after reperfused ST-elevation myocardial infarction treated with doxycycline. Insights from the TIPTOP trial.

    PubMed

    Cerisano, Giampaolo; Buonamici, Piergiovanni; Gori, Anna Maria; Valenti, Renato; Sciagrà, Roberto; Giusti, Betti; Sereni, Alice; Raspanti, Silvia; Colonna, Paolo; Gensini, Gian Franco; Abbate, Rosanna; Schulz, Richard; Antoniucci, David

    2015-10-15

    The TIPTOP (Early Short-term Doxycycline Therapy In Patients with Acute Myocardial Infarction and Left Ventricular Dysfunction to Prevent The Ominous Progression to Adverse Remodelling) trial demonstrated that a timely, short-term therapy with doxycycline is able to reduce LV dilation, and both infarct size and severity, in patients treated with primary percutaneous coronary intervention (pPCI) for a first ST-elevation myocardial infarction (STEMI) and left ventricular (LV) dysfunction. In this secondary, pre-defined analysis of the TIPTOP trial we evaluated the relationship between doxycycline and plasma levels of matrix metalloproteinases (MMPs) and their tissue inhibitors (TIMPs). In 106 of the 110 (96%) patients enrolled in the TIPTOP trial, plasma MMPs and TIMPs were measured at baseline, and at post-STEMI days 1, 7, 30 and 180. To evaluate the remodeling process, 2D-Echo studies were performed at baseline and at 6 months. A (99m)Tc-SPECT was performed to evaluate the 6-month infarct size and severity. Doxycycline therapy was independently related to higher plasma TIMP-2 levels at day 7 (p<0.05). Plasma TIMP-2 levels above the median value at day 7 were correlated with the 6-month smaller infarct size (3% [0%-16%] vs. 12% [0%-30%], p=0.002) and severity (0.55 [0.44-0.64] vs. 0.45 [0.29-0.60], p=0.002), and LV dilation (-1 ml/m(2) [from -7 ml/m(2) to 9 ml/m(2)] vs. 3 ml/m(2) [from -2 ml/m(2) to 19 ml/m(2)], p=0.04), compared to their counterpart. In this clinical setting, doxycycline therapy results in higher plasma levels of TIMP-2 which, in turn, inversely correlate with 6-month infarct size and severity as well as LV dilation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. "Pulmonary valve replacement diminishes the presence of restrictive physiology and reduces atrial volumes": a prospective study in Tetralogy of Fallot patients.

    PubMed

    Pijuan-Domenech, Antonia; Pineda, Victor; Castro, Miguel Angel; Sureda-Barbosa, Carlos; Ribera, Aida; Cruz, Luz M; Ferreira-Gonzalez, Ignacio; Dos-Subirà, Laura; Subirana-Domènech, Teresa; Garcia-Dorado, David; Casaldàliga-Ferrer, Jaume

    2014-11-15

    Pulmonary valve replacement (PVR) reduces right ventricular (RV) volumes in the setting of long-term pulmonary regurgitation after Tetralogy of Fallot (ToF) repair; however, little is known of its effect on RV diastolic function. Right atrial volumes may reflect the burden of RV diastolic dysfunction. The objective of this paper is to evaluate the clinical, echocardiographic, biochemical and cardiac magnetic resonance (CMR) variables, focusing particularly on right atrial response and right ventricular diastolic function prior to and after elective PVR in adult patients with ToF. This prospective study was conducted from January 2009 to April 2013 in consecutive patients > 18 years of age who had undergone ToF repair in childhood and were accepted for elective PVR. Twenty patients (mean age: 35 years; 70% men) agreed to enter the study. PVR was performed with a bioporcine prosthesis. Concomitant RV reduction was performed in all cases when technically possible. Pulmonary end-diastolic forward flow (EDFF) decreased significantly from 5.4 ml/m(2) to 0.3 ml/m(2) (p < 0.00001), and right atrial four-chamber echocardiographic measurements and volumes by 25% (p = 0.0024): mean indexed diastolic/systolic atrial volumes prior to surgery were 43 ml/m(2) (SD+/-4.6)/63 ml/m(2) (SD+/-5.5), and dropped to 33 ml/m(2) (SD+/-3)/46 ml/m(2) (SD+/-2.55) post-surgery. All patients presented right ventricular diastolic and systolic volume reductions, with a mean volume reduction of 35% (p < 0.00001). Right ventricular diastolic dysfunction was common in a population of severely dilated RV patients long term after ToF repair. Right ventricular diastolic parameters improved as did right atrial volumes in keeping with the known reduction in RV volumes, after PVR. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    USDA-ARS?s Scientific Manuscript database

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  3. Estimation of Gutenberg-Richter b-value using instrumental earthquake catalog from the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Lee, H.; Sheen, D.; Kim, S.

    2013-12-01

    The b-value in the Gutenberg-Richter relation is an important parameter widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte-Carlo method. One is the Least-Squares method (LSM), which minimizes the observation error. The others are based on the Maximum Likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population of one million events, with magnitudes from 2.0 to 7.0 at an interval of 0.1, was generated for the Monte-Carlo simulation. Samples, whose number was increased from 25 to 1000, were extracted randomly from the parent population. The resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each sample group that has the same number of samples. As expected, the more samples were used, the more stable the b-value obtained. However, for small numbers of events, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. It was found that Utsu (1965) gives the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude could be critical for estimating the correct b-value with Utsu's (1965) method and Page's (1968) if magnitudes were binned into an interval. Therefore, we applied Utsu (1965) to estimate the b-value using two instrumental earthquake catalogs, which contain events that occurred around the southern part of the Korean Peninsula from 1978 to 2011. By a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and Kim (2012) are estimated to be 0.72 and 0.74, respectively.
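
    A compact way to reproduce the flavor of this Monte-Carlo comparison is sketched below in Python: draw a synthetic Gutenberg-Richter catalog, resample it at several sample sizes, and apply Utsu's (1965) maximum-likelihood estimator. The half-bin correction applied when magnitudes are rounded to 0.1 units is a common convention assumed here rather than taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def b_utsu(mags, m_min, dm=0.1):
          """Utsu (1965) maximum-likelihood b-value; the dm/2 term is the usual
          correction when magnitudes are binned into intervals of width dm (assumed here)."""
          return np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))

      # Synthetic parent catalog: G-R distributed magnitudes with b = 1.0, 2.0 <= M <= 7.0.
      b_true, m_min, m_max = 1.0, 2.0, 7.0
      beta = b_true * np.log(10)
      u = rng.uniform(size=1_000_000)
      parent = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
      parent = np.round(parent, 1)                       # bin to 0.1 magnitude units

      # Monte Carlo: resample n events 1000 times and look at the spread of the estimate.
      for n in (25, 100, 1000):
          b_hat = np.array([b_utsu(rng.choice(parent, n), m_min) for _ in range(1000)])
          print(f"n={n:5d}  mean b = {b_hat.mean():.3f}  sd = {b_hat.std():.3f}")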

  4. JIMM: the next step for mission-level models

    NASA Astrophysics Data System (ADS)

    Gump, Jamieson; Kurker, Robert G.; Nalepka, Joseph P.

    2001-09-01

    The Simulation Based Acquisition (SBA) process is one in which the planning, design, and test of a weapon system or other product is done through the more effective use of modeling and simulation, information technology, and process improvement. This process results in a product that is produced faster, cheaper, and more reliably than its predecessors. Because the SBA process requires realistic and detailed simulation conditions, it was necessary to develop a simulation tool that would provide a simulation environment acceptable for doing SBA analysis. The Joint Integrated Mission Model (JIMM) was created to help define and meet the analysis, test and evaluation, and training requirements of a Department of Defense program utilizing SBA. Through its generic nature of representing simulation entities, its data analysis capability, and its robust configuration management process, JIMM can be used to support a wide range of simulation applications as both a constructive and a virtual simulation tool. JIMM is a Mission Level Model (MLM). An MLM is capable of evaluating the effectiveness and survivability of a composite force of air and space systems executing operational objectives in a specific scenario against an integrated air and space defense system. Because MLMs are useful for assessing a system's performance in a realistic, integrated, threat environment, they are key to implementing the SBA process. JIMM is a merger of the capabilities of one legacy model, the Suppressor MLM, into another, the Simulated Warfare Environment Generator (SWEG) MLM. By creating a more capable MLM, JIMM will not only be a tool to support the SBA initiative, but could also provide the framework for the next generation of MLMs.

  5. Risk factors for loss of residual renal function in children treated with chronic peritoneal dialysis.

    PubMed

    Ha, Il-Soo; Yap, Hui K; Munarriz, Reyner L; Zambrano, Pedro H; Flynn, Joseph T; Bilge, Ilmay; Szczepanska, Maria; Lai, Wai-Ming; Antonio, Zenaida L; Gulati, Ashima; Hooman, Nakysa; van Hoeck, Koen; Higuita, Lina M S; Verrina, Enrico; Klaus, Günter; Fischbach, Michel; Riyami, Mohammed A; Sahpazova, Emilja; Sander, Anja; Warady, Bradley A; Schaefer, Franz

    2015-09-01

    In dialyzed patients, preservation of residual renal function is associated with better survival, lower morbidity, and greater quality of life. To analyze the evolution of residual diuresis over time, we prospectively monitored urine output in 401 pediatric patients in the global IPPN registry who commenced peritoneal dialysis (PD) with significant residual renal function. Associations of patient characteristics and time-variant covariates with daily urine output and the risk of developing oligoanuria (under 100 ml/m(2)/day) were analyzed by mixed linear modeling and Cox regression analysis including time-varying covariates. With an average loss of daily urine volume of 130 ml/m(2) per year, median time to oligoanuria was 48 months. Residual diuresis subsided significantly more rapidly in children with glomerulopathies, lower diuresis at the start of PD, high ultrafiltration volume, and icodextrin use. Administration of diuretics significantly reduced the risk of oligoanuria, whereas the prescription of renin-angiotensin system antagonists significantly increased the risk of oligoanuria. Urine output on PD was significantly associated in a negative manner with glomerulopathies (-584 ml/m(2)) and marginally with the use of icodextrin (-179 ml/m(2)), but positively associated with the use of biocompatible PD fluid (+111 ml/m(2)). Children in both Asia and North America had consistently lower urine output compared with those in Europe, perhaps due to regional variations in therapy. Thus, in children undergoing PD, residual renal function depends strongly on the cause of the underlying kidney disease and may be modifiable by diuretic therapy, peritoneal ultrafiltration, and choice of PD fluid.

  6. [Preoperative and follow-up cardiac magnetic resonance imaging of candidates for surgical ventricular restoration].

    PubMed

    Rodríguez Masi, M; Martín Lores, I; Bustos García de Castro, A; Cabeza Martínez, B; Maroto Castellanos, L; Gómez de Diego, J; Ferreirós Domínguez, J

    2016-01-01

    To assess pre- and post-operative cardiac MRI (CMR) findings in patients with left endoventriculoplasty repair for ventricular aneurysm due to ischemic heart disease. Data were retrospectively gathered on 21 patients with a diagnosis of ventricular aneurysm secondary to ischemic heart disease undergoing left endoventriculoplasty repair between January 2007 and March 2013. Pre- and post-operative CMR was performed in 12 patients. The following data were evaluated in pre-operative and post-operative CMR studies: quantitative analysis of left ventricular ejection fraction (LVEF), left ventricular end-diastolic (LVEDV) and end-systolic (LVESV) volume index, presence of valvular disease and intracardiac thrombi. The time between surgery and post-operative CMR studies was 3-24 months. Significant differences were found in the pre- and post-operative LVEF, LVEDV and LVESV data. LVEF showed a median increase of 10% (IQR 2-15) (p=0.003). The LVEDV showed a median decrease of 38 ml/m(2) (IQR 18-52) (p=0.006) and the LVESV showed a median decrease of 45 ml/m(2) (IQR 12-60) (p=0.008). Post-operative ventricular volume reduction was significantly higher in those patients with a preoperative LVESV >110 ml/m(2) (59 ml/m(2) vs. 12 ml/m(2), p=0.006). In patients with ischemic heart disease who are candidates for left endoventriculoplasty, CMR is a reliable, non-invasive and reproducible technique for the evaluation of the scar before surgery and of the ventricular volumes and their evolution after endoventricular surgical repair. Copyright © 2014 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  7. Determinants of Reverse Remodeling of the Left Atrium Following Transaortic Myectomy.

    PubMed

    Nguyen, Anita; Schaff, Hartzell V; Nishimura, Rick A; Dearani, Joseph A; Geske, Jeffrey B; Lahr, Brian D; Ommen, Steve R

    2018-04-18

    In patients with hypertrophic cardiomyopathy (HCM), enlargement of the left atrium (LA) is associated with increased morbidity and mortality due to risk of atrial fibrillation (AF), stroke, and heart failure. In this study, we investigated whether LA reverse remodeling occurs following septal myectomy. Between March 2007 and July 2015, 656 patients underwent myectomy at our institution and had pre- and postoperative transthoracic echocardiographic (TTE) recording of LA volume index (LAVI). We reviewed clinical and echocardiographic data of these patients, and assessed for changes over time by comparing pre- and postoperative measurements. The median age was 56 (46, 65) years, and 370 (56%) were male. New York Heart Association Class III/IV dyspnea was present in 581 (89%). Preoperative TTE showed LAVI of 48 (38, 60) mL/m2. In patients with history of AF, preoperative LAVI was 57 (45, 68) mL/m2, and in those without AF, LAVI measured 45 (37, 57) mL/m2 (p < 0.001). All patients underwent transaortic septal myectomy. Early postoperative TTE (4 [3, 5] days) demonstrated LAVI of 43 (36, 52) mL/m2 (p < 0.001), and late postoperative TTE (2.0 [1.1, 4.1] years) showed LAVI of 38 (29, 47) mL/m2 (p < 0.001). Preoperative LAVI was associated with late development of AF (p = 0.002). Left atrial volume decreases significantly following surgical relief of left ventricular outflow tract obstruction. Early changes likely reflect lower LA pressure due to gradient relief and abolishment of mitral regurgitation, and late reduction suggests continued reverse remodeling. Copyright © 2018. Published by Elsevier Inc.

  8. DNA methylation profiling of esophageal adenocarcinoma using Methylation Ligation-dependent Macroarray (MLM).

    PubMed

    Guilleret, Isabelle; Losi, Lorena; Chelbi, Sonia T; Fonda, Sergio; Bougel, Stéphanie; Saponaro, Sara; Gozzi, Gaia; Alberti, Loredana; Braunschweig, Richard; Benhattar, Jean

    2016-10-14

    Most types of cancer cells are characterized by aberrant methylation of promoter genes. In this study, we described a rapid, reproducible, and relatively inexpensive approach allowing the detection of multiple methylated human promoter genes from many tissue samples, without the need for bisulfite conversion. The Methylation Ligation-dependent Macroarray (MLM), an array-based analysis, was designed in order to measure methylation levels of 58 genes previously described as putative biomarkers of cancer. The performance of the design was proven by screening the methylation profile of DNA from esophageal cell lines, as well as microdissected formalin-fixed and paraffin-embedded (FFPE) tissues from esophageal adenocarcinoma (EAC). Using the MLM approach, we identified 32 (55%) promoters that were hypermethylated in EAC and not or only rarely methylated in normal tissues. Among them, 21 promoters were found aberrantly methylated in more than half of the tumors. Moreover, seven of them (ADAMTS18, APC, DKK2, FOXL2, GPX3, TIMP3 and WIF1) were found aberrantly methylated in all or almost all of the tumor samples, suggesting an important role for these genes in EAC. In addition, dysregulation of the Wnt pathway with hypermethylation of several Wnt antagonist genes was frequently observed. MLM revealed a homogeneous pattern of methylation for a majority of tumors, which was associated with an advanced stage at presentation and a poor prognosis. Interestingly, the few tumors presenting fewer methylation changes had a lower pathological stage. In conclusion, this study demonstrated the feasibility and accuracy of MLM for DNA methylation profiling of FFPE tissue samples. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Hemodynamic response to exercise as measured by the solar IKG impedance cardiography module and correlation with metabolic variables.

    PubMed

    Ziegeler, Stephan; Grundmann, Ulrich; Fuerst, Oliver; Raddatz, Alexander; Kreuer, Sascha

    2007-02-01

    Impedance cardiography (ICG) has been shown to be a feasible and accurate method for non-invasive measurement of cardiac index (CI). The aim of this investigation was to correlate hemodynamic variables measured under exercise by a specific ICG monitor (Solar IKG-Modul, Version 3.0, GE-Healthcare, Freiburg, Germany) with metabolic variables. Ten healthy volunteers were included in the investigation, performing ergometer exercise (5 min equilibration followed by 5 min each at 50, 75, 100 and 125 W). Hemodynamic parameters were obtained by ICG. Metabolic variables were assessed by indirect calorimetry with the Deltatrac II Metabolic monitor using a helmet system for spontaneous respiration. CI increased throughout exercise (baseline: 3.0 +/- 0.4 l/min/m(2); 125 W: 4.8 +/- 0.5 l/min/m(2)). Heart rate (baseline: 87.2 +/- 13.4 bpm; 125 W: 152.7 +/- 22.4 bpm) and contractility (velocity index) (baseline: 48.9 +/- 9.3/1000 s; 125 W: 70.5 +/- 10.0/1000 s) showed a continuous rise, while the stroke index decreased after an initial rise (baseline: 35.0 +/- 4.6 ml/m(2); 50 W: 37.6 +/- 4.9 ml/m(2); 75 W: 41.2 +/- 5.9 ml/m(2); 125 W: 32.3 +/- 6.1 ml/m(2)). VO(2) (baseline: 335.2 +/- 84.1 ml/min; 125 W: 1298.9 +/- 282.3 ml/min) and VCO(2) (baseline: 255.4 +/- 74.5 ml/min; 125 W: 1342.5 +/- 282.5 ml/min) increased throughout exercise. There was a good correlation in the individual fits between hemodynamic and metabolic variables. CI in healthy volunteers, as measured by the Solar IKG-Modul, correlates well with O(2) consumption and CO(2) production in individual subjects, thus indicating the metabolic needs under exercise conditions in healthy individuals.

  10. Right atrial volume by cardiovascular magnetic resonance predicts mortality in patients with heart failure with reduced ejection fraction

    PubMed Central

    Ivanov, Alexander; Mohamed, Ambreen; Asfour, Ahmed; Ho, Jean; Khan, Saadat A.; Chen, Onn; Klem, Igor; Ramasubbu, Kumudha; Brener, Sorin J.; Heitner, John F.

    2017-01-01

    Background Right Atrial Volume Index (RAVI) measured by echocardiography is an independent predictor of morbidity in patients with heart failure (HF) with reduced ejection fraction (HFrEF). The aim of this study is to evaluate the predictive value of RAVI assessed by cardiac magnetic resonance (CMR) for all-cause mortality in patients with HFrEF and to assess its additive contribution to the validated Meta-Analysis Global Group in Chronic heart failure (MAGGIC) score. Methods and results We identified 243 patients (mean age 60 ± 15; 33% women) with left ventricular ejection fraction (LVEF) ≤ 35% measured by CMR. Right atrial volume was calculated based on area in two- and four -chamber views using validated equation, followed by indexing to body surface area. MAGGIC score was calculated using online calculator. During mean period of 2.4 years 33 patients (14%) died. The mean RAVI was 53 ± 26 ml/m2; significantly larger in patients with than without an event (78.7±29 ml/m2 vs. 48±22 ml/m2, p<0.001). RAVI (per ml/m2) was an independent predictor of mortality [HR = 1.03 (1.01–1.04), p = 0.001]. RAVI has a greater discriminatory ability than LVEF, left atrial volume index and right ventricular ejection fraction (RVEF) (C-statistic 0.8±0.08 vs 0.55±0.1, 0.62±0.11, 0.68±0.11, respectively, all p<0.02). The addition of RAVI to the MAGGIC score significantly improves risk stratification (integrated discrimination improvement 13%, and category-free net reclassification improvement 73%, both p<0.001). Conclusion RAVI by CMR is an independent predictor of mortality in patients with HFrEF. The addition of RAVI to MAGGIC score improves mortality risk stratification. PMID:28369148

  11. Distributed meandering waveguides (DMWs) for novel photonic circuits (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dag, Ceren B.; Anil, Mehmet Ali; Serpengüzel, Ali

    2017-05-01

    Meandering waveguide distributed feedback structures are novel integrated photonic lightwave and microwave circuit elements. Meandering waveguide distributed feedback structures with a variety of spectral responses can be designed for a variety of lightwave and microwave circuit element functions. Distributed meandering waveguide (DMW) structures [1] show a variety of spectral behaviors with respect to the number of meandering loop mirrors (MLMs) [2] used in their composition as well as their internal coupling constants (Cs). DMW spectral behaviors include Fano resonances, coupled resonator induced transparency (CRIT), notch, add-drop, comb, and hitless filters. What makes the DMW special is the self-coupling property intrinsic to the DMW's nature. The basic example of DMW's nature is motivated through the analogy between the so-called symmetric meandering resonator (SMR), which consists of two coupled MLMs, and the resonator enhanced Mach-Zehnder interferometer (REMZI) [3]. A SMR shows the same spectral characteristics of Fano resonances with its self-coupling property, similar to the single, distributed and binary self coupled optical waveguide (SCOW) resonators [4]. So far DMWs have been studied for their electric field intensity, phase [5] and phasor responses [6]. The spectral analysis is performed using the coupled electric field analysis and the generalization of single meandering loop mirrors to multiple meandering distributed feedback structures is performed with the transfer matrix method. The building block of the meandering waveguide structures, the meandering loop mirror (MLM), is the integrated analogue of the fiber optic loop mirrors. The meandering resonator (MR) is composed of two uncoupled MLM's. The meandering distributed feedback (MDFB) structure is the DFB of the MLM. The symmetric MR (SMR) is composed of two coupled MLM's, and has the characteristics of a Fano resonator in the general case, and tunable power divider or tunable hitless filter in special cases. The antisymmetric MR (AMR) is composed of two coupled MLM's. The AMR has the characteristics of an add-drop filter in the general case, and coupled resonator induced transparency (CRIT) filter in a special case. The symmetric MDFB (SMDFB) is composed of multiple coupled MLM's. The antisymmetric MDFB (AMDFB) is composed of multiple coupled MLM's. The SMDFB and AMDFB can be utilized as band-pass, Fano, or Lorentzian filters, or Rabi splitters. Distributed meandering waveguide elements with extremely rich spectral and phase responses can be designed with creative combinations of distributed meandering waveguides structures for various novel photonic circuits. References [1 ] C. B. Dağ, M. A. Anıl, and A. Serpengüzel, "Meandering Waveguide Distributed Feedback Lightwave Circuits," J. Lightwave Technol, vol. 33, no. 9, pp. 1691-1702, May 2015. [2] N. J. Doran and D. Wood, "Nonlinear-optical loop mirror," Opt. Lett. vol. 13, no. 1, pp. 56-58, Jan. 1988. [3] L. Zhou and A. W. Poon, "Fano resonance-based electrically reconfigurable add-drop filters in silicon microring resonator-coupled Mach-Zehnder interferometers," Opt. Lett. vol. 32, no. 7, pp. 781-783, Apr. 2007. [4] Z. Zou, L. Zhou, X. Sun, J. Xie, H. Zhu, L. Lu, X. Li, and J. Chen, "Tunable two-stage self-coupled optical waveguide resonators," Opt. Lett. vol. 38, no. 8, pp. 1215-1217, Apr. 2013. [5] C. B. Dağ, M. A. Anıl, and A. Serpengüzel, "Novel distributed feedback lightwave circuit elements," in Proc. SPIE, San Francisco, 2015, vol. 9366, p. 93660A. [6] C. 
B. Dağ, M. A. Anıl, and A. Serpengüzel, "Meandering Waveguide Distributed Feedback Lightwave Elements: Phasor Diagram Analysis," in Proc. PIERS, Prague, 1986-1990 (2015).

  12. Structured triglyceride vehicles for oral delivery of halofantrine: examination of intestinal lymphatic transport and bioavailability in conscious rats.

    PubMed

    Holm, René; Porter, Christopher J H; Müllertz, Anette; Kristensen, Henning G; Charman, William N

    2002-09-01

    To compare the influence of triglyceride vehicle intramolecular structure on the intestinal lymphatic transport and systemic absorption of halofantrine in conscious rats. Conscious, lymph cannulated and nonlymph cannulated rats were dosed orally with three structurally different triglycerides: sunflower oil, and two structured triglycerides containing different proportions and positions of medium- (M) and long-chain (L) fatty acids on the glycerol backbone. The two structured triglycerides were abbreviated MLM and LML to reflect the structural position on the glycerol. The concentration of halofantrine in blood and lymph samples was analyzed by HPLC. Both the lymphatic transport and the total absorption of halofantrine were enhanced by the use of the MLM triglyceride. The estimated total absorption of halofantrine in the lymph cannulated animals was higher than in the nonlymph cannulated animals, and this was most pronounced for the animals dosed with the structured triglycerides. Using MLM as vehicle increases the portal absorption of halofantrine and results in similar lymphatic transport levels when compared to sunflower oil. Total absorption, assessed as absorption in the blood plus lymphatic transport, was higher for halofantrine after administration in the MLM triglyceride than after administration in sunflower oil.

  13. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ2 = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ2 = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
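    As a point of reference for the comparison methods named above, here is a minimal sketch of the Armitage trend test (ATT) in the N·r² form commonly used in GWAS; the genotype and case-control vectors are toy values, and the liability-threshold statistics (LTMLM, LT-Fam) themselves, which additionally model relatedness and posterior mean liabilities, are not reproduced here.

```python
import numpy as np

def armitage_trend_chi2(genotypes, phenotypes):
    """Armitage trend test statistic, computed here as N * r^2, where r is the
    Pearson correlation between genotype dosage (0/1/2) and case status (0/1).
    One common GWAS formulation; 1 degree of freedom under the null."""
    g = np.asarray(genotypes, float)
    y = np.asarray(phenotypes, float)
    r = np.corrcoef(g, y)[0, 1]
    return g.size * r**2

# Hypothetical toy data: 8 control subjects followed by 8 case subjects.
g = np.array([0, 0, 1, 1, 0, 1, 0, 2, 1, 2, 2, 1, 2, 1, 0, 2])
y = np.array([0] * 8 + [1] * 8)
print(armitage_trend_chi2(g, y))
```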

  14. Follow-Up Study of Former Materials/Logistics Management Students at Harper College, 1990-1995. Volume XXIV, Number 14.

    ERIC Educational Resources Information Center

    Lucas, John A.; Magad, Eugene

    In fall 1995, William Rainey Harper College in Illinois conducted a study of former students in the Materials/Logistics Management (MLM) program to determine their evaluation of their educational experiences in the program. The sample consisted of 298 former MLM students from 1990 to 1995, including 119 students who had earned 48 credit hours but…

  15. Honeycomb vs. Foam: Evaluating Potential Upgrades to ISS Module Shielding

    NASA Technical Reports Server (NTRS)

    Ryan, Shannon J.; Christiansen, Eric L.

    2009-01-01

    The presence of honeycomb cells in a dual-wall structure is advantageous for mechanical performance and low weight in spacecraft primary structures but detrimental for shielding against impact of micrometeoroid and orbital debris particles (MMOD). The presence of honeycomb cell walls acts to restrict the expansion of projectile and bumper fragments, resulting in the impact of a more concentrated (and thus lethal) fragment cloud upon the shield rear wall. The Multipurpose Laboratory Module (MLM) is a Russian research module scheduled for launch and ISS assembly in 2011 (currently under review). Baseline shielding of the MLM is expected to be predominantly similar to that of the existing Functional Energy Block (FGB), utilizing a baseline triple wall configuration with honeycomb sandwich panels for the dual bumpers and a thick monolithic aluminum pressure wall. The MLM module is to be docked to the nadir port of the Zvezda service module and, as such, is subject to higher debris flux than the FGB module (which is aligned along the ISS flight vector). Without upgrades to inherited shielding, the MLM penetration risk is expected to be significantly higher than that of the FGB module. Open-cell foam represents a promising alternative to honeycomb as a sandwich panel core material in spacecraft primary structures as it provides comparable mechanical performance with a minimal increase in weight while avoiding structural features (i.e. channeling cells) detrimental to MMOD shielding performance. In this study, the effect of replacing honeycomb sandwich panel structures with metallic open-cell foam structures on MMOD shielding performance is assessed for an MLM-representative configuration. A number of hypervelocity impact tests have been performed on both the baseline honeycomb configuration and upgraded foam configuration, and differences in target damage, failure limits, and derived ballistic limit equations are discussed.

  16. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
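    For readers unfamiliar with the method of L-moments, the sketch below computes the first four sample L-moments from probability-weighted moments. It is a generic Python illustration (the study itself works in R) and does not implement the AEP4 parameter-estimation algorithms discussed in the abstract; the toy data are an arbitrary skewed sample.

```python
import numpy as np
from math import comb

def sample_lmoments(x):
    """First four sample L-moments via unbiased probability-weighted moments b_r."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    b = [x.mean()]
    for r in (1, 2, 3):
        w = np.array([comb(j, r) / comb(n - 1, r) for j in range(n)])
        b.append(np.sum(w * x) / n)
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skew, L-kurtosis

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=500)   # modestly skewed toy sample
print(sample_lmoments(data))                  # L-skew ~ 1/3, L-kurtosis ~ 1/6 for exponential
```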

  17. Dose assessment of 2% chlorhexidine acetate for canine superficial pyoderma.

    PubMed

    Murayama, Nobuo; Terada, Yuri; Okuaki, Mio; Nagata, Masahiko

    2011-10-01

    The dose of 2% chlorhexidine acetate (2CA; Nolvasan(®) Surgical Scrub; Fort Dodge Animal Health, Fort Dodge, IA, USA) for canine superficial pyoderma was evaluated. The first trial compared three doses (group 1, 57 mL/m(2) body surface area; group 2, 29 mL/m(2) body surface area; and group 3, 19 mL/m(2) body surface area) in a randomized, double-blind, controlled fashion. Twenty-seven dogs with superficial pyoderma were treated with 2CA at the allocated doses every 2 days for 1 week. The owners and investigators subjectively evaluated the dogs, and investigators scored skin lesions, including erythema, papules/pustules, alopecia and scales, on a 0-4 scale. There were no significant differences in response between the treatment groups. The second trial established a practical dose-measuring method for 2CA. Sixty-eight owners were asked to apply 2CA on their palm in an amount corresponding to a Japanese ¥500 coin, 26.5 mm in diameter. This yielded an average dose of 0.90±0.40 mL. Mathematically, the doses used in groups 1, 2 and 3 can be represented as one coin per approximately one-, two- and three-hand-sized lesions, respectively. The results therefore suggest that owners instructed to apply one coin of the product per two-hand-sized areas of superficial pyoderma would use the range of doses evaluated in this trial. © 2011 The Authors. Veterinary Dermatology. © 2011 ESVD and ACVD.

  18. Patterns of Therapist Variability: Therapist Effects and the Contribution of Patient Severity and Risk

    ERIC Educational Resources Information Center

    Saxon, David; Barkham, Michael

    2012-01-01

    Objective: To investigate the size of therapist effects using multilevel modeling (MLM), to compare the outcomes of therapists identified as above and below average, and to consider how key variables--in particular patient severity and risk and therapist caseload--contribute to therapist variability and outcomes. Method: We used a large…

  19. Alternative Methods for Assessing Mediation in Multilevel Data: The Advantages of Multilevel SEM

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Zhang, Zhen; Zyphur, Michael J.

    2011-01-01

    Multilevel modeling (MLM) is a popular way of assessing mediation effects with clustered data. Two important limitations of this approach have been identified in prior research and a theoretical rationale has been provided for why multilevel structural equation modeling (MSEM) should be preferred. However, to date, no empirical evidence of MSEM's…

  20. Gender Differences in Scientific Literacy of HKPISA 2006: A Multidimensional Differential Item Functioning and Multilevel Mediation Study

    NASA Astrophysics Data System (ADS)

    Wong, Kwan Yin

    The aim of this study is to investigate the effect of gender differences of 15-year-old students on scientific literacy and their impact on students' motivation to pursue science education and careers (Future-oriented Science Motivation) in Hong Kong. The data for this study were collected from the Program for International Student Assessment in Hong Kong (HKPISA), carried out in 2006. A total of 4,645 students were randomly selected from 146 secondary schools, including government, aided and private schools, by a two-stage stratified sampling method. HKPISA 2006, like most other large-scale international assessments, presents its assessment framework in multidimensional subscales. To meet the requirements of this multidimensional assessment framework, this study deployed new approaches to model and investigate gender differences in the cognitive and affective latent traits of scientific literacy by using multidimensional differential item functioning (MDIF) and multilevel mediation (MLM). Compared with a mean-score-difference t-test, MDIF improves the precision of each subscale measure at the item level, so that gender differences in science performance can be accurately estimated. In light of Eccles et al.'s (1983) Expectancy-Value Model of Achievement-related Choices (Eccles' Model), MLM examines the pattern of gender effects on Future-oriented Science Motivation mediated through cognitive and affective factors. For the MLM investigation, Single-Group Confirmatory Factor Analysis (Single-Group CFA) was used to confirm the applicability and validity of six affective factors originally prepared by the OECD. These six factors are Science Self-concept, Personal Value of Science, Interest in Science Learning, Enjoyment of Science Learning, Instrumental Motivation to Learn Science and Future-oriented Science Motivation. Then, Multiple-Group CFA was used to verify measurement invariance of these factors across gender groups. The results of Single-Group CFA confirmed that five of the six affective factors (all except Interest in Science Learning) had strong psychometric properties in the context of Hong Kong. Multiple-Group CFA results also confirmed measurement invariance of these factors across gender groups. The findings of this study suggest that 15-year-old boys consistently outperformed girls in most of the cognitive dimensions except identifying scientific issues. Similarly, boys had higher affective learning outcomes than girls. The effect sizes of the gender differences in affective learning outcomes are relatively larger than those in the cognitive dimensions. The MLM study reveals that gender effects on Future-oriented Science Motivation are mediated through affective factors including Science Self-concept, Enjoyment of Science Learning, Interest in Science Learning, Instrumental Motivation to Learn Science and Personal Value of Science. Girls are significantly affected by the negative impacts of these mediating factors and, in turn, show lower Future-oriented Science Motivation. The MLM results were consistent with the predictions of Eccles' Model. Overall, the CFA and MLM results provide strong support for the cross-cultural validity of Eccles' Model. In light of these findings, recommendations to reduce the gender differences in science achievement and Future-oriented Science Motivation are made for science education participants, teachers, parents, curriculum leaders, examination bodies and policy makers.

  1. Right ventricular volumes and function in thalassemia major patients in the absence of myocardial iron overload

    PubMed Central

    2010-01-01

    Aim We aimed to define reference ranges for right ventricular (RV) volumes and ejection fraction (EF) in thalassemia major patients (TM) without myocardial iron overload. Methods and results RV volumes, EF and mass were measured in 80 TM patients who had no myocardial iron overload (myocardial T2* > 20 ms by cardiovascular magnetic resonance). All patients were receiving deferoxamine chelation and none had evidence of pulmonary hypertension or other cardiovascular comorbidity. Forty age- and sex-matched healthy non-anemic volunteers acted as controls. The mean RV EF was higher in TM patients than controls (males 66.2 ± 4.1% vs 61.6 ± 6%, p = 0.0009; females 66.3 ± 5.1% vs 62.6 ± 6.4%, p = 0.017), which yielded a raised lower threshold of normality for RV EF in TM patients (males 58.0% vs 50.0% and females 56.4% vs 50.1%). RV end-diastolic volume index was higher in male TM patients (mean 98.1 ± 17.3 mL/m2 vs 88.4 ± 11.2 mL/m2, p = 0.027), with a higher upper limit (132 vs 110 mL/m2), but this difference was of borderline significance for females (mean 86.5 ± 13.6 mL/m2 vs 80.3 ± 12.8 mL/m2, p = 0.09, with upper limit of 113 vs 105 mL/m2). The cardiac index was raised in TM patients (males 4.8 ± 1.0 L/min vs 3.4 ± 0.7 L/min, p < 0.0001; females 4.5 ± 0.8 L/min vs 3.2 ± 0.8 L/min, p < 0.0001). No differences in RV mass index were identified. Conclusion The normal ranges for functional RV parameters in TM patients with no evidence of myocardial iron overload differ from healthy non-anemic controls. The new reference RV ranges are important for determining the functional effects of myocardial iron overload in TM patients. PMID:20416084

  2. Effects of crude oil on swimming behavior and survival in the rice rat

    USGS Publications Warehouse

    Wolfe, J.L.; Esher, R.J.

    1981-01-01

    Oil slicks in laboratory test chambers inhibited swimming behavior of rice rats, and reduced survival at low temperature. Predisposition to enter the water and swim was greatly reduced at both high (200 ml/m2 water surface) and low (20 ml/m2) concentrations of oil. Survival was significantly affected only at high concentrations. The results may be of value in predicting the impact of oil spills on the mammal community of coastal marshes.

  3. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    PubMed

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
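    To make the intraclass correlation (ICC) discussed above concrete, here is a minimal Python sketch that fits a random-intercept MLM with statsmodels on simulated two-level data and computes the ICC as the between-cluster share of total variance. All names and values are illustrative; this is not the NPMM machinery or the authors' R function.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, n_per = 50, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0, 1.0, n_clusters)                        # random intercepts (sd = 1)
y = 2.0 + u[cluster] + rng.normal(0, 2.0, cluster.size)   # residual sd = 2

df = pd.DataFrame({"y": y, "cluster": cluster})
m = sm.MixedLM.from_formula("y ~ 1", df, groups="cluster").fit()

tau2 = float(m.cov_re.iloc[0, 0])   # between-cluster variance
sigma2 = m.scale                    # within-cluster (residual) variance
icc = tau2 / (tau2 + sigma2)
print(f"ICC ~ {icc:.3f}")           # expected ~ 1 / (1 + 4) = 0.2 for these simulated values
```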

  4. Designation of River Klina-Skenderaj Inputs, in the Absence of Measurements (Monitoring)-Kosova

    NASA Astrophysics Data System (ADS)

    Osmanaj, Lavdim; Karahoda, Dafina

    2009-11-01

    The territory of the Republic of Kosova is divided into four catchment basins: the basins of the river Drini i Bardhë, the river Ibri, the river Morava of Binca and the river Lepenci [1]. The river Klina is a left-bank part of the Drini i Bardhë basin. The inputs are determined using the following methods: (a) Giandotti-Visentini; (b) Gavrilović; and (c) the typical hydrograph method. This study derives the following parameters: surface of the basin F = 77.75 km2, width of the main flow L = 22.00 km, width of the basin Wb = 68.00 km, highest quota of the basin Hqb = 1750 m.l.m, highest quota of inflow Hi = 600.00 m.l.m, average difference of height D = 303.5 m, maximal water input Qmax(100 years) = 112.00 m3/s, average production of alluvium W = 980.76 m3/s, specific production of alluvium Gyears = 35270.57 m3/s, secondary conveyance of alluvium Qa = 14.70 m3/s.

  5. Assessing the Rigor of HS Curriculum in Admissions Decisions: A Functional Method, Plus Practical Advising for Prospective Students and High School Counselors

    ERIC Educational Resources Information Center

    Micceri, Theodore; Brigman, Leellen; Spatig, Robert

    2009-01-01

    An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…

  6. [Will the climate change affect the mortality from prostate cancer?].

    PubMed

    Santos Arrontes, Daniel; García González, Jesús Isidro; Martín Muñoz, Manuel Pablo; Castro Pita, Miguel; Mañas Pelillo, Antonio; Paniagua Andrés, Pedro

    2007-03-01

    Global heating of the atmosphere, as well as increased exposure to sunlight, will be associated with a decrease in mortality from prostate cancer, due to an increase in plasma levels of vitamin D. The objective was to evaluate whether climatological factors (temperature, rainfall, and number of sunlight hours per year) may influence the mortality associated with prostate cancer over a five-year period. In this ecological study we evaluated the trends of prostate cancer-associated mortality between January 1st, 1998 and December 31st, 2002 in Spain (17 autonomous communities and 2 autonomous cities, Ceuta and Melilla; 43 million inhabitants). Demographic and mortality data were obtained from the National Institute of Statistics (INE), and climatological data on temperature and rainfall were obtained from the National Institute of Meteorology (INM). The provinces were classified using the climatic index of Martonne, defined as the quotient between annual rainfall and mean annual temperature plus 10. Areas with a quotient below 5 ml/m2/°C are considered extremely arid zones; between 5 and 15 ml/m2/°C, arid zones; between 15 and 20 ml/m2/°C, semiarid zones; between 20 and 30 ml/m2/°C, subhumid zones; between 30 and 60 ml/m2/°C, humid zones; and over 60 ml/m2/°C, superhumid zones. We compared mortality rates between the different climatic areas using the Jonckheere-Terpstra test for six independent samples ordered by the Martonne index. All calculations were performed using SPSS v13.0 for Windows. A logistic regression model was fitted to identify climate factors associated with prostate cancer mortality. A null-hypothesis probability below 0.05 was considered statistically significant. Prostate cancer mortality presented statistically significant differences, being higher in provinces with a higher Martonne index (p < 0.001) and lower in areas with a greater number of sunlight hours per year (p = 0.041). The adjusted mortality rate in extremely arid regions was 21.51 cases/100,000 males per year, whereas in humid zones it was 35.87 cases/100,000 males per year. Mortality associated with prostate cancer is significantly higher in regions with less exposure to sunlight. Climate change may lead to a modification of the main epidemiologic patterns and may be associated with a modification of cancer mortality rates. Nevertheless, these results should be interpreted with caution and should be confirmed by prospective studies.
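    The Martonne classification used above reduces to a one-line formula, I = P / (T + 10). The sketch below computes the index and assigns the aridity class following the thresholds quoted in the abstract; the rainfall and temperature values are hypothetical province figures for illustration only.

```python
def martonne_index(annual_rainfall_mm: float, mean_annual_temp_c: float) -> float:
    """De Martonne aridity index I = P / (T + 10)."""
    return annual_rainfall_mm / (mean_annual_temp_c + 10.0)

def classify(index: float) -> str:
    """Aridity class, using the thresholds given in the abstract."""
    if index < 5:
        return "extremely arid"
    if index < 15:
        return "arid"
    if index < 20:
        return "semiarid"
    if index < 30:
        return "subhumid"
    if index < 60:
        return "humid"
    return "superhumid"

i = martonne_index(annual_rainfall_mm=450.0, mean_annual_temp_c=14.5)  # hypothetical province
print(round(i, 1), classify(i))
```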

  7. Measuring Dispositional Flow: Validity and reliability of the Dispositional Flow State Scale 2, Italian version.

    PubMed

    Riva, Eleonora F M; Riva, Giuseppe; Talò, Cosimo; Boffi, Marco; Rainisio, Nicola; Pola, Linda; Diana, Barbara; Villani, Daniela; Argenton, Luca; Inghilleri, Paolo

    2017-01-01

    The aim of this study is to evaluate the psychometric properties of the Italian version of the Dispositional Flow Scale-2 (DFS-2), for use with Italian adults, young adults and adolescents. In accordance with the guidelines for test adaptation, the scale was translated using the back-translation method. Item comprehension was checked according to the latest standards for culturally sensitive translation. The scale thus produced was administered to 843 individuals (60.69% female) between the ages of 15 and 74. The sample is balanced between workers and students. The main activities reported by the subjects allow the sample to be divided into three categories: students, workers, and athletes (professionals and semi-professionals). The confirmatory factor analysis, conducted using the Maximum Likelihood Estimator (MLM), showed acceptable fit indexes. Reliability and validity were verified, and structural invariance was verified across 6 categories of Flow experience and for 3 subsamples with different fields of action. Correlational analysis shows significant, high correlations between the nine dimensions. Our data confirmed the validity and reliability of the Italian DFS-2 in measuring Flow experiences. The scale is reliable for use with Italian adults, young adults and adolescents. The Italian version of the scale is suitable for the evaluation of the subjective, trait-like tendency to experience Flow in different contexts, such as sport, study and work.

  8. Developing Appropriate Methods for Cost-Effectiveness Analysis of Cluster Randomized Trials

    PubMed Central

    Gomes, Manuel; Ng, Edmond S.-W.; Nixon, Richard; Carpenter, James; Thompson, Simon G.

    2012-01-01

    Aim. Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Methods. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering—seemingly unrelated regression (SUR) without a robust standard error (SE)—and 4 methods that recognized clustering—SUR and generalized estimating equations (GEEs), both with robust SE, a “2-stage” nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Results. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92–0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. Conclusions. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters. PMID:22016450
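    As a rough illustration of the cluster-level resampling idea behind bootstrap approaches to CRT cost-effectiveness data, the sketch below resamples whole clusters within each arm and bootstraps the incremental net benefit (INB). It deliberately omits the shrinkage correction of the two-stage bootstrap described in the abstract, and all data, cluster counts, and the willingness-to-pay value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def cluster_bootstrap_inb(costs, effects, clusters, arms, lam=20000.0, n_boot=2000):
    """Simplified one-stage cluster bootstrap of the incremental net benefit.

    costs, effects : per-patient arrays; clusters : cluster id per patient;
    arms : 0/1 treatment arm per patient; lam : willingness-to-pay per unit effect.
    Whole clusters are resampled with replacement within each arm."""
    nb = lam * effects - costs                       # per-patient net benefit
    draws = []
    for _ in range(n_boot):
        arm_means = []
        for a in (0, 1):
            ids = np.unique(clusters[arms == a])
            pick = rng.choice(ids, size=ids.size, replace=True)
            vals = np.concatenate([nb[(clusters == c) & (arms == a)] for c in pick])
            arm_means.append(vals.mean())
        draws.append(arm_means[1] - arm_means[0])
    draws = np.asarray(draws)
    return draws.mean(), np.percentile(draws, [2.5, 97.5])

# Hypothetical balanced CRT: 20 clusters of 30 patients per arm.
n_cl, n_per = 20, 30
clusters = np.repeat(np.arange(2 * n_cl), n_per)
arms = (clusters >= n_cl).astype(int)
cl_eff = rng.normal(0, 500, 2 * n_cl)[clusters]               # cluster-level cost variation
costs = 5000 + 800 * arms + cl_eff + rng.normal(0, 1000, clusters.size)
effects = 0.60 + 0.05 * arms + rng.normal(0, 0.10, clusters.size)
print(cluster_bootstrap_inb(costs, effects, clusters, arms))
```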

  9. User-defined functions in the Arden Syntax: An extension proposal.

    PubMed

    Karadimas, Harry; Ebrahiminia, Vahid; Lepage, Eric

    2015-12-11

    The Arden Syntax is a knowledge-encoding standard, started in 1989, and now in its 10th revision, maintained by the health level seven (HL7) organization. It has constructs borrowed from several language concepts that were available at that time (mainly the HELP hospital information system and the Regenstrief medical record system (RMRS), but also the Pascal language, functional languages and the data structure of frames, used in artificial intelligence). The syntax has a rationale for its constructs, and has restrictions that follow this rationale. The main goal of the Standard is to promote knowledge sharing, by avoiding the complexity of traditional programs, so that a medical logic module (MLM) written in the Arden Syntax can remain shareable and understandable across institutions. One of the restrictions of the syntax is that you cannot define your own functions and subroutines inside an MLM. An MLM can, however, call another MLM, where this MLM will serve as a function. This will add an additional dependency between MLMs, a known criticism of the Arden Syntax knowledge model. This article explains why we believe the Arden Syntax would benefit from a construct for user-defined functions, discusses the need, the benefits and the limitations of such a construct. We used the recent grammar of the Arden Syntax v.2.10, and both the Arden Syntax standard document and the Arden Syntax Rationale article as guidelines. We gradually introduced production rules to the grammar. We used the CUP parsing tool to verify that no ambiguities were detected. A new grammar was produced, that supports user-defined functions. 22 production rules were added to the grammar. A parser was built using the CUP parsing tool. A few examples are given to illustrate the concepts. All examples were parsed correctly. It is possible to add user-defined functions to the Arden Syntax in a way that remains coherent with the standard. We believe that this enhances the readability and the robustness of MLMs. A detailed proposal will be submitted by the end of the year to the HL7 workgroup on Arden Syntax. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Using Cross-Classified Multilevel Models to Disentangle School and Neighborhood Effects: An Example Focusing on Smoking Behaviors among Adolescents in the United States

    PubMed Central

    Dunn, Erin C.; Richmond, Tracy K.; Milliren, Carly E.; Subramanian, S.V.

    2015-01-01

    Background Despite much interest in understanding the influence of contexts on health, most research has focused on one context at a time despite the reality that individuals have simultaneous memberships in multiple settings. Method Using the example of smoking behavior among adolescents in the National Longitudinal Study of Adolescent Health, we applied cross-classified multilevel modeling (CCMM) to examine fixed and random effects for schools and neighborhoods. We compared the CCMM results with those obtained from a traditional multilevel model (MLM) focused on either the school and neighborhood separately. Results In the MLMs, 5.2% of the variation in smoking was due to differences between neighborhoods (when schools were ignored) and 6.3% to differences between schools (when neighborhoods were ignored). However in the CCMM examining neighborhood and school variation simultaneously, the neighborhood-level variation was reduced to 0.4%. Conclusion Results suggest that using MLM, instead of CCMM, could lead to overestimating the importance of certain contexts and could ultimately lead to targeting interventions or policies to the wrong settings. PMID:25579227

  11. Examination of oral absorption and lymphatic transport of halofantrine in a triple-cannulated canine model after administration in self-microemulsifying drug delivery systems (SMEDDS) containing structured triglycerides.

    PubMed

    Holm, René; Porter, Christopher J H; Edwards, Glenn A; Müllertz, Anette; Kristensen, Henning G; Charman, William N

    2003-09-01

    The potential for lipidic self-microemulsifying drug delivery systems (SMEDDS) containing triglycerides with a defined structure, where the different fatty acids on the glycerol backbone exhibit different metabolic fates, to improve the lymphatic transport and the portal absorption of a poorly water-soluble drug, halofantrine, was investigated in fasted, lymph-cannulated canines. Two different structured triglycerides were incorporated into the SMEDDS: 1,3-dioctanoyl-2-linoleyl-sn-glycerol (C8:0-C18:2-C8:0) (MLM) and 1,3-dilinoyl-2-octanoyl-sn-glycerol (C18:2-C8:0-C18:2) (LML). A previously optimised SMEDDS formulation for halofantrine, comprising triglyceride, Cremophor EL, Maisine 35-1 and ethanol, was selected for bioavailability assessment. The extent of lymphatic transport via the thoracic duct was 17.9% of the dose for the animals dosed with the MLM SMEDDS and 27.4% for LML. The plasma availability was also affected by the triglyceride incorporated into the multi-component delivery system, and availabilities of 56.9% (MLM) and 37.2% (LML) were found. These data indicate that the pharmaceutical scientist can use the structure of the lipid to affect the relative contribution of the two absorption pathways. The MLM formulation produced a total bioavailability of 74.9%, which is higher than the total absorption previously observed after post-prandial administration. This could indicate the utility of dispersed lipid-based formulations based on structured triglycerides for the oral delivery of halofantrine, and potentially other lipophilic drugs.

  12. Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.

    PubMed

    Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G

    2012-01-01

    Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.

  13. Teaching leadership: the medical student society model.

    PubMed

    Matthews, Jacob H; Morley, Gabriella L; Crossley, Eleanor; Bhanderi, Shivam

    2018-04-01

    All health care professionals in the UK are expected to have the medical leadership and management (MLM) skills necessary for improving patient care, as stipulated by the UK General Medical Council (GMC). Newly graduated doctors reported insufficient knowledge about leadership and quality improvement skills, despite all UK medical schools reporting that MLM is taught within their curriculum. A medical student society organised a series of extracurricular educational events focusing on leadership topics. The society recognised that the events needed to be useful and interesting to attract audiences. Therefore, clinical leaders in exciting fields were invited to talk about their experiences and case studies of personal leadership challenges. The emphasis on personal stories, from respected leaders, was a deliberate strategy to attract students and enhance learning. Evaluation data were collected from the audiences to improve the quality of the events and to support a business case for an intercalated degree in MLM. When leadership and management concepts are taught through personal stories, students find it interesting and are prepared to give up their leisure time to engage with the subject. Students appear to recognise the importance of MLM knowledge to their future careers, and are able to organise their own, and their peers', learning and development. Organising these events and collecting feedback can provide students with opportunities to practise leadership, management and quality improvement skills. These extracurricular events, delivered through a student society, allow for subjects to be discussed in more depth and can complement an already crowded undergraduate curriculum. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  14. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
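    As a concrete illustration of a tunably rugged landscape of the kind described here, the following sketch builds a Rough Mount Fuji-style fitness landscape (an additive slope toward a reference sequence plus i.i.d. random noise) and counts local fitness maxima. Parameter values are arbitrary and the Gaussian noise is one simple choice; this is not a reproduction of the specific model variants or statistics analyzed in the article.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(42)

def rmf_landscape(L=10, c=0.5, sigma=1.0):
    """Rough Mount Fuji landscape on binary sequences of length L:
    F(g) = -c * d(g, g*) + eta(g), with g* the all-zeros reference sequence
    and eta i.i.d. Gaussian noise; c = 0 recovers the maximally rugged limit."""
    genotypes = np.array(list(product([0, 1], repeat=L)))
    d = genotypes.sum(axis=1)                      # Hamming distance to reference
    eta = rng.normal(0.0, sigma, size=genotypes.shape[0])
    return genotypes, -c * d + eta

def count_local_maxima(genotypes, fitness):
    """A genotype is a local maximum if it is fitter than all L single-mutant neighbours."""
    L = genotypes.shape[1]
    index = {tuple(g): i for i, g in enumerate(genotypes)}
    n_max = 0
    for i, g in enumerate(genotypes):
        neighbours = []
        for k in range(L):
            h = g.copy()
            h[k] ^= 1
            neighbours.append(fitness[index[tuple(h)]])
        n_max += fitness[i] > max(neighbours)
    return int(n_max)

g, f = rmf_landscape(L=10, c=0.5, sigma=1.0)
print("local maxima:", count_local_maxima(g, f))
```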

  15. Genome-Wide Association Mapping of Acid Soil Resistance in Barley (Hordeum vulgare L.)

    PubMed Central

    Zhou, Gaofeng; Broughton, Sue; Zhang, Xiao-Qi; Ma, Yanling; Zhou, Meixue; Li, Chengdao

    2016-01-01

    Genome-wide association studies (GWAS) based on linkage disequilibrium (LD) have been used to detect QTLs underlying complex traits in major crops. In this study, we collected 218 barley (Hordeum vulgare L.) lines including wild barley and cultivated barley from China, Canada, Australia, and Europe. A total of 408 polymorphic markers were used for population structure and LD analysis. GWAS for acid soil resistance was performed on the population using a general linear model (GLM) and a mixed linear model (MLM), respectively. A total of 22 QTLs (quantitative trait loci) were detected with the GLM and MLM analyses. Two QTLs, close to markers bPb-1959 (133.1 cM) and bPb-8013 (86.7 cM), located on chromosomes 1H and 4H respectively, were consistently detected in two different trials with both the GLM and MLM analyses. Furthermore, bPb-8013, the closest marker to the major Al3+ resistance gene HvAACT1 in barley, was identified as QTL5. The QTLs could be used in marker-assisted selection to identify and pyramid different loci for improved acid soil resistance in barley. PMID:27064793
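    For orientation, a minimal sketch of a GLM-style marker-trait association scan with principal-component covariates to account for population structure is shown below. The genotypes and phenotypes are simulated placeholders sized to match the abstract (218 lines, 408 markers), and a true MLM would additionally include a kinship random effect, which this simplified sketch does not implement.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, m = 218, 408                                        # lines and markers, as in the abstract
geno = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 marker scores (hypothetical)
pheno = rng.normal(size=n)                             # acid-soil resistance score (hypothetical)

# Population-structure covariates: leading principal components of the marker matrix.
g_std = (geno - geno.mean(0)) / (geno.std(0) + 1e-9)
pcs = np.linalg.svd(g_std, full_matrices=False)[0][:, :3]

pvals = []
for j in range(m):
    X = sm.add_constant(np.column_stack([geno[:, j], pcs]))
    fit = sm.OLS(pheno, X).fit()
    pvals.append(fit.pvalues[1])                       # p-value for the marker term
print("smallest p-value:", min(pvals))
```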

  16. Left atrial volume is not an index of left ventricular diastolic dysfunction in patients with sickle cell anaemia.

    PubMed

    Hammoudi, Nadjib; Charbonnier, Magali; Levy, Pierre; Djebbar, Morad; Stankovic Stojanovic, Katia; Ederhy, Stéphane; Girot, Robert; Cohen, Ariel; Isnard, Richard; Lionnet, François

    2015-03-01

    Left ventricular diastolic dysfunction (LVDD) is common in sickle cell anaemia (SCA). Left atrial (LA) size is widely used as an index of LVDD; however, LA enlargement in SCA might also be due to chronic volume overload. To investigate whether LA size can be used to diagnose LVDD in SCA. One hundred and twenty-seven adults with stable SCA underwent echocardiographic assessment. LA volume was measured by the area-length method and indexed to body surface area (LAVi). Left ventricular (LV) filling pressures were assessed using the ratio of early peak diastolic velocities of mitral inflow and septal annular mitral plane (E/e'). Using mitral inflow profile and E/e', LV diastolic function was classified as normal or abnormal. LAVi>28mL/m(2) was used as the threshold to define LA enlargement. The mean age was 28.6±8.5years; there were 83 women. Mean LAVi was 48.3±11.1mL/m(2) and 124 (98%) patients had LA dilatation. In multivariable analysis, age, haemoglobin concentration and LV end-diastolic volume index were independent determinants of LAVi (R(2)=0.51; P<0.0001). E/e' was not linked to LAVi (P=0.43). Twenty patients had LVDD; when compared with patients without LVDD, they had a similar LAVi (52.2±14.7 and 47.5±10.2mL/m(2), respectively; P=0.29). Receiver operating characteristics curve analysis showed that LAVi could not be used to diagnose LVDD (area under curve=0.58; P=0.36). LA enlargement is common in SCA but appears not to be linked to LVDD. LAVi in this population is related to age, haemoglobin concentration and LV morphology. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  17. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig Michael; Verzi, Stephen Joseph

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  18. An exploration of multilevel modeling for estimating access to drinking-water and sanitation.

    PubMed

    Wolf, Jennyfer; Bonjour, Sophie; Prüss-Ustün, Annette

    2013-03-01

    Monitoring progress towards the targets for access to safe drinking-water and sanitation under the Millennium Development Goals (MDG) requires reliable estimates and indicators. We analyzed trends and reviewed current indicators used for those targets. We developed continuous time series for 1990 to 2015 for access to improved drinking-water sources and improved sanitation facilities by country using multilevel modeling (MLM). We show that MLM is a reliable and transparent tool with many advantages over alternative approaches to estimate access to facilities. Using current indicators, the MDG target for water would be met, but the target for sanitation missed considerably. The number of people without access to such services is still increasing in certain regions. Striking differences persist between urban and rural areas. Consideration of water quality and different classification of shared sanitation facilities would, however, alter estimates considerably. To achieve improved monitoring we propose: (1) considering the use of MLM as an alternative for estimating access to safe drinking-water and sanitation; (2) completing regular assessments of water quality and supporting the development of national regulatory frameworks as part of capacity development; (3) evaluating health impacts of shared sanitation; (4) using a more equitable presentation of countries' performances in providing improved services.

  19. Detection and validation of genomic regions associated with resistance to rust diseases in a worldwide hexaploid wheat landrace collection using BayesR and mixed linear model approaches.

    PubMed

    Pasam, Raj K; Bansal, Urmil; Daetwyler, Hans D; Forrest, Kerrie L; Wong, Debbie; Petkowski, Joanna; Willey, Nicholas; Randhawa, Mandeep; Chhetri, Mumta; Miah, Hanif; Tibbits, Josquin; Bariana, Harbans; Hayden, Matthew J

    2017-04-01

    BayesR and MLM association mapping approaches in common wheat landraces were used to identify genomic regions conferring resistance to Yr, Lr, and Sr diseases. Deployment of rust resistant cultivars is the most economically effective and environmentally friendly strategy to control rust diseases in wheat. However, the highly evolving nature of wheat rust pathogens demands continued identification, characterization, and transfer of new resistance alleles into new varieties to achieve durable rust control. In this study, we undertook genome-wide association studies (GWAS) using a mixed linear model (MLM) and the Bayesian multilocus method (BayesR) to identify QTL contributing to leaf rust (Lr), stem rust (Sr), and stripe rust (Yr) resistance. Our study included 676 pre-Green Revolution common wheat landrace accessions collected in the 1920-1930s by A.E. Watkins. We show that both methods produce similar results, although BayesR had reduced background signals, enabling clearer definition of QTL positions. For the three rust diseases, we found 5 (Lr), 14 (Yr), and 11 (Sr) SNPs significant in both methods above stringent false-discovery rate thresholds. Validation of marker-trait associations with known rust QTL from the literature and additional genotypic and phenotypic characterisation of biparental populations showed that the landraces harbour both previously mapped and potentially new genes for resistance to rust diseases. Our results demonstrate that pre-Green Revolution landraces provide a rich source of genes to increase genetic diversity for rust resistance to facilitate the development of wheat varieties with more durable rust resistance.

  20. Left atrial function after Cox's maze operation concomitant with mitral valve operation.

    PubMed

    Itoh, T; Okamoto, H; Nimi, T; Morita, S; Sawazaki, M; Ogawa, Y; Asakura, T; Yasuura, K; Abe, T; Murase, M

    1995-08-01

    This study examined whether the atrial fibrillation that commonly occurs in patients with a mitral valve operation could be eliminated by a concomitant maze operation. Left atrial function after Cox's maze operation performed concomitantly with a mitral valve operation was evaluated in 10 patients ranging in age from 38 to 67 years (mean age, 54 years). Seven patients who had had coronary artery bypass grafting served as the control group. Using transthoracic echocardiography, the ratio between the peak speed of the early filling wave and that of the atrial contraction wave (A/E ratio) and the atrial filling fraction (AFF) were determined from transmitral flow measurements. These two indices have been considered to represent the contribution of left atrial active contraction to ventricular filling. The A/E ratio and the AFF were significantly lower in the maze group (0.35 +/- 0.17 versus 0.97 +/- 0.28 [p < 0.01] and 17.6% +/- 8.8% versus 36.8% +/- 6.4% [p < 0.01], respectively). The A/E ratio and the AFF correlated inversely with age (r = -0.72, p < 0.05 and r = 0.76, p < 0.05, respectively) in the maze group. In an angiographic study, the mean left atrial maximal volume index in the maze group was approximately three times larger than that in the control group (117.5 +/- 24.3 mL/m2 versus 35.3 +/- 6.6 mL/m2 [p < 0.01]). The left atrial active emptying volume index was significantly smaller in patients in the maze group (7.2 +/- 2.5 mL/m2 versus 13.1 +/- 4.6 mL/m2 [p < 0.01]). After the maze procedure performed concomitantly with a mitral valve operation in patients with a dilated left atrium, left atrial contraction is detectable but incomplete in the elderly.

  1. Power loss and right ventricular efficiency in patients after tetralogy of Fallot repair with pulmonary insufficiency: clinical implications.

    PubMed

    Fogel, Mark A; Sundareswaran, Kartik S; de Zelicourt, Diane; Dasi, Lakshmi P; Pawlowski, Tom; Rome, Jack; Yoganathan, Ajit P

    2012-06-01

    To quantify right ventricular output power and efficiency and correlate these to ventricular function in patients with repaired tetralogy of Fallot. This might aid in determining the optimal timing for pulmonary valve replacement. We reviewed the cardiac catheterization and magnetic resonance imaging data of 13 patients with tetralogy of Fallot (age, 22 ± 17 years). Using pressure and flow measurements in the main pulmonary artery, cardiac output and regurgitation fraction, right ventricular (RV) power output, loss, and efficiency were calculated. The RV function was evaluated using cardiac magnetic resonance imaging. The RV systolic power was 1.08 ± 0.62 W, with 20.3% ± 8.6% power loss owing to 41% ± 14% pulmonary regurgitation (efficiency, 79.7% ± 8.6%; 0.84 ± 0.73 W), resulting in a net cardiac output of 4.24 ± 1.82 L/min. Power loss correlated significantly with the indexed RV end-diastolic and end-systolic volume (R = 0.78, P = .002 and R = 0.69, P = .009, respectively). The normalized RV power output had a significant negative correlation with RV end-diastolic and end-systolic volumes (both R = -0.87, P = .002 and R = -0.68, P = .023, respectively). A rapid decrease occurred in the RV power capacity with an increasing RV volume, with the curve flattening out at an indexed RV end-diastolic and end-systolic volume threshold of 139 mL/m(2) and 75 mL/m(2), respectively. Significant power loss is present in patients with repaired tetralogy of Fallot and pulmonary regurgitation. A rapid decrease in efficiency occurs with increasing RV volume, suggesting that pulmonary valve replacement should be done before the critical value of 139 mL/m(2) and 75 mL/m(2) for the RV end-diastolic and end-systolic volume, respectively, to preserve RV function. Copyright © 2012 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  2. Lung Deflation and Cardiovascular Structure and Function in Chronic Obstructive Pulmonary Disease. A Randomized Controlled Trial.

    PubMed

    Stone, Ian S; Barnes, Neil C; James, Wai-Yee; Midwinter, Dawn; Boubertakh, Redha; Follows, Richard; John, Leonette; Petersen, Steffen E

    2016-04-01

    Patients with chronic obstructive pulmonary disease develop increased cardiovascular morbidity with structural alterations. To investigate through a double-blind, placebo-controlled, crossover study the effect of lung deflation on cardiovascular structure and function using cardiac magnetic resonance. Forty-five hyperinflated patients with chronic obstructive pulmonary disease were randomized (1:1) to 7 (maximum 14) days inhaled corticosteroid/long-acting β2-agonist fluticasone furoate/vilanterol 100/25 μg or placebo (7-day minimum washout). Primary outcome was change from baseline in right ventricular end-diastolic volume index versus placebo. There was a 5.8 ml/m(2) (95% confidence interval, 2.74-8.91; P < 0.001) increase in change from baseline right ventricular end-diastolic volume index and a 429 ml (P < 0.001) reduction in residual volume with fluticasone furoate/vilanterol versus placebo. Left ventricular end-diastolic and left atrial end-systolic volumes increased by 3.63 ml/m(2) (P = 0.002) and 2.33 ml/m(2) (P = 0.002). In post hoc analysis, right ventricular stroke volume increased by 4.87 ml/m(2) (P = 0.003); right ventricular ejection fraction was unchanged. Left ventricular adaptation was similar; left atrial ejection fraction improved by +3.17% (P < 0.001). Intrinsic myocardial function was unchanged. Pulmonary artery pulsatility increased in two of three locations (main +2.9%, P = 0.001; left +2.67%, P = 0.030). Fluticasone furoate/vilanterol safety profile was similar to placebo. Pharmacologic treatment of chronic obstructive pulmonary disease has consistent beneficial and plausible effects on cardiac function and pulmonary vasculature that may contribute to favorable effects of inhaled therapies. Future studies should investigate the effect of prolonged lung deflation on intrinsic myocardial function. Clinical trial registered with www.clinicaltrials.gov (NCT 01691885).

  3. Lund-Mackay and modified Lund-Mackay score for sinus surgery in children with cystic fibrosis.

    PubMed

    Do, Bao Anh; Lands, Larry C; Mascarella, Marco A; Fanous, Amanda; Saint-Martin, Christine; Manoukian, John J; Nguyen, Lily H P

    2015-08-01

    Patients with cystic fibrosis (CF) frequently present with severe sinonasal disease often requiring radiologic imaging and surgical intervention. Few studies have focused on the relationship between radiologic scoring systems and the need for sinus surgery in this population. The objective of this study is to evaluate the Lund-Mackay (LM) and modified Lund-Mackay (m-LM) scoring systems in predicting the need for sinus surgery or revision surgery in patients with CF. We performed a retrospective chart review of CF patients undergoing computed tomography (CT) sinus imaging at a tertiary care pediatric hospital from 1995 to 2008. Patient scans were scored using both the LM and m-LM systems and compared to the rate of sinus surgery or revision surgery. Receiver operating characteristic (ROC) curves were used to analyze the radiological scoring systems. A total of 41 children with CF were included in the study. The mean LM score for patients undergoing surgery was 17.3 (±3.1) compared to 11.5 (±6.2) for those treated medically (p<0.01). For the m-LM, the mean score of patients undergoing surgery was 20.3 (±3.5) and 13.5 (±7.3) for those medically treated (p<0.01). Using a ROC curve with a threshold score of 13 for the LM, the sensitivity was 89.3% (95% CI: 72-98) and the specificity 69.2% (95% CI: 39-91). At an optimal score of 19, the m-LM system produced a sensitivity of 67.7% (95% CI: 48-84) and a specificity of 84.6% (95% CI: 55-98). The modified Lund-Mackay score provides high specificity, while the Lund-Mackay score provides high sensitivity, for identifying CF patients who required sinus surgery. The combination of both radiologic scoring systems can potentially predict the need for surgery in this population. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
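    To illustrate how a sensitivity/specificity pair at a fixed score threshold is obtained, here is a small sketch; the LM scores and surgery outcomes below are hypothetical, not the study data.

```python
import numpy as np

def sens_spec_at_threshold(scores, had_surgery, threshold):
    """Sensitivity and specificity of the rule 'score >= threshold' for predicting surgery."""
    scores = np.asarray(scores, float)
    had_surgery = np.asarray(had_surgery, bool)
    pred = scores >= threshold
    tp = np.sum(pred & had_surgery)
    fn = np.sum(~pred & had_surgery)
    tn = np.sum(~pred & ~had_surgery)
    fp = np.sum(pred & ~had_surgery)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical LM scores for 10 children (illustrative only).
lm_scores = [18, 20, 15, 12, 9, 17, 21, 14, 8, 19]
surgery = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
print(sens_spec_at_threshold(lm_scores, surgery, threshold=13))
```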

  4. Use of Simpson's method of disc to detect early echocardiographic changes in Doberman Pinschers with dilated cardiomyopathy.

    PubMed

    Wess, G; Mäurer, J; Simak, J; Hartmann, K

    2010-01-01

    M-mode is the echocardiographic gold standard to diagnose dilated cardiomyopathy (DCM) in dogs, whereas Simpson's method of discs (SMOD) is the preferred method to detect echocardiographic evidence of disease in humans. The objectives were to establish reference values for SMOD and to compare them with M-mode measurements. Nine hundred and sixty-nine examinations of 471 Doberman Pinschers were evaluated using a prospective longitudinal study design. Reference values for SMOD were established using 75 healthy Doberman Pinschers >8 years old with <50 ventricular premature contractions (VPCs) in 24 hours. The ability of the new SMOD cut-off values, normalized to body surface area (BSA), for left ventricular end-diastolic volume (LVEDV/BSA > 95 mL/m2) and end-systolic volume (LVESV/BSA > 55 mL/m2) to detect echocardiographic changes in Doberman Pinschers with DCM was compared with currently recommended M-mode values. Dogs with elevated SMOD values but normal M-mode measurements were followed up using a prospective longitudinal study design. At the final examination 175 dogs were diagnosed with DCM according to both methods (M-mode and SMOD). At previous examinations, M-mode values were abnormal in only 142 examinations, whereas SMOD had already detected changes in all 175. Additionally, 19 of 154 dogs with >100 VPCs/24 hours and normal M-mode values had abnormal SMOD measurements. Six dogs with increased SMOD measurements remained healthy at several follow-up examinations (classified as false positive); in 24 dogs with increased SMOD measurements, no follow-up examinations were available (classified as unclear). SMOD measurements are superior to M-mode for detecting early echocardiographic changes in Dobermans with occult DCM. Copyright © 2010 by the American College of Veterinary Internal Medicine.
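    For readers unfamiliar with the method of discs, the sketch below shows the basic monoplane Simpson's calculation, in which the ventricle is approximated by a stack of discs of equal height. The traced diameters, long-axis length, and body surface area are hypothetical; clinical software typically uses biplane tracings and semi-automated border detection.

```python
import numpy as np

def simpson_volume_monoplane(diameters_cm, long_axis_cm):
    """Monoplane Simpson's method of discs: the ventricle is sliced into n discs of
    equal height L/n; each disc contributes (pi/4) * d_i^2 * (L/n) to the volume."""
    d = np.asarray(diameters_cm, float)
    h = long_axis_cm / d.size
    return float(np.sum(np.pi / 4.0 * d**2 * h))   # volume in mL (cm^3)

# Hypothetical end-diastolic trace: 20 disc diameters along a 7-cm long axis.
diams = np.linspace(4.5, 0.5, 20)
edv = simpson_volume_monoplane(diams, long_axis_cm=7.0)
bsa = 0.9                                          # m^2, hypothetical body surface area
print(f"LVEDV = {edv:.0f} mL, LVEDV/BSA = {edv / bsa:.0f} mL/m2")
```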

  5. Prognostic Factors in Severe Chagasic Heart Failure

    PubMed Central

    Costa, Sandra de Araújo; Rassi, Salvador; Freitas, Elis Marra da Madeira; Gutierrez, Natália da Silva; Boaventura, Fabiana Miranda; Sampaio, Larissa Pereira da Costa; Silva, João Bastista Masson

    2017-01-01

    Background Prognostic factors are extensively studied in heart failure; however, their role in severe Chagasic heart failure have not been established. Objectives To identify the association of clinical and laboratory factors with the prognosis of severe Chagasic heart failure, as well as the association of these factors with mortality and survival in a 7.5-year follow-up. Methods 60 patients with severe Chagasic heart failure were evaluated regarding the following variables: age, blood pressure, ejection fraction, serum sodium, creatinine, 6-minute walk test, non-sustained ventricular tachycardia, QRS width, indexed left atrial volume, and functional class. Results 53 (88.3%) patients died during follow-up, and 7 (11.7%) remained alive. Cumulative overall survival probability was approximately 11%. Non-sustained ventricular tachycardia (HR = 2.11; 95% CI: 1.04 - 4.31; p<0.05) and indexed left atrial volume ≥ 72 mL/m2 (HR = 3.51; 95% CI: 1.63 - 7.52; p<0.05) were the only variables that remained as independent predictors of mortality. Conclusions The presence of non-sustained ventricular tachycardia on Holter and indexed left atrial volume > 72 mL/m2 are independent predictors of mortality in severe Chagasic heart failure, with cumulative survival probability of only 11% in 7.5 years. PMID:28443956

  6. Myocardial adaption to HI(R)T in previously untrained men with a randomized, longitudinal cardiac MR imaging study (Physical adaptions in Untrained on Strength and Heart trial, PUSH-trial).

    PubMed

    Scharf, Michael; Oezdemir, Derya; Schmid, Axel; Kemmler, Wolfgang; von Stengel, Simon; May, Matthias S; Uder, Michael; Lell, Michael M

    2017-01-01

    Although musculoskeletal effects of resistance training are well described, little is known about structural and functional cardiac adaption in formerly untrained subjects. We prospectively evaluated whether short-term high intensity (resistance) training (HI(R)T) induces detectable morphologic cardiac changes in previously untrained men in a randomized controlled magnetic resonance imaging (MRI) study. 80 untrained middle-aged men were randomly assigned to a HI(R)T-group (n = 40; 43.5±5.9 years) or an inactive control group (n = 40; 42.0±6.3 years). HI(R)T comprised 22 weeks of training focusing on a single-set-to-failure protocol in 2-3 sessions/week, each with 10-13 exercises addressing the main muscle groups. Repetitions were decreased from 8-10 to 3-5 during the study period. Before and after HI(R)T all subjects underwent physiologic examination and cardiac MRI (cine imaging, tagging). Indexed left (LV) and right ventricular (RV) volume (LV: 76.8±15.6 to 78.7±14.8 ml/m2; RV: 77.0±15.5 to 78.7±15.1 ml/m2) and mass (LV: 55.5±9.7 to 57.0±8.8 g/m2; RV: 14.6±3.0 to 15.0±2.9 g/m2) significantly increased with HI(R)T (all p<0.001). Mean LV and RV remodeling indices of the HI(R)T-group did not change with training (0.73 g/mL and 0.19 g/mL, respectively [p = 0.96 and p = 0.87]), indicating balanced cardiac adaption. Indexed LV (48.4±11.1 to 50.8±11.0 ml/m2) and RV (48.5±11.0 to 50.6±10.7 ml/m2) stroke volume significantly increased with HI(R)T (p<0.001). Myocardial strain and strain rates did not change following resistance exercise. Left atrial volume at end systole slightly increased after HI(R)T (36.2±7.9 to 37.0±8.4 ml/m2, p = 0.411), and the ratio to end-diastolic LV volume at baseline and post-training was unchanged (0.47 vs. 0.47, p = 0.79). Twenty-two weeks of HI(R)T led to measurable, physiological changes in cardiac atrial and ventricular morphologic characteristics and function in previously untrained men. The PUSH-trial is registered at the US National Institutes of Health (ClinicalTrials.gov), NCT01766791.

  7. Risk factors of chronic periodontitis on healing response: a multilevel modelling analysis.

    PubMed

    Song, J; Zhao, H; Pan, C; Li, C; Liu, J; Pan, Y

    2017-09-15

    Chronic periodontitis is a multifactorial polygenic disease, and an increasing number of associated factors have been identified over recent decades. Longitudinal epidemiologic studies have demonstrated that these risk factors are related to the progression of the disease. Traditional multivariate regression models have been used to find risk factors associated with chronic periodontitis; however, standard statistical procedures require that observations be independent. Multilevel modelling (MLM) has been widely used in recent years because it accommodates the hierarchical structure of the data, decomposes the error terms into different levels, and provides a new analytic framework for this problem. The purpose of our study was to investigate the relationship between clinical periodontal indices and risk factors in chronic periodontitis through MLM analysis and to identify high-risk individuals in the clinical setting. Fifty-four patients with moderate to severe periodontitis were included. They were treated by non-surgical periodontal therapy and then made regular follow-up visits at 3, 6, and 12 months after therapy. Each patient answered a questionnaire survey and underwent measurement of clinical periodontal parameters. Compared with baseline, probing depth (PD) and clinical attachment loss (CAL) improved significantly after non-surgical periodontal therapy with regular follow-up visits at 3, 6, and 12 months. Null and variance component models with no independent variables were first fitted to investigate the variance of the PD and CAL reductions across all three levels, and they showed a statistically significant difference (P < 0.001), establishing that MLM analysis was necessary. Site-level variables had effects on PD and CAL reduction and explained 77-78% of PD reduction and 70-80% of CAL reduction at 3, 6, and 12 months; the other levels explained only 20-30% of the PD and CAL reductions. The site level therefore had the greatest effect on PD and CAL reduction. Non-surgical periodontal therapy with regular follow-up visits had a remarkable curative effect. All three levels had a substantial influence on the reduction of PD and CAL, with the site level having the largest effect on PD and CAL reductions.
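
    The modelling strategy described (a null/variance-components model first, then level-specific predictors) can be sketched in any mixed-model package. Below is a minimal three-level null model using Python's statsmodels on simulated data, with sites nested in teeth nested in patients; the column names, hierarchy, and data are assumptions and not the study's own, and statsmodels is an assumed tool choice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: 54 patients x 6 teeth x 6 sites, PD reduction in mm.
rows = []
for patient in range(54):
    u_pat = rng.normal(0, 0.6)                       # patient-level random effect
    for tooth in range(6):
        u_tooth = rng.normal(0, 0.4)                 # tooth-level random effect
        for site in range(6):
            pd_red = 1.5 + u_pat + u_tooth + rng.normal(0, 1.0)  # site-level residual
            rows.append({"patient": patient, "tooth": f"{patient}_{tooth}",
                         "pd_reduction": pd_red})
df = pd.DataFrame(rows)

# Null (intercept-only) three-level model: random intercepts for patient and for
# tooth nested within patient; the residual variance represents the site level.
null_model = smf.mixedlm("pd_reduction ~ 1", data=df, groups="patient",
                         re_formula="1", vc_formula={"tooth": "0 + C(tooth)"})
fit = null_model.fit()

# Share of total variance attributable to each level (intraclass correlations).
var_patient = fit.cov_re.iloc[0, 0]
var_tooth = fit.vcomp[0]
var_site = fit.scale
total = var_patient + var_tooth + var_site
print({"patient": var_patient / total, "tooth": var_tooth / total,
       "site": var_site / total})
```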

  8. Transpulmonary thermodilution using femoral indicator injection: a prospective trial in patients with a femoral and a jugular central venous catheter

    PubMed Central

    2010-01-01

    Introduction Advanced hemodynamic monitoring using transpulmonary thermodilution (TPTD) is established for measurement of cardiac index (CI), global end-diastolic volume index (GEDVI) and extra-vascular lung water index (EVLWI). TPTD requires indicator injection via a central venous catheter (usually placed via the jugular or subclavian vein). However, superior vena cava access is often not feasible due to the clinical situation. This study investigates the conformity of TPTD using femoral access. Methods This prospective study involved an 18-month trial at a medical intensive care unit at a university hospital. Twenty-four patients with both a superior and an inferior vena cava catheter at the same time were enrolled in the study. Results TPTD variables were calculated from TPTD curves after injection of the indicator bolus via jugular access (TPTDjug) and femoral access (TPTDfem). GEDVIfem and GEDVIjug were significantly correlated (rm = 0.88; P < 0.001), but significantly different (1,034 ± 275 vs. 793 ± 180 mL/m2; P < 0.001). Bland-Altman analysis demonstrated a bias of +241 mL/m2 (limits of agreement: -9 and +491 mL/m2). GEDVIfem, CIfem and ideal body weight were independently associated with the bias (GEDVIfem-GEDVIjug). A correction formula for estimating GEDVIjug after femoral TPTD was calculated. EVLWIfem and EVLWIjug were significantly correlated (rm = 0.93; P < 0.001). Bland-Altman analysis revealed a bias of +0.83 mL/kg (limits of agreement: -2.61 and +4.28 mL/kg). Furthermore, CIfem and CIjug were significantly correlated (rm = 0.95; P < 0.001). Bland-Altman analysis demonstrated a bias of +0.29 L/min/m2 (limits of agreement -0.40 and +0.97 L/min/m2; percentage-error 16%). Conclusions TPTD after femoral injection of the thermo-bolus provides precise data on GEDVI with a high correlation, but with a significant bias related to the augmented TPTD volume. After correction of GEDVIfem using a correction formula, GEDVIfem shows high predictive capabilities for GEDVIjug. Regarding CI and EVLWI, accurate TPTD data are obtained using femoral access. PMID:20500825
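
    The Bland-Altman statistics reported above (bias and limits of agreement) follow directly from paired measurements; the snippet below is a minimal, generic implementation using made-up paired GEDVI values, not the trial data.

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement for paired measurements a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired GEDVI values (mL/m2): femoral vs jugular indicator injection.
gedvi_fem = np.array([980, 1100, 950, 1210, 1005, 890])
gedvi_jug = np.array([760,  850, 720,  930,  780, 690])
bias, (lo, hi) = bland_altman(gedvi_fem, gedvi_jug)
print(f"bias = {bias:+.0f} mL/m2, limits of agreement = ({lo:.0f}, {hi:.0f}) mL/m2")
```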

  9. Right heart overload contributes to cardiac natriuretic hormone elevation in patients with heart failure.

    PubMed

    Passino, Claudio; Maria Sironi, Anna; Favilli, Brunella; Poletti, Roberta; Prontera, Concetta; Ripoli, Andrea; Lombardi, Massimo; Emdin, Michele

    2005-09-15

    Plasma concentrations of atrial and brain natriuretic peptides (ANP and BNP) increase and hold prognostic significance in patients with left ventricular dysfunction. We assessed the hypothesis that right ventricular (RV) overload might significantly contribute to plasma elevation of cardiac natriuretic hormones in patients with heart failure. Forty-one patients with cardiomyopathy and depressed left ventricular (LV) function (ejection fraction, EF, <40%) underwent cardiac magnetic resonance imaging (MRI) and resting plasma determination of ANP and BNP. Nineteen healthy subjects were also studied as a control group. Ventricular volumes and function were assessed by MRI. In the group of patients, LVEF was 22.6+/-1.2% (controls: 61.2+/-1.3%, P<0.001, mean+/-S.E.M.), while RVEF was 48.2+/-2.5% (controls: 66.7+/-1.6%, P<0.001); LV and RV end diastolic/systolic volumes, corrected by body surface area, were 143+/-7/114+/-7 ml/m2 (controls 70+/-3/27+/-2 ml/m2, both P<0.001) and 66+/-3/37+/-4 ml/m2 (controls: 63+/-4/21+/-2 ml/m2, P<0.01 only for end-systolic volume). BNP plasma value was on average 324+/-39 pg/ml (range: 23-1280, controls 10+/-2 pg/ml), and ANP value was 144+/-17 pg/ml (range: 26-534, controls 15+/-1 pg/ml). BNP correlated positively with both end-diastolic and end-systolic RV volumes in patients, less strongly with LV end-systolic volume, and not with LV end-diastolic volume. Moreover, a significant negative correlation was observed between BNP and both LVEF and RVEF. Conversely, ANP showed a significant correlation only with end-systolic RV volume and with both RVEF and LVEF. When multivariate stepwise linear regression analysis was applied, LVEF was the only independent predictor of ANP plasma values (R=0.591, P<0.001), while LVEF and RV end-diastolic volume were independent predictors of BNP (R=0.881, P<0.001, and R=0.881, P=0.035, respectively). Right heart overload contributes independently to plasma elevation of natriuretic peptides. RV involvement, which is known to independently worsen prognosis in patients with cardiomyopathy, might contribute to their established prognostic power, inducing compensatory secretion of plasma cardiac natriuretic hormones.

  10. Mechanistic insights and characterization of sickle cell disease-associated cardiomyopathy.

    PubMed

    Desai, Ankit A; Patel, Amit R; Ahmad, Homaa; Groth, John V; Thiruvoipati, Thejasvi; Turner, Kristen; Yodwut, Chattanong; Czobor, Peter; Artz, Nicole; Machado, Roberto F; Garcia, Joe G N; Lang, Roberto M

    2014-05-01

    Cardiovascular disease is an important cause of morbidity and mortality in sickle cell disease (SCD). We sought to characterize sickle cell cardiomyopathy using multimodality noninvasive cardiovascular testing and identify potential causative mechanisms. Stable adults with SCD (n=38) and healthy controls (n=13) prospectively underwent same day multiparametric cardiovascular magnetic resonance (cine, T2* iron, vasodilator first pass myocardial perfusion, and late gadolinium enhancement imaging), transthoracic echocardiography, and applanation tonometry. Compared with controls, patients with SCD had severe dilation of the left ventricle (124±27 vs 79±12 mL/m(2)), right ventricle (127±28 vs 83±14 mL/m(2)), left atrium (65±16 vs 41±9 mL/m(2)), and right atrium (78±17 vs 56±17 mL/m(2); P<0.01 for all). Patients with SCD also had a 21% lower myocardial perfusion reserve index than control subjects (1.47±0.34 vs 1.87±0.37; P=0.034). A significant subset of patients with SCD (25%) had evidence of late gadolinium enhancement, whereas only 1 patient had evidence of myocardial iron overload. Diastolic dysfunction was present in 26% of patients with SCD compared with 8% in controls. Estimated filling pressures (E/e', 9.3±2.7 vs 7.3±2.0; P=0.0288) were higher in patients with SCD. Left ventricular dilation and the presence of late gadolinium enhancement were inversely correlated to hepatic T2* times (ie, hepatic iron overload because of frequent blood transfusions; P<0.05 for both), whereas diastolic dysfunction and increased filling pressures were correlated to aortic stiffness (augmentation pressure and index, P<0.05 for all). Sickle cell cardiomyopathy is characterized by 4-chamber dilation and in some patients myocardial fibrosis, abnormal perfusion reserve, diastolic dysfunction, and only rarely myocardial iron overload. Left ventricular dilation and myocardial fibrosis are associated with increased blood transfusion requirements, whereas left ventricular diastolic dysfunction is predominantly correlated with increased aortic stiffness. http://www.clinicaltrials.gov. Unique identifier: NCT01044901. © 2014 American Heart Association, Inc.

  11. Treatment of Heart Failure With Associated Functional Mitral Regurgitation Using the ARTO System: Initial Results of the First-in-Human MAVERIC Trial (Mitral Valve Repair Clinical Trial).

    PubMed

    Rogers, Jason H; Thomas, Martyn; Morice, Marie-Claude; Narbute, Inga; Zabunova, Milana; Hovasse, Thomas; Poupineau, Mathieu; Rudzitis, Ainars; Kamzola, Ginta; Zvaigzne, Ligita; Greene, Samantha; Erglis, Andrejs

    2015-07-01

    MAVERIC (Mitral Valve Repair Clinical Trial) reports the safety and efficacy of the ARTO system in patients with symptomatic heart failure and functional mitral regurgitation (FMR). The ARTO system percutaneously modifies the mitral annulus to improve leaflet coaptation in FMR. The MAVERIC trial is a prospective, nonrandomized first-in-human study. Key inclusion criteria were systolic heart failure in New York Heart Association functional classes II to IV, FMR grade ≥2+, left ventricular (LV) ejection fraction ≤40%, and LV end-diastolic diameter >50 mm and ≤75 mm. Exclusion criteria were clinical variables that precluded feasibility of the ARTO procedure. Primary outcomes were safety (30-day major adverse events) and efficacy (MR reduction, LV volumes, and functional status). Eleven patients received the ARTO system, and there were no procedural adverse events. From baseline to 30 days, there were meaningful improvements. Effective regurgitant orifice area decreased from 30.3 ± 11.1 mm(2) to 13.5 ± 7.1 mm(2) and regurgitant volumes from 45.4 ± 15.0 ml to 19.5 ± 10.2 ml. LV end-systolic volume index improved from 77.5 ± 24.3 ml/m(2) to 68.5 ± 21.4 ml/m(2), and LV end-diastolic volume index from 118.7 ± 28.6 ml/m(2) to 103.9 ± 21.2 ml/m(2). Mitral annular anteroposterior diameter decreased from 45.0 ± 3.3 mm to 38.7 ± 3.0 mm. Functional status improved: 81.8% of patients were in New York Heart Association functional class III/IV at baseline, whereas 54.6% were in functional class I/II at 30 days. At 30 days, there were 2 adverse events: 1 pericardial effusion requiring surgical drainage and 1 asymptomatic device dislodgement. The ARTO system is a novel transcatheter device that can be used safely with meaningful efficacy in the treatment of FMR. (Mitral Valve Repair Clinical Trial [MAVERIC]; NCT02302872). Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  12. Successful Feasibility Human Trial of a New Self-Expandable Percutaneous Pulmonary Valve (Pulsta Valve) Implantation Using Knitted Nitinol Wire Backbone and Trileaflet α-Gal-Free Porcine Pericardial Valve in the Native Right Ventricular Outflow Tract.

    PubMed

    Kim, Gi Beom; Song, Mi Kyoung; Bae, Eun Jung; Park, Eun-Ah; Lee, Whal; Lim, Hong-Gook; Kim, Yong Jin

    2018-06-01

    Self-expandable percutaneous pulmonary valve implantation (PPVI) for native right ventricular outflow tract lesions is still in the clinical trial phase. The aim of this study is to present the results of a feasibility study of a novel self-expandable knitted nitinol wire stent mounted with a treated trileaflet α-Gal-free porcine pericardial valve for PPVI. A feasibility study using the Pulsta valve (TaeWoong Medical Co, Gyeonggi-do, South Korea) was designed for patients with severe pulmonary regurgitation in the native right ventricular outflow tract, and 6-month follow-up outcomes were reviewed. Ten tetralogy of Fallot patients were enrolled. Before PPVI, severe pulmonary regurgitation (mean pulmonary regurgitation fraction, 45.5%±7.2%; range, 34.9%-56%) and enlarged right ventricular volume (mean indexed right ventricular end-diastolic volume, 176.7±14.3 mL/m2; range, 158.9-205.9 mL/m2) were present. The median age at PPVI was 21.7±6.5 years (range, 13-36 years). Five patients were successfully implanted with 28 mm and the other 5 with 26 mm valves loaded on the 18F delivery cable. No significant periprocedural complications were noted in any patient. At the 6-month follow-up, indexed right ventricular end-diastolic volume was dramatically decreased to 126.3±20.3 mL/m2 (range, 99-164.2 mL/m2), and the mean value of peak instantaneous pressure gradient between the right ventricle and the pulmonary artery decreased from 6.8±3.5 mm Hg (range, 2-12 mm Hg) before PPVI to 5.7±6.7 mm Hg (range, 2-12 mm Hg) without significant pulmonary regurgitation. There was no adverse event associated with the valve. A feasibility study of the Pulsta valve for native right ventricular outflow tract lesions was completed successfully with planned Pulsta valve implantation and demonstrated good short-term effectiveness without serious adverse events. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02555319. © 2018 American Heart Association, Inc.

  13. Association mapping of iron deficiency chlorosis loci in soybean (Glycine max L. Merr.) advanced breeding lines.

    PubMed

    Wang, Ju; McClean, Phillip E; Lee, Rian; Goos, R Jay; Helms, Ted

    2008-04-01

    Association mapping is an alternative to mapping in a biparental population. A key to successful association mapping is to avoid spurious associations by controlling for population structure. Confirming the marker/trait association in an independent population is necessary for the implementation of the marker in other genetic studies. Two independent soybean populations consisting of advanced breeding lines representing the diversity within maturity groups 00, 0, and I were screened in multi-site, replicated field trials to discover molecular markers associated with iron deficiency chlorosis (IDC), a major yield-limiting factor in soybean. Lines with extreme phenotypes were initially screened to identify simple sequence repeat (SSR) markers putatively associated with the IDC. Marker data collected from all lines were used to control for population structure and kinship relationships. Single factor analysis of variance (SFA) and mixed linear model (MLM) analyses were used to discover marker/trait associations. The MLM analyses, which include population structure, kinship or both factors, reduced the number of markers significantly associated with IDC by 50% compared with SFA. With the MLM approach, three markers were found to be associated with IDC in the first population. Two of these markers, Satt114 and Satt239, were also found to be associated with IDC in the second confirmation population. For both populations, those lines with the tolerance allele at both these two marker loci had significantly lower IDC scores than lines with one or no tolerant alleles.
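
    As a rough sketch of how a marker/trait association can be adjusted for structure and relatedness, the code below tests one marker with principal components of the marker matrix as fixed-effect covariates (population structure) and a random intercept for subpopulation as a crude stand-in for kinship, using statsmodels on simulated data. The study's actual analysis used a full Q + K mixed linear model, which this simplification only approximates; all variable names and data here are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

n_lines, n_markers = 200, 50
subpop = rng.integers(0, 4, size=n_lines)                # hidden population structure
markers = (rng.random((n_lines, n_markers)) < 0.3 + 0.1 * subpop[:, None]).astype(float)
idc_score = 2.0 + 0.4 * subpop + 0.8 * markers[:, 0] + rng.normal(0, 1, n_lines)

# Fixed-effect covariates for structure: leading principal components of the marker matrix.
centered = markers - markers.mean(axis=0)
pcs = np.linalg.svd(centered, full_matrices=False)[0][:, :3]

df = pd.DataFrame({"idc": idc_score, "marker": markers[:, 0], "subpop": subpop,
                   "pc1": pcs[:, 0], "pc2": pcs[:, 1], "pc3": pcs[:, 2]})

# Mixed linear model: marker effect adjusted for structure (PCs) with a
# subpopulation random intercept standing in for kinship.
fit = smf.mixedlm("idc ~ marker + pc1 + pc2 + pc3", data=df, groups="subpop").fit()
print("marker effect:", fit.params["marker"], " p-value:", fit.pvalues["marker"])
```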

  14. Recoveries of rat lymph FA after administration of specific structured 13C-TAG.

    PubMed

    Vistisen, Bodil; Mu, Huiling; Høy, Carl-Erik

    2003-09-01

    The potential of the specific structured TAG MLM [where M = caprylic acid (8:0) and L = linoleic acid (18:2n-6)] is the simultaneous delivery of energy and EFA. Compared with long-chain TAG (LLL), they may be more rapidly hydrolyzed and absorbed. This study examined the lymphatic recoveries of intragastrically administered L*L*L*, M*M*M*, ML*M, and ML*L* (where * = 13C-labeled FA) in rats. Lymph lipids were separated into lipid classes and analyzed by GC combustion isotope ratio MS. The recoveries of lymph TAG 18:2n-6 8 h after administration of L*L*L*, ML*M, and ML*L* were 38.6, 48.4, and 49.1%, respectively, whereas after 24 h the recoveries were approximately 50% in all experimental groups. The exogenous contribution to lymph TAG 18:2n-6 was approximately 80 and 60% at maximum absorption of the specific structured TAG and L*L*L*, respectively, 3-6 h after administration. The tendency toward more rapid recovery of exogenous long-chain FA following administration of specific structured TAG compared with long-chain TAG was probably due to fast hydrolysis. The lymphatic recovery of 8:0 was 2.2% 24 h after administration of M*M*M*. This minor lymphatic recovery of exogenous 8:0 was probably due to low stimulation of chylomicron formation. These results demonstrate tendencies toward faster lymphatic recovery of long-chain FA after administration of specific structured TAG compared with long-chain TAG.

  15. Factor structure and psychometric properties of the trier inventory for chronic stress (TICS) in a representative german sample

    PubMed Central

    2012-01-01

    Background Chronic stress results from an imbalance of personal traits, resources and the demands placed upon an individual by social and occupational situations. This chronic stress can be measured using the Trier Inventory for Chronic Stress (TICS). Aims of the present study are to test the factorial structure of the TICS, report its psychometric properties, and evaluate the influence of gender and age on chronic stress. Methods The TICS was answered by N = 2,339 healthy participants aged 14 to 99. The sample was selected by random-route sampling. Exploratory factor analyses with Oblimin-rotated Principal Axis extraction were calculated. Confirmatory factor analyses applying Robust Maximum Likelihood estimations (MLM) tested model fit and configural invariance as well as the measurement invariance for gender and age. Reliability estimations and effect sizes are reported. Results In the exploratory factor analyses, both a two-factor and a nine-factor model emerged. Confirmatory factor analyses resulted in acceptable model fit (RMSEA), with model comparison fit statistics corroborating the superiority of the nine-factor model. Most factors were moderately to highly intercorrelated. Reliabilities were good to very good. Measurement invariance tests gave evidence for differential effects of gender and age on the factor structure. Furthermore, women and younger individuals, especially those aged 35 to 44, tended to report more chronic stress than men and older individuals. Conclusions The proposed nine-factor structure could be factorially validated, results in good scale reliability, and heuristically can be grouped by two higher-order factors: "High Demands" and "Lack of Satisfaction". Age and gender represent differentiable and meaningful contributors to the perception of chronic stress. PMID:22463771
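
    To illustrate the exploratory step described above (principal-axis extraction with Oblimin rotation), the snippet below uses the third-party factor_analyzer package on simulated item responses. The package choice, the item count, and the data are assumptions, and the confirmatory step with robust maximum likelihood (MLM) estimation is not reproduced here.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(2)

# Simulated responses: 500 respondents, 30 items loading on a few latent stress factors.
latent = rng.normal(size=(500, 3))
loadings = rng.uniform(0.4, 0.8, size=(3, 30))
items = latent @ loadings + rng.normal(0, 1, size=(500, 30))
df = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(30)])

# Exploratory factor analysis: principal-axis extraction with Oblimin rotation.
efa = FactorAnalyzer(n_factors=3, method="principal", rotation="oblimin")
efa.fit(df)
print(pd.DataFrame(efa.loadings_, index=df.columns).round(2))
print("proportion of variance per factor:", efa.get_factor_variance()[1].round(3))
```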

  16. Additional mechanism for left ventricular dysfunction: chronic pulmonary regurgitation decreases left ventricular preload in patients with tetralogy of Fallot.

    PubMed

    Ylitalo, Pekka; Jokinen, Eero; Lauerma, Kirsi; Holmström, Miia; Pitkänen-Argillander, Olli M

    2018-02-01

    Right ventricular dysfunction in patients with tetralogy of Fallot and significant pulmonary regurgitation may lead to systolic dysfunction of the left ventricle due to altered ventricular interaction. We were interested in determining whether chronic pulmonary regurgitation affects the preload of the left ventricle. In addition, we wanted to study whether severe chronic pulmonary regurgitation would alter the preload of the left ventricle when compared with patients having preserved pulmonary valve annulus. The study group comprised 38 patients with tetralogy of Fallot who underwent surgical repair between 1990 and 2003. Transannular patching was required in 21 patients to reconstruct the right ventricular outflow tract. Altogether, 48 age- and gender-matched healthy volunteers were recruited. Cardiac MRI was performed on all study patients to assess the atrial and ventricular volumes and function. Severe pulmonary regurgitation (>30 ml/m2) was present in 13 patients, of whom 11 had a transannular patch, but only two had a preserved pulmonary valve annulus. The ventricular preload volumes from both atria were significantly reduced in patients with severe pulmonary regurgitation, and left ventricular stroke volumes (44.1±4.7 versus 58.9±10.7 ml/m2; p<0.0001) were smaller compared with that in patients with pulmonary regurgitation <30 ml/m2 or in controls. In patients with tetralogy of Fallot, severe pulmonary regurgitation has a significant effect on volume flow through the left atrium. Reduction in left ventricular preload volume may be an additional factor contributing to left ventricular dysfunction.

  17. Three-level multilevel growth models for nested change data: a guide for group treatment researchers.

    PubMed

    Tasca, Giorgio A; Illing, Vanessa; Joyce, Anthony S; Ogrodniczuk, John S

    2009-07-01

    Researchers have known for years about the negative impact on Type I error rates caused by dependencies in hierarchically nested and longitudinal data. Despite this, group treatment researchers do not consistently use methods such as multilevel models (MLMs) to assess dependence and appropriately analyse their nested data. The goals of this study are to review some of the study design issues with regard to hierarchically nested and longitudinal data, discuss MLMs for assessing and handling dependence in data, and present a guide for developing a three-level growth MLM that is appropriate for group treatment data, design, and research questions. The authors present an example from group treatment research to illustrate these issues and methods.
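
    A minimal version of the three-level growth model this guide describes (repeated measurements nested in patients nested in therapy groups) can be sketched with statsmodels, as below. The variable names and simulated outcome are assumptions intended only to show where the growth (time) term and the two clustering levels enter; a full growth model would typically also include random slopes for time, which are omitted here for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

rows = []
for group in range(12):                       # therapy groups (level 3)
    u_group = rng.normal(0, 0.5)
    for patient in range(8):                  # patients within group (level 2)
        u_patient = rng.normal(0, 0.8)
        for session in range(6):              # repeated measurements (level 1)
            outcome = 10 + u_group + u_patient - 0.6 * session + rng.normal(0, 1)
            rows.append({"group": group, "patient": f"{group}_{patient}",
                         "session": session, "outcome": outcome})
df = pd.DataFrame(rows)

# Three-level growth model: fixed linear effect of session, random intercepts
# for therapy group and for patient nested within group.
model = smf.mixedlm("outcome ~ session", data=df, groups="group",
                    re_formula="1", vc_formula={"patient": "0 + C(patient)"})
print(model.fit().summary())
```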

  18. Evolution of Echocardiographic Measures of Cardiac Disease From CKD to ESRD and Risk of All-Cause Mortality: Findings From the CRIC Study.

    PubMed

    Bansal, Nisha; Roy, Jason; Chen, Hsiang-Yu; Deo, Rajat; Dobre, Mirela; Fischer, Michael J; Foster, Elyse; Go, Alan S; He, Jiang; Keane, Martin G; Kusek, John W; Mohler, Emile; Navaneethan, Sankar D; Rahman, Mahboob; Hsu, Chi-Yuan

    2018-05-18

    Abnormal cardiac structure and function are common in chronic kidney disease (CKD) and end-stage renal disease (ESRD) and linked with mortality and heart failure. We examined changes in echocardiographic measures during the transition from CKD to ESRD and their associations with post-ESRD mortality. Prospective study. We studied 417 participants with CKD in the Chronic Renal Insufficiency Cohort (CRIC) who had research echocardiograms during CKD and ESRD. We measured change in left ventricular mass index, left ventricular ejection fraction (LVEF), diastolic relaxation (normal, mildly abnormal, and moderately/severely abnormal), left ventricular end-systolic (LVESV), end-diastolic (LVEDV) volume, and left atrial volume from CKD to ESRD. All-cause mortality after dialysis therapy initiation. Cox proportional hazard models were used to test the association of change in each echocardiographic measure with postdialysis mortality. Over a mean of 2.9 years between pre- and postdialysis echocardiograms, there was worsening of mean LVEF (52.5% to 48.6%; P<0.001) and LVESV (18.6 to 20.2 mL/m2.7; P<0.001). During this time, there was improvement in left ventricular mass index (60.4 to 58.4 g/m2.7; P=0.005) and diastolic relaxation (11.11% to 4.94% with moderately/severely abnormal; P=0.02). Changes in left atrial volume (4.09 to 4.15 mL/m2; P=0.08) or LVEDV (38.6 to 38.4 mL/m2.7; P=0.8) were not significant. Worsening from CKD to ESRD of LVEF (adjusted HR for every 1% decline in LVEF, 1.03; 95% CI, 1.00-1.06) and LVESV (adjusted HR for every 1 mL/m2.7 increase, 1.04; 95% CI, 1.02-1.07) were independently associated with greater risk for postdialysis mortality. Some missing or technically inadequate echocardiograms. In a longitudinal study of patients with CKD who subsequently initiated dialysis therapy, LVEF and LVESV worsened and were significantly associated with greater risk for postdialysis mortality. There may be opportunities for intervention during this transition period to improve outcomes. Copyright © 2018 National Kidney Foundation, Inc. All rights reserved.
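
    The survival analysis described (Cox proportional hazards models relating change in each echocardiographic measure to post-dialysis mortality) can be sketched with the lifelines package; the data below are simulated and the column names are placeholders, so this is only a generic illustration of the modelling step.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 417

df = pd.DataFrame({
    "delta_lvef": rng.normal(-4, 6, n),      # change in LVEF (%), CKD to ESRD
    "delta_lvesv": rng.normal(1.6, 3, n),    # change in LVESV (mL/m2.7)
    "age": rng.normal(60, 10, n),
})
# Simulated follow-up: higher hazard with falling LVEF and rising LVESV.
risk = np.exp(-0.03 * df["delta_lvef"] + 0.04 * df["delta_lvesv"])
event_time = rng.exponential(5 / risk)
censor_time = rng.exponential(6, n)
df["years"] = np.minimum(event_time, censor_time)
df["died"] = (event_time <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()   # hazard ratios per unit change in each covariate
```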

  19. Physiological response during activity programs using Wii-based video games in patients with cystic fibrosis (CF).

    PubMed

    del Corral, Tamara; Percegona, Janaína; Seborga, Melisa; Rabinovich, Roberto A; Vilaró, Jordi

    2014-12-01

    Patients with cystic fibrosis (CF) are characterized by an abnormal ventilation response that limits exercise capacity. Exercise training increases exercise capacity, decreases dyspnea and improves health-related quality of life in CF. Adherence to pulmonary rehabilitation programs is a key factor in guaranteeing optimal benefits and a difficult goal in this population. The aim of this study was to determine the physiological response during three Nintendo Wii™ video game activities (VGA) that are candidates for use as training modalities in patients with CF. 24 CF patients (age 12.6±3.7 years; BMI 18.8±2.9 kg m(-2); FEV1 93.8±18.8% pred) were included. All participants performed, on two separate days, 3 different VGA: 1) Wii Fit Plus (Wii-Fit); 2) Wii Active (Wii-Acti), and 3) Wii Family Trainer (Wii-Train), in random order, for 5 min each. The obtained results were compared with the 6-min walk test (6MWT). The physiological variables [oxygen uptake (VO2), minute ventilation (VE), and heart rate (HR)] were recorded using a portable metabolic analyzer. During all VGA and the 6MWT, VO2 reached a plateau from the 3rd min. Compared with the 6MWT (1024.2±282.2 mL min(-1)), Wii-Acti (1232.2±427.2 mL min(-1)) and Wii-Train (1252.6±360.2 mL min(-1)) reached higher VO2 levels during the last 3 min (p<0.0001 in both cases), while Wii-Fit (553.8±113.2 mL min(-1)) reached significantly lower levels of VO2 (p<0.001). Similar effects were seen for minute ventilation (VE). No differences in dyspnea and oxygen saturation were seen between the different modalities. All patients were compliant with all three Wii™ modalities. Active video games are well tolerated by patients with CF. All the modalities evaluated imposed a constant load but were associated with different physiological responses reflecting the different intensities imposed. Wii-Acti and Wii-Train impose a high metabolic demand, comparable to the 6MWT. Further research is needed to evaluate the effects of VGA as a training program to increase exercise capacity for CF patients. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  20. Aortic elasticity and left ventricular function after arterial switch operation: MR imaging--initial experience.

    PubMed

    Grotenhuis, Heynric B; Ottenkamp, Jaap; Fontein, Duveken; Vliegen, Hubert W; Westenberg, Jos J M; Kroft, Lucia J M; de Roos, Albert

    2008-12-01

    To prospectively assess aortic dimensions, aortic elasticity, aortic valve competence, and left ventricular (LV) systolic function in patients after the arterial switch operation (ASO) by using magnetic resonance (MR) imaging. Informed consent was obtained from all participants for this local ethics committee-approved study. Fifteen patients (11 male patients, four female patients; mean age, 16 years +/- 4 [standard deviation]; imaging performed 16.1 years after surgery +/- 3.7) and 15 age- and sex-matched control subjects (11 male subjects, four female subjects; mean age, 16 years +/- 4) were evaluated. Velocity-encoded MR imaging was used to assess aortic pulse wave velocity (PWV), and a balanced turbo-field-echo sequence was used to assess aortic root distensibility. Standard velocity-encoded and multisection-multiphase imaging sequences were used to assess aortic valve function, systolic LV function, and LV mass. The two-tailed Mann-Whitney U test and Spearman rank correlation coefficient were used for statistical analysis. Patients treated with the ASO showed aortic root dilatation at three predefined levels (mean difference, 5.7-9.4 mm; P < or = .007) and reduced aortic elasticity (PWV of aortic arch, 5.1 m/sec +/- 1.2 vs 3.9 m/sec +/- 0.7, P = .004; aortic root distensibility, [2.2 x 10(-3)] x mm Hg(-1) +/- 1.8 vs [4.9 x 10(-3)] x mm Hg(-1) +/- 2.9, P < .01) compared with control subjects. Minor degrees of aortic regurgitation (AR) were present (AR fraction, 5% +/- 3 in patients vs 1% +/- 1 in control subjects; P < .001). Patients had impaired systolic LV function (LV ejection fraction [LVEF], 51% +/- 6 vs 58% +/- 5 in control subjects; P = .003), in addition to enlarged LV dimensions (end-diastolic volume [EDV], 112 mL/m(2) +/- 13 vs 95 mL/m(2) +/- 16, P = .007; end-systolic volume [ESV], 54 mL/m(2) +/- 11 vs 39 mL/m(2) +/- 7, P < .001). Degree of AR predicted decreased LVEF (r = 0.41, P = .026) and was correlated with increased LV dimensions (LV EDV: r = 0.48, P = .008; LV ESV: r = 0.67, P < .001). Aortic root dilatation and reduced elasticity of the proximal aorta are frequently observed in patients who have undergone the ASO, in addition to minor degrees of AR, reduced LV systolic function, and increased LV dimensions. RSNA, 2008

  1. Potential use of avocado oil on structured lipids MLM-type production catalysed by commercial immobilised lipases.

    PubMed

    Caballero, Eduardo; Soto, Carmen; Olivares, Araceli; Altamirano, Claudia

    2014-01-01

    Structured lipids (SL) are generally constituents of functional foods. Growing demand for SL is based on a fuller understanding of nutritional requirements, lipid metabolism, and improved methods to produce them. Specifically, this work aimed to add value to avocado oil by producing dietary triacylglycerols (TAG) containing medium-chain fatty acids (M) at positions sn-1,3 and long-chain fatty acids (L) at position sn-2. These MLM-type structured lipids (SL) were produced by interesterification of caprylic acid (CA) (C8:0) and avocado oil (content of C18:1). The regiospecific sn-1,3 commercial lipases Lipozyme RM IM and TL IM were used as biocatalysts to probe the potential of avocado oil to produce SL. Reactions were performed at 30-50°C for 24 h in solvent-free media with a substrate molar ratio of 1∶2 (TAG:CA) and 4-10% w/w enzyme content. The lowest incorporation of CA (1.1% mol) resulted from Lipozyme RM IM that was incubated at 50°C. The maximum incorporation of CA into sn-1,3 positions of TAG was 29.2% mol. This result was obtained at 30°C with 10% w/w Lipozyme TL IM, which is the highest value obtained to date in solvent-free medium for low-calorie structured lipids. This strategy opens a new market for value-added products based on avocado oil.

  2. Potential Use of Avocado Oil on Structured Lipids MLM-Type Production Catalysed by Commercial Immobilised Lipases

    PubMed Central

    Caballero, Eduardo; Soto, Carmen; Olivares, Araceli; Altamirano, Claudia

    2014-01-01

    Structured lipids (SL) are generally constituents of functional foods. Growing demand for SL is based on a fuller understanding of nutritional requirements, lipid metabolism, and improved methods to produce them. Specifically, this work aimed to add value to avocado oil by producing dietary triacylglycerols (TAG) containing medium-chain fatty acids (M) at positions sn-1,3 and long-chain fatty acids (L) at position sn-2. These MLM-type structured lipids (SL) were produced by interesterification of caprylic acid (CA) (C8:0) and avocado oil (content of C18:1). The regiospecific sn-1,3 commercial lipases Lipozyme RM IM and TL IM were used as biocatalysts to probe the potential of avocado oil to produce SL. Reactions were performed at 30–50°C for 24 h in solvent-free media with a substrate molar ratio of 1∶2 (TAG:CA) and 4–10% w/w enzyme content. The lowest incorporation of CA (1.1% mol) resulted from Lipozyme RM IM that was incubated at 50°C. The maximum incorporation of CA into sn-1,3 positions of TAG was 29.2% mol. This result was obtained at 30°C with 10% w/w Lipozyme TL IM, which is the highest value obtained to date in solvent-free medium for low-calorie structured lipids. This strategy opens a new market for value-added products based on avocado oil. PMID:25248107

  3. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficient are homogenised over several fibres, but the scattering phase function of a single one is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model concurs with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
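
    As a much-simplified illustration of the backward Monte Carlo idea (not the authors' model: no Mie phase function, no Drude-Lorenz optics, and isotropic scattering in a homogeneous slab), the sketch below estimates the directional emittance of an isothermal slab by tracing rays backward from the detector and tallying the fraction absorbed, which by Kirchhoff's law equals the emittance. All coefficients are made up.

```python
import numpy as np

def backward_mc_emittance(kappa, sigma_s, thickness, mu0, n_rays=50_000, seed=0):
    """Directional emittance of a homogeneous isothermal slab (isotropic scattering).

    kappa, sigma_s : absorption and scattering coefficients (1/m)
    thickness      : slab thickness (m)
    mu0            : cosine of the viewing angle (rays traced backward into the slab)
    """
    rng = np.random.default_rng(seed)
    beta = kappa + sigma_s                    # extinction coefficient
    albedo = sigma_s / beta
    absorbed = 0
    for _ in range(n_rays):
        z, mu = 0.0, mu0                      # start at the viewed face, heading inward
        while True:
            z += mu * rng.exponential(1.0 / beta)    # sample free path along the ray
            if z < 0.0 or z > thickness:             # ray escaped the slab
                break
            if rng.random() > albedo:                # collision resolves as absorption
                absorbed += 1                        # backward ray absorbed -> emission point
                break
            mu = rng.uniform(-1.0, 1.0)              # isotropic scattering: new direction cosine
    return absorbed / n_rays

# Example: a moderately thick fibrous layer viewed at normal incidence.
print(backward_mc_emittance(kappa=300.0, sigma_s=700.0, thickness=5e-3, mu0=1.0))
```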

  4. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 1. MODEL FORMULATION

    EPA Science Inventory

    A multilayer biochemical dry deposition model has been developed based on the NOAA Multilayer Model (MLM) to study gaseous exchanges between the soil, plants, and the atmosphere. Most of the parameterizations and submodels have been updated or replaced. The numerical integration ...

  5. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions that allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  6. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions that allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.
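
    A bare-bones numerical version of the maximum likelihood problem discussed in these two records (a common mean, a between-method variance, and method-specific within-method variances) can be written with scipy. This is a generic sketch on invented data, not the Groebner-basis analysis of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Invented measurements of the same quantity by three methods with unequal precision.
data = [np.array([10.1, 10.3, 9.9, 10.2]),          # method 1
        np.array([10.8, 11.1, 10.6]),                # method 2
        np.array([9.6, 9.8, 9.5, 9.7, 9.9])]         # method 3

def neg_log_likelihood(theta):
    """Random effects model: x_ij ~ N(mu + b_i, s_i^2), b_i ~ N(0, tau^2)."""
    mu, log_tau2 = theta[0], theta[1]
    tau2 = np.exp(log_tau2)
    nll = 0.0
    for x, log_s2 in zip(data, theta[2:]):
        s2, n = np.exp(log_s2), len(x)
        ss = np.sum((x - x.mean()) ** 2)
        var_mean = s2 + n * tau2              # eigenvalue along the within-method mean
        nll += 0.5 * (n * np.log(2 * np.pi) + (n - 1) * np.log(s2) + np.log(var_mean)
                      + ss / s2 + n * (x.mean() - mu) ** 2 / var_mean)
    return nll

start = np.concatenate(([np.mean(np.concatenate(data)), 0.0], np.zeros(len(data))))
res = minimize(neg_log_likelihood, start, method="Nelder-Mead")
print("mu =", round(res.x[0], 3),
      " between-method variance =", round(np.exp(res.x[1]), 4),
      " within-method variances =", np.exp(res.x[2:]).round(4))
```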

  7. Ia diastolic dysfunction: an echocardiographic grade.

    PubMed

    Pandit, Anil; Mookadam, Farouk; Hakim, Fayaz A; Mulroy, Eoin; Saadiq, Rayya; Doherty, Mairead; Cha, Stephen; Seward, James; Wilansky, Susan

    2015-01-01

    To demonstrate that a distinct group of patients with Grade Ia diastolic dysfunction who do not conform to present ASE/ESE diastolic grading exists. Echocardiographic and demographic data of patients with Grade Ia diastolic dysfunction were extracted and compared with those of Grades I and II in 515 patients. The mean age of the cohort was 75 ± 9 years, and body mass index did not differ significantly between the 3 groups (P = 0.45). Measurements of left atrial volume index (28.58 ± 7 mL/m(2) in I, 33 ± 10 mL/m(2) in Ia, and 39 ± 12 mL/m(2) in II; P < 0.001), isovolumic relaxation time (IVRT) (100 ± 17 msec in I, 103 ± 21 msec in Ia, and 79 ± 15 msec in II; P < 0.001), deceleration time (248 ± 52 msec in I, 263 ± 58 msec in Ia, and 217 ± 57 msec in II; P < 0.001), medial E/e' (10 ± 3 in I, 18 ± 5.00 in Ia, and 22 ± 8 in II), and lateral E/e' (8 ± 3 in I, 15 ± 6 in Ia, and 18 ± 9 in II; P < 0.001) were significantly different in grade Ia compared with I and II. These findings remained significant even after adjusting for age, gender, diabetes, and smoking. Patients with echocardiographic characteristics of relaxation abnormality (E/A ratio of <0.8) and elevated filling pressures (septal E/e' ≥15, lateral E/e' ≥12, average E/e' ≥13) should be graded as a separate Grade Ia group. © 2014, Wiley Periodicals, Inc.
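
    The cut-offs quoted in the conclusion can be turned into a simple screening helper. The function below encodes only the published thresholds (E/A < 0.8 with septal E/e' ≥ 15, lateral E/e' ≥ 12, or average E/e' ≥ 13); the abstract does not state how the three E/e' criteria combine, so treating them as "any of" is an assumption, and this is an illustrative sketch rather than a validated grading tool.

```python
def is_grade_ia(e_a_ratio, septal_e_over_e_prime, lateral_e_over_e_prime):
    """Flag the proposed Grade Ia pattern: impaired relaxation with elevated filling pressure.

    Thresholds are those quoted in the abstract: E/A < 0.8 together with
    septal E/e' >= 15, lateral E/e' >= 12, or average E/e' >= 13 (any-of assumed).
    """
    average = (septal_e_over_e_prime + lateral_e_over_e_prime) / 2.0
    impaired_relaxation = e_a_ratio < 0.8
    elevated_pressure = (septal_e_over_e_prime >= 15
                         or lateral_e_over_e_prime >= 12
                         or average >= 13)
    return impaired_relaxation and elevated_pressure

# Example close to the reported Grade Ia group means (medial E/e' ~18, lateral ~15).
print(is_grade_ia(e_a_ratio=0.7, septal_e_over_e_prime=18, lateral_e_over_e_prime=15))  # True
```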

  8. Serum levels of NT- pro ANP, BNP, NT-pro BNP and function of the left atrium in patients with heart failure and preserved ejection fraction after myocardial infarction

    NASA Astrophysics Data System (ADS)

    Shurupov, V.; Suslova, T.; Ryabov, V.

    2015-11-01

    The objective of our study was to evaluate the levels of natriuretic peptides in patients (pts) with heart failure with preserved ejection fraction (HFpEF) 12 months after ST-elevation myocardial infarction (STEMI), with a focus on left atrial (LA) function and left ventricular (LV) filling pressure. 55 pts were included in the study. A 6-minute walk test was performed. Echocardiography was performed with the VIVID 7 diagnostic system. BNP in whole blood was determined using the Triage® Meter BNP test. Serum levels of NT-pro BNP and NT-pro ANP («Biomedica», Austria) were determined by enzyme-linked immunosorbent assay (ELISA). The LA volume index differed with the severity of HF (16.03±3.39 ml/m2, 25.36±8.26 ml/m2, and 29.41±9.46 ml/m2 for classes I, II, and III, respectively), as did the E/E' ratio (7.5±1.4, 9.8±5.1, and 13.5±7.6 for classes I, II, and III, respectively). The LA volume index correlated with the levels of NT-pro ANP (R=0.29; p=0.04), NT-pro BNP (R=0.37; p=0.01), and BNP (R=0.51; p=0.0001). The LV filling pressure correlated with the levels of NT-pro ANP (R=0.45; p=0.002), NT-pro BNP (R=0.49; p=0.001), and BNP (R=0.37; p=0.01).

  9. Biventricular Heart Remodeling After Percutaneous or Surgical Pulmonary Valve Implantation: Evaluation by Cardiac Magnetic Resonance.

    PubMed

    Secchi, Francesco; Resta, Elda C; Cannaò, Paola M; Pluchinotta, Francesca; Piazza, Luciane; Butera, Gianfranco; Carminati, Mario; Sardanelli, Francesco

    2017-11-01

    The aim of this study was to evaluate the impact of percutaneous pulmonary valve implantation (PPVI) and surgical pulmonary valve replacement (SPVR) on biventricular and pulmonary valve function using cardiac magnetic resonance. Thirty-five patients aged 20±8 years (mean±SD) underwent PPVI, whereas 16 patients aged 30±11 years underwent SPVR. Cardiac magnetic resonance examinations were performed before and after the procedures with an average follow-up interval of 10 months. Cine steady-state free precession sequences for cardiac function and phase-contrast sequences for pulmonary flow were performed. The right ventricle (RV) and left ventricle (LV) functions were evaluated using dedicated software. The RV end-diastolic volume index (mL/m2) decreased significantly after PPVI and SPVR, from 74 to 64 (P=0.030) and from 137 to 83 (P=0.001), respectively. The RV ejection fraction increased significantly after SPVR, from 47% to 53% (P=0.038). The LV end-diastolic volume index increased significantly after PPVI, from 66 to 76 mL/m2 (P<0.001). The LV stroke volume index increased significantly after PPVI, from 34 to 43 mL/m2 (P=0.004). The analysis of bivariate correlations showed that in patients undergoing SPVR the RV changes after the procedure were positively correlated to LV changes in terms of end-systolic volume index (r=0.587; P=0.017) and ejection fraction (r=0.681; P=0.004). An RV volumetric reduction and a positive effect on ventricular-ventricular interaction were observed after both PPVI and SPVR. After PPVI, a positive volumetric LV remodeling was found. No LV remodeling was found after SPVR. After both procedures, the replaced pulmonary valve functioned well.

  10. Metabolism of defined structured triglyceride particles compared to mixtures of medium and long chain triglycerides intravenously infused in dogs.

    PubMed

    Simoens, Ch; Deckelbaum, R J; Carpentier, Y A

    2004-08-01

    The present study aimed to determine whether including medium-chain fatty acids (MCFA) in specifically designed structured triglycerides (STG) with a MCFA in sn-1 and sn-3 positions and a long-chain (LC) FA in sn-2 position (MLM) would lead to different effects on plasma lipids and FA distribution into plasma and tissue lipids by comparison to a mixture of separate MCT and LCT molecules (MMM/LLL). The fatty acid (FA) composition was comparable in both lipid emulsions. Lipids were infused over 9h daily, in 2 groups of dogs (n = 6 each), for 28 days as a major component (55% of the non-protein energy intake) of total parenteral nutrition (TPN). Blood samples were obtained on specific days, before starting and just before stopping TPN. The concentration of plasma lipids was measured before starting and before stopping TPN on days 1, 2, 3, 4, 5, 8, 10, 12, 16 and 28. Biopsies were obtained from liver, muscle and adipose tissue 15 days before starting, and again on the day following cessation of TPN. In addition, the spleen was removed after the TPN period. FA composition in plasma and tissue lipids was analysed by gas liquid chromatography in different lipid components of plasma and tissues. No differences in either safety or tolerance parameters were detected between both lipid preparations. A lower rise of plasma TG (P < 0.05) was observed during MLM infusion, indicating a faster elimination rate of MLM vs MMM/LLL emulsion. In spite of the differences of TG molecules which would be assumed to affect the site of FA delivery and metabolic fate, FA distribution in phospholipids (PL) of hepatic and extrahepatic tissues did not substantially differ between both emulsions. Copyright 2003 Elsevier Ltd.

  11. ON AERODYNAMIC AND BOUNDARY LAYER RESISTANCES WITHIN DRY DEPOSITION MODELS

    EPA Science Inventory

    There have been many empirical parameterizations for the aerodynamic and boundary layer resistances proposed in the literature, e.g. those of the Meyers Multi-Layer Deposition Model (MLM) used with the nation-wide dry deposition network. Many include arbitrary constants or par...

  12. Development of an up-grading process to produce MLM structured lipids from sardine discards.

    PubMed

    Morales-Medina, R; Munio, M; Guadix, A; Guadix, E M

    2017-08-01

    The aim of this work was to produce MLM structured lipids with caprylic acid (M) as the medium-chain fatty acid located at the external bonds of the glycerol backbone and concentrated polyunsaturated fatty acids (L) from sardine discards (Sardina pilchardus) in the central bond of the glycerol. To that end, the following steps were conducted: (i) fish oil extraction, (ii) concentration of Omega-3 free fatty acids (FFA) by low-temperature winterization, (iii) two-step enzymatic esterification, and (iv) triacylglycerol (TAG) purification by liquid column chromatography. The resulting purified triacylglycerols met the oxidative-state requirements (peroxide and anisidine values, PV and AV) for refined oils. For the enzymatic treatment, the Omega-3 FFA concentrate (>600 mg Omega-3 per g oil) was esterified with dicaprylic glycerol employing Novozyme 435. This process presented high regioselectivity, with ∼80 mol% of the concentrated fatty acids esterified at the sn-2 position. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. SENSITIVITY OF THE NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION MULTILAYER MODEL TO INSTRUMENT ERROR AND PARAMETERIZATION UNCERTAINTY

    EPA Science Inventory

    The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...

  14. The dynamic relationship between emotional and physical states: an observational study of personal health records

    PubMed Central

    Lee, Ye-Seul; Jung, Won-Mo; Jang, Hyunchul; Kim, Sanghyun; Chung, Sun-Yong; Chae, Younbyoung

    2017-01-01

    Objectives Recently, there has been increasing interest in preventing and managing diseases both inside and outside medical institutions, and these concerns have supported the development of the individual Personal Health Record (PHR). Thus, the current study created a mobile platform called “Mind Mirror” to evaluate psychological and physical conditions and investigated whether PHRs would be a useful tool for assessment of the dynamic relationship between the emotional and physical conditions of an individual. Methods Mind Mirror was used to collect 30 days of observational data about emotional valence and the physical states of pain and fatigue from 20 healthy participants, and these data were used to analyze the dynamic relationship between emotional and physical conditions. Additionally, based on the cross-correlations between these three parameters, a multilevel multivariate regression model (mixed linear model [MLM]) was implemented. Results The strongest cross-correlation between emotional and physical conditions was at lag 0, which implies that emotion and body condition changed concurrently. In the MLM, emotional valence was negatively associated with fatigue (β =−0.233, P<0.001), fatigue was positively associated with pain (β =0.250, P<0.001), and pain was positively associated with fatigue (β =0.398, P<0.001). Conclusion Our study showed that emotional valence and one’s physical condition negatively influenced one another, while fatigue and pain positively affected each other. These findings suggest that the mind and body interact instantaneously, in addition to providing a possible solution for the recording and management of health using a PHR on a daily basis. PMID:28223814
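
    The lagged cross-correlation analysis described here (where the strongest correlation at lag 0 indicates concurrent change) reduces to correlating one shifted series against another; below is a minimal sketch on simulated daily valence and fatigue scores, with all values invented.

```python
import numpy as np

def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(5)
fatigue = rng.normal(5, 1, 30)                        # 30 days of simulated records
valence = 8 - 0.6 * fatigue + rng.normal(0, 0.5, 30)  # valence moves opposite to fatigue

for lag in range(-3, 4):
    print(lag, round(lagged_correlation(valence, fatigue, lag), 2))
```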

  15. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…

  16. Multilevel SEM Strategies for Evaluating Mediation in Three-Level Data

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.

    2011-01-01

    Strategies for modeling mediation effects in multilevel data have proliferated over the past decade, keeping pace with the demands of applied research. Approaches for testing mediation hypotheses with 2-level clustered data were first proposed using multilevel modeling (MLM) and subsequently using multilevel structural equation modeling (MSEM) to…

  17. Microbial load monitor

    NASA Technical Reports Server (NTRS)

    Holen, J. T.; Royer, E. R.

    1976-01-01

    A card configuration which combines the functions of identification, enumeration and antibiotic sensitivity into one card was developed. An instrument package was designed around the card to integrate the card filling, incubation, reading, computation and decision-making processes into one compact unit. Support equipment was also designed to prepare the expendable material used in the MLM.

  18. The relationship between atrial electromechanical delay and left atrial mechanical function in stroke patients

    PubMed Central

    Akıl, Mehmet Ata; Akıl, Eşref; Bilik, Mehmet Zihni; Oylumlu, Mustafa; Acet, Halit; Yıldız, Abdülkadir; Akyüz, Abdurrahman; Ertaş, Faruk; Toprak, Nizamettin

    2015-01-01

    Objective: The aim of this study was to evaluate the relationship between atrial electromechanical delay (EMD) measured with tissue Doppler imaging (TDI) and left atrial (LA) mechanical functions in patients with ischemic stroke and compare them with healthy controls. Methods: Thirty patients with ischemic stroke were enrolled in this cross-sectional, observational study. The control group consisted of 35 age- and gender-matched, apparently healthy individuals. Acute cerebral infarcts of probable embolic origin were diagnosed via imaging and were confirmed by a neurologist. Echocardiographically, time intervals from the beginning of the P wave to the beginning of the A wave from the lateral and septal mitral and right ventricular tricuspid annuli in TDI were recorded. The differences between these intervals gave the mechanical delays (inter- and intra-atrial). Left atrial (LA) volumes were measured using the biplane area-length method, and LA mechanical function parameters were calculated. Statistical analysis was performed using Student's t-test, the chi-squared test, and Pearson's test. Results: The laboratory and clinical characteristics were similar in the two groups. Increased left atrial EMD (21.36±10.38 ms versus 11.74±6.06 ms, p<0.001), right atrial EMD (13.66±8.62 ms versus 9.66±6.81 ms, p=0.040), and interatrial EMD (35.03±9.95 ms versus 21.40±8.47 ms, p<0.001) were observed in stroke patients as compared to controls. Active LA emptying volume and fraction and passive LA emptying volumes and fraction were similar between controls and stroke patients. Total LA emptying volumes were significantly increased in stroke patients as compared to healthy controls (33.19±11.99 mL/m2 versus 27.48±7.08 mL/m2, p=0.021). Conclusion: According to the results of our study, interatrial electromechanical delay may be a new predictor for ischemic stroke. PMID:25537998

  19. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
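
    A compact numerical version of this general approach uses the standard marginal of the Fisher distribution over declination and cancels the exponentially large terms before optimisation (the exponentially scaled Bessel function i0e and log1p do that job in the sketch below). This is an independent illustration of the same idea, not the authors' Arason-Levi code, and the test data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def neg_log_likelihood(params, inc_rad):
    """Marginal Fisher likelihood of inclinations, with exponentials cancelled.

    p(I) = kappa*cos(I)/(2*sinh(kappa)) * exp(kappa*sin(I)*sin(I0)) * I0(kappa*cos(I)*cos(I0)),
    using log I0(z) = z + log(i0e(z)) for z >= 0 and log(2*sinh(k)) = k + log1p(-exp(-2k)).
    """
    inc0, log_kappa = params
    kappa = np.exp(log_kappa)
    z = kappa * np.cos(inc_rad) * np.cos(inc0)
    log_p = (np.log(kappa) + np.log(np.cos(inc_rad))
             - (kappa + np.log1p(-np.exp(-2.0 * kappa)))     # -log(2 sinh kappa)
             + kappa * np.sin(inc_rad) * np.sin(inc0)
             + z + np.log(i0e(z)))                            # log Bessel I0, stably
    return -np.sum(log_p)

# Simulated Fisher-distributed directions (true inclination 70 deg, kappa 30), keeping only I.
rng = np.random.default_rng(6)
kappa_true, inc0_true, n = 30.0, np.radians(70.0), 100
# Sample angular distance from the mean direction and a random rotation about it.
alpha = np.arccos(1.0 + np.log(1 - rng.random(n) * (1 - np.exp(-2 * kappa_true))) / kappa_true)
phi = rng.uniform(0, 2 * np.pi, n)
inc = np.arcsin(np.clip(np.sin(inc0_true) * np.cos(alpha)
                        + np.cos(inc0_true) * np.sin(alpha) * np.cos(phi), -1, 1))

res = minimize(neg_log_likelihood, x0=[np.mean(inc), np.log(20.0)], args=(inc,),
               method="Nelder-Mead")
print("ML inclination (deg):", round(np.degrees(res.x[0]), 2),
      " kappa:", round(np.exp(res.x[1]), 2))
```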

  20. Item Response Theory Using Hierarchical Generalized Linear Models

    ERIC Educational Resources Information Center

    Ravand, Hamdollah

    2015-01-01

    Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF) and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues where applications have been add-ons to simulation…

  1. Iterative usage of fixed and random effect models for powerful and efficient genome-wide association studies

    USDA-ARS?s Scientific Manuscript database

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises t...

  2. Adapting current Arden Syntax knowledge for an object oriented event monitor.

    PubMed

    Choi, Jeeyae; Lussier, Yves A; Mendoça, Eneida A

    2003-01-01

    Arden Syntax for Medical Logic Module (MLM)1 was designed for writing and sharing task-specific health knowledge in 1989. Several researchers have developed frameworks to improve the sharability and adaptability of Arden Syntax MLMs, an issue known as the "curly braces" problem. Karadimas et al proposed an Arden Syntax MLM-based decision support system that uses an object-oriented model and the dynamic linking features of the Java platform.2 Peleg et al proposed creating a Guideline Expression Language (GEL) based on Arden Syntax's logic grammar.3 The New York Presbyterian Hospital (NYPH) has a collection of about 200 MLMs. In the process of adapting the current MLMs for an object-oriented event monitor, we identified two problems that may influence the "curly braces" one: (1) the query expressions within the curly braces of Arden Syntax used in our institution are cryptic to the physicians, institution dependent and written ineffectively (unpublished results), and (2) the events are coded individually within curly braces, sometimes resulting in a large number of events - up to 200.

  3. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
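
    For the simplest concrete case (a drifted Wiener process with a constant threshold, i.e. no leak and no time-varying conductance), the first-passage-time density can be recovered by discretizing a first-kind Volterra integral equation whose kernel and left-hand side are Gaussian probabilities. The sketch below illustrates that integral-equation idea under these simplifying assumptions only, not the authors' method for the full integrate-and-fire model; the closed-form inverse-Gaussian density is used as a check.

```python
import numpy as np
from scipy.stats import norm

# X_t = x0 + mu*t + sigma*W_t with absorbing threshold b > x0.  The
# first-passage-time density f satisfies the first-kind Volterra equation
#   P(X_t >= b) = int_0^t f(s) P(X_t >= b | X_s = b) ds.
mu, sigma, x0, b = 1.0, 1.0, 0.0, 1.0
T, n = 4.0, 2000
dt = T / n
t = dt * np.arange(1, n + 1)

F = norm.cdf((x0 + mu * t - b) / (sigma * np.sqrt(t)))    # P(X_t >= b)

# Rectangle-rule discretization solved by forward substitution;
# the diagonal kernel value is P(X_t >= b | X_t = b) = 1/2.
f = np.zeros(n)
for i in range(n):
    K = norm.cdf(mu * np.sqrt(t[i] - t[:i]) / sigma)      # K(t_i, t_j), j < i
    f[i] = (F[i] - (f[:i] * K).sum() * dt) / (0.5 * dt)

# Check against the closed-form (inverse Gaussian) first-passage density.
f_exact = ((b - x0) / (sigma * np.sqrt(2 * np.pi * t**3))
           * np.exp(-(b - x0 - mu * t)**2 / (2 * sigma**2 * t)))
print("max abs difference:", np.abs(f - f_exact).max())
```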

  4. Creating a consistent dark-target aerosol optical depth record from MODIS and VIIRS

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Mattoo, S.; Munchak, L. A.; Patadia, F.; Holz, R.

    2014-12-01

    To answer fundamental questions about our changing climate, we must quantify how aerosols are changing over time. This is a global question that requires regional characterization, because in some places aerosols are increasing and in others they are decreasing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, the creation of an aerosol climate data record (CDR) requires consistent multi-decadal data. With the Visible and Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi-NPP, there is potential to continue the MODIS aerosol time series. Yet, since the operational VIIRS aerosol product is produced by a different algorithm, it is not suitable for continuing the MODIS record to create an aerosol CDR. Therefore, we have applied the MODIS Dark-target (DT) algorithm to VIIRS observations, taking into account the slight differences in wavelengths, resolutions and geometries between the two sensors. More specifically, we applied the MODIS DT algorithm to a dataset known as the Intermediate File Format (IFF), created by the University of Wisconsin. The IFF is produced for both MODIS and VIIRS, with the idea that a single (MODIS-like or ML) algorithm can be run on either dataset, which can in turn be compared to the MODIS Collection 6 (M6) retrieval that is run on standard MODIS data. After minimizing or characterizing remaining differences between ML on MODIS-IFF (or ML-M) and M6, we have performed an apples-to-apples comparison between ML-M and ML on VIIRS IFF (ML-V). Examples of these comparisons include time series of monthly global means, monthly and seasonal global maps at 1° resolution, and collocations as compared to AERONET. We concentrate on the overlapping period January 2012 through June 2014, and discuss some of the remaining discrepancies between the ML-V and ML-M datasets.

  5. The influence of percutaneous closure of patent ductus arteriosus on left ventricular size and function: a prospective study using two- and three-dimensional echocardiography and measurements of serum natriuretic peptides.

    PubMed

    Eerola, Anneli; Jokinen, Eero; Boldt, Talvikki; Pihkala, Jaana

    2006-03-07

    We aimed to evaluate the effect of percutaneous closure of patent ductus arteriosus (PDA) on left ventricular (LV) hemodynamics. Today, most PDAs are closed percutaneously. Little is known, however, about hemodynamic changes after the procedure. Of 37 children (ages 0.6 to 10.6 years) taken to the catheterization laboratory for percutaneous PDA closure, the PDA was closed in 33. Left ventricular diastolic and systolic dimensions, volumes, and function were examined by two-dimensional (2D) and three-dimensional (3D) echocardiography and serum concentrations of natriuretic peptides measured before PDA closure, on the following day, and 6 months thereafter. Control subjects comprised 36 healthy children of comparable ages. At baseline, LV diastolic diameter measured >+2 SD in 5 of 33 patients. In 3D echocardiography, a median LV diastolic volume measured 54.0 ml/m2 in the control subjects and 58.4 ml/m2 (p < 0.05) in the PDA group before closure and 57.2 ml/m2 (p = NS) 6 months after closure. A median N-terminal brain natriuretic peptide (pro-BNP) concentration measured 72 ng/l in the control group and 141 ng/l in the PDA group before closure (p = 0.001) and 78.5 ng/l (p = NS) 6 months after closure. Patients differed from control subjects in indices of LV systolic and diastolic function at baseline. By the end of follow-up, all these differences had disappeared. Even in the subgroup of patients with normal-sized LV at baseline, the LV diastolic volume decreased significantly during follow-up. Changes in LV volume and function caused by PDA disappear by 6 months after percutaneous closure. Even the children with normal-sized LV benefit from the procedure.

  6. Obese Hypertensive Men Have Lower Circulating Proatrial Natriuretic Peptide Concentrations Despite Greater Left Atrial Size.

    PubMed

    Asferg, Camilla L; Andersen, Ulrik B; Linneberg, Allan; Goetze, Jens P; Jeppesen, Jørgen L

    2018-05-07

    Obese persons have lower circulating natriuretic peptide (NP) concentrations. It has been proposed that this natriuretic handicap plays a role in obesity-related hypertension. In contrast, hypertensive patients with left atrial enlargement have higher circulating NP concentrations. On this background, we investigated whether obese hypertensive men could have lower circulating NP concentrations despite evidence of pressure-induced greater left atrial size. We examined 98 obese men (body mass index [BMI] ≥ 30.0 kg/m2) and 27 lean normotensive men (BMI 20.0-24.9 kg/m2). All men were healthy, medication free, with normal left ventricular ejection fraction. We measured blood pressure using 24-hour ambulatory blood pressure (ABP) recordings. Hypertension was defined as 24-hour ABP ≥ 130/80 mm Hg, and normotension was defined as 24-hour ABP < 130/80 mm Hg. We determined left atrial size using echocardiography, and we measured fasting serum concentrations of midregional proatrial NP (MR-proANP). Of the 98 obese men, 62 had hypertension and 36 were normotensive. The obese hypertensive men had greater left atrial size (mean ± SD: 28.7 ± 6.0 ml/m2) compared with the lean normotensive men (23.5 ± 4.5 ml/m2) and the obese normotensive men (22.7 ± 5.1 ml/m2), P < 0.01. Nevertheless, despite evidence of pressure-induced greater left atrial size, the obese hypertensive men had lower serum MR-proANP concentrations (median [interquartile range]: 48.5 [37.0-64.7] pmol/l) compared with the lean normotensive men (69.3 [54.3-82.9] pmol/l), P < 0.01, whereas the obese normotensive men had serum MR-proANP concentrations in between the 2 other groups (54.1 [43.6-62.9] pmol/l). Despite greater left atrial size, obese hypertensive men have lower circulating MR-proANP concentrations compared with lean normotensive men.

  7. Emergency Department Overcrowding and Ambulance Turnaround Time

    PubMed Central

    Lee, Yu Jin; Shin, Sang Do; Lee, Eui Jung; Cho, Jin Seong; Cha, Won Chul

    2015-01-01

    Objective The aims of this study were to describe overcrowding in regional emergency departments in Seoul, Korea and evaluate the effect of crowdedness on ambulance turnaround time. Methods This study was conducted between January 2010 and December 2010. Patients who were transported by 119-responding ambulances to 28 emergency centers within Seoul were eligible for enrollment. Overcrowding was defined as the average occupancy rate, which was equal to the average number of patients staying in an emergency department (ED) for 4 hours divided by the number of beds in the ED. After selecting groups for final analysis, multi-level regression modeling (MLM) was performed with random effects for EDs, to evaluate associations between occupancy rate and turnaround time. Results Between January 2010 and December 2010, 163,659 patients transported to 28 EDs were enrolled. The median occupancy rate was 0.42 (range: 0.10-1.94; interquartile range (IQR): 0.20-0.76). Overcrowded EDs were more likely to have older patients, those with normal mentality, and non-trauma patients. Overcrowded EDs were more likely to have longer turnaround intervals and traveling distances. The MLM analysis showed that an increase of 1% in occupancy rate was associated with a 0.02-minute decrease in turnaround interval (95% CI: 0.01 to 0.03). In subgroup analyses limited to EDs with occupancy rates over 100%, we also observed a 0.03-minute decrease in turnaround interval per 1% increase in occupancy rate (95% CI: 0.01 to 0.05). Conclusions In this study, we found wide variation in emergency department crowding in a metropolitan Korean city. Our data indicate that ED overcrowding is negatively associated with turnaround interval, with very small practical significance. PMID:26115183
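
    A random-intercept model of the general shape used here (EDs as the grouping level, occupancy rate as a fixed effect on turnaround time) can be fitted with statsmodels' MixedLM. The data and variable names below (turnaround, occupancy, ed_id) are simulated stand-ins, not the study's data; this is a sketch of the modelling approach only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: 28 EDs, each with its own baseline turnaround
# time (random intercept) plus a small common effect of occupancy rate.
n_ed, n_per_ed = 28, 200
ed_id = np.repeat(np.arange(n_ed), n_per_ed)
ed_intercept = rng.normal(20.0, 3.0, n_ed)            # minutes, per-ED baseline
occupancy = rng.uniform(10, 150, n_ed * n_per_ed)     # occupancy rate in %
turnaround = (ed_intercept[ed_id] - 0.02 * occupancy
              + rng.normal(0.0, 5.0, n_ed * n_per_ed))
df = pd.DataFrame({"turnaround": turnaround, "occupancy": occupancy, "ed_id": ed_id})

# Random intercept for each ED, fixed effect for occupancy rate.
model = smf.mixedlm("turnaround ~ occupancy", df, groups=df["ed_id"])
result = model.fit()
print(result.summary())   # the 'occupancy' coefficient is the minutes-per-% slope
```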

  8. Infarct size and left ventricular remodelling after preventive percutaneous coronary intervention

    PubMed Central

    Mangion, Kenneth; Carrick, David; Hennigan, Barry W; Payne, Alexander R; McClure, John; Mason, Maureen; Das, Rajiv; Wilson, Rebecca; Edwards, Richard J; Petrie, Mark C; McEntegart, Margaret; Eteiba, Hany; Oldroyd, Keith G; Berry, Colin

    2016-01-01

    Objective We hypothesised that, compared with culprit-only primary percutaneous coronary intervention (PCI), additional preventive PCI in selected patients with ST-elevation myocardial infarction with multivessel disease would not be associated with iatrogenic myocardial infarction, and would be associated with reductions in left ventricular (LV) volumes in the longer term. Methods In the preventive angioplasty in myocardial infarction trial (PRAMI; ISRCTN73028481), cardiac magnetic resonance (CMR) was prespecified in two centres and performed (median, IQR) 3 (1, 5) and 209 (189, 957) days after primary PCI. Results From 219 enrolled patients in two sites, 84% underwent CMR. 42 (50%) were randomised to culprit-artery-only PCI and 42 (50%) were randomised to preventive PCI. Follow-up CMR scans were available in 72 (86%) patients. There were two (4.8%) cases of procedure-related myocardial infarction in the preventive PCI group. The culprit-artery-only group had a higher proportion of anterior myocardial infarctions (MIs) (55% vs 24%). Infarct sizes (% LV mass) at baseline and follow-up were similar. At follow-up, there was no difference in LV ejection fraction (%, median (IQR), (culprit-artery-only PCI vs preventive PCI) 51.7 (42.9, 60.2) vs 54.4 (49.3, 62.8), p=0.23), LV end-diastolic volume (mL/m2, 69.3 (59.4, 79.9) vs 66.1 (54.7, 73.7), p=0.48) and LV end-systolic volume (mL/m2, 31.8 (24.4, 43.0) vs 30.7 (23.0, 36.3), p=0.20). Non-culprit angiographic lesions had low-risk Syntax scores and 47% had non-complex characteristics. Conclusions Compared with culprit-only PCI, non-infarct-artery MI in the preventive PCI strategy was uncommon and LV volumes and ejection fraction were similar. PMID:27504003

  9. Absolute radiant power measurement for the Au M lines of laser-plasma using a calibrated broadband soft X-ray spectrometer with flat-spectral response.

    PubMed

    Troussel, Ph; Villette, B; Emprin, B; Oudot, G; Tassin, V; Bridou, F; Delmotte, F; Krumrey, M

    2014-01-01

    CEA implemented an absolutely calibrated broadband soft X-ray spectrometer called DMX on the Omega laser facility at the Laboratory for Laser Energetics (LLE) in 1999 to measure the radiant power and spectral distribution of the radiation of the Au plasma. The DMX spectrometer is composed of 20 channels covering the spectral range from 50 eV to 20 keV. The channels for energies below 1.5 keV combine a mirror and a filter with a coaxial photo-emissive detector. For the channels above 5 keV the photoemissive detector is replaced by a conductive detector. The intermediate energy channels (1.5 keV < photon energy < 5 keV) use only a filter and a coaxial detector. A further improvement of DMX consists of flat-response X-ray channels for a precise absolute measurement of the photon flux in the photon energy range from 0.1 keV to 6 keV. Such channels are equipped with a filter, a Multilayer Mirror (MLM), and a coaxial detector. We present as an example the development of a channel for the gold M emission lines in the photon energy range from 2 keV to 4 keV, which has been successfully used on the OMEGA laser facility. The results of the radiant power measurements with the new MLM channel and with the usual channel composed of a thin titanium filter and a coaxial detector (without mirror) are compared. All elements of the channel have been calibrated in the laboratory of the Physikalisch-Technische Bundesanstalt, Germany's National Metrology Institute, at the synchrotron radiation facility BESSY II in Berlin using dedicated, well-established and validated methods.

  10. Should policy-makers and managers trust PSI? An empirical validation study of five patient safety indicators in a national health service

    PubMed Central

    2012-01-01

    Background Patient Safety Indicators (PSI) are being modestly used in Spain, partly due to concerns about their empirical properties. This paper provides evidence by answering three questions: a) Are PSI differences across hospitals systematic - rather than random?; b) Do PSI measure differences among hospital-providers - as opposed to differences among patients?; and, c) Are measurements able to detect hospitals with a higher than "expected" number of cases? Methods An empirical validation study on administrative data was carried out. All 2005 and 2006 publicly-funded hospital discharges were used to retrieve eligible cases of five PSI: Death in low-mortality DRGs (MLM); decubitus ulcer (DU); postoperative pulmonary embolism or deep-vein thrombosis (PE-DVT); catheter-related infections (CRI), and postoperative sepsis (PS). The empirical Bayes statistic (EB) was used to estimate whether the variation was systematic; logistic multilevel modelling determined what proportion of the variation was explained by the hospital; and shrunken residuals, as provided by multilevel modelling, were plotted to flag hospitals performing worse than expected. Results Variation across hospitals was observed to be systematic in all indicators, with EB values ranging from 0.19 (CI95%: 0.12 to 0.28) in PE-DVT to 0.34 (CI95%: 0.25 to 0.45) in DU. A significant proportion of the variance was explained by the hospital, once patient case-mix was adjusted: from 6% in MLM (CI95%: 3% to 11%) to 24% (CI95%: 20% to 30%) in CRI. All PSI were able to flag hospitals with rates above those expected, although this capacity decreased when the largest hospitals were analysed. Conclusion Five PSI showed reasonable empirical properties for screening healthcare performance in Spanish hospitals, particularly in the largest ones. PMID:22369291
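
    The shrinkage-and-flagging logic can be illustrated with a simple empirical Bayes calculation on hospital-level log-odds: estimate the between-hospital variance, shrink each hospital's rate toward the overall mean, and flag hospitals whose shrunken estimate sits clearly above it. This toy sketch (with invented counts and a method-of-moments variance estimate) only stands in for the paper's multilevel logistic models and shrunken residuals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: adverse-event counts y out of n eligible discharges per hospital.
n = rng.integers(200, 5000, size=40)
true_logit = rng.normal(np.log(0.01 / 0.99), 0.4, size=40)   # hospital effects
y = rng.binomial(n, 1 / (1 + np.exp(-true_logit)))
y = np.clip(y, 1, n - 1)                                      # keep logits finite

# Hospital-level log-odds and their approximate sampling variances.
l = np.log(y / (n - y))
v = 1.0 / y + 1.0 / (n - y)

# Method-of-moments (DerSimonian-Laird) estimate of between-hospital variance.
w = 1.0 / v
m_fixed = np.sum(w * l) / np.sum(w)
Q = np.sum(w * (l - m_fixed) ** 2)
tau2 = max(0.0, (Q - (len(l) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Empirical Bayes shrinkage toward the overall mean, then simple flagging.
w_star = 1.0 / (v + tau2)
m = np.sum(w_star * l) / np.sum(w_star)
B = v / (v + tau2)                        # shrinkage factor per hospital
l_shrunk = B * m + (1 - B) * l
post_sd = np.sqrt((1 - B) * v)            # approximate posterior SD
flagged = np.where(l_shrunk - m > 1.96 * post_sd)[0]
print("hospitals flagged as worse than expected:", flagged)
```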

  11. Association of Left Atrial Function and Left Atrial Enhancement in Patients with Atrial Fibrillation: A Cardiac Magnetic Resonance Study

    PubMed Central

    Habibi, Mohammadali; Lima, Joao A.C.; Khurram, Irfan M.; Zimmerman, Stefan L.; Zipunnikov, Vadim; Fukumoto, Kotaro; Spragg, David; Ashikaga, Hiroshi; Rickard, John; Marine, Joseph E.; Calkins, Hugh; Nazarian, Saman

    2015-01-01

    Background Atrial fibrillation (AF) is associated with left atrial (LA) structural and functional changes. Cardiac magnetic resonance (CMR) late gadolinium enhancement (LGE) and feature-tracking are capable of noninvasive quantification of LA fibrosis and myocardial motion, respectively. We sought to examine the association of phasic LA function with LA enhancement in patients with AF. Methods and Results LA structure and function were measured in 90 AF patients (age 61 ± 10 years, 76% male) referred for ablation and 14 healthy volunteers. Peak global longitudinal LA strain (PLAS), LA systolic strain rate (SR-s), and early (SR-ed) and late diastolic (SR-ld) strain rates were measured using cine-CMR images acquired during sinus rhythm. The degree of LGE was quantified. Compared to patients with paroxysmal AF (60% of cohort), those with persistent AF had larger maximum LA volume index (LAVImax, 56 ± 17 ml/m2 versus 49 ± 13 ml/m2, p=0.036), and increased LGE (27.1 ± 11.7% versus 36.8 ± 14.8%, p<0.001). Aside from LA active emptying fraction, all LA parameters (passive emptying fraction, PLAS, SR-s, SR-ed and SR-ld) were lower in patients with persistent AF (p<0.05 for all). Healthy volunteers had less LGE and higher LA functional parameters compared to AF patients (p<0.05 for all). In multivariable analysis, increased LGE was associated with lower LA passive emptying fraction, PLAS, SR-s, SR-ed, and SR-ld (p<0.05 for all). Conclusions Increased LA enhancement is associated with decreased LA reservoir, conduit, and booster pump functions. Phasic measurement of LA function using feature-tracking CMR may add important information regarding the physiological importance of LA fibrosis. PMID:25652181

  12. Correlation between land use changes and shoreline changes around the Nakdong River in Korea using Landsat images.

    NASA Astrophysics Data System (ADS)

    Kwon, J. S.; Lim, C.; Baek, S. G.; Shin, S.

    2015-12-01

    Coastal erosion has severely affected the marine environment, as well as the safety of various coastal structures. In order to monitor shoreline changes due to coastal erosion, remote sensing techniques are being utilized. The land-cover map classifies the physical material on the surface of the earth, and it can be utilized in establishing eco-policy and land-use policy. In this study, we analyzed the correlation between land-use changes around the Nakdong River and shoreline changes at Busan Dadaepo Beach adjacent to the river. We produced the land-cover map based on the guidelines published by the Ministry of Environment, Korea, using eight Landsat satellite images obtained from 1984 to 2015. To observe land-use changes around the Nakdong River, the study site was set to include the surrounding areas of Busan Dadaepo Beach, the Nakdong River and its estuary, and Busan New Port. For the land-use classification of the study site, we also produced a land-cover map divided into seven categories according to the Ministry of Environment, Korea guidelines, using the most accurate classification approach, the Maximum Likelihood Method (MLM). Land-use changes inland, at 500 m from the shoreline, were excluded from the correlation analysis between land-use changes and shoreline changes. The other categories, except for the water category, were transformed into numerical values, and the land-use classifications using all other categories were analyzed. Shoreline changes were observed by setting a base-line and three cut-lines. We assumed that the longshore bars around the Nakdong River and the shoreline of Busan Dadaepo Beach are affected. Therefore, we expect that shoreline changes occur due to the influence of barren land, wetlands, built-up areas, and deposition. The causes include natural factors, such as weather, waves, tidal currents, and longshore currents, as well as artificial factors such as coastal structures, construction, and dredging.
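
    The Maximum Likelihood Method referred to here is the classical Gaussian maximum likelihood classifier: each land-cover class is summarized by the mean vector and covariance matrix of its training pixels, and every pixel is assigned to the class under which its spectral vector has the highest likelihood. A minimal sketch with invented spectra (not the study's Landsat data or class definitions):

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_mlm(train_pixels, train_labels):
    """Per-class mean vector and covariance matrix from training pixels
    (shape: n_pixels x n_bands)."""
    stats = {}
    for c in np.unique(train_labels):
        x = train_pixels[train_labels == c]
        stats[c] = (x.mean(axis=0), np.cov(x, rowvar=False))
    return stats

def classify_mlm(pixels, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = list(stats)
    scores = np.column_stack([
        multivariate_normal(mean=m, cov=S, allow_singular=True).logpdf(pixels)
        for m, S in (stats[c] for c in classes)])
    return np.array(classes)[np.argmax(scores, axis=1)]

# Tiny synthetic example with 3 spectral bands and 2 classes.
rng = np.random.default_rng(0)
water = rng.normal([0.05, 0.04, 0.02], 0.01, size=(50, 3))
built = rng.normal([0.20, 0.22, 0.25], 0.03, size=(50, 3))
stats = train_mlm(np.vstack([water, built]),
                  np.array(["water"] * 50 + ["built-up"] * 50))
print(classify_mlm(np.array([[0.06, 0.05, 0.02], [0.21, 0.20, 0.26]]), stats))
```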

  13. Math Talk and Representations in Elementary Classrooms of Beginning Teachers: A MLM Exploratory Analysis

    ERIC Educational Resources Information Center

    Alnizami, Reema

    2017-01-01

    This study examined the math talk and the use of multiple representations in elementary classrooms of 134 beginning teachers, all in their second year of teaching. A quantitative correlational research design was employed to investigate the research questions. The data were collected using a log instrument, the Instructional Practices Log in…

  14. Exploring the Relationship between Inclusive Education and Achievement: New Perspectives

    ERIC Educational Resources Information Center

    Cosier, Meghan

    2010-01-01

    This study used Multilevel Modeling (MLM) with a sample of over 1300 students with disabilities between the ages of six and nine years old nested within 180 school districts. A sample from the Pre-Elementary Education Longitudinal Study (PEELS) dataset (Institute of Education Sciences) was used to explore the relationship between hours in general…

  15. Measurement Error Correction Formula for Cluster-Level Group Differences in Cluster Randomized and Observational Studies

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Preacher, Kristopher J.

    2016-01-01

    Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…

  16. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  17. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
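
    The synthetic-likelihood building block itself (in the sense of Wood's 2010 proposal) is easy to sketch: simulate many data sets at a candidate parameter value, fit a multivariate normal to their summary statistics, and evaluate the observed statistics under that normal. The toy simulator and statistics below are invented for illustration and are not the article's importance-sampling test of the conjoint-measurement axioms.

```python
import numpy as np

def summaries(x):
    """Summary statistics of one simulated (or observed) data set."""
    return np.array([x.mean(), x.std(ddof=1), np.median(x)])

def synthetic_loglik(theta, observed_stats, simulate, n_sim=500, rng=None):
    """Synthetic log-likelihood: simulate data sets under theta, fit a
    multivariate normal to their summary statistics, and evaluate the
    observed statistics under that fitted normal."""
    if rng is None:
        rng = np.random.default_rng()
    sims = np.array([summaries(simulate(theta, rng)) for _ in range(n_sim)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)
    diff = observed_stats - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet
                   + len(diff) * np.log(2 * np.pi))

# Toy simulator: data are Normal(theta, 1), 100 observations per data set.
simulate = lambda theta, rng: rng.normal(theta, 1.0, size=100)
rng = np.random.default_rng(0)
obs_stats = summaries(simulate(0.3, rng))   # pretend these are the observed data
for theta in (0.0, 0.3, 0.6):
    print(theta, synthetic_loglik(theta, obs_stats, simulate, rng=rng))
```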

  18. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  19. Chronic infection with Mycobacterium lepraemurium induces alterations in the hippocampus associated with memory loss.

    PubMed

    Becerril-Villanueva, Enrique; Ponce-Regalado, María Dolores; Pérez-Sánchez, Gilberto; Salazar-Juárez, Alberto; Arreola, Rodrigo; Álvarez-Sánchez, María Elizbeth; Juárez-Ortega, Mario; Falfán-Valencia, Ramcés; Hernández-Pando, Rogelio; Morales-Montor, Jorge; Pavón, Lenin; Rojas-Espinosa, Oscar

    2018-06-13

    Murine leprosy, caused by Mycobacterium lepraemurium (MLM), is a chronic disease that closely resembles human leprosy. Even though this disease does not directly involve the nervous system, we investigated a possible effect on working memory during this chronic infection in Balb/c mice. We evaluated alterations in the dorsal region of the hippocampus and measured peripheral levels of cytokines at 40, 80, and 120 days post-infection. To evaluate working memory, we used the T-maze, while a morphometric analysis was conducted in the hippocampal regions CA1, CA2, CA3, and dentate gyrus (DG) to measure morphological changes. In addition, a neurochemical analysis was performed by HPLC. Our results show that, at 40 days post-infection, there was an increase in the bacillary load in the liver and spleen associated with increased levels of IL-4, working memory deterioration, and changes in hippocampal morphology, including degeneration in the four subregions analyzed. Also, we found a decrease in neurotransmitter levels at the same time point of infection. Although MLM does not directly infect the nervous system, these findings suggest a possible functional link between the immune system and the central nervous system.

  20. Using cross-classified multilevel models to disentangle school and neighborhood effects: an example focusing on smoking behaviors among adolescents in the United States.

    PubMed

    Dunn, Erin C; Richmond, Tracy K; Milliren, Carly E; Subramanian, S V

    2015-01-01

    Despite much interest in understanding the influence of contexts on health, most research has focused on one context at a time, ignoring the reality that individuals have simultaneous memberships in multiple settings. Using the example of smoking behavior among adolescents in the National Longitudinal Study of Adolescent Health, we applied cross-classified multilevel modeling (CCMM) to examine fixed and random effects for schools and neighborhoods. We compared the CCMM results with those obtained from a traditional multilevel model (MLM) focused on either the school or the neighborhood separately. In the MLMs, 5.2% of the variation in smoking was due to differences between neighborhoods (when schools were ignored) and 6.3% of the variation in smoking was due to differences between schools (when neighborhoods were ignored). However, in the CCMM examining neighborhood and school variation simultaneously, the neighborhood-level variation was reduced to 0.4%. Results suggest that using MLM, instead of CCMM, could lead to overestimating the importance of certain contexts and could ultimately lead to targeting interventions or policies to the wrong settings. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Eccentric LVH healing after starting renal replacement therapy.

    PubMed

    Vertolli, Ugo; Lupia, Mario; Naso, Agostino

    2002-01-01

    Hypertension and left ventricular hypertrophy (LVH) are commonly associated in patients with CRF starting RDT. We report a case of eccentric LVH with marked dilatation and subsequent mitral incompetence of +3/4 that disappeared after three months of standard hemodialysis. Mrs SN, 62 years old, starting HD, underwent echocardiography because of dyspnoea; the echo showed a dilated left atrium (78 ml/m2), a moderately dilated left ventricle with normal systolic function (TDV 81 ml/m2, EF 66%), increased ventricular mass (120 g/m2) and high-grade mitral incompetence (+3/4). After three months of standard RDT and a dry weight only 2 kg lower, the patient was normotensive without therapy, and a cardiac angiogram with a hemodynamic study was performed as a pre-transplant workup: a normal left ventricle was found with normal systolic function (TDV 66, TSV 17, GS 49, EF 75%) and a perfectly competent mitral valve (the reflux had disappeared). Coronary angiography did not reveal critical stenosis. A new echocardiogram confirmed the data of the hemodynamic study: hypertensive cardiomyopathy with normal systolic function. After one year the patient was transplanted, with good renal function and an unchanged cardiac echo. Relieving uremic toxicity ameliorated cardiac performance in this particular patient.

  2. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  3. Sequential Right and Left Ventricular Assessment in Posttetralogy of Fallot Patients with Significant Pulmonary Regurgitation.

    PubMed

    Wijesekera, Vishva A; Raju, Rekha; Precious, Bruce; Berger, Adam J; Kiess, Marla C; Leipsic, Jonathon A; Grewal, Jasmine

    2016-12-01

    The natural history of right ventricular (RV) and left ventricular (LV) size and function among adults with tetralogy of Fallot (TOF) repair and hemodynamically significant pulmonary regurgitation (PR) is not known. The main aim of this study was to determine changes in RV and LV size and function over time in an adult population with TOF repair and hemodynamically significant pulmonary regurgitation. Forty patients with repaired TOF and hemodynamically significant PR were included. These patients were identified on the basis of having more than one CMR between January 2008 and 2015. Patients with a prosthetic pulmonary valve or any cardiac intervention between CMR studies were excluded. Rate of progression (ROP) of RV dilation was determined for both indexed right ventricular end-systolic volume (RVESVi) and indexed right ventricular end-diastolic volume (RVEDVi), and calculated as the difference between the last and first volumes divided by the number of years between CMR#1 and CMR#2. Subjects were also divided into two groups based on the distribution of the ROP of RV dilation: Group I-rapid ROP (>50th percentile) and Group II-slower ROP (≤50th percentile). The interval between CMR#1 and CMR#2 was 3.9 ± 1.7 years (range 1-8 years). We did find a significant change in RVEDVi and RVESVi over this time period, although the magnitude of change was small. Nine patients (23%) had a reduction in right ventricular ejection fraction (RVEF) by greater than 5%, 13 patients (33%) had an increase in RVEDVi by greater than 10 mL/m2 and seven patients (18%) had an increase in RVESVi by greater than 10 mL/m2. Median ROP for RVEDVi was 1.8 (range -10.4 to 21.8) mL/(m2 year); RVESVi 1.1 (range -5.8 to 24.5) mL/(m2 year) and RVEF -0.5 (range -8 to 4)%/year. Patients with a rapid ROP had significantly larger RV volumes at the time of CMR#1 and lower RVEF as compared to the slow ROP group. There was no overall significant change in LVEDVi, LVESVi, or LVEF over this time period. We have demonstrated, in a small population of patients with hemodynamically significant PR, that there is a small increase in RV volumes and decrease in RVEF over a mean 4-year period. We believe it to be reasonable practice to perform CMR at least every 4 years in asymptomatic patients with repaired TOF and hemodynamically significant PR. We found that LV volumes and function remained stable during the study period, suggesting that significant progressive LV changes are less likely to occur over a shorter time period. Our results inform a safe standardized approach to monitoring adults with hemodynamically significant PR post TOF repair and assist in planning allocation of this expensive and limited resource. © 2016 Wiley Periodicals, Inc.

  4. Multi-model-based interactive authoring environment for creating shareable medical knowledge.

    PubMed

    Ali, Taqdir; Hussain, Maqbool; Ali Khan, Wajahat; Afzal, Muhammad; Hussain, Jamil; Ali, Rahman; Hassan, Waseem; Jamshed, Arif; Kang, Byeong Ho; Lee, Sungyoung

    2017-10-01

    Technologically integrated healthcare environments can be realized if physicians are encouraged to use smart systems for the creation and sharing of knowledge used in clinical decision support systems (CDSS). While CDSSs are heading toward smart environments, they lack support for abstraction of technology-oriented knowledge from physicians. Therefore, abstraction in the form of a user-friendly and flexible authoring environment is required in order for physicians to create shareable and interoperable knowledge for CDSS workflows. Our proposed system provides a user-friendly authoring environment to create Arden Syntax MLM (Medical Logic Module) as shareable knowledge rules for intelligent decision-making by CDSS. Existing systems are not physician friendly and lack interoperability and shareability of knowledge. In this paper, we proposed Intelligent-Knowledge Authoring Tool (I-KAT), a knowledge authoring environment that overcomes the above mentioned limitations. Shareability is achieved by creating a knowledge base from MLMs using Arden Syntax. Interoperability is enhanced using standard data models and terminologies. However, creation of shareable and interoperable knowledge using Arden Syntax without abstraction increases complexity, which ultimately makes it difficult for physicians to use the authoring environment. Therefore, physician friendliness is provided by abstraction at the application layer to reduce complexity. This abstraction is regulated by mappings created between legacy system concepts, which are modeled as domain clinical model (DCM) and decision support standards such as virtual medical record (vMR) and Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We represent these mappings with a semantic reconciliation model (SRM). The objective of the study is the creation of shareable and interoperable knowledge using a user-friendly and flexible I-KAT. Therefore we evaluated our system using completeness and user satisfaction criteria, which we assessed through the system- and user-centric evaluation processes. For system-centric evaluation, we compared the implementation of clinical information modelling system requirements in our proposed system and in existing systems. The results suggested that 82.05% of the requirements were fully supported, 7.69% were partially supported, and 10.25% were not supported by our system. In the existing systems, 35.89% of requirements were fully supported, 28.20% were partially supported, and 35.89% were not supported. For user-centric evaluation, the assessment criterion was 'ease of use'. Our proposed system showed 15 times better results with respect to MLM creation time than the existing systems. Moreover, on average, the participants made only one error in MLM creation using our proposed system, but 13 errors per MLM using the existing systems. We provide a user-friendly authoring environment for creation of shareable and interoperable knowledge for CDSS to overcome knowledge acquisition complexity. The authoring environment uses state-of-the-art decision support-related clinical standards with increased ease of use. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods as a function of the number of items in an aptitude test. The variance reflects the accuracy of both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
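
    The two estimators being compared can be illustrated with a grid-based calculation for a short test under a 2PL model: the maximum likelihood (ML) ability estimate is the grid point maximizing the response-pattern likelihood, while the expected a posteriori (EAP) estimate is the posterior mean under a standard-normal prior. The item parameters and response pattern below are invented; this is not the study's aptitude test.

```python
import numpy as np
from scipy.stats import norm

def irt_prob(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_and_eap(responses, a, b, grid=np.linspace(-4, 4, 401)):
    """Grid-based maximum likelihood and expected-a-posteriori ability estimates."""
    p = irt_prob(grid[:, None], a, b)                      # grid x items
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    ml = grid[np.argmax(loglik)]
    post = np.exp(loglik) * norm.pdf(grid)                 # standard-normal prior
    eap = np.sum(grid * post) / np.sum(post)
    return ml, eap

# Invented item parameters (discrimination a, difficulty b) and one response pattern.
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])
print(ml_and_eap(responses, a, b))   # EAP is shrunk toward the prior mean of 0
```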

  6. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  7. The Use of Multilevel Modeling to Estimate Which Measures Are Most Influential in Determining an Institution's Placement in Carnegie's New Doctoral/Research University Classification Schema

    ERIC Educational Resources Information Center

    Micceri, Theodore

    2007-01-01

    This research sought to determine whether any measure(s) used in the Carnegie Foundation's classification of Doctoral/Research Universities contribute to a greater degree than other measures to final rank placement. Multilevel Modeling (MLM) was applied to all eight of the Carnegie Foundation's predictor measures using final rank…

  8. Detecting Intervention Effects in a Cluster-Randomized Design Using Multilevel Structural Equation Modeling for Binary Responses

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Preacher, Kristopher J.; Bottge, Brian A.

    2015-01-01

    Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test--post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test…

  9. Left atrial function in patients with light chain amyloidosis: A transthoracic 3D speckle tracking imaging study.

    PubMed

    Mohty, Dania; Petitalot, Vincent; Magne, Julien; Fadel, Bahaa M; Boulogne, Cyrille; Rouabhia, Dounia; ElHamel, Chahrazed; Lavergne, David; Damy, Thibaud; Aboyans, Victor; Jaccard, Arnaud

    2018-04-01

    Systemic light chain amyloidosis (AL) is characterized by the extracellular deposition of amyloid fibrils. Transthoracic echocardiography is the modality of choice to assess cardiac function in patients with AL. Whereas left ventricular (LV) function has been well studied in this patient population, data regarding the value of left atrial (LA) function in AL patients are lacking. In this study, we aim to examine the impact of LA volumes and function on survival in AL patients as assessed by real-time 3D echocardiography. A total of 77 patients (67±10 years, 60% men) with confirmed AL and 39 healthy controls were included. All standard 2D echocardiographic and 3D-LA parameters were obtained. Fourteen patients (18%) were in Mayo Clinic (MC) stage I, 30 (39%) in stage II, and 33 (43%) in stage III at initial evaluation. There was no significant difference among the MC stage groups in terms of age, gender, or cardiovascular risk factors. As compared to patients in MC II and MC I, those in MC III had significantly larger indexed 3D-LA volumes (MC III: 46±15 mL/m2, MC II: 38±12 mL/m2, and MC I: 23±9 mL/m2, p<0.0001), lower 3D-LA total emptying fraction (3D-tLAEF) (21±13% vs. 31±15% vs. 43±7%, respectively, p<0.0001), and worse 3D peak atrial longitudinal strain (3D-PALS) (11±9% vs. 18±13% vs. 20±7%, respectively, p=0.007). Two-year survival was significantly lower in patients with 3D-tLAEF <+34% (p=0.003) and in those with 3D-PALS <+14% (p=0.034). Both parameters provided incremental prognostic value over maximal LA volume in multivariate analysis. Functional LA parameters are progressively altered in AL patients according to the MC stage. A decrease in 3D-PALS is associated with worse outcome, independently of LA volume. Copyright © 2017 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  10. Validation of automated lobe segmentation on paired inspiratory-expiratory chest CT in 8-14 year-old children with cystic fibrosis.

    PubMed

    Konietzke, Philip; Weinheimer, Oliver; Wielpütz, Mark O; Savage, Dasha; Ziyeh, Tiglath; Tu, Christin; Newman, Beverly; Galbán, Craig J; Mall, Marcus A; Kauczor, Hans-Ulrich; Robinson, Terry E

    2018-01-01

    Densitometry on paired inspiratory and expiratory multidetector computed tomography (MDCT) for the quantification of air trapping is an important approach to assess functional changes in airways diseases such as cystic fibrosis (CF). For a regional analysis of functional deficits, an accurate lobe segmentation algorithm applicable to inspiratory and expiratory scans is beneficial. We developed a fully automated lobe segmentation algorithm, and subsequently validated automatically generated lobe masks (ALM) against manually corrected lobe masks (MLM). Paired inspiratory and expiratory CTs from 16 children with CF (mean age 11.1±2.4 years) acquired at 4 time points (baseline, 3 mon, 12 mon, 24 mon) with 2 kernels (B30f, B60f) were segmented, resulting in 256 ALM. After manual correction, spatial overlap (Dice index) and mean differences in lung volume and air trapping were calculated for ALM vs. MLM. The mean overlap calculated with the Dice index between ALM and MLM was 0.98±0.02 on inspiratory, and 0.86±0.07 on expiratory CT. If 6 lobes were segmented (lingula treated as a separate lobe), the mean overlap was 0.97±0.02 on inspiratory, and 0.83±0.08 on expiratory CT. The mean differences in lobar volumes calculated in accordance with the approach of Bland and Altman were generally low, ranging on inspiratory CT from 5.7±52.23 cm3 for the right upper lobe to 17.41±14.92 cm3 for the right lower lobe. Higher differences were noted on expiratory CT. The mean differences for air trapping were even lower, ranging from 0±0.01 for the right upper lobe to 0.03±0.03 for the left lower lobe. Automatic lobe segmentation delivers excellent results for inspiratory and good results for expiratory CT. It may become an important component for lobe-based quantification of functional deficits in cystic fibrosis lung disease, reducing the need for user interaction in CT post-processing.
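
    The spatial-overlap measure quoted above is the Dice index, 2|A∩B|/(|A|+|B|) for two binary masks. A minimal computation, with toy arrays standing in for ALM and MLM lobe masks:

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A|+|B|) for two boolean masks of equal shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 3D example standing in for automated (ALM) vs. manually corrected (MLM) masks.
alm = np.zeros((4, 4, 4), dtype=bool); alm[:2] = True
mlm = np.zeros((4, 4, 4), dtype=bool); mlm[:3] = True
print(round(dice_index(alm, mlm), 3))   # 0.8
```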

  11. Effects of simultaneous and optimized sequential cardiac resynchronization therapy on myocardial oxidative metabolism and efficiency.

    PubMed

    Christenson, Stuart D; Chareonthaitawee, Panithaya; Burnes, John E; Hill, Michael R S; Kemp, Brad J; Khandheria, Bijoy K; Hayes, David L; Gibbons, Raymond J

    2008-02-01

    Cardiac resynchronization therapy (CRT) can improve left ventricular (LV) hemodynamics and function. Recent data suggest the energy cost of such improvement is favorable. The effects of sequential CRT on myocardial oxidative metabolism (MVO(2)) and efficiency have not been previously assessed. Eight patients with NYHA class III heart failure were studied 196 +/- 180 days after CRT implant. Dynamic [(11)C]acetate positron emission tomography (PET) and echocardiography were performed after 1 hour of: 1) AAI pacing, 2) simultaneous CRT, and 3) sequential CRT. MVO(2) was calculated using the monoexponential clearance rate of [(11)C]acetate (k(mono)). Myocardial efficiency was expressed in terms of the work metabolic index (WMI). P values represent overall significance from repeated measures analysis. Global LV and right ventricular (RV) MVO(2) were not significantly different between pacing modes, but the septal/lateral MVO(2) ratio differed significantly with the change in pacing mode (AAI pacing = 0.696 +/- 0.094 min(-1), simultaneous CRT = 0.975 +/- 0.143 min(-1), and sequential CRT = 0.938 +/- 0.189 min(-1); overall P = 0.001). Stroke volume index (SVI) (AAI pacing = 26.7 +/- 10.4 mL/m(2), simultaneous CRT = 30.6 +/- 11.2 mL/m(2), sequential CRT = 33.5 +/- 12.2 mL/m(2); overall P < 0.001) and WMI (AAI pacing = 3.29 +/- 1.34 mmHg*mL/m(2)*10(6), simultaneous CRT = 4.29 +/- 1.72 mmHg*mL/m(2)*10(6), sequential CRT = 4.79 +/- 1.92 mmHg*mL/m(2)*10(6); overall P = 0.002) also differed between pacing modes. Compared with simultaneous CRT, additional changes in septal/lateral MVO(2), SVI, and WMI with sequential CRT were not statistically significant on post hoc analysis. In this small selected population, CRT increases LV SVI without increasing MVO(2), resulting in improved myocardial efficiency. Additional improvements in LV work, oxidative metabolism, and efficiency from simultaneous to sequential CRT were not significant.

  12. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur; these are distinguished from the remaining zero counts, which are modeled zeros and simply indicate that the drug-adverse event pairs have not occurred or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle the stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
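
    The zero-inflated Poisson building block can be written down directly: with mixing probability pi for structural zeros and Poisson mean lambda otherwise, the log-likelihood of a count vector is maximized numerically. The sketch below fits a single simulated count vector and is not the paper's stratified likelihood-ratio test over the full drug-adverse event matrix.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, counts):
    """Negative log-likelihood of a zero-inflated Poisson(pi, lam) sample,
    parameterized on the real line via logit(pi) and log(lam)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    zero = counts == 0
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + counts * np.log(lam) - lam - gammaln(counts + 1)
    return -(zero * ll_zero + (~zero) * ll_pos).sum()

# Simulated counts: extra zeros mixed with Poisson(2) counts.
rng = np.random.default_rng(0)
counts = np.where(rng.random(1000) < 0.3, 0, rng.poisson(2.0, 1000))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
print(f"pi = {pi_hat:.2f}, lambda = {lam_hat:.2f}")   # near the simulating values
```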

  14. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys’ non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with the R program. The results are then displayed in tables to facilitate the comparisons.
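
    For the Rayleigh scale parameter the ML estimate is available in closed form, sigma_hat = sqrt(sum(x_i^2) / (2n)), and a Bayes estimate under Jeffreys' prior can be obtained by numerical integration. The sketch below compares bias and MSE of the two by simulation (in Python rather than R; the abstract's specific loss functions are not reproduced, and the posterior mean shown corresponds to squared-error loss).

```python
import numpy as np

def rayleigh_mle(x):
    """Closed-form ML estimate of the Rayleigh scale parameter sigma."""
    return np.sqrt(np.sum(x**2) / (2 * len(x)))

def rayleigh_bayes_mean(x, grid=np.linspace(0.01, 10, 5000)):
    """Posterior mean of sigma under Jeffreys' prior (1/sigma), computed by
    numerical integration on a grid (i.e. the squared-error-loss estimator)."""
    n, s = len(x), np.sum(x**2)
    loglik = -2 * n * np.log(grid) - s / (2 * grid**2)   # up to an additive constant
    logpost = loglik - np.log(grid)                      # + log Jeffreys prior
    w = np.exp(logpost - logpost.max())
    return np.sum(grid * w) / np.sum(w)

# Monte Carlo comparison of bias and MSE at a true scale of sigma = 2.
rng = np.random.default_rng(0)
sigma_true, n, reps = 2.0, 20, 2000
mle, bayes = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.rayleigh(sigma_true, size=n)
    mle[r], bayes[r] = rayleigh_mle(x), rayleigh_bayes_mean(x)
for name, est in (("MLE", mle), ("Bayes", bayes)):
    print(f"{name}: bias = {est.mean() - sigma_true:+.4f}, "
          f"MSE = {np.mean((est - sigma_true)**2):.4f}")
```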

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
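
    The flavour of such an iterative procedure is the familiar EM recursion for a normal mixture: alternate between computing each component's responsibility for every observation and re-estimating the weights, means, and variances from those responsibilities. A compact two-component sketch (not the report's exact procedure, which also treats Newton's method and scoring):

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """EM iterations for a two-component univariate normal mixture.
    Returns (weights, means, standard deviations)."""
    # Crude initialization from the spread of the data.
    w = np.array([0.5, 0.5])
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = w * norm.pdf(x[:, None], mu, sd)           # n x 2
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates of weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_two_normals(x))   # should approach weights 0.6/0.4, means -2 and 3
```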

  16. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method, we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and the mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag

  17. Mixing in the Extratropical Stratosphere: Model-measurements Comparisons using MLM Diagnostics

    NASA Technical Reports Server (NTRS)

    Ma, Jun; Waugh, Darryn W.; Douglass, Anne R.; Kawa, Stephan R.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    We evaluate transport processes in the extratropical lower stratosphere for both models and measurements with the help of the equivalent length diagnostic from modified Lagrangian-mean (MLM) analysis. This diagnostic is used to compare measurements of long-lived tracers made by the Cryogenic Limb Array Etalon Spectrometer (CLAES) on the Upper Atmosphere Research Satellite (UARS) with simulated tracers. Simulations are produced in Chemical and Transport Models (CTMs), in which meteorological fields are taken from the Goddard Earth Observing System Data Assimilation System (GEOS DAS), the Middle Atmosphere Community Climate Model (MACCM2), and the Geophysical Fluid Dynamics Laboratory (GFDL) "SKYHI" model, respectively. Time series of isentropic equivalent length show that these models are able to capture major mixing and transport properties observed by CLAES, such as the formation and destruction of polar barriers and the presence of surf zones in both hemispheres. Differences between each model simulation and the observations are examined in light of model performance. Among these differences, only the simulation driven by GEOS DAS shows one case of the "top-down" destruction of the Antarctic polar vortex, as observed in the CLAES data. Additional experiments of isentropic advection of an artificial tracer by GEOS DAS winds suggest that diabatic movement might contribute considerably to the equivalent length field in the 3D CTM diagnostics.

  18. Multilevel built environment features and individual odds of overweight and obesity in Utah

    PubMed Central

    Xu, Yanqing; Wen, Ming; Wang, Fahui

    2015-01-01

    Based on the data from the Behavioral Risk Factor Surveillance System (BRFSS) in 2007, 2009 and 2011 in Utah, this research uses multilevel modeling (MLM) to examine the associations between neighborhood built environments and individual odds of overweight and obesity after controlling for individual risk factors. The BRFSS data include information on 21,961 individuals geocoded to zip code areas. Individual variables include BMI (body mass index) and socio-demographic attributes such as age, gender, race, marital status, educational attainment, employment status, and smoking status. Neighborhood built environment factors measured at both zip code and county levels include street connectivity, walk score, distance to parks, and food environment. Two additional neighborhood variables, namely the poverty rate and urbanicity, are also included as control variables. MLM results show that at the zip code level, poverty rate and distance to parks are significant and negative covariates of the odds of overweight and obesity; and at the county level, food environment is the sole significant factor, with stronger fast food presence linked to higher odds of overweight and obesity. These findings suggest that obesity risk factors lie at multiple neighborhood levels and that built environment features need to be defined at a neighborhood size relevant to residents' activity space. PMID:26251559
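
    For readers unfamiliar with the MLM machinery invoked here, the sketch below (Python with NumPy/SciPy) fits a generic two-level random-intercept logistic model by maximum likelihood, integrating the neighborhood-level intercept out with Gauss-Hermite quadrature. The simulated data, variable names, and parameter values are illustrative assumptions unrelated to the BRFSS data; real analyses would normally use a dedicated mixed-model package.

      # Sketch: two-level random-intercept logistic regression (a minimal MLM),
      # fitted by maximum likelihood with Gauss-Hermite quadrature.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Illustrative data: 50 "neighborhoods" with 40 respondents each.
      n_clusters, n_per = 50, 40
      cluster = np.repeat(np.arange(n_clusters), n_per)
      x = rng.normal(size=n_clusters * n_per)            # e.g. a standardized built-environment score
      u_true = rng.normal(scale=0.7, size=n_clusters)    # neighborhood random effects
      eta = -0.5 + 0.4 * x + u_true[cluster]
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))    # e.g. overweight/obesity indicator

      nodes, weights = np.polynomial.hermite.hermgauss(20)

      def negloglik(params):
          b0, b1, log_sigma = params
          sigma = np.exp(log_sigma)
          total = 0.0
          for j in range(n_clusters):
              xj, yj = x[cluster == j], y[cluster == j]
              # integrate the cluster's likelihood over u_j ~ N(0, sigma^2)
              lik_j = 0.0
              for node, w in zip(nodes, weights):
                  u_j = np.sqrt(2.0) * sigma * node
                  p = 1.0 / (1.0 + np.exp(-(b0 + b1 * xj + u_j)))
                  lik_j += w * np.prod(np.where(yj == 1, p, 1.0 - p))
              total += np.log(lik_j / np.sqrt(np.pi))
          return -total

      fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
      b0, b1, sigma = fit.x[0], fit.x[1], np.exp(fit.x[2])
      print(f"intercept={b0:.2f}  slope={b1:.2f}  neighborhood SD={sigma:.2f}")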

  19. Enzymatic synthesis of structured lipids.

    PubMed

    Iwasaki, Yugo; Yamane, Tsuneo

    2004-01-01

    Structured lipids (SLs) are defined as lipids that are modified chemically or enzymatically in order to change their structure. This review deals with structured triacylglycerols (STGs) and structured phospholipids (SPLs). The most typical STGs are MLM-type STGs, having medium chain fatty acids (FAs) at the 1- and 3-positions and a long chain fatty acid at the 2-position. MLM-type STGs are synthesized by: 1) 1,3-position-specific lipase-catalyzed acyl exchange of TG with FA or with FA ethylester (FAEt); 2) 1,3-position-specific lipase-catalyzed acylation of glycerol with FA, giving symmetric 1,3-diacyl-sn-glycerol, followed by chemical acylation at the sn-2 position; and 3) 1,3-position-specific lipase-catalyzed deacylation of TG, giving 2-monoacylglycerol, followed by reacylation at the 1- and 3-positions with FA or FAEt. Enzymatic preparation of SPLs requires: 1) acyl group modification, and 2) head group modification of phospholipids. Acyl group modification is performed using lipase- or phospholipase A2-mediated transesterification or ester synthesis to introduce arbitrary fatty acids into phospholipids. Head group modification is carried out by phospholipase D-catalyzed transphosphatidylation. A wide range of compounds can be introduced into the polar head of phospholipids, making it possible to prepare various SPLs.

  20. Real-time three-dimensional echocardiographic study of left ventricular function after infarct exclusion surgery for ischemic cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Shiota, T.; McCarthy, P. M.; Firstenberg, M. S.; Greenberg, N. L.; Tsujino, H.; Bauer, F.; Travaglini, A.; Hoercher, K. J.; Buda, T.; hide

    2000-01-01

    BACKGROUND: Infarct exclusion (IE) surgery, a technique of left ventricular (LV) reconstruction for dyskinetic or akinetic LV segments in patients with ischemic cardiomyopathy, requires accurate volume quantification to determine the impact of surgery due to complicated geometric changes. METHODS AND RESULTS: Thirty patients who underwent IE (mean age 61+/-8 years, 73% men) had epicardial real-time 3-dimensional echocardiographic (RT3DE) studies performed before and after IE. RT3DE follow-up was performed transthoracically 42+/-67 days after surgery in 22 patients. Repeated measures ANOVA was used to compare the values before and after IE surgery and at follow-up. Significant decreases in LV end-diastolic (EDVI) and end-systolic (ESVI) volume indices were apparent immediately after IE and in follow-up (EDVI 99+/-40, 67+/-26, and 71+/-31 mL/m2, respectively; ESVI 72+/-37, 40+/-21, and 42+/-22 mL/m2, respectively; P<0.05). LV ejection fraction increased significantly and remained higher (0.29+/-0.11, 0.43+/-0.13, and 0.42+/-0.09, respectively, P<0.05). Forward stroke volume in 16 patients with preoperative mitral regurgitation significantly improved after IE and in follow-up (22+/-12, 53+/-24, and 58+/-21 mL, respectively, P<0.005). New York Heart Association functional class at an average 285+/-144 days of clinical follow-up significantly improved from 3.0+/-0.8 to 1.8+/-0.8 (P<0.0001). Smaller end-diastolic and end-systolic volumes measured with RT3DE immediately after IE were closely related to improvement in New York Heart Association functional class at clinical follow-up (Spearman's rho=0.58 and 0.60, respectively). CONCLUSIONS: RT3DE can be used to quantitatively assess changes in LV volume and function after complicated LV reconstruction. Decreased LV volume and increased ejection fraction imply a reduction in LV wall stress after IE surgery and are predictive of symptomatic improvement.

  1. Mitral valve repair for ischemic moderate mitral regurgitation in patients undergoing coronary artery bypass grafting

    PubMed Central

    Toktas, Faruk; Yavuz, Senol; Ozsin, Kadir K.; Sanri, Umut S.

    2016-01-01

    Objectives: To investigate whether mitral valve repair (MVR) at the time of coronary artery bypass grafting (CABG) in patients with ischemic moderate mitral regurgitation (MR) and coronary artery disease could improve short- and mid-term postoperative outcomes. Methods: Between March 2013 and December 2015, 90 patients with moderate ischemic MR underwent first-time CABG in Bursa Yuksek Ihtisas Training and Research Hospital, Bursa, Turkey. Out of 90 patients, 44 (48.9%) underwent combined CABG+MVR. The remaining 46 (51.1%) underwent CABG alone. Ventricular functions and effort capacities of patients in both groups were evaluated echocardiographically and clinically in the preoperative period and in the first postoperative year. Results: Postoperative regurgitant volume changes relative to preoperative values were -24.76±19 ml/beat in the combined CABG+MVR group and -8.70±7.2 ml/beat in the CABG alone group (p=0.001). The change in vena contracta width was -3.40±0.2 mm in the combined CABG+MVR group, whereas it was -1.45±0.7 mm in the CABG alone group (p=0.019). The changes in left ventricular end-systolic volume index were -30.77±25.9 ml/m2 in the combined CABG+MVR group and -15.6±9.4 ml/m2 in the CABG alone group (p=0.096). The ejection fraction change was +1.51±5.3% in the combined CABG+MVR group and +1.15±4.3% in the CABG alone group; no statistically significant difference was found between the groups (p=0.604). The preoperative New York Heart Association class value was 2.18±0.45 in the combined CABG+MVR group and 2.13±0.54 in the CABG alone group. Conclusions: Moderate MR in patients undergoing CABG affects the outcome adversely, and it does not reliably improve after CABG alone. Therefore, patients with ischemic moderate MR should undergo simultaneous MVR at the time of CABG. PMID:27464861

  2. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  3. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
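
    The sketch below (Python with NumPy/SciPy; the Gaussian likelihood, flat prior, and all parameter choices are placeholder assumptions rather than the problems studied in the abstract) implements a minimal nested sampling loop in which the new prior draw is obtained by naive rejection sampling from the prior until it clears the current likelihood threshold; the methods proposed above target exactly this step.

      # Sketch: a minimal nested sampling loop in which the likelihood-restricted
      # prior draw is done by naive rejection sampling (the step the proposed
      # methods are designed to replace with something more efficient).
      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(1)

      def log_likelihood(theta):
          # placeholder likelihood: isotropic Gaussian centered at (0.5, 0.5)
          return -0.5 * np.sum((theta - 0.5) ** 2) / 0.05 ** 2

      def sample_prior(n=1):
          return rng.uniform(0.0, 1.0, size=(n, 2))   # flat prior on the unit square

      n_live, n_iter = 200, 1000
      live = sample_prior(n_live)
      live_logL = np.array([log_likelihood(t) for t in live])

      log_Z, log_X = -np.inf, 0.0                     # running evidence and prior volume
      for i in range(n_iter):
          worst = np.argmin(live_logL)
          logL_star = live_logL[worst]
          log_X_new = -(i + 1) / n_live               # volume shrinks by ~1/n_live per step
          log_w = np.log(np.exp(log_X) - np.exp(log_X_new))
          log_Z = np.logaddexp(log_Z, logL_star + log_w)
          log_X = log_X_new
          # draw from the prior until the sample exceeds the likelihood threshold
          while True:
              cand = sample_prior(1)[0]
              cand_logL = log_likelihood(cand)
              if cand_logL > logL_star:
                  break
          live[worst], live_logL[worst] = cand, cand_logL

      # add the contribution of the remaining live points
      log_Z = np.logaddexp(log_Z, log_X + logsumexp(live_logL) - np.log(n_live))
      print("estimated log-evidence:", log_Z)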

  4. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  5. Evaluating the effects of dam breach methodologies on Consequence Estimation through Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Kalyanapu, A. J.; Thames, B. A.

    2013-12-01

    Dam breach modeling often includes application of models that are sophisticated, yet computationally intensive, to compute flood propagation at high temporal and spatial resolutions. This results in a significant need for computational capacity and has driven the development of newer flood models using multi-processor and graphics processing techniques. Recently, a comprehensive benchmark exercise, the 12th Benchmark Workshop on Numerical Analysis of Dams, was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop is focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case study and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. The study uses the ICOLD hypothetical data, including the topography, dam geometry and construction information, and land use/land cover data, along with socio-economic and demographic data. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in the risk assessments. The four methodologies used are: i) Froehlich (1995), ii) MacDonald and Langridge-Monopolis 1984 (MLM), iii) Von Thun and Gillette 1990 (VTG), and iv) Froehlich (2008). To achieve this objective, three different modeling components were used. First, using HEC-RAS v.4.1, dam breach discharge hydrographs are developed. These hydrographs are then provided as flow inputs into a two-dimensional flood model named Flood2D-GPU, which leverages the computer's graphics card for much improved computational performance. Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, are input into HEC-FIA, which provides the consequence assessment for the solution to the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters, breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf), is conducted. Up to 68 simulations are computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL) and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in the simulated minimum and maximum values of the PAR, PAR65+ and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.
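
    To make the role of the breach parameters concrete, the sketch below (Python) evaluates the widely cited Froehlich (1995) regression equations for average breach width and failure time, one of the four methodologies compared above. The coefficient values are quoted from secondary sources and should be verified against the original reference before any engineering use; the reservoir values are purely illustrative.

      # Sketch: Froehlich (1995) breach-parameter regressions. Coefficients are as
      # commonly cited in secondary sources; verify against the original paper.
      def froehlich_1995(volume_m3, breach_height_m, overtopping=True):
          """Return (average breach width [m], failure time [h])."""
          k0 = 1.4 if overtopping else 1.0          # failure-mode factor
          width = 0.1803 * k0 * volume_m3 ** 0.32 * breach_height_m ** 0.19
          t_fail = 0.00254 * volume_m3 ** 0.53 * breach_height_m ** -0.90
          return width, t_fail

      # Hypothetical reservoir: 38 million m3 stored behind a 30 m high breach section
      w, tf = froehlich_1995(38.0e6, 30.0, overtopping=True)
      print(f"average breach width ~ {w:.0f} m, failure time ~ {tf:.1f} h")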

  6. Role Conflict and Reality Shock among Neophyte Navy Nurses

    DTIC Science & Technology

    1992-08-01

    job satisfaction; (4) role ambiguity did not have a significant effect on job satisfaction. ... The main problems ... The new graduate nurse reports to her first job filled with dreams, aspirations, and enthusiasm for her first challenging work ... the new nurse graduate’s school-to-work transition and what problems are defined by the new graduates; 3. Determine the dissatisfiers, stressors, ...

  7. Air & Space Power Journal. Volume 28, Number 5, September-October 2014

    DTIC Science & Technology

    2014-10-01

    authorized could employ up to three four-ship cells concurrently.27 If three cells are employed simultaneously, then 12 of the 21 jets (57 percent...Performance Evaluation of a Forward Arming and Refueling Point (FARP) Using Discrete Event Simulation,” Graduate Research Project AFIT/MLM/ENS/05...information, within which propaganda is a species, and therefore addresses all information—biased and unbiased, true and false—designed to shape

  8. Estimates of Crustal Transmission Losses Using MLM Array Processing.

    DTIC Science & Technology

    1982-07-01

    boundary with a half space below, and with some form of reflection characteristic and/or loss mechanism. If acoustic energy, upon encountering the bottom...sea-sediment interface would probably be sufficient. However, sound energy does penetrate beneath the sea floor and is both reflected and refracted...back to the water. In an active acoustical experiment, especially at longer ranges, a significant amount of the received energy may come from waves

  9. Ride-along data LOS 130, 170 & LO330 shots z3139, 3140 and 3141

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loisel, Guillaume Pascal

    Each instrument records the x-ray emission from the Z-pinch dynamic hohlraum (ZPDH); the LOS 130 TIXTL instruments record the absorption of the pinch backlighter through an expanding NaF/Mg foil; the LOS 170 MLM instruments record monochromatic images at 276 and 528 eV photon energies near and before the ZPDH stagnation time; LOS 330 TREX 6A & B recorded time-resolved absorption spectra from a radiatively heated Ne gas.

  10. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.

  11. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  12. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
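
    A minimal numerical sketch of the idea (Python with NumPy/SciPy; the data, the single multiplicative correction factor, and its Gaussian Type B PDF are illustrative assumptions rather than the authors' example) is given below: the ordinary least-squares likelihood for a straight line is multiplied by a PDF for a common nuisance correction factor, and that nuisance parameter is removed by profiling before the remaining parameters are estimated.

      # Sketch: extended likelihood = least-squares likelihood x Type B PDF for a
      # common multiplicative correction factor c, which is then profiled out.
      import numpy as np
      from scipy.optimize import minimize, minimize_scalar

      # illustrative data: y ~ c*(a + b*x) with Type A (statistical) uncertainties
      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.1, 3.9, 6.2, 7.9, 10.3])
      sigma = np.full_like(y, 0.2)        # Type A standard uncertainties
      u_B = 0.03                          # Type B standard uncertainty of c (about 1)

      def neg_log_ext_likelihood(a, b, c):
          resid = (y - c * (a + b * x)) / sigma
          # least-squares part plus the Gaussian PDF encoding Type B knowledge of c
          return 0.5 * np.sum(resid ** 2) + 0.5 * ((c - 1.0) / u_B) ** 2

      def profile_nll(params):
          a, b = params
          inner = minimize_scalar(lambda c: neg_log_ext_likelihood(a, b, c),
                                  bounds=(0.5, 1.5), method="bounded")
          return inner.fun                # nuisance factor c profiled out

      fit = minimize(profile_nll, x0=np.array([0.0, 2.0]), method="Nelder-Mead")
      print("intercept, slope:", fit.x)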

  13. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of the methods and their application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

  14. Acute hemodynamic effects of adaptive servo-ventilation in patients with heart failure.

    PubMed

    Yamada, Shiro; Sakakibara, Mamoru; Yokota, Takashi; Kamiya, Kiwamu; Asakawa, Naoya; Iwano, Hiroyuki; Yamada, Satoshi; Oba, Koji; Tsutsui, Hiroyuki

    2013-01-01

    Adaptive servo-ventilation (ASV) improves cardiac function in patients with heart failure (HF). We compared the hemodynamics of control and HF patients, and identified the predictors for acute effects of ASV in HF. We performed baseline echocardiographic measurements and hemodynamic measurements at baseline and after 15 min of ASV during cardiac catheterization in 11 control and 34 HF patients. Heart rate and blood pressure did not change after ASV in either the control or HF group. Stroke volume index (SVI) decreased from 49.3±7.6 to 41.3±7.6 ml/m2 in controls (P<0.0001) but did not change in the HF patients (from 34.8±11.5 to 32.8±8.9 ml/m2, P=0.148). In the univariate analysis, pulmonary capillary wedge pressure (PCWP), mitral regurgitation (MR)/left atrial (LA) area, E/A, E/e', and the sphericity index defined by the ratio between the short-axis and long-axis dimensions of the left ventricle significantly correlated with % change of SVI from baseline during ASV. PCWP and MR/LA area were independent predictors by multivariate analysis. Moreover, responders (15 of 34 HF patients; 44%) categorized by an increase in SVI showed significantly higher PCWP, MR, and sphericity index. Left ventricular structure and MR, as well as PCWP, could predict acute favorable effects on hemodynamics by ASV therapy in HF patients. 

  15. Independent effects of both right and left ventricular function on plasma brain natriuretic peptide.

    PubMed

    Vogelsang, Thomas Wiis; Jensen, Ruben J; Monrad, Astrid L; Russ, Kaspar; Olesen, Uffe H; Hesse, Birger; Kjaer, Andreas

    2007-09-01

    Brain natriuretic peptide (BNP) is increased in heart failure; however, the relative contribution of the right and left ventricles is largely unknown. The aim was to investigate whether right ventricular function has an independent influence on plasma BNP concentration. Right ventricular ejection fraction (RVEF), left ventricular ejection fraction (LVEF), and left ventricular end-diastolic volume index (LVEDVI) were determined in 105 consecutive patients by first-pass radionuclide ventriculography (FP-RNV) and multiple ECG-gated equilibrium radionuclide ventriculography (ERNV), respectively. BNP was analyzed by immunoassay. Mean LVEF was 0.51 (range 0.10-0.83), with 36% having a reduced LVEF (<0.50). Mean RVEF was 0.50 (range 0.26-0.78), with 43% having a reduced RVEF (<0.50). The mean LVEDVI was 92 ml/m2, with 22% above the upper normal limit (117 ml/m2). Mean BNP was 239 pg/ml (range 0.63-2523). In univariate linear regression analysis, LVEF, LVEDVI and RVEF all correlated significantly with log BNP (p<0.0001). In a multivariate analysis only RVEF and LVEF remained significant. The parameter estimates of the final adjusted model indicated that the influences of RVEF and LVEF on log BNP were of the same magnitude. BNP, which is a strong prognostic marker in heart failure, independently depends on both left and right ventricular systolic function. This might, at least in part, explain why BNP holds stronger prognostic value than LVEF alone.

  16. Effect of the Epicardial Adipose Tissue Volume on the Prevalence of Paroxysmal and Persistent Atrial Fibrillation.

    PubMed

    Oba, Kageyuki; Maeda, Minetaka; Maimaituxun, Gulinu; Yamaguchi, Satoshi; Arasaki, Osamu; Fukuda, Daiju; Yagi, Shusuke; Hirata, Yukina; Nishio, Susumu; Iwase, Takashi; Takao, Shoichiro; Kusunose, Kenya; Yamada, Hirotsugu; Soeki, Takeshi; Wakatsuki, Tetsuzo; Harada, Masafumi; Masuzaki, Hiroaki; Sata, Masataka; Shimabukuro, Michio

    2018-05-25

    Although increasing evidence suggests that epicardial adipose tissue volume (EATV) is associated with atrial fibrillation (AF), it is controversial whether there is a dose-response relationship of increasing EATV along the continuum of AF. We evaluated the effect of the EATV on the prevalence of paroxysmal AF (PAF) and persistent AF (PeAF) and the relationships with cardiac structure and functional remodeling. Methods and Results: Subjects who underwent multidetector computed tomography (MDCT) coronary angiography because of symptoms suggestive of coronary artery disease were divided into sinus rhythm (SR) (n=112), PAF (n=133), and PeAF (n=71) groups. The EATV index (EATV/body surface area, mL/m2) was strongly associated with the prevalence of PAF and PeAF on the model adjusted for known AF risk factors. The effect of the EATV index on the prevalence of PeAF, but not on that of PAF, was modified by the left atrial (LA) dimension, suggesting that extension of the LA dimension is related to EATV expansion in PeAF. The cutoff value of the EATV index for the prevalence was higher in PeAF than in PAF (64 vs. 55 mL/m2, P<0.01). The EATV index is associated with the prevalence of PAF and PeAF, and its cutoff values are predictive for PAF and PeAF development independently of other AF risk factors.

  17. Preoperative left ventricular ejection fraction and left atrium reverse remodeling after mitral regurgitation surgery.

    PubMed

    Machado, Lucia R; Meneghelo, Zilda M; Le Bihan, David C S; Barretto, Rodrigo B M; Carvalho, Antonio C; Moises, Valdir A

    2014-11-06

    Left atrium enlargement has been associated with cardiac events in patients with mitral regurgitation (MR). Left atrium reverse remodeling (LARR) occurs after surgical correction of MR, but the preoperative predictors of this phenomenon are not well known. It is therefore important to identify preoperative predictors of postoperative LARR. We enrolled 62 patients with chronic severe MR (prolapse or flail leaflet) who underwent successful mitral valve surgery (repair or replacement), all with pre- and postoperative echocardiography. LARR was defined as a reduction in left atrium volume index (LAVI) of ≥ 25%. Stepwise multiple regression analysis was used to identify independent predictors of LARR. LARR occurred in 46 patients (74.2%), with the mean LAVI decreasing from 85.5 mL/m2 to 49.7 mL/m2 (p<0.001). These patients had a smaller preoperative left ventricular systolic volume (p=0.022) and a higher left ventricular ejection fraction (LVEF) (p=0.034). LVEF was identified as the only preoperative variable significantly associated with LARR (odds ratio, 1.086; 95% confidence interval, 1.002-1.178). A LVEF cutoff value of 63.5% identified patients with LARR of ≥ 25% with a sensitivity of 71.7% and a specificity of 56.3%. LARR occurs frequently after mitral valve surgery and is associated with a preoperative LVEF higher than 63.5%.

  18. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
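
    The following sketch (Python/NumPy; the one-parameter Gaussian example, the temperature ladder, and the random-walk Metropolis sampler are illustrative choices, not the study's test functions) shows the core of the thermodynamic integration estimate: sample from the power posteriors proportional to prior times likelihood raised to a power t between 0 and 1, average the log-likelihood at each t, and integrate over t with the trapezoidal rule to obtain the log marginal likelihood.

      # Sketch: log marginal likelihood via thermodynamic integration, using a
      # simple random-walk Metropolis sampler at each power coefficient t.
      import numpy as np

      rng = np.random.default_rng(2)
      y = rng.normal(1.0, 1.0, size=25)               # illustrative data, known unit variance

      def log_like(mu):
          return -0.5 * np.sum((y - mu) ** 2) - 0.5 * y.size * np.log(2.0 * np.pi)

      def log_prior(mu):
          return -0.5 * (mu / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))

      def sample_power_posterior(t, n_samp=4000, step=1.0):
          """Metropolis draws from p_t(mu) proportional to prior(mu) * likelihood(mu)**t."""
          mu, cur = 0.0, log_prior(0.0) + t * log_like(0.0)
          draws = np.empty(n_samp)
          for i in range(n_samp):
              prop = mu + step * rng.normal()
              cand = log_prior(prop) + t * log_like(prop)
              if np.log(rng.uniform()) < cand - cur:
                  mu, cur = prop, cand
              draws[i] = mu
          return draws[n_samp // 2:]                   # crude burn-in removal

      ts = np.linspace(0.0, 1.0, 11) ** 3              # ladder concentrated near t = 0
      means = np.array([np.mean([log_like(m) for m in sample_power_posterior(t)])
                        for t in ts])
      # trapezoidal rule for: log Z = integral over t of E_t[log likelihood]
      log_Z = np.sum(np.diff(ts) * 0.5 * (means[:-1] + means[1:]))
      print("thermodynamic integration estimate of log marginal likelihood:", log_Z)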

  20. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  1. What influences the choice of assessment methods in health technology assessments? Statistical analysis of international health technology assessments from 1989 to 2002.

    PubMed

    Draborg, Eva; Andersen, Christian Kronborg

    2006-01-01

    Health technology assessment (HTA) has been used as input to decision making worldwide for more than 25 years. However, no uniform definition of HTA or agreement on assessment methods exists, leaving open the question of what influences the choice of assessment methods in HTAs. The objective of this study is to analyze statistically a possible relationship between the methods of assessment used in practical HTAs, the type of assessed technology, the type of assessors, and the year of publication. A sample of 433 HTAs published by eleven leading institutions or agencies in nine countries was reviewed and analyzed by multiple logistic regression. The study shows that outsourcing of HTA reports to external partners is associated with a higher likelihood of using assessment methods such as meta-analysis, surveys, economic evaluations, and randomized controlled trials, and with a lower likelihood of using assessment methods such as literature reviews and "other methods". The year of publication was statistically related to the inclusion of economic evaluations, with a decreasing likelihood over the study period. The type of assessed technology was related to economic evaluations (with a decreasing likelihood), to surveys, and to "other methods" (with a decreasing likelihood) when pharmaceuticals were the assessed type of technology. During the period from 1989 to 2002, no major developments in the assessment methods used in practical HTAs were shown statistically in a sample of 433 HTAs worldwide. Outsourcing to external assessors has a statistically significant influence on the choice of assessment methods.

  2. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  4. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.

    PubMed

    Karabatsos, George

    2018-06-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.

  5. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282

  6. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
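
    The program described above fits the same logistic model by both unconditional and conditional maximum likelihood; the sketch below (Python with NumPy/SciPy; the simulated 1:1 matched data and variable names are illustrative, and this is not the Pascal program itself) contrasts the two estimation modes. For 1:1 matching, the conditional likelihood reduces to a logistic model on within-pair covariate differences with no intercept, whereas the unconditional fit shown here simply ignores the matching.

      # Sketch: conditional vs. unconditional maximum likelihood estimation of the
      # exposure log-odds ratio in 1:1 matched case-control data.
      import numpy as np
      from scipy.optimize import minimize, minimize_scalar

      rng = np.random.default_rng(3)
      n_pairs, beta_true = 400, 0.8
      z = rng.normal(scale=2.0, size=n_pairs)             # matching-factor effect shared within a pair
      x_a = z + rng.normal(size=n_pairs)                   # exposure of pair member A
      x_b = z + rng.normal(size=n_pairs)                   # exposure of pair member B
      # probability that member A is the case, given exactly one case per pair
      p_a = 1.0 / (1.0 + np.exp(-beta_true * (x_a - x_b)))
      a_is_case = rng.uniform(size=n_pairs) < p_a
      x_case = np.where(a_is_case, x_a, x_b)
      x_ctrl = np.where(a_is_case, x_b, x_a)

      # conditional ML: logistic likelihood on within-pair differences, no intercept
      def cond_nll(beta):
          d = x_case - x_ctrl
          return np.sum(np.log1p(np.exp(-beta * d)))

      beta_cond = minimize_scalar(cond_nll, bounds=(-5, 5), method="bounded").x

      # unconditional ML: pooled logistic regression ignoring the matching
      xs = np.concatenate([x_case, x_ctrl])
      ys = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])

      def uncond_nll(params):
          a, b = params
          eta = a + b * xs
          return np.sum(np.log1p(np.exp(eta)) - ys * eta)

      beta_uncond = minimize(uncond_nll, x0=np.zeros(2), method="BFGS").x[1]
      print(f"conditional: {beta_cond:.2f}   unconditional (unmatched): {beta_uncond:.2f}")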

  7. Military Review. Volume 82, Number 2, March-April 2002

    DTIC Science & Technology

    2002-04-01

    Association (1997), 399-411. 6. A.M. Friedlander, S.L. Welkos, M.L.M. Pitt, J.W. Ezzell, P.L. Worsham, K.J. Rose, et al., "Postexposure Prophylaxis Against...forces. The helicopter drop should be from two to three miles behind the target, so that instead of going deeper into enemy territory for the attack...333 pages, $25.00. Two weeks before the end of World War II, a Japanese submarine in the Philippine Sea, about 350 miles East of Leyte, torpedoed

  8. Earthquake-Induced Liquefaction of Confined Soil Zones: A Centrifuge Study

    DTIC Science & Technology

    1993-12-09

    4.2 Pore pressure transducers (PPT): Pore pressures in the saturated soil were monitored by Druck PDCR 81 pore pressure transducers. This type of pore... [the remainder of this excerpt is axis-label residue from pore-pressure trace figures (PPT5406, PPT6270, PPT6263, PPT6260) and is not recoverable as text]

  9. Transactions of the Army Conference on Applied Mathematics and Computing (3rd) Held at Atlanta, Georgia on 13-16 May 1986

    DTIC Science & Technology

    1986-02-01

    Task 1 in the (x,y) plane ... Fig. 4b Task 2 in the (x,y) plane; m1 = m2 = 2.0 kg ... represented by a grid of 400x200 points, each point corresponding to a pixel of a computer video terminal. For each point A = (Re(A), Im(A)), a free critical

  10. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  11. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  12. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.

  13. Modeling the Arden Syntax for medical decisions in XML.

    PubMed

    Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung

    2008-10-01

    A new model expressing the Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in the Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML format with the style sheet, during which they were compared to the original text versions of their own MLMs. When faults were found in a transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and the reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text versions of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax-checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.

  14. Multilevel stigma as a barrier to HIV testing in Central Asia: a context quantified.

    PubMed

    Smolak, Alex; El-Bassel, Nabila

    2013-10-01

    Central Asia is experiencing one of the fastest growing HIV epidemics in the world, with some areas' infection rates doubling yearly since 2000. This study examines the impact of multilevel stigma (individual, family, and community) on uptake of HIV testing and receipt of HIV testing results among women in Central Asia. The sample consists of 38,884 ever-married Central Asian women between the ages of 15 and 49. Using multilevel modeling (MLM), HIV stigma variables at the individual, family, and community levels were used to assess the significance of differences in HIV testing and receipt of HIV test results among participants while adjusting for possible confounding factors, such as age, wealth, and education. MLM results indicate that HIV stigma is significantly associated with decreased HIV testing uptake at the individual, family, and community levels and with decreased receipt at the community level. A one standard deviation increase in the individual, family, and community level composite stigma score was associated with a respective 49 %, 59 %, and 94 % (p < 0.001) decrease in the odds of having been tested for HIV. A one standard deviation increase in the community composite stigma score was associated with a 99 % (p < 0.001) decrease in the odds of test receipt. HIV stigma operates on the individual, family, and community levels to hinder HIV testing uptake and at the community level to hinder receipt. These findings have important intervention implications for improving uptake of HIV testing and receipt of HIV test results.

  15. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    PubMed

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  16. Handwriting individualization using distance and rarity

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Srihari, Sargur; Srinivasan, Harish

    2012-01-01

    Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known source and the denominator the likelihood that it does not. Since the number of parameters needed to compute the LR grows exponentially with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same or different writers. The accuracy of the distance and rarity method, as measured by error rates, is significantly better than that of the distance method.
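
    As a point of reference for the decomposition discussed above, the sketch below (Python with NumPy/SciPy; the Gaussian score models and all numbers are illustrative assumptions) computes the plain distance-based likelihood ratio that the paper improves on: the distance between the evidence and the known sample is scored under a same-writer distance distribution and a different-writer distance distribution, both estimated from training pairs.

      # Sketch: baseline distance-based likelihood ratio (the simplification the
      # paper augments with a rarity factor). Distances are modeled as Gaussian.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)

      # illustrative training distances between feature vectors of document pairs
      d_same = rng.normal(1.0, 0.3, size=500)    # pairs written by the same writer
      d_diff = rng.normal(2.5, 0.6, size=500)    # pairs written by different writers

      # fit simple Gaussian models to each class of distances
      mu_s, sd_s = d_same.mean(), d_same.std(ddof=1)
      mu_d, sd_d = d_diff.mean(), d_diff.std(ddof=1)

      def distance_lr(d):
          """LR = P(distance | same writer) / P(distance | different writers)."""
          return norm.pdf(d, mu_s, sd_s) / norm.pdf(d, mu_d, sd_d)

      for d in (0.9, 1.8, 2.7):
          print(f"distance {d:.1f}: LR = {distance_lr(d):.3g}")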

  17. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  18. Generalisability in economic evaluation studies in healthcare: a review and case studies.

    PubMed

    Sculpher, M J; Pang, F S; Manca, A; Drummond, M F; Golder, S; Urdahl, H; Davies, L M; Eastwood, A

    2004-12-01

    To review, and to develop further, the methods used to assess and to increase the generalisability of economic evaluation studies. Electronic databases. Methodological studies relating to economic evaluation in healthcare were searched. This included electronic searches of a range of databases, including PREMEDLINE, MEDLINE, EMBASE and EconLit, and manual searches of key journals. The case studies of a decision analytic model involved highlighting specific features of previously published economic studies related to generalisability and location-related variability. The case-study involving the secondary analysis of cost-effectiveness analyses was based on the secondary analysis of three economic studies using data from randomised trials. The factor most frequently cited as generating variability in economic results between locations was the unit costs associated with particular resources. In the context of studies based on the analysis of patient-level data, regression analysis has been advocated as a means of looking at variability in economic results across locations. These methods have generally accepted that some components of resource use and outcomes are exchangeable across locations. Recent studies have also explored, in cost-effectiveness analysis, the use of tests of heterogeneity similar to those used in clinical evaluation in trials. The decision analytic model has been the main means by which cost-effectiveness has been adapted from trial to non-trial locations. Most models have focused on changes to the cost side of the analysis, but it is clear that the effectiveness side may also need to be adapted between locations. There have been weaknesses in some aspects of the reporting in applied cost-effectiveness studies. These may limit decision-makers' ability to judge the relevance of a study to their specific situations. The case study demonstrated the potential value of multilevel modelling (MLM). Where clustering exists by location (e.g. centre or country), MLM can facilitate correct estimates of the uncertainty in cost-effectiveness results, and also a means of estimating location-specific cost-effectiveness. The review of applied economic studies based on decision analytic models showed that few studies were explicit about their target decision-maker(s)/jurisdictions. The studies in the review generally made more effort to ensure that their cost inputs were specific to their target jurisdiction than their effectiveness parameters. Standard sensitivity analysis was the main way of dealing with uncertainty in the models, although few studies looked explicitly at variability between locations. The modelling case study illustrated how effectiveness and cost data can be made location-specific. In particular, on the effectiveness side, the example showed the separation of location-specific baseline events and pooled estimates of relative treatment effect, where the latter are assumed exchangeable across locations. A large number of factors are mentioned in the literature that might be expected to generate variation in the cost-effectiveness of healthcare interventions across locations. Several papers have demonstrated differences in the volume and cost of resource use between locations, but few studies have looked at variability in outcomes. In applied trial-based cost-effectiveness studies, few studies provide sufficient evidence for decision-makers to establish the relevance or to adjust the results of the study to their location of interest. 
Very few studies utilised statistical methods formally to assess the variability in results between locations. In applied economic studies based on decision models, most studies either stated their target decision-maker/jurisdiction or provided sufficient information from which this could be inferred. There was a greater tendency to ensure that cost inputs were specific to the target jurisdiction than clinical parameters. Methods to assess generalisability and variability in economic evaluation studies have been discussed extensively in the literature relating to both trial-based and modelling studies. Regression-based methods are likely to offer a systematic approach to quantifying variability in patient-level data. In particular, MLM has the potential to facilitate estimates of cost-effectiveness, which both reflect the variation in costs and outcomes between locations and also enable the consistency of cost-effectiveness estimates between locations to be assessed directly. Decision analytic models will retain an important role in adapting the results of cost-effectiveness studies between locations. Recommendations for further research include: the development of methods of evidence synthesis which model the exchangeability of data across locations and allow for the additional uncertainty in this process; assessment of alternative approaches to specifying multilevel models to the analysis of cost-effectiveness data alongside multilocation randomised trials; identification of a range of appropriate covariates relating to locations (e.g. hospitals) in multilevel models; and further assessment of the role of econometric methods (e.g. selection models) for cost-effectiveness analysis alongside observational datasets, and to increase the generalisability of randomised trials.
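A minimal sketch of the multilevel-modelling idea described above: a random-intercept model for patient-level cost data clustered by centre, fitted by maximum likelihood with statsmodels. The data, variable names ("cost", "treat", "centre"), and effect sizes are synthetic placeholders, not the report's case study.

```python
# Hedged sketch: random-intercept MLM for costs clustered by centre.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_centres, n_per = 20, 30
centre = np.repeat(np.arange(n_centres), n_per)
treat = rng.integers(0, 2, size=centre.size)              # 1 = new treatment
centre_effect = rng.normal(0, 500, n_centres)[centre]      # between-centre variation
cost = 4000 + 800 * treat + centre_effect + rng.normal(0, 1000, centre.size)
df = pd.DataFrame({"cost": cost, "treat": treat, "centre": centre})

# Random intercept for centre; fixed effect of treatment on cost.
fit = smf.mixedlm("cost ~ treat", df, groups=df["centre"]).fit()
print(fit.summary())      # incremental cost estimate and its uncertainty
print(fit.cov_re)         # estimated between-centre variance
```

The random-intercept variance is what lets the model separate within-centre noise from between-location variability, which is the property the report highlights.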

  19. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-10-18

We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  20. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  1. Between-litter variation in developmental studies of hormones and behavior: Inflated false positives and diminished power.

    PubMed

    Williams, Donald R; Carlsson, Rickard; Bürkner, Paul-Christian

    2017-10-01

    Developmental studies of hormones and behavior often include littermates-rodent siblings that share early-life experiences and genes. Due to between-litter variation (i.e., litter effects), the statistical assumption of independent observations is untenable. In two literatures-natural variation in maternal care and prenatal stress-entire litters are categorized based on maternal behavior or experimental condition. Here, we (1) review both literatures; (2) simulate false positive rates for commonly used statistical methods in each literature; and (3) characterize small sample performance of multilevel models (MLM) and generalized estimating equations (GEE). We found that the assumption of independence was routinely violated (>85%), false positives (α=0.05) exceeded nominal levels (up to 0.70), and power (1-β) rarely surpassed 0.80 (even for optimistic sample and effect sizes). Additionally, we show that MLMs and GEEs have adequate performance for common research designs. We discuss implications for the extant literature, the field of behavioral neuroendocrinology, and provide recommendations. Copyright © 2017 Elsevier Inc. All rights reserved.
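A minimal sketch of the two approaches the record recommends for litter effects: a random-intercept multilevel model (MLM) and a GEE with an exchangeable working correlation, both fitted with statsmodels. The data and the names "litter", "condition", and "outcome" are synthetic illustrations, not the simulation design of the study.

```python
# Hedged sketch: MLM and GEE for pups nested within litters.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_litters, pups = 24, 8
litter = np.repeat(np.arange(n_litters), pups)
condition = np.repeat(rng.integers(0, 2, n_litters), pups)   # whole litters assigned
litter_re = rng.normal(0, 1.0, n_litters)[litter]            # shared litter effect
outcome = 0.3 * condition + litter_re + rng.normal(0, 1.0, litter.size)
df = pd.DataFrame({"outcome": outcome, "condition": condition, "litter": litter})

# Multilevel model: random intercept per litter.
mlm = smf.mixedlm("outcome ~ condition", df, groups=df["litter"]).fit()

# GEE: exchangeable working correlation within litters.
gee = smf.gee("outcome ~ condition", groups="litter", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()

print(mlm.params["condition"], gee.params["condition"])
```

Ignoring the "litter" grouping (e.g. an ordinary t-test on pups) treats correlated observations as independent, which is exactly the source of the inflated false positives reported above.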

  2. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performances as the number of traits p approaches the number of species n and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p small n scenario but their use and performances are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA in the R package RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary models fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find a clear support for an Early-burst model suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
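A small illustration of why penalizing the trait covariance estimate matters when the number of traits p approaches the number of species n. This uses generic Ledoit-Wolf linear shrinkage from scikit-learn as a stand-in; it is not the penalized-likelihood machinery of RPANDA/mvMORPH, only an analogy under made-up data.

```python
# Hedged sketch: sample vs shrunk covariance when p is close to n.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(2)
n, p = 40, 35                                  # "species" vs "traits", p close to n
true_cov = np.diag(np.linspace(1.0, 3.0, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

sample_cov = np.cov(X, rowvar=False)           # ML-type estimate, ill-conditioned
shrunk_cov = LedoitWolf().fit(X).covariance_   # penalized/shrunk estimate

def frob_error(est):
    return np.linalg.norm(est - true_cov)

print("sample covariance error :", round(frob_error(sample_cov), 2))
print("shrunk covariance error :", round(frob_error(shrunk_cov), 2))
print("condition numbers       :", np.linalg.cond(sample_cov), np.linalg.cond(shrunk_cov))
```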

  3. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  4. The development of variable MLM editor and TSQL translator based on Arden Syntax in Taiwan.

    PubMed

    Liang, Yan Ching; Chang, Polun

    2003-01-01

The Arden Syntax standard has been utilized in the medical informatics community in several countries during the past decade. It has never been used in nursing in Taiwan. We developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into TSQL. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Decision-support systems in medicine are data-driven systems in which TSQL triggers, acting as the inference engine, can be used to facilitate linking to a database.

  5. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
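A minimal sketch of the failure mode described above: when the maximum likelihood estimate does not exist (illustrated here by complete separation in a toy logistic model), the unpenalized coefficient runs off to infinity, while a penalized fit is finite but changes with the penalty strength. Purely illustrative; this is not the authors' presence-only setup.

```python
# Hedged sketch: non-existent MLE under separation vs penalized estimates.
import numpy as np
from scipy.optimize import minimize

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = (x > 0).astype(float)          # perfectly separated presences/absences

def neg_loglik(beta, ridge=0.0):
    eta = beta[0] + beta[1] * x
    ll = np.sum(y * eta - np.logaddexp(0.0, eta))   # Bernoulli-logit log-likelihood
    return -ll + ridge * np.sum(beta ** 2)

for ridge in [0.0, 0.1, 1.0]:
    fit = minimize(neg_loglik, x0=np.zeros(2), args=(ridge,), method="BFGS")
    print(f"ridge={ridge:>4}: slope estimate = {fit.x[1]:8.2f}")
# ridge=0 gives a huge slope (the MLE diverges); the penalized
# slopes are finite but depend on the chosen penalty.
```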

  6. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F.sub.h } is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F.sub.h.sup.OBS } if structure factor set {F.sub.h } was correct, and (2) the likelihood that an electron density map resulting from {F.sub.h } is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F.sub.h } is then adjusted to maximize the likelihood of {F.sub.h } for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  7. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.

  8. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  9. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.

  10. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  11. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
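A small numerical check of the key claim above: a non-singular affine transform y = Ax + b applied to all spectra leaves Gaussian maximum-likelihood class assignments unchanged, because the training means and covariances transform along with the data. The 3-band data and equal class priors are assumptions of this toy demo, not the AVIRIS experiment.

```python
# Hedged sketch: affine invariance of Gaussian ML classification (equal priors).
import numpy as np

rng = np.random.default_rng(3)
d = 3
X1 = rng.multivariate_normal([0, 0, 0], np.eye(d), 50)       # training class 1
X2 = rng.multivariate_normal([2, 1, 0], np.eye(d) * 2, 50)   # training class 2
test = rng.multivariate_normal([1, 0.5, 0], np.eye(d), 20)

def gauss_loglik(x, mu, cov):
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

def classify(train1, train2, pts):
    stats = [(t.mean(0), np.cov(t, rowvar=False)) for t in (train1, train2)]
    return np.array([np.argmax([gauss_loglik(p, m, c) for m, c in stats]) for p in pts])

A = rng.normal(size=(d, d)) + 3 * np.eye(d)   # non-singular matrix
b = rng.normal(size=d)
affine = lambda Z: Z @ A.T + b

labels_raw = classify(X1, X2, test)
labels_aff = classify(affine(X1), affine(X2), affine(test))
print("identical assignments:", np.array_equal(labels_raw, labels_aff))  # expected True
```

The log-determinant term shifts by the same constant for every class and the Mahalanobis term is unchanged, so the argmax (and hence the classification) is identical.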

  12. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
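A minimal sketch of Firth-type penalization: maximizing the log-likelihood plus 0.5 * log|I(beta)| (a Jeffreys-prior penalty). It is shown here for an ordinary small-sample logistic model as a generic bias-reduction illustration, not the conditional Poisson SCCS models used in the study.

```python
# Hedged sketch: Firth-penalized vs plain ML logistic regression.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 30                                         # deliberately small sample
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

def neg_penalized_loglik(beta, firth=True):
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    ll = np.sum(y * eta - np.logaddexp(0.0, eta))
    if firth:
        W = p * (1.0 - p)
        info = X.T @ (X * W[:, None])          # Fisher information X' W X
        ll += 0.5 * np.linalg.slogdet(info)[1] # Jeffreys-prior penalty
    return -ll

mle = minimize(neg_penalized_loglik, np.zeros(2), args=(False,), method="BFGS").x
firth = minimize(neg_penalized_loglik, np.zeros(2), args=(True,), method="BFGS").x
print("plain ML estimate :", np.round(mle, 3))
print("Firth estimate    :", np.round(firth, 3))
```

The penalty keeps estimates finite even under separation and shrinks them toward zero, which is the behaviour the simulation study above relies on for rare adverse events.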

  13. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  14. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.

  15. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.

  16. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  17. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  18. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  19. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.

  20. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.

  1. [Cardioprotective effects of glutamine in patients with ischemic heart disease operated under conditions of extracorporeal blood circulation].

    PubMed

    Lomivorotov, V V; Efremov, S M; Shmyrev, V A; Ponomarev, D N; Sviatchenko, A V; Kniaz'kova, L G

    2012-01-01

    We studied the cardioprotective effects of perioperative glutamine in patients with ischemic heart disease operated on under extracorporeal blood circulation (CB). Exclusion criteria were left ventricular ejection fraction below 50%, diabetes mellitus, and myocardial infarction within the previous 3 months. Patients in the study group (n=25) received glutamine (20% solution of N(2)-L-alanyl-L-glutamine ("Dipeptiven", Fresenius Kabi, Germany) at 0.4 g/kg/day); patients in the control group (n=25) received placebo (0.9% NaCl solution). The main outcome measures were the dynamics of troponin I and central hemodynamic parameters. On the first day after the operation, the concentration of troponin I was significantly lower in the glutamine group than in the placebo group (1.280 (0.840-2.230) vs 2.410 (1.060-6.600) ng/ml; p=0.035). Four hours after CB, the glutamine group also had a significantly higher cardiac index (2.58 (2.34-2.91) vs 2.03 (1.76-2.32) l/min/m2; p=0.002) and stroke index (32.8 (27.8-36.0) vs 26.1 (22.6-31.8) ml/m2; p=0.023). The systemic vascular resistance index was significantly lower in the glutamine group (1942 (1828-2209) vs 2456 (2400-3265) dyn x s/cm(-5)/m2; p=0.001). Conclusion: perioperative use of N(2)-L-alanyl-L-glutamine during the first 24 hours of the perioperative period gives a cardioprotective effect in patients with ischemic heart disease operated on under CB.

  2. Percutaneous pulmonary valve implantation in patients with dysfunction of a "native" right ventricular outflow tract - Mid-term results.

    PubMed

    Georgiev, Stanimir; Tanase, Daniel; Ewert, Peter; Meierhofer, Christian; Hager, Alfred; von Ohain, Jelena Pabst; Eicken, Andreas

    2018-05-01

    To investigate the feasibility and mid-term results of percutaneous pulmonary valve implantation (PPVI) in patients with conduit-free or "native" right ventricular outflow tracts (RVOT). We identified all 18 patients with a conduit-free or "native" right ventricular outflow tract who were treated with percutaneous pulmonary valve implantation (PPVI) in our institution. They were divided into two groups: those in whom the central pulmonary artery was used as the anchoring point for preparation of the landing zone for PPVI (n=10), and those in whom a pulmonary artery branch was used for this purpose (n=8). PPVI was performed successfully in all patients, with significant immediate reduction of the RVOT gradient and pulmonary regurgitation grade. Four patients had insignificant paravalvular regurgitation. In one patient the valve was explanted after 4 months because of bacterial endocarditis. A follow-up of 19 (4-60) months showed sustained good function of the other implanted valves. The MRI-indexed right ventricular end-diastolic volume significantly decreased from 108 (54-174) ml/m2 before the procedure to 76 (60-126) ml/m2 six months after PPVI (p=0.01). PPVI is feasible with good mid-term results in selected patients with a "native" RVOT without a previously implanted conduit. Creating a stable landing zone with a diameter less than the largest available valve (currently 29 mm) is crucial for the technical success of the procedure. Further studies and the development of new devices could widen the indications for this novel treatment. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Comparison of hydrogen peroxide and peracetic acid as isolator sterilization agents in a hospital pharmacy.

    PubMed

    Bounoure, Frederic; Fiquet, Herve; Arnaud, Philippe

    2006-03-01

    The efficacy of hydrogen peroxide and peracetic acid as isolator sterilization agents was compared. Sterilization and efficacy tests were conducted in a flexible 0.8-m3 transfer isolator using a standard load of glass bottles and sterile medical devices in their packing paper. Bacillus stearothermophilus spores were placed in six critical locations of the isolator and incubated at 55 degrees C in a culture medium for 14 days. Sterilization by 4.25 mL/m3 of 33% vapor-phase hydrogen peroxide and 12.5 mL/m3 of 3.5% peracetic acid was tested in triplicate. Sterility was validated for hydrogen peroxide and peracetic acid at 60, 90, 120, and 180 minutes and at 90, 120, 150, 180, 210, and 240 minutes, respectively. In an efficacy test conducted with an empty isolator, the sterilization time required to destroy B. stearothermophilus spores was 90 minutes for both sterilants, indicating that they have comparable bactericidal properties. During the validation test with a standard load, the sterilization time using hydrogen peroxide was 150 minutes versus 120 minutes with peracetic acid. The glove cuff was particularly difficult for hydrogen peroxide to sterilize, likely due to its slower diffusion time than that of peracetic acid. Hydrogen peroxide is an environmentally safer agent than peracetic acid; however, its bacteriostatic properties, lack of odor, and poor diffusion time may limit its use in sterilizing some materials. Hydrogen peroxide is a useful alternative to peracetic acid for isolator sterilization in a hospital pharmacy or parenteral nutrition preparation unit.

  4. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  5. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  6. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
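A minimal sketch of the kind of logistic regression described above: debris-flow occurrence regressed on basin and rainfall predictors, with the fitted model used to turn a storm scenario into a probability. The predictor names, coefficients, and synthetic data are placeholders, not the published USGS equations.

```python
# Hedged sketch: logistic regression for debris-flow likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
burn_severity = rng.uniform(0, 1, n)       # fraction of basin burned at high severity
rain_intensity = rng.uniform(5, 60, n)     # peak 15-min rainfall intensity (mm/h)
logit = -4.0 + 2.5 * burn_severity + 0.06 * rain_intensity
occurred = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([burn_severity, rain_intensity]))
fit = sm.Logit(occurred, X).fit(disp=False)
print(fit.params)

# Predicted likelihood of a debris flow for one hypothetical storm/basin:
new = np.array([1.0, 0.8, 40.0])           # intercept, severity, intensity
print("predicted probability:", 1.0 / (1.0 + np.exp(-new @ fit.params)))
```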

  7. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  8. Challenges in Species Tree Estimation Under the Multispecies Coalescent Model

    PubMed Central

    Xu, Bo; Yang, Ziheng

    2016-01-01

    The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902

  9. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282

  10. A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins

    PubMed Central

    Knudsen, Bjarne; Miyamoto, Michael M.

    2001-01-01

    Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
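A minimal, generic illustration of the likelihood-ratio-test logic used above: compare a one-rate null model against a two-rate alternative and refer twice the log-likelihood difference to a chi-square distribution. Poisson substitution counts with made-up numbers stand in for the site-specific protein analysis.

```python
# Hedged sketch: LRT for a rate shift between two groups of branches.
import numpy as np
from scipy.stats import chi2, poisson

counts = np.array([12, 30])           # substitutions observed in clade A and clade B
exposure = np.array([100.0, 110.0])   # total branch length in each clade

# H0: one shared rate; H1: clade-specific rates (MLEs are counts / exposure).
rate0 = counts.sum() / exposure.sum()
rates1 = counts / exposure

ll0 = poisson.logpmf(counts, rate0 * exposure).sum()
ll1 = poisson.logpmf(counts, rates1 * exposure).sum()

lrt = 2.0 * (ll1 - ll0)
p_value = chi2.sf(lrt, df=1)          # one extra free rate under H1
print(f"LRT = {lrt:.2f}, p = {p_value:.4f}")
```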

  11. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that runs multiple MCMC chains with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
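A minimal sketch of the thermodynamic (power-posterior) idea in a conjugate normal model where the exact marginal likelihood is available for comparison. The MCMC step is replaced by direct sampling from each tempered posterior, which is possible in this toy model; all parameter values are made up.

```python
# Hedged sketch: thermodynamic integration for a log marginal likelihood.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
sigma, tau = 1.0, 2.0                   # data sd (known), prior sd of the mean
y = rng.normal(1.5, sigma, size=25)
n = y.size

def sample_power_posterior(beta, size):
    # prior N(0, tau^2) times likelihood^beta is still Gaussian in theta
    prec = 1.0 / tau**2 + beta * n / sigma**2
    mean = (beta * y.sum() / sigma**2) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec), size=size)

def loglik(theta):
    return -0.5 * n * np.log(2 * np.pi * sigma**2) \
           - 0.5 * np.sum((y[None, :] - theta[:, None])**2, axis=1) / sigma**2

betas = np.linspace(0.0, 1.0, 21)       # "heating coefficients" from 0 to 1
expected_ll = np.array([loglik(sample_power_posterior(b, 5000)).mean() for b in betas])
ti_estimate = np.sum(np.diff(betas) * (expected_ll[:-1] + expected_ll[1:]) / 2.0)

exact = multivariate_normal(np.zeros(n),
                            sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))).logpdf(y)
print("thermodynamic integration:", round(ti_estimate, 3))
print("exact log marginal       :", round(exact, 3))
```

The integral of the expected log-likelihood over the heating coefficient equals the log marginal likelihood, which is why averaging chains at intermediate temperatures outperforms the geometric-mean shortcut.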

  12. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.

  13. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  14. Likelihoods for fixed rank nomination networks

    PubMed Central

    HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE

    2014-01-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  15. Utilizing wheel-ring architecture for stable and selectable single-longitudinal-mode erbium fiber laser

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Yang, Zi-Qing; Huang, Tzu-Jung; Chow, Chi-Wai

    2018-03-01

    To achieve a stable single-longitudinal-mode (SLM) erbium-doped fiber (EDF) laser, a wheel-ring architecture is proposed for the laser cavity. Exploiting the Vernier effect, the proposed wheel-ring produces three different free spectral ranges (FSRs) that act as a mode filter to suppress the dense multi-longitudinal modes (MLM). To make the EDF laser wavelength-tunable, an optical tunable bandpass filter (OTBF) is placed inside the cavity for arbitrary tuning. In addition, the output performance of the proposed EDF wheel-ring laser is discussed and analyzed experimentally.

  16. The Development of Variable MLM Editor and TSQL Translator Based on Arden Syntax in Taiwan

    PubMed Central

    Liang, Yan-Ching; Chang, Polun

    2003-01-01

    The Arden Syntax standard has been utilized in the medical informatics community in several countries during the past decade. It has never been used in nursing in Taiwan. We developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into TSQL. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Decision-support systems in medicine are data-driven systems in which TSQL triggers, acting as the inference engine, can be used to facilitate linking to a database. PMID:14728414

  17. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, and the estimator is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between the rubber price and the exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
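A minimal sketch of maximum likelihood fitting of a two-component Gaussian mixture, here via EM on a synthetic univariate series. The data stand in for the price/exchange-rate series analysed in the record above; none of the numbers come from that study.

```python
# Hedged sketch: ML fitting of a two-component Gaussian mixture by EM.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 200)])

# initial guesses
w, mu, sd = np.array([0.5, 0.5]), np.array([-0.5, 1.0]), np.array([1.0, 1.0])

for _ in range(200):                                   # EM iterations
    # E-step: posterior responsibility of each component for each point
    dens = w * norm.pdf(x[:, None], mu, sd)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted ML updates of weights, means, and standard deviations
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

loglik = np.log((w * norm.pdf(x[:, None], mu, sd)).sum(axis=1)).sum()
print("weights:", np.round(w, 2), "means:", np.round(mu, 2),
      "sds:", np.round(sd, 2), "logL:", round(loglik, 1))
```

Each EM iteration increases the mixture log-likelihood, so the final parameters are (a local) maximum likelihood estimate.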

  18. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527

  19. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ a Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods such as Markov chain Monte Carlo (MCMC) are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
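A bare-bones sketch of PSO used to maximize a likelihood, shown here on a two-parameter Gaussian log-likelihood with a smooth surface purely to expose the mechanics; it is not a CMB analysis, and the swarm settings are arbitrary.

```python
# Hedged sketch: Particle Swarm Optimization maximizing a log-likelihood.
import numpy as np

rng = np.random.default_rng(8)
data = rng.normal(3.0, 2.0, 500)

def log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (data - mu) ** 2 / sigma**2)

n_particles, n_iter = 30, 200
w_inertia, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([log_likelihood(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([log_likelihood(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("PSO estimate (mu, sigma):", gbest[0], np.exp(gbest[1]))
print("sample mean, sample sd  :", data.mean(), data.std())
```

Because PSO needs only likelihood evaluations (no gradients or detailed balance), it can be attractive when the surface is rough or high-dimensional, which is the motivation stated above.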

  20. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed a morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal-likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondences between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.

  1. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
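A minimal sketch of the core calculation such a program performs: maximum likelihood fitting of a two-parameter Weibull to lifetime data with type-I censoring (unfailed units removed at a fixed time). This uses a generic optimizer in Python as an illustration under synthetic data, not the Fortran program described above.

```python
# Hedged sketch: Weibull MLE with type-I (time) censoring.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
shape_true, scale_true, t_censor = 2.0, 100.0, 120.0
life = scale_true * rng.weibull(shape_true, size=60)
observed = life < t_censor                    # failures seen before test suspension
t = np.where(observed, life, t_censor)        # censored units recorded at t_censor

def neg_loglik(params):
    log_k, log_lam = params
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = t / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k     # failure contribution
    log_surv = -z**k                                           # censored contribution
    return -(np.sum(log_pdf[observed]) + np.sum(log_surv[~observed]))

fit = minimize(neg_loglik, x0=np.log([1.0, np.median(t)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print(f"shape ~ {k_hat:.2f} (true {shape_true}), scale ~ {lam_hat:.1f} (true {scale_true})")
```

Censored units contribute only their survival probability to the likelihood, which is exactly how suspended fatigue tests are handled under type-I censoring.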

  2. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis - M or M' - is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method.
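
    The following sketch illustrates the approximation described above for deciding between BPSK (M = 2) and QPSK (M = 4): the integral of the likelihood function over carrier phase is replaced by a sum over l equally spaced phase values. The signal model, SNR, sample count, and choice of l are assumptions for illustration, not the reported implementation; the constant Gaussian normalization is dropped because it cancels between the two hypotheses.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

def mpsk_samples(M, N, snr_db):
    """Generate N noisy MPSK symbols with a random carrier phase offset."""
    phase = rng.uniform(0, 2 * np.pi)
    syms = np.exp(1j * (2 * np.pi * rng.integers(0, M, N) / M + phase))
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))       # noise std per real dimension
    return syms + sigma * (rng.normal(size=N) + 1j * rng.normal(size=N)), sigma

def approx_log_likelihood(r, M, sigma, n_phases):
    """Approximate log L(M): the phase integral is replaced by a sum over
    n_phases equally spaced carrier-phase values (the parameter l above)."""
    thetas = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
    per_phase = []
    for th in thetas:
        s = np.exp(1j * (2 * np.pi * np.arange(M) / M + th))     # constellation points
        d2 = np.abs(r[:, None] - s[None, :]) ** 2                # |r_n - s_m|^2
        # log of (1/M) sum_m exp(-d^2 / 2 sigma^2) per sample, summed over samples
        per_phase.append(np.sum(logsumexp(-d2 / (2 * sigma ** 2), axis=1) - np.log(M)))
    return logsumexp(per_phase) - np.log(n_phases)

r, sigma = mpsk_samples(M=4, N=64, snr_db=5)
ll2 = approx_log_likelihood(r, 2, sigma, n_phases=16)
ll4 = approx_log_likelihood(r, 4, sigma, n_phases=16)
print("classified as", "QPSK" if ll4 > ll2 else "BPSK")
```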

  3. Discovering the desirable alleles contributing to the lignocellulosic biomass traits in saccharum germplasm collections for energy cane improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, James; Comstock, Jack C.

    Phenotyping Methods: The accessions (comprising 21 taxa and 1,177 accessions) in the World Collection of Sugarcane and Related Grasses (WCSRG) were evaluated for the following traits: aerenchyma, internode length and diameter, pubescence, pith, Brix, stalk number and fiber. A core of 300 accessions that included each species in the World Collection was selected by using the Maximization Strategy in MStrat software. Results: The core had a higher diversity rating than random selections of 300 accessions. The Shannon–Weaver Diversity Index scores of the core and whole collection were similar, indicating that the majority of the diversity was captured by the core collection. The ranges and medians between the core and WCSRG were similar; only two of the trait medians were not significant at P = 0.05 using the non-parametric Wilcoxon method, and the coincidence rate (CR % = 96.2) was high (>80), indicating that extreme values were retained. Thus, the phenotypic diversity of these traits in the WCSRG was well represented by the core collection. Associations Methods: Genotypic and phenotypic data were collected for 1002 accessions of the WCSRG, including 209 SSR markers. Association analysis was performed using both General Linear (GLM) and Maximum Likelihood (MLM) models. Different core collections with 300 accessions each were selected from genotypic, phenotypic and combined data using the Maximization Strategy in MStrat software. Results: A major portion of the genotyping involving SNPs is being conducted by Dr. Jianping Wang of the University of Florida under the DOE award DE-FG 02-11ER 65214, and the genotypic and phenotypic associations will be reported separately next year. In the current study, forty-one and seventeen markers were found to be associated with traits using the GLM and MLM methods, respectively, including associations with aerenchyma, internode length and diameter, pubescence, pith, and Sugar Cane Yellow Leaf Virus. The data indicate that each of the cores and the World Collection are similar to each other genotypically and phenotypically, but the core that was selected using only genotypic data was significantly different phenotypically. This indicates that the association between genotypic and phenotypic diversity is not strong enough to select on genotypic diversity alone and still capture the full phenotypic diversity. Core Collection: Creation and Phenotyping Methods: To evaluate this germplasm for breeding purposes, a representative diversity panel of approximately 300 accessions selected from the WCSRG was planted at Canal Point, FL in three replications. These accessions were measured for stalk height and stalk number multiple times throughout the growing season and for Brix and fresh biomass during harvest in 2013; stalk height, stalk number, stalk diameter, internode length, Brix, and fresh and dry biomass were determined in the ratoon crop harvest in 2014. Results: In correlations of multiple measurements, there were higher correlations for early measurements of stalk number and stalk height with harvest traits like Brix and fresh weight. Hybrids had higher fresh mass and Brix, while Saccharum spontaneum had higher stalk number and dry mass. The heritability of hybrid mass traits was lower in the ratoon crop. According to the principal component analysis, the diversity panel was divided into two groups. One group had accessions with high stalk number and high dry biomass, like S. spontaneum, and the other group contained accessions with higher Brix and fresh biomass, like S. officinarum. Mass traits correlated with each other as expected, but hybrids had lower correlations between fresh and dry mass. Stalk number and the mass traits correlated with each other except in S. spontaneum and hybrids in the first ratoon. There were 110 accessions not significantly different in Brix from the commercial sugarcane checks, including 10 S. spontaneum accessions. There were 27 dry and 6 fresh mass accessions significantly higher than the commercial sugarcane checks. Core Collection: Fiber Analysis Methods: A biomass sample was taken from each accession, then shredded and dried. Fiber analysis was then performed on each sample. The acetyl groups, acid insoluble lignin, acid soluble lignin, arabinan, glucan, holocellulose, total lignin, structural ash, and xylan were quantified on a % fiber basis and nonstructural ash on a % total basis. Results: There were significant, but not large, differences between species for holocellulose, lignin, acetyl, acid soluble lignin, nonstructural ash, and glucan. S. spontaneum had significantly more holocellulose, glucan, lignin, and nonstructural ash and less acetyl and acid soluble lignin than the other species. In all populations, glucan was positively correlated with holocellulose, and both glucan and holocellulose were negatively correlated with lignin. In hybrids, internode length correlated positively with holocellulose and nonstructural ash and negatively with lignin. The heritability estimates for the fiber component traits are low, indicating that environment is an important factor in fiber composition. Principal component analysis indicated that a large amount of diversity exists within each of the species.

  4. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
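
    A toy sketch of the general approach described above, not of FORMIND itself: for each proposed parameter value the stochastic model is run repeatedly, a Gaussian (parametric) likelihood approximation is fitted to the simulated summary statistic, and that approximate likelihood drives a plain Metropolis sampler. The stochastic model and summary statistic are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n_rep=20):
    """Toy stochastic 'forest' model: returns n_rep realisations of a summary
    statistic (e.g. a stem count) whose spread is generated by the model itself."""
    return rng.poisson(np.exp(theta), size=n_rep)

observed_summary = 55.0            # the observed summary statistic
true_theta = 4.0                   # only used to note what we hope to recover

def approx_log_lik(theta):
    """Parametric (Gaussian) likelihood approximation built from simulations."""
    sims = simulate(theta)
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-6
    return -0.5 * ((observed_summary - mu) / sd) ** 2 - np.log(sd)

# Plain Metropolis sampler over theta with a flat prior on [0, 10].
n_steps, step = 5000, 0.1
theta = 3.0
ll = approx_log_lik(theta)
chain = []
for _ in range(n_steps):
    prop = theta + rng.normal(0, step)
    if 0.0 < prop < 10.0:
        ll_prop = approx_log_lik(prop)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)

post = np.array(chain[1000:])
print(f"posterior mean {post.mean():.2f} (true {true_theta})")
```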

  5. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Deciphering Genomic Regions for High Grain Iron and Zinc Content Using Association Mapping in Pearl Millet

    PubMed Central

    Anuradha, N.; Satyavathi, C. Tara; Bharadwaj, C.; Nepolean, T.; Sankar, S. Mukesh; Singh, Sumer P.; Meena, Mahesh C.; Singhal, Tripti; Srivastava, Rakesh K.

    2017-01-01

    Micronutrient malnutrition, especially deficiency of two mineral elements, iron [Fe] and zinc [Zn], in the developing world needs urgent attention. Pearl millet is one of the best crops with many nutritional properties and is accessible to the poor. We report findings of the first attempt to mine favorable alleles for grain iron and zinc content through association mapping in pearl millet. An association mapping panel of 130 diverse lines was evaluated at Delhi, Jodhpur and Dharwad, representing all three pearl millet growing agro-climatic zones of India, during 2014 and 2015. A wide range of variation was observed for grain iron (32.3–111.9 ppm) and zinc (26.6–73.7 ppm) content. Genotyping with 114 representative polymorphic SSRs revealed a mean gene diversity of 0.35. STRUCTURE analysis revealed the presence of three sub-populations, which was further supported by the Neighbor-Joining clustering method and principal coordinate analysis (PCoA). Marker-trait associations (MTAs) were analyzed with 267 markers (250 SSRs and 17 genic markers) in both a general linear model (GLM) and a mixed linear model (MLM); however, MTAs resulting from the MLM were considered because of the greater robustness of those associations. After appropriate Bonferroni correction, Xpsmp 2261 (R2-value of 13.34%), Xipes 0180 (R2-value of 11.40%) and Xipes 0096 (R2-value of 11.38%) were consistently associated with grain iron and zinc content at all three locations. Favorable alleles and promising lines were identified for across-environment and environment-specific performance. PPMI 1102 had the highest number (7) of favorable alleles, followed by four each for PPMFeZMP 199 and PPMI 708 for across-environment performance for both grain Fe and Zn content, while PPMI 1104 had alleles specific to Dharwad for grain Fe and Zn content. When compared with the reference genome Tift 23D2B1-P1-P5, the Xpsmp 2261 amplicon was identified in an intergenic region on pseudomolecule 5, while the other marker, Xipes 0810, was observed to overlap with the aspartic proteinase (Asp) gene on pseudomolecule 3. Thus, this study can help in breeding new lines with enhanced micronutrient content using marker-assisted selection (MAS) in pearl millet, leading to improved well-being, especially for women and children. PMID:28507551
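
    A minimal sketch of a GLM-style marker-trait association scan with a population-structure covariate and Bonferroni correction, as described above; the MLM used in the study additionally models kinship as a random effect, which is omitted here. Genotypes, phenotypes, and the causal marker are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_lines, n_markers = 130, 250

# Simulated marker matrix (0/1/2 allele dosages) and a phenotype (e.g. grain Fe)
# influenced by one causal marker plus population structure and noise.
G = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
structure = rng.normal(size=n_lines)               # stand-in for a Q-matrix / PC column
phenotype = 60 + 4.0 * G[:, 10] + 3.0 * structure + rng.normal(0, 4, n_lines)

pvals = []
for j in range(n_markers):
    X = sm.add_constant(np.column_stack([G[:, j], structure]))
    fit = sm.OLS(phenotype, X).fit()
    pvals.append(fit.pvalues[1])                   # p-value of the marker term

pvals = np.array(pvals)
bonferroni = 0.05 / n_markers
hits = np.flatnonzero(pvals < bonferroni)
print("markers passing Bonferroni:", hits, "threshold:", bonferroni)
```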

  7. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  8. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
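
    The sketch below is not the authors' full Thomson-scattering analysis, but it illustrates the key idea of including non-arrival events: with a detector that reports only "photon" or "no photon" per laser shot, the mean photon rate is recovered by maximum likelihood, with the empty shots carrying information. The photon rate and shot count are assumed values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Simulate a low-signal measurement: true mean of 0.3 photons per laser shot,
# detector records only "at least one photon" vs "no photon" for each shot.
mu_true, n_shots = 0.3, 20000
detected = rng.poisson(mu_true, n_shots) > 0
k = detected.sum()

def neg_log_lik(mu):
    # Binomial likelihood in which the "no photon" (non-arrival) shots carry
    # exactly as much weight as the detection events.
    return -(k * np.log(1 - np.exp(-mu)) + (n_shots - k) * (-mu))

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 5.0), method="bounded")
print(f"ML estimate {fit.x:.3f}, closed form {-np.log(1 - k / n_shots):.3f}, true {mu_true}")
```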

  9. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    Park, Kyong H.; Lagan, Steven J. Research and Technology Directorate, ECBC-TN-068, August 2015. Approved for public release. Cited references include: McCullagh, P.; Nelder, J.A. Generalized Linear Models, 2nd ed.; Chapman and Hall: London, 1989; Johnston, J. Econometric Methods, 3rd ed.; McGraw-Hill.

  10. Stabilization of Pt monolayer catalysts under harsh conditions of fuel cells

    DOE PAGES

    Zhang, Xiaoming; Liu, Ping; Yu, Shansheng; ...

    2015-05-21

    We employed density functional theory (DFT) to explore the stability of core (M = Cu, Ru, Rh, Pd, Ag, Os, Ir, Au)-shell (Pt) catalysts under harsh conditions, including solutions and reaction intermediates involved in the oxygen reduction reaction (ORR) in fuel cells. A pseudomorphic surface alloy (PSA) with a Pt monolayer (Pt 1ML) supported on an M surface, Pt 1ML/M(111) or (001), was considered as a model system. Different sets of candidate M cores were identified to achieve a stable Pt 1ML shell depending on the conditions. In vacuum conditions, the Pt 1ML shell can be stabilized on most of the M cores except Cu, Ag, and Au. The situation varies under different electrochemical conditions. Depending on the solution and the operating reaction pathway of the ORR, different M should be considered. Pd and Ir are the only core metals studied that are able to keep the Pt 1ML shell intact in perchloric acid, sulfuric acid, phosphoric acid, and alkaline solutions, as well as under ORR conditions via different pathways. Ru and Os cores also deserve attention; they fail only when the ORR proceeds via the *OOH intermediate. A Rh core works well as long as the ORR does not follow the pathway via the *O intermediate. Our results show that PSAs can behave differently from the near-surface alloy, Pt 1ML/M 1ML/Pt(111), highlighting the importance of considering both the chemical environment and the atomic structure in the rational design of highly stable core-shell nanocatalysts. Finally, the role that the d-band center of a core M plays in determining the stability of the supported Pt 1ML shell is also discussed.

  11. Immediate clinical and haemodynamic benefits of restoration of pulmonary valvar competence in patients with pulmonary hypertension.

    PubMed

    Lurz, P; Nordmeyer, J; Coats, L; Taylor, A M; Bonhoeffer, P; Schulze-Neick, I

    2009-04-01

    To analyse the potential benefit of restoration of pulmonary valvar competence in patients with severe pulmonary regurgitation (PR) and pulmonary hypertension (PH) associated with congenital heart disease. Retrospective study. Tertiary paediatric and adult congenital heart cardiac centre. Percutaneous pulmonary valve implantation (PPVI). All patients who underwent PPVI for treatment of PR in the presence of PH (mean PAP >25 mm Hg). Seven patients with severe PH as a result of congenital heart disease and severe PR underwent PPVI. The valve implantation procedure was feasible and uncomplicated in all seven cases, successfully abolishing PR. There was a significant increase in diastolic (15.4 (7.3) to 34.0 (8.5) mm Hg; p = 0.007) and mean (29.7 (8.1) to 41.3 (12.9) mm Hg; p = 0.034) pulmonary artery pressures, and an improvement in NYHA functional class (from median IV to median III; p<0.008). Peripheral oxygen saturations rose from 85.9% (11.0%) to 91.7% (8.3%) (p = 0.036). Right ventricular (RV) volumes decreased (from 157.0 (44.7) to 140.3 (53.3) ml/m(2)), while effective RV stroke volume increased (from 23.4 (9.3) to 41.0 (11.6) ml/m(2)). During a median follow-up of 20.3 months (range 1.3-47.5), valvar competence was well maintained despite near systemic pulmonary pressures. None of the valved stents were explanted during follow-up. Trans-catheter treatment of PR in patients with PH is well tolerated and leads to clinical and haemodynamic improvement, most probably caused by a combination of increased pulmonary perfusion pressures and RV efficiency.

  12. Training-specific changes in cardiac structure and function: a prospective and longitudinal assessment of competitive athletes.

    PubMed

    Baggish, Aaron L; Wang, Francis; Weiner, Rory B; Elinoff, Jason M; Tournoux, Francois; Boland, Arthur; Picard, Michael H; Hutter, Adolph M; Wood, Malissa J

    2008-04-01

    This prospective, longitudinal study examined the effects of participation in team-based exercise training on cardiac structure and function. Competitive endurance athletes (EA, n = 40) and strength athletes (SA, n = 24) were studied with echocardiography at baseline and after 90 days of team training. Left ventricular (LV) mass increased by 11% in EA (116 +/- 18 vs. 130 +/- 19 g/m(2); P < 0.001) and by 12% in SA (115 +/- 14 vs. 132 +/- 11 g/m(2); P < 0.001; P value for the compared Delta = NS). EA experienced LV dilation (end-diastolic volume: 66.6 +/- 10.0 vs. 74.7 +/- 9.8 ml/m(2), Delta = 8.0 +/- 4.2 ml/m(2); P < 0.001), enhanced diastolic function (lateral E': 10.9 +/- 0.8 vs. 12.4 +/- 0.9 cm/s, P < 0.001), and biatrial enlargement, while SA experienced LV hypertrophy (posterior wall: 4.5 +/- 0.5 vs. 5.2 +/- 0.5 mm/m(2), P < 0.001) and diminished diastolic function (E' basal lateral LV: 11.6 +/- 1.3 vs. 10.2 +/- 1.4 cm/s, P < 0.001). Further, EA experienced right ventricular (RV) dilation (end-diastolic area: 1,460 +/- 220 vs. 1,650 +/- 200 mm/m(2), P < 0.001) coupled with enhanced systolic and diastolic function (E' basal RV: 10.3 +/- 1.5 vs. 11.4 +/- 1.7 cm/s, P < 0.001), while SA had no change in RV parameters. We conclude that participation in 90 days of competitive athletics produces significant training-specific changes in cardiac structure and function. EA develop biventricular dilation with enhanced diastolic function, while SA develop isolated, concentric left ventricular hypertrophy with diminished diastolic relaxation.

  13. Impact of alcohol harm reduction strategies in community sports clubs: pilot evaluation of the Good Sports program.

    PubMed

    Rowland, Bosco; Allen, Felicity; Toumbourou, John W

    2012-05-01

    Approximately 4.5 million Australians are involved in community sports clubs. A high level of alcohol consumption tends to be commonplace in this setting. The only program of its type in the world, the Good Sports program was designed to reduce harmful alcohol consumption in these Australian community sports clubs. The program offers a staged accreditation process to encourage the implementation of alcohol harm-reduction strategies. We conducted a postintervention adoption study to evaluate whether community sports club accreditation through the Good Sports program was associated with lower rates of alcohol consumption. We examined alcohol consumption rates in 113 clubs (N = 1,968 participants) and compared these to consumption rates in the general community. We hypothesized that members of clubs with more advanced implementation of the Good Sports accreditation program (Stage Two) would consume less alcohol than those with less advanced implementation (Stage One). Multilevel modeling (MLM) indicated that on days when teams competed, Stage Two club members consumed 19% less alcohol than Stage One club members. MLM also indicated that the length of time a club had been in the Good Sports program was associated with reduced rates of weekly drinking that exceeded Australian short-term risky drinking guidelines. However, consumption rates for all clubs were still higher than in the general community. Higher accreditation stage also predicted reduced long-term risky drinking by club members. Our findings suggest that community sports clubs show evidence of higher levels of alcohol consumption and higher rates of risky consumption than the general community. Implementation of the Good Sports accreditation strategy was associated with lower alcohol consumption in these settings.

  14. Survival characteristics and prognostic variables of dogs with mitral regurgitation attributable to myxomatous valve disease.

    PubMed

    Borgarelli, M; Savarino, P; Crosara, S; Santilli, R A; Chiavegato, D; Poggi, M; Bellino, C; La Rosa, G; Zanatta, R; Haggstrom, J; Tarducci, A

    2008-01-01

    There are few studies evaluating the natural history and prognostic variables in chronic mitral valve disease (CMVI) in a heterogeneous population of dogs. To estimate survival and prognostic value of clinical and echocardiographic variables in dogs with CMVI of varying severity. Five hundred and fifty-eight dogs belonging to 36 breeds were studied. Dogs were included after clinical examination and echocardiography. Long-term outcome was assessed by telephone interview with the owner. The mean follow-up time was 22.7 +/- 13.6 months, and the median survival time was 19.5 +/- 13.2 months. In univariate analysis, age>8 years, syncope, HR>140 bpm, dyspnea, arrhythmias, class of heart failure (International Small Animal Cardiac Health Council), furosemide therapy, end-systolic volume-index (ESV-I)>30 mL/m(2), left atrial to aortic root ratio (LA/Ao)>1.7, E wave transmitral peak velocity (Emax)>1.2 m/s, and bilateral mitral valve leaflet engagement were associated with survival time when all causes of death were included. For the cardiac-related deaths, all the previous variables except dyspnea and EDV-I>100 mL/m(2) were significantly associated with survival time. Significant variables in multivariate analysis (all causes of death) were syncope, LA/Ao>1.7, and Emax>1.2 m/s. For cardiac-related death, the only significant variable was LA/Ao>1.7. Mild CMVI is a relatively benign condition in dogs. However, some clinical variables can identify dogs at a higher risk of death; these variables might be useful to identify individuals that need more frequent monitoring or therapeutic intervention.

  15. Adaptive Servo-Ventilation Treatment Increases Stroke Volume in Stable Systolic Heart Failure Patients With Low Tricuspid Annular Plane Systolic Excursion.

    PubMed

    Iwasaku, Toshihiro; Ando, Tomotaka; Eguchi, Akiyo; Okuhara, Yoshitaka; Naito, Yoshiro; Mano, Toshiaki; Masuyama, Tohru; Hirotani, Shinichi

    2017-05-31

    We hypothesized that the effects of adaptive servo-ventilation (ASV) therapy were influenced by right-sided heart performance. This study aimed to clarify the interaction between the effects of ASV and right-sided heart performance in patients with stable heart failure (HF) with reduced ejection fraction (HFrEF). Twenty-six stable HF inpatients (left ventricular ejection fraction < 0.45, without moderate to severe mitral regurgitation (MR)) were analyzed. Echocardiography was performed before and after 30 minutes of ASV. ASV increased stroke volume index (SVI) in 14 patients (30.0 ± 11.9 to 41.1 ± 16.1 mL/m(2)) and reduced SVI in 12 patients (36.0 ± 10.1 to 31.9 ± 12.2 mL/m(2)). Multivariate linear regression analysis revealed that tricuspid annular plane systolic excursion (TAPSE) before ASV was an independent association factor for (SV during ASV - SV before ASV)/LVEDV × 100 (%) (%ΔSV/LVEDV). ROC analysis of TAPSE for %ΔSV/LVEDV > 0 showed that the cut-off point was 16.5 mm. All patients were divided into 2 groups according to the TAPSE value. Although no significant differences were found in the baseline characteristics and blood tests, there were significant differences in tricuspid lateral annular systolic velocity, TAPSE, right atrial area, and right ventricular (RV) area before ASV between patients with TAPSE ≤ 16.5 mm and those with TAPSE > 16.5 mm. Interestingly, ASV reduced RV area and increased TAPSE in patients with TAPSE ≤ 16.5 mm, while it reduced TAPSE in those > 16.5 mm. ASV therapy has the potential to increase SVI in stable HFrEF patients with low TAPSE.

  16. Household's willingness to pay for heterogeneous attributes of drinking water quality and services improvement: an application of choice experiment

    NASA Astrophysics Data System (ADS)

    Dauda, Suleiman Alhaji; Yacob, Mohd Rusli; Radam, Alias

    2015-09-01

    The service of providing good-quality drinking water can greatly improve the lives of a community and maintain a normal health standard. For a large share of the world's population, specifically in developing countries, safe water for daily sustenance is unavailable. Damaturu is the capital of Yobe State, Nigeria. It hosts a population of more than two hundred thousand, yet only 45 % of the households are connected to the network of Yobe State Water Corporation's pipe-borne water services; this has led people to obtain water from any available source and thus exposed them to the danger of contracting waterborne diseases. In order to address the problem, Yobe State Government has embarked on the construction of a water treatment plant with the capacity and facilities to improve the water quality and connect the town to the water services network. The objectives of this study are to assess households' demand preferences for the heterogeneous water attributes in Damaturu, and to estimate their marginal willingness to pay, using a mixed logit model (MLM) in comparison with a conditional logit model (CLM). A survey of 300 randomly sampled households indicated that higher education greatly influenced the households' WTP decisions. The most significant variable in both models is TWQ, the improvement of water quality from satisfactory to very good; its marginal rate of substitution (MRS) is 219 % in the simple CLM and 126 % in the interaction CLM, and 685 % in the simple MLM and 572 % in the interaction MLM. Estimates from the MLM have more explanatory power than those from the CLM. Essentially, this finding can help the government in designing cost-effective management and an efficient tariff structure.

  17. One-Pot Procedure for Recovery of Gallic Acid from Wastewater and Encapsulation within Protein Particles.

    PubMed

    Nourbakhsh, Himan; Madadlou, Ashkan; Emam-Djomeh, Zahra; Wang, Yi-Cheng; Gunasekaran, Sundaram; Mousavi, Mohammad E

    2016-02-24

    A whey protein isolate solution was heat-denatured and treated with the enzyme transglutaminase, which cross-linked ≈26% of the amino groups and increased the magnitude of the ζ-potential value. The protein solution was microemulsified, and then the resulting water-in-oil microemulsion was dispersed within a gallic acid-rich model wastewater. Gallic acid extraction by the outlined microemulsion liquid membrane (MLM) from the exterior aqueous phase (wastewater) and accumulation within the internal aqueous nanodroplets induced protein cold-set gelation and resulted in the formation of gallic acid-enveloping nanoparticles. Measurements with a strain-controlled rheometer indicated a progressive increase in the MLM viscosity during gallic acid recovery corresponding to particle formation. The mean hydrodynamic size of the nanoparticles made from the heat-denatured and preheated enzymatically cross-linked proteins was 137 and 122 nm, respectively. The enzymatic cross-linking of whey proteins led to a higher gallic acid recovery yield and increased the glass transition enthalpy and temperature. A similar impact on glass transition indices was observed by the gallic acid-induced nanoparticulation of proteins. Scanning electron microscopy showed the existence of numerous jammed/fused nanoparticles. It was suggested on the basis of the results of Fourier transform infrared spectroscopy that the in situ nanoparticulation of proteins shifted the C-N stretching and C-H bending peaks to higher wavenumbers. X-ray diffraction results proposed a decreased β-sheet content for proteins because of the acid-induced particulation. The nanoparticles made from the enzymatically cross-linked protein were more stable against the in vitro gastrointestinal digestion and retained almost 19% of the entrapped gallic acid after 300 min sequential gastric and intestinal digestions.

  18. Relationship of ischemic times and left atrial volume and function in patients with ST-segment elevation myocardial infarction treated with primary percutaneous coronary intervention.

    PubMed

    Ilic, Ivan; Stankovic, Ivan; Vidakovic, Radosav; Jovanovic, Vladimir; Vlahovic Stipac, Alja; Putnikovic, BiIjana; Neskovic, Aleksandar N

    2015-04-01

    Little is known about the impact of duration of ischemia on left atrial (LA) volumes and function during the acute phase of myocardial infarction. We investigated the relationship of ischemic times, echocardiographic indices of diastolic function and LA volumes in patients with ST-segment elevation myocardial infarction (STEMI) treated with primary percutaneous coronary intervention (PCI). A total of 433 consecutive STEMI patients underwent echocardiographic examination within 48 h of primary PCI, including the measurement of LA volumes and the ratio of mitral peak velocity of early filling to early diastolic mitral annular velocity (E/e'). Time intervals from onset of chest pain to hospital admission and reperfusion were collected and magnitude of Troponin I release was used to assess infarct size. Patients with LA volume index (LAVI) ≥28 ml/m(2) had longer total ischemic time (410 ± 347 vs. 303 ± 314 min, p = 0.007) and higher E/e' ratio (15 ± 5 vs. 10 ± 3, p < 0.001) than those with LAVI <28 ml/m(2), while the indices of LA function were similar between the study groups (p > 0.05, for all). Significant correlation was found between E/e' and LA volumes at all stages of LA filling and contraction (r = 0.363-0.434; p < 0.001, for all), while total ischemic time, along with E/e' and a restrictive filling pattern, remained an independent predictor of LA enlargement. Increased LA volume is associated with longer ischemic times and may be a sensitive marker of increased left ventricular filling pressures in STEMI patients treated with primary PCI.

  19. The application of multilevel modeling in the analysis of longitudinal periodontal data--part I: absolute levels of disease.

    PubMed

    Tu, Yu-Kang; Gilthorpe, Mark S; Griffiths, Gareth S; Maddick, Ian H; Eaton, Kenneth A; Johnson, Newell W

    2004-01-01

    Statistical analyses of periodontal data that average site measurements to subject mean values are unable to explore the site-specific nature of periodontal diseases. Multilevel modeling (MLM) overcomes this, taking hierarchical structure into account. MLM was used to investigate longitudinal relationships between the outcomes of lifetime cumulative attachment loss (LCAL) and probing depth (PD) in relation to potential risk factors for periodontal disease progression. One hundred males (mean age 17 years) received a comprehensive periodontal examination at baseline and at 12 and 30 months. The resulting data were analyzed in two stages. In stage one (reported here), the absolute levels of disease were analyzed in relation to potential risk factors; in stage two (reported in a second paper), changes in disease patterns over time were analyzed in relation to the same risk factors. Each approach yielded substantially different insights. For absolute levels of disease, subject-level risk factors (covariates) had limited prediction for LCAL/PD throughout the 30-month observation period. Tooth position demonstrated a near linear relationship for both outcomes, with disease increasing from anterior to posterior teeth. Sites with subgingival calculus and bleeding on probing demonstrated more LCAL and PD, and supragingival calculus had an apparently protective effect. Covariates had more "explanatory power" for the variation in PD than for the variation in LCAL, suggesting that LCAL and PD might be generally associated with a different profile of covariates. This study provides, for a relatively young cohort, considerable insights into the factors associated with early-life periodontal disease and its progression at all levels of the natural hierarchy of sites within teeth within subjects.
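
    A minimal two-level sketch of the multilevel-modeling idea (sites nested within subjects) using statsmodels; the study's actual hierarchy adds a tooth level between sites and subjects, and its covariate set is richer. Variable names and the simulated data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulated probing-depth data: several sites per subject, with a subject-level
# random intercept and site-level predictors (illustrative names only).
n_subjects, sites_per_subject = 100, 20
subj = np.repeat(np.arange(n_subjects), sites_per_subject)
subj_effect = rng.normal(0, 0.6, n_subjects)[subj]
posterior_tooth = rng.integers(0, 2, subj.size)       # 1 = posterior tooth
bleeding = rng.integers(0, 2, subj.size)              # bleeding on probing
pd_mm = (2.0 + 0.5 * posterior_tooth + 0.7 * bleeding
         + subj_effect + rng.normal(0, 0.5, subj.size))

data = pd.DataFrame({"pd_mm": pd_mm, "subject": subj,
                     "posterior_tooth": posterior_tooth, "bleeding": bleeding})

# Random intercept for subject; fixed effects for site-level covariates.
model = smf.mixedlm("pd_mm ~ posterior_tooth + bleeding", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```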

  20. Differences in the intramolecular structure of structured oils do not affect pancreatic lipase activity in vitro or the absorption by rats of (n-3) fatty acids.

    PubMed

    Porsgaard, Trine; Xu, Xuebing; Göttsche, Jesper; Mu, Huiling

    2005-07-01

    The fatty acid composition and intramolecular structure of dietary triacylglycerols (TAGs) influence their absorption. We compared the in vitro pancreatic lipase activity and the lymphatic transport in rats of fish oil and 2 enzymatically interesterified oils containing 10:0 and (n-3) PUFAs of marine origin to investigate whether the positional distribution of fatty acids influenced the overall bioavailability of (n-3) PUFAs in the body. The structured oils had the (n-3) PUFA either mainly at the sn-1,3 position (LML, M = medium-chain fatty acid, L = long-chain fatty acid) or mainly at the sn-2 position (MLM). Oils were administered to lymph-cannulated rats and lymph was collected for 24 h. The fatty acid composition as well as the lipid class distribution of lymph samples was determined. In vitro pancreatic lipase activity was greater when fish oil was the substrate than when the structured oils were the substrates (P < 0.001 at 40 min). This was consistent with a greater 8-h recovery of total fatty acids from fish oil compared with the 2 structured oils (P < 0.05). The absorption profiles of MLM and LML in rats and their in vitro rates of lipase activity did not differ. This indicates that the absorption rate is highly influenced by the lipase activity, which in turn is affected by the fatty acid composition and intramolecular structure. The lipid class distribution in lymph collected from the 3 groups of rats did not differ. In conclusion, the intramolecular structure did not affect the overall absorption of (n-3) PUFAs.

  1. Adsorption of three-domain antifreeze proteins on ice: a study using LGMMAS theory and Monte Carlo simulations.

    PubMed

    Lopez Ortiz, Juan Ignacio; Torres, Paola; Quiroga, Evelina; Narambuena, Claudio F; Ramirez-Pastor, Antonio J

    2017-11-29

    In the present work, the adsorption of three-domain antifreeze proteins on ice is studied by combining a statistical thermodynamics based theory and Monte Carlo simulations. The three-domain protein is modeled by a trimer, and the ice surface is represented by a lattice of adsorption sites. The statistical theory, obtained from the exact partition function of non-interacting trimers adsorbed in one dimension and its extension to two dimensions, includes the configuration of the molecule in the adsorbed state, and allows the existence of multiple adsorption states for the protein. We called this theory "lattice-gas model of molecules with multiple adsorption states" (LGMMAS). The main thermodynamics functions (partial and total adsorption isotherms, Helmholtz free energy and configurational entropy) are obtained by solving a non-linear system of j equations, where j is the total number of possible adsorption states of the protein. The theoretical results are contrasted with Monte Carlo simulations, and a modified Langmuir model (MLM) where the arrangement of the adsorption sites in space is immaterial. The formalism introduced here provides exact results in one-dimensional lattices, and offers a very accurate description in two dimensions (2D). In addition, the scheme is capable of predicting the proportion between coverage degrees corresponding to different conformations in the same energetic state. In contrast, the MLM does not distinguish between different adsorption states, and shows severe discrepancies with the 2D simulation results. These findings indicate that the adsorbate structure and the lattice geometry play fundamental roles in determining the statistics of multistate adsorbed molecules, and consequently, must be included in the theory.

  2. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  3. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption of the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
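
    The sketch below compares the two estimators on a synthetic non-homogeneous Gaussian regression in which both the mean and the spread depend on a predictor; the closed-form CRPS of a normal distribution is used for the minimum-CRPS fit. The data-generating model and link functions are assumptions, not the study's setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)

# Synthetic "ensemble mean" predictor x; observation y with mean and spread
# that both depend linearly on x (log link for the standard deviation).
n = 2000
x = rng.uniform(-2, 2, n)
y = (1.0 + 0.8 * x) + np.exp(-0.5 + 0.4 * x) * rng.normal(size=n)

def unpack(p):
    a, b, c, d = p
    return a + b * x, np.exp(c + d * x)          # mu, sigma > 0

def neg_log_lik(p):
    mu, sigma = unpack(p)
    return -np.sum(norm.logpdf(y, mu, sigma))

def mean_crps(p):
    # Closed-form CRPS of a normal distribution, averaged over the sample.
    mu, sigma = unpack(p)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

x0 = np.zeros(4)
ml = minimize(neg_log_lik, x0, method="BFGS").x
crps = minimize(mean_crps, x0, method="BFGS").x
print("max-likelihood coefficients:", np.round(ml, 3))
print("min-CRPS coefficients:      ", np.round(crps, 3))
```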

  4. Generalizing Terwilliger's likelihood approach: a new score statistic to test for genetic association.

    PubMed

    el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J

    2007-09-24

    In this paper, we propose a one-degree-of-freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has frequency above 0.2 and the number of variants is above five.

  5. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  6. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method both improves the accuracy of log-likelihood evaluation and reduces the runtime by several orders of magnitude, especially in high-count data situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
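
    For reference, the conventional gamma-function form of the DMN log-likelihood that the paper identifies as problematic near ψ = 0 can be written directly with log-gamma functions; the improved algorithm itself is not reproduced here. The example counts and parameters are arbitrary.

```python
import numpy as np
from scipy.special import gammaln

def dmn_log_pmf(x, alpha):
    """Log-pmf of the Dirichlet-multinomial in the standard gamma-function form.

    x     : counts per category (length-K array)
    alpha : positive Dirichlet parameters (length-K array)
    """
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(x + 1).sum()        # multinomial coefficient
            + gammaln(a0) - gammaln(n + a0)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())

# Example: 3-category counts; very large alpha (small overdispersion) approaches
# the multinomial log-pmf, the regime where the conventional form loses accuracy.
counts = np.array([120, 35, 45])
print(dmn_log_pmf(counts, alpha=np.array([6.0, 2.0, 2.0])))
```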

  7. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    PubMed

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. • Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. • Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. • CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
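
    A toy version of the coexistence-likelihood idea: each species' tolerance of one climate variable is summarized by a Gaussian fitted to its occurrence records, and the site estimate is the climate value maximizing the joint log-likelihood over the coexisting species. CRACLE itself characterizes tolerances from global specimen data and is not limited to Gaussian tolerances; the data here are simulated.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

# Simulated occurrence temperatures for species that co-occur at one site;
# each species has its own tolerance centred near the true site climate.
true_site_temp = 14.0
species_occurrences = [true_site_temp + rng.normal(offset, 3.0, size=200)
                       for offset in (-2.0, 0.5, 1.5, -0.5)]

# Tolerance characterisation: a Gaussian fitted to each species' occurrences.
tolerances = [(occ.mean(), occ.std(ddof=1)) for occ in species_occurrences]

def neg_joint_log_lik(temp):
    return -sum(norm.logpdf(temp, mu, sd) for mu, sd in tolerances)

fit = minimize_scalar(neg_joint_log_lik, bounds=(-10, 40), method="bounded")
print(f"estimated site temperature {fit.x:.1f} (true {true_site_temp})")
```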

  8. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
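
    The DL-CNN and level-set stages are not reproduced below; the sketch only illustrates the intermediate step described above of thresholding a bladder-likelihood map and filling holes to obtain the initial mask handed to the level sets. The likelihood map is synthetic and the threshold is an assumed value.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)

# Synthetic 2D "likelihood map": high values inside a disc (the bladder),
# noisy background outside, plus an artificial hole to show the hole-filling step.
yy, xx = np.mgrid[:128, :128]
inside = (yy - 64) ** 2 + (xx - 64) ** 2 < 35 ** 2
lik = np.where(inside, 0.9, 0.1) + rng.normal(0, 0.05, (128, 128))
lik[60:68, 60:68] = 0.1

# Threshold the likelihood map, keep the largest connected component,
# and fill holes to produce the initial contour for level-set refinement.
mask = lik > 0.5
labels, n = ndimage.label(mask)
if n > 0:
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    mask = labels == (1 + np.argmax(sizes))
initial_mask = ndimage.binary_fill_holes(mask)
print("initial mask voxels:", int(initial_mask.sum()))
```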

  9. Graphene-copper composite with micro-layered grains and ultrahigh strength

    PubMed Central

    Wang, Lidong; Yang, Ziyue; Cui, Ye; Wei, Bing; Xu, Shichong; Sheng, Jie; Wang, Miao; Zhu, Yunpeng; Fei, Weidong

    2017-01-01

    Graphene with ultrahigh intrinsic strength and excellent thermal physical properties has the potential to be used as the reinforcement of many kinds of composites. Here, we show that very high tensile strength can be obtained in the copper matrix composite reinforced by reduced graphene oxide (RGO) when micro-layered structure is achieved. RGO-Cu powder with micro-layered structure is fabricated from the reduction of the micro-layered graphene oxide (GO) and Cu(OH)2 composite sheets, and RGO-Cu composites are sintered by spark plasma sintering process. The tensile strength of the 5 vol.% RGO-Cu composite is as high as 608 MPa, which is more than three times higher than that of the Cu matrix. The apparent strengthening efficiency of RGO in the 2.5 vol.% RGO-Cu composite is as high as 110, even higher than that of carbon nanotube, multilayer graphene, carbon nano fiber and RGO in the copper matrix composites produced by conventional MLM method. The excellent tensile and compressive strengths, high hardness and good electrical conductivity are obtained simultaneously in the RGO-Cu composites. The results shown in the present study provide an effective method to design graphene based composites with layered structure and high performance. PMID:28169306

  10. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
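
    The sketch below is not the authors' Iterative Mayfield or grouped-family estimator, but the basic interval likelihood such methods build on: an observation interval of t days contributes s^t if the nest survived it and 1 - s^t if it failed, and the daily survival rate s is found by maximum likelihood. The interval data are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Visit intervals (days) and outcomes for a set of nests (True = survived the
# interval, False = failed at some unknown time within it). Values are illustrative.
interval_days = np.array([5, 7, 4, 6, 10, 3, 8, 5, 9, 6, 7, 4])
survived = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1], dtype=bool)

def neg_log_lik(s):
    # s is the daily survival rate; an interval of t days is survived with
    # probability s**t and failed (sometime within it) with probability 1 - s**t.
    p_surv = s ** interval_days
    return -(np.log(p_surv[survived]).sum() + np.log(1 - p_surv[~survived]).sum())

fit = minimize_scalar(neg_log_lik, bounds=(1e-4, 1 - 1e-4), method="bounded")
print(f"ML daily survival rate: {fit.x:.3f}")
```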

  11. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
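
    To make the idea of a parametric, simulation-based likelihood approximation concrete, here is a minimal toy sketch (my own example with a trivial stochastic model, not FORMIND or its summary statistics): repeated simulations at a candidate parameter are reduced to summary statistics, a Gaussian is fitted to those summaries, and the resulting density is used as the likelihood inside a conventional Metropolis-Hastings sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy stochastic model standing in for the simulator: Poisson counts."""
    return rng.poisson(lam=np.exp(theta), size=n)

def summaries(x):
    """Aggregate model output into summary statistics."""
    return np.array([x.mean(), x.var()])

obs = summaries(simulate(1.5))          # pretend these are the field data

def synthetic_loglik(theta, n_rep=50):
    """Fit a Gaussian to simulated summaries and evaluate the observed data."""
    sims = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
    mu, cov = sims.mean(axis=0), np.cov(sims.T) + 1e-6 * np.eye(2)
    diff = obs - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + np.log(np.linalg.det(cov)))

# Plain Metropolis-Hastings over theta using the approximate likelihood.
theta, ll = 0.5, synthetic_loglik(0.5)
chain = []
for _ in range(2000):
    prop = theta + rng.normal(scale=0.1)
    ll_prop = synthetic_loglik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
print("posterior mean of theta:", np.mean(chain[500:]))
```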

  12. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  13. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the moment estimation method does not use the complete mortality data, whereas maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and are solved using numerical methods. The article focuses on single-decrement estimation using moment and maximum likelihood estimation, and some extensions to double decrement are introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
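
    As a small worked illustration of the statistical approach described (hypothetical numbers, not the article's dataset), the constant-force assumption gives a closed-form maximum likelihood estimate, and a simple actuarial-style estimate can be computed from the same exposure data for comparison.

```python
import numpy as np

# Hypothetical single-decrement data for one age: number initially exposed,
# exact death times and withdrawal times within the year (fractions of a year).
n_exposed = 1000
death_times = np.array([0.12, 0.35, 0.48, 0.60, 0.77, 0.90])   # deaths
withdraw_times = np.array([0.25, 0.50, 0.50, 0.80])             # censored

deaths = len(death_times)
survivors = n_exposed - deaths - len(withdraw_times)
# Central exposure: survivors contribute a full year; deaths and withdrawals
# contribute the fraction of the year actually observed.
exposure = survivors * 1.0 + death_times.sum() + withdraw_times.sum()

# Constant force of mortality: the MLE is deaths divided by central exposure,
# and the one-year death probability follows from q = 1 - exp(-mu).
mu_hat = deaths / exposure
q_constant_force = 1.0 - np.exp(-mu_hat)

# A simple actuarial-style estimate for comparison (not necessarily the
# article's moment estimator): deaths over initial exposure, with withdrawals
# counted for half a year.
initial_exposure = n_exposed - 0.5 * len(withdraw_times)
q_actuarial = deaths / initial_exposure

print(f"mu_hat = {mu_hat:.5f}, q (constant force) = {q_constant_force:.5f}, "
      f"q (actuarial-style) = {q_actuarial:.5f}")
```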

  14. An alternative method to measure the likelihood of a financial crisis in an emerging market

    NASA Astrophysics Data System (ADS)

    Özlale, Ümit; Metin-Özcan, Kıvılcım

    2007-07-01

    This paper utilizes an early warning system in order to measure the likelihood of a financial crisis in an emerging market economy. We introduce a methodology with which we can both obtain a likelihood series and analyze the time-varying effects of several macroeconomic variables on this likelihood. Since the issue is analyzed in a non-linear state space framework, the extended Kalman filter emerges as the optimal estimation algorithm. Taking the Turkish economy as our laboratory, the results indicate that both the derived likelihood measure and the estimated time-varying parameters are meaningful and can successfully explain the path that the Turkish economy followed between 2000 and 2006. The estimated parameters also suggest that an overvalued domestic currency, a current account deficit and an increase in default risk all raise the likelihood of an economic crisis. Overall, the findings suggest that the estimation methodology introduced in this paper can be applied to other emerging market economies as well.

  15. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.

  17. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
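
    The FORTRAN program itself is not reproduced in the record; as a generic illustration of the numerical approach (a deliberately simplified one-age-class model with constant survival and recovery rates, invented data, and a general-purpose optimizer rather than the authors' procedure):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-age-class band-recovery data: rows are banding cohorts,
# columns are recovery years (a Brownie-style recovery matrix).
banded = np.array([1000, 1200])                 # released in years 1 and 2
recoveries = np.array([[60, 35, 18],            # cohort 1 recovered in yrs 1-3
                       [ 0, 70, 40]])           # cohort 2 recovered in yrs 2-3

def neg_log_lik(params):
    # Constant annual survival S and recovery rate f.
    S, f = params
    if not (0.0 < S < 1.0 and 0.0 < f < 1.0):
        return np.inf
    ll = 0.0
    for i in range(len(banded)):
        # Probability of recovery in year j for a bird banded in year i.
        probs = np.array([f * S ** (j - i) if j >= i else 0.0 for j in range(3)])
        p_never = 1.0 - probs.sum()
        never = banded[i] - recoveries[i].sum()
        ll += np.sum(recoveries[i] * np.log(np.where(probs > 0, probs, 1.0)))
        ll += never * np.log(p_never)
    return -ll

res = minimize(neg_log_lik, x0=[0.5, 0.05], method="Nelder-Mead")
print("S_hat, f_hat =", res.x)
```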

  18. Development and validation of a highly sensitive LC-ESI-MS/MS method for estimation of IIIM-MCD-211, a novel nitrofuranyl methyl piperazine derivative with potential activity against tuberculosis: Application to drug development.

    PubMed

    Magotra, Asmita; Sharma, Anjna; Gupta, Ajai Prakash; Wazir, Priya; Sharma, Shweta; Singh, Parvinder Pal; Tikoo, Manoj Kumar; Vishwakarma, Ram A; Singh, Gurdarshan; Nandi, Utpal

    2017-08-15

    In the present study, a simple, sensitive, specific and rapid liquid chromatography (LC) tandem mass spectrometry (MS/MS) method was developed and validated according to the Food and Drug Administration (FDA) guidelines for estimation of IIIM-MCD-211 (a potent oral candidate with promising action against tuberculosis) in mouse plasma using carbamazepine as internal standard (IS). The bioanalytical method consisted of one-step protein precipitation for sample preparation followed by quantitation in LC-MS/MS using the positive electrospray ionization (ESI) technique operating in multiple reaction monitoring (MRM) mode. Elution was achieved in gradient mode on a High Resolution Chromolith RP-18e column with a mobile phase comprised of acetonitrile and 0.1% (v/v) formic acid in water at a flow rate of 0.4 mL/min. Precursor-to-product ion transitions (m/z 344.5/218.4 and m/z 237.3/194.2) were used to measure the analyte and IS, respectively. All validation parameters were well within the limits of the acceptance criteria. The method was successfully applied to assess the pharmacokinetics of the candidate in mice following oral (10 mg/kg) and intravenous (IV; 2.5 mg/kg) administration. It was also effectively used to quantitate the metabolic stability of the compound in mouse liver microsomes (MLM) and human liver microsomes (HLM), followed by its in vitro-in vivo extrapolation. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
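
    For readers unfamiliar with the baseline algorithm, here is a minimal generic MLEM sketch (a random toy system matrix and 1-D "image", not the patch-based basis vectors or the ADMM sparsity step proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear emission model: measured counts y ~ Poisson(A @ x_true).
n_bins, n_voxels = 120, 40
A = rng.uniform(0.0, 1.0, size=(n_bins, n_voxels))
x_true = rng.uniform(0.5, 2.0, size=n_voxels)
y = rng.poisson(A @ x_true)

# Classical MLEM update: x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)),
# which monotonically increases the Poisson log-likelihood.
x = np.ones(n_voxels)
sensitivity = A.T @ np.ones(n_bins)
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x = x / sensitivity * (A.T @ ratio)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```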

  20. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  1. Orally available stilbene derivatives as potent HDAC inhibitors with antiproliferative activities and antitumor effects in human tumor xenografts.

    PubMed

    Kachhadia, Virendra; Rajagopal, Sridharan; Ponpandian, Thanasekaran; Vignesh, Radhakrishnan; Anandhan, Karnambaram; Prabhu, Daivasigamani; Rajendran, Praveen; Nidhyanandan, Saranya; Roy, Anshu Mittal; Ahamed, Fakrudeen Ali; Surendran, Narayanan; Rajagopal, Sriram; Narayanan, Shridhar; Gopalan, Balasubramanian

    2016-01-27

    Herein we report the synthesis and activity of a novel class of HDAC inhibitors based on 2,3-diphenyl acrylic acid derivatives. The compounds in this series have been shown to be potent HDAC inhibitors possessing significant antiproliferative activity. Compounds in this series were further assessed for metabolic stability in human liver microsomes (HLM) and mouse liver microsomes (MLM), and exhibited promising stability in both. These efforts culminated in the identification of a developmental candidate (5a), which displayed desirable PK/PD relationships, significant efficacy in the xenograft models and an attractive ADME profile. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  2. Quinolone-based HDAC inhibitors.

    PubMed

    Balasubramanian, Gopalan; Kilambi, Narasimhan; Rathinasamy, Suresh; Rajendran, Praveen; Narayanan, Shridhar; Rajagopal, Sridharan

    2014-08-01

    HDAC inhibitors have emerged as promising drug candidates for combating a wide variety of cancers. At present, two compounds, SAHA and romidepsin, have been approved by the FDA for cutaneous T-cell lymphoma, and many are in various clinical phases. A new quinolone cap structure was explored with hydroxamic acid as the zinc-binding group (ZBG). The pan-HDAC inhibitory and antiproliferative activities of the compounds (4a-4w) were evaluated against three human cancer cell lines: HCT-116 (colon), NCI-H460 (lung) and U251 (glioblastoma). Introduction of heterocyclic amines in the CAP region increased the enzyme inhibitory and antiproliferative activities, and a few of the compounds tested are metabolically stable in both MLM and HLM.

  3. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions such as what limits of dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
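
    As a concrete, elementary instance of penalized likelihood in the p >> n setting (an L1/lasso sketch of my own with coordinate descent, simpler than the non-concave penalties the review emphasizes), variable selection and effect estimation happen in a single fit:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 500                                  # high-dimensional: p >> n
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, 2.0]       # sparse true signal
y = X @ beta_true + rng.standard_normal(n)

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    resid = y.copy()
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]           # add back j-th contribution
            rho = X[:, j] @ resid
            beta[j] = soft_threshold(rho, n * lam) / col_norm2[j]
            resid -= X[:, j] * beta[j]
    return beta

beta_hat = lasso_cd(X, y, lam=0.2)
print("selected variables:", np.nonzero(beta_hat)[0])
```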

  4. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  5. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  6. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  7. Pre-test probability of obstructive coronary stenosis in patients undergoing coronary CT angiography: Comparative performance of the Modified Diamond-Forrester algorithm versus methods incorporating cardiovascular risk factors.

    PubMed

    Ferreira, António Miguel; Marques, Hugo; Tralhão, António; Santos, Miguel Borges; Santos, Ana Rita; Cardoso, Gonçalo; Dores, Hélder; Carvalho, Maria Salomé; Madeira, Sérgio; Machado, Francisco Pereira; Cardim, Nuno; de Araújo Gonçalves, Pedro

    2016-11-01

    Current guidelines recommend the use of the Modified Diamond-Forrester (MDF) method to assess the pre-test likelihood of obstructive coronary artery disease (CAD). We aimed to compare the performance of the MDF method with two contemporary algorithms derived from multicenter trials that additionally incorporate cardiovascular risk factors: the calculator-based 'CAD Consortium 2' method, and the integer-based CONFIRM score. We assessed 1069 consecutive patients without known CAD undergoing coronary CT angiography (CCTA) for stable chest pain. Obstructive CAD was defined as the presence of coronary stenosis ≥50% on 64-slice dual-source CT. The three methods were assessed for calibration, discrimination, net reclassification, and changes in proposed downstream testing based upon calculated pre-test likelihoods. The observed prevalence of obstructive CAD was 13.8% (n=147). Overestimations of the likelihood of obstructive CAD were 140.1%, 9.8%, and 18.8%, respectively, for the MDF, CAD Consortium 2 and CONFIRM methods. The CAD Consortium 2 showed greater discriminative power than the MDF method, with a C-statistic of 0.73 vs. 0.70 (p<0.001), while the CONFIRM score did not (C-statistic 0.71, p=0.492). Reclassification of pre-test likelihood using the 'CAD Consortium 2' or CONFIRM scores resulted in a net reclassification improvement of 0.19 and 0.18, respectively, which would change the diagnostic strategy in approximately half of the patients. Newer risk factor-encompassing models allow for a more precise estimation of pre-test probabilities of obstructive CAD than the guideline-recommended MDF method. Adoption of these scores may improve disease prediction and change the diagnostic pathway in a significant proportion of patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Extreme data compression for the CMB

    NASA Astrophysics Data System (ADS)

    Zablocki, Alan; Dodelson, Scott

    2016-04-01

    We apply Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.

  9. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  10. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498

  11. A multi-valued neutrosophic qualitative flexible approach based on likelihood for multi-criteria decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.

    2017-01-01

    In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore an example is provided to illustrate the application of the proposed method, together with a comparison analysis.

  12. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects, are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters, obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
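
    To illustrate the paper's closing point, here is a minimal sketch (a one-covariate logistic dose-response toy with invented data, not the critical-volume model) of reading a confidence interval for one parameter directly from the likelihood landscape via the profile likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

# Hypothetical dose-response data: dose, number treated, number with complication.
dose = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
n    = np.array([20, 20, 20, 20, 20])
resp = np.array([ 1,  4,  9, 15, 18])

def ntcp(d, d50, k):
    """Simple logistic dose-response curve."""
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

def neg_log_lik(d50, k):
    p = np.clip(ntcp(dose, d50, k), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (n - resp) * np.log(1 - p))

def profile_nll(d50):
    # Maximize over the nuisance parameter k for each fixed d50.
    return minimize_scalar(lambda k: neg_log_lik(d50, k),
                           bounds=(0.01, 1.0), method="bounded").fun

grid = np.linspace(50.0, 75.0, 301)
prof = np.array([profile_nll(d) for d in grid])
best = prof.min()
# 95% CI: points whose profile deviance is within the chi-square cutoff.
inside = grid[2.0 * (prof - best) <= chi2.ppf(0.95, df=1)]
print(f"D50 = {grid[prof.argmin()]:.1f}, 95% CI = [{inside.min():.1f}, {inside.max():.1f}]")
```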

  13. Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.

    PubMed

    Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D

    2012-01-01

    Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
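
    For reference, the "standard likelihood methods" alluded to evaluate the log-likelihood of the candidate probability density itself rather than regression residuals; a generic textbook sketch for a power-law (Pareto) model of step lengths (my own illustration, not the authors' re-analysis code) is shown below.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate step lengths from a Pareto (power-law) distribution:
# p(x) = (alpha - 1) / x_min * (x / x_min)^(-alpha), x >= x_min.
alpha_true, x_min, n = 2.0, 1.0, 5000
x = x_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / (alpha_true - 1.0))

# Maximum likelihood estimate of the exponent (the Hill estimator):
# alpha_hat = 1 + n / sum(log(x / x_min)).
alpha_hat = 1.0 + n / np.sum(np.log(x / x_min))

# Log-likelihood evaluated from the actual density, as needed for a valid AIC.
log_lik = (n * np.log(alpha_hat - 1.0) - n * np.log(x_min)
           - alpha_hat * np.sum(np.log(x / x_min)))
aic = 2 * 1 - 2 * log_lik
print(f"alpha_hat = {alpha_hat:.3f}, AIC = {aic:.1f}")
```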

  14. Evidence and Clinical Trials.

    NASA Astrophysics Data System (ADS)

    Goodman, Steven N.

    1989-11-01

    This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, methods which summarize evidence at the point of maximum likelihood assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance- -uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.

  15. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  16. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS) as the provider of national statistics are still limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on such estimates are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One approach often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂; this drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average of household expenditures per capita, and implements a bootstrap procedure to calculate the mean square error (MSE) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduces the MSE in small area estimation.

  17. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
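
    The Poisson maximum likelihood iteration that underlies this kind of restoration is the Richardson-Lucy (EM) update; a minimal single-frame, one-dimensional sketch without the paper's regularization, frame selection, or PSF estimation steps (my own toy signal, not the AO data) follows.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)

# Toy 1-D "image", Gaussian blur kernel, and Poisson-noisy observation.
x_true = np.zeros(200)
x_true[60:70] = 50.0
x_true[120] = 300.0
t = np.arange(-15, 16)
psf = np.exp(-0.5 * (t / 3.0) ** 2)
psf /= psf.sum()
blurred = fftconvolve(x_true, psf, mode="same")
y = rng.poisson(blurred).astype(float)

# Richardson-Lucy: the maximum likelihood EM iteration for Poisson data,
# x <- x * PSF^T applied to (y / (PSF * x)).
x = np.full_like(y, y.mean())
psf_flip = psf[::-1]
for _ in range(100):
    est = fftconvolve(x, psf, mode="same")
    ratio = y / np.maximum(est, 1e-12)
    x *= fftconvolve(ratio, psf_flip, mode="same")

print("brightest source recovered at index", int(np.argmax(x)))
```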

  18. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  19. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  20. Measurement of the Top Quark Mass by Dynamical Likelihood Method using the Lepton plus Jets Events in 1.96 Tev Proton-Antiproton Collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorita, Kohei

    2005-03-01

    We have measured the top quark mass with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top and anti-top pairs in proton-antiproton collisions at a center of mass energy of 1.96 TeV. The data sample used in this paper was accumulated from March 2002 through August 2003, corresponding to an integrated luminosity of 162 pb⁻¹.

  1. An evaluation of percentile and maximum likelihood estimators of Weibull parameters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLB) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLB estimators had superior smaller bias and...
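
    As a rough, generic illustration of the maximum likelihood side of this comparison (simulated diameter-like data and SciPy's built-in fitter, not the FITTER routine or Zanakis's percentile estimators):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)

# Simulated data from a three-parameter Weibull (shape c, location, scale).
c_true, loc_true, scale_true = 2.2, 4.0, 10.0
data = weibull_min.rvs(c_true, loc=loc_true, scale=scale_true,
                       size=500, random_state=rng)

# Maximum likelihood fit of all three parameters.
c_mle, loc_mle, scale_mle = weibull_min.fit(data)
print(f"MLE:  shape={c_mle:.2f}  location={loc_mle:.2f}  scale={scale_mle:.2f}")

# A crude percentile-style check (not Zanakis's estimator): the empirical
# 63.2nd percentile should lie near location + scale, since F(loc + scale)
# equals 1 - exp(-1) for any Weibull shape.
p632 = np.percentile(data, 63.2)
print(f"empirical 63.2th percentile: {p632:.2f}  vs  loc+scale = "
      f"{loc_mle + scale_mle:.2f}")
```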

  2. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  3. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  4. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  5. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  6. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    ERIC Educational Resources Information Center

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  7. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

    PubMed

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-06-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.

  8. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from the model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluates four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE also has high stability: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
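
    For orientation, here is a toy conjugate-normal sketch (my own example, not the groundwater model or the sparse-grid surrogates) of two of the estimators being compared, with the analytic marginal likelihood available as a check: the harmonic mean estimator from posterior samples, and thermodynamic integration over power posteriors.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(6)

# Conjugate toy model: theta ~ N(0, tau^2), y_i | theta ~ N(theta, sigma^2).
tau, sigma, n = 2.0, 1.0, 30
y = rng.normal(0.7, sigma, size=n)

def log_lik(thetas):
    """Vector of data log-likelihoods, one per sampled theta."""
    return np.sum(norm.logpdf(y[:, None], loc=thetas, scale=sigma), axis=0)

def power_posterior_draws(temp, size):
    """Exact draws from p(theta) * L(theta)^temp (conjugate normal)."""
    prec = 1.0 / tau ** 2 + temp * n / sigma ** 2
    mean = (temp * y.sum() / sigma ** 2) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec), size=size)

# Harmonic mean estimator: Z = 1 / E_posterior[1 / L(theta)], computed in logs.
ll = log_lik(power_posterior_draws(1.0, 20000))
log_z_hme = ll.min() - np.log(np.mean(np.exp(ll.min() - ll)))

# Thermodynamic integration: log Z = integral over t in [0,1] of E_t[log L].
temps = np.linspace(0.0, 1.0, 21)
e_loglik = np.array([log_lik(power_posterior_draws(t, 5000)).mean() for t in temps])
log_z_tie = np.sum(np.diff(temps) * (e_loglik[:-1] + e_loglik[1:]) / 2.0)

# Analytic truth for this conjugate model: y ~ N(0, sigma^2 I + tau^2 J).
cov = sigma ** 2 * np.eye(n) + tau ** 2 * np.ones((n, n))
log_z_true = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

print(f"analytic {log_z_true:.2f}   TIE {log_z_tie:.2f}   HME {log_z_hme:.2f}")
```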

  9. Compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation.

    PubMed

    Christensen, Ole F

    2012-12-03

    Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.

  10. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ~20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
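
    A small generic illustration of the difference (synthetic counts, not the Thomson scattering data or the authors' code): fitting a Gaussian line on a flat background by minimizing the Poisson negative log-likelihood (a Cash-type statistic) rather than a chi-square with sqrt(N) errors.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Low-count synthetic spectrum: Gaussian line on a flat background.
chan = np.arange(64, dtype=float)

def model(p, x=chan):
    amp, center, width, bkg = p
    return bkg + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

p_true = [8.0, 30.0, 4.0, 1.0]
counts = rng.poisson(model(p_true))

# Poisson maximum likelihood: minimize sum(mu - y * log(mu)), which is the
# negative log-likelihood up to a constant and remains valid at low counts.
def neg_log_lik(p):
    mu = model(p)
    if np.any(mu <= 0):
        return np.inf
    return np.sum(mu - counts * np.log(mu))

# Naive chi-square with sqrt(N) errors, which breaks down in low-count bins.
def chi_square(p):
    mu = model(p)
    return np.sum((counts - mu) ** 2 / np.maximum(counts, 1.0))

p0 = [5.0, 28.0, 5.0, 0.5]
fit_ml = minimize(neg_log_lik, p0, method="Nelder-Mead")
fit_chi2 = minimize(chi_square, p0, method="Nelder-Mead")
print("Poisson ML fit:", np.round(fit_ml.x, 2))
print("chi-square fit:", np.round(fit_chi2.x, 2))
```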

  11. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
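
    A minimal sketch of how such a maximum-likelihood log-normal extrapolation could be set up in Python with scipy is shown below; the synthetic intensities, the 56-year span, and the assumed storm rate are placeholders rather than the actual Dst catalog used in the study.

      import numpy as np
      from scipy.stats import lognorm

      # Synthetic stand-in for yearly -Dst storm maxima (nT); not the real Dst record.
      rng = np.random.default_rng(1)
      intensities = lognorm.rvs(s=0.8, scale=120.0, size=56, random_state=rng)

      # Maximum-likelihood log-normal fit (location fixed at zero).
      shape, loc, scale = lognorm.fit(intensities, floc=0)

      # Probability that a given storm exceeds a Carrington-class threshold,
      # converted to an expected number of such storms per century.
      threshold_nT = 850.0
      p_exceed = lognorm.sf(threshold_nT, shape, loc=loc, scale=scale)
      storms_per_year = len(intensities) / 56.0        # assumed average storm rate
      print("expected exceedances per century:", 100 * storms_per_year * p_exceed)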

  12. A scan statistic for binary outcome based on hypergeometric probability model, with an application to detecting spatial clusters of Japanese encephalitis.

    PubMed

    Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong

    2013-01-01

    As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this likelihood function under the null hypothesis offers an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test to assess significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation with independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
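
    The Python sketch below shows one plausible reading of the hypergeometric scan construction: the candidate window whose case count is least likely under the null hypergeometric model supplies the test statistic, and significance is assessed by Monte Carlo permutation. The window definitions and data are made up, and the exact form of the statistic in the paper may differ.

      import numpy as np
      from scipy.stats import hypergeom

      # Illustrative scan over candidate windows under a hypergeometric null model.
      # N: total subjects, K: total cases, n: subjects inside window, k: cases inside window.
      def window_log_lik(N, K, n, k):
          return hypergeom.logpmf(k, N, K, n)

      def scan_statistic(case_flags, window_members):
          """case_flags: 0/1 per subject; window_members: list of index arrays (candidate windows)."""
          N, K = len(case_flags), int(case_flags.sum())
          # Most extreme (least likely under the null) log-likelihood across windows.
          return min(window_log_lik(N, K, len(w), int(case_flags[w].sum())) for w in window_members)

      rng = np.random.default_rng(2)
      cases = rng.binomial(1, 0.1, size=200)
      windows = [rng.choice(200, size=30, replace=False) for _ in range(50)]  # hypothetical windows

      observed = scan_statistic(cases, windows)
      # Monte Carlo significance: permute case labels and recompute the statistic.
      null = [scan_statistic(rng.permutation(cases), windows) for _ in range(999)]
      p_value = (1 + sum(s <= observed for s in null)) / (1 + len(null))
      print(observed, p_value)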

  13. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    PubMed

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.

  14. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  15. Statistical methods for analysis of radiation effects with tumor and dose location-specific information with application to the WECARE study of asynchronous contralateral breast cancer

    PubMed Central

    Langholz, Bryan; Thomas, Duncan C.; Stovall, Marilyn; Smith, Susan A.; Boice, John D.; Shore, Roy E.; Bernstein, Leslie; Lynch, Charles F.; Zhang, Xinbo; Bernstein, Jonine L.

    2009-01-01

    Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that use only cases with precise tumor location information and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood-based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject-level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer. PMID:18647297

  16. A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2013-01-01

    Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
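
    Since the proposed estimator reduces to the DerSimonian and Laird method of moments in one dimension, a compact univariate version is sketched below in Python; the effect sizes and variances are illustrative only.

      import numpy as np

      def dersimonian_laird(y, v):
          """Univariate DerSimonian-Laird moment estimate of between-study variance tau^2.
          y: study effect estimates, v: within-study variances."""
          w = 1.0 / v
          y_fixed = np.sum(w * y) / np.sum(w)
          Q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (Q - (len(y) - 1)) / c)            # truncate at zero
          # Random-effects pooled estimate using tau^2.
          w_re = 1.0 / (v + tau2)
          return tau2, np.sum(w_re * y) / np.sum(w_re)

      # Small illustrative meta-analysis (made-up effect sizes and variances).
      effects = np.array([0.30, 0.10, 0.55, 0.20, 0.41])
      variances = np.array([0.04, 0.09, 0.05, 0.02, 0.07])
      print(dersimonian_laird(effects, variances))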

  17. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

    Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  18. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
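
    DeConv_Tool itself is written in IDL; as a language-neutral illustration of the maximum-likelihood option it offers, the sketch below shows the classic Richardson-Lucy iteration (the maximum-likelihood solution for Poisson noise) in Python for a one-dimensional toy signal. The PSF, spike positions, and counts are invented.

      import numpy as np

      def richardson_lucy(blurred, psf, n_iter=50):
          """Richardson-Lucy iteration, the classic maximum-likelihood deconvolution
          for Poisson noise; 1-D version for illustration."""
          estimate = np.full_like(blurred, blurred.mean())
          psf_flipped = psf[::-1]
          for _ in range(n_iter):
              reblurred = np.convolve(estimate, psf, mode="same")
              ratio = blurred / np.clip(reblurred, 1e-12, None)
              estimate *= np.convolve(ratio, psf_flipped, mode="same")
          return estimate

      # Toy example: two spikes blurred by a Gaussian PSF with Poisson noise.
      rng = np.random.default_rng(3)
      truth = np.zeros(100); truth[30] = 200.0; truth[60] = 120.0
      x = np.arange(-10, 11)
      psf = np.exp(-0.5 * (x / 2.5) ** 2); psf /= psf.sum()
      blurred = rng.poisson(np.convolve(truth, psf, mode="same")).astype(float)
      restored = richardson_lucy(blurred, psf)
      print(restored[[30, 60]])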

  19. Use of Bayesian Networks to Probabilistically Model and Improve the Likelihood of Validation of Microarray Findings by RT-PCR

    PubMed Central

    English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.

    2014-01-01

    Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084

  20. Extreme data compression for the CMB

    DOE PAGES

    Zablocki, Alan; Dodelson, Scott

    2016-04-28

    We apply Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. Furthermore, after showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.

  1. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  2. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history.

    PubMed

    Lee, E Henry; Wickham, Charlotte; Beedlow, Peter A; Waschmann, Ronald S; Tingey, David T

    2017-10-01

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for climate and forest disturbances (i.e., pests, diseases, fire). The statistical method is illustrated with a tree-ring width time series for a mature closed-canopy Douglas-fir stand on the west slopes of the Cascade Mountains of Oregon, USA that is impacted by Swiss needle cast disease caused by the foliar fungus, Phaecryptopus gaeumannii (Rhode) Petrak. The likelihood-based TSIA method is proposed for the field of dendrochronology to understand the interaction of temperature, water, and forest disturbances that are important in forest ecology and climate change studies.

  3. U.S. cannabis legalization and use of vaping and edible products among youth.

    PubMed

    Borodovsky, Jacob T; Lee, Dustin C; Crosier, Benjamin S; Gabrielli, Joy L; Sargent, James D; Budney, Alan J

    2017-08-01

    Alternative methods for consuming cannabis (e.g., vaping and edibles) have become more popular in the wake of U.S. cannabis legalization. Specific provisions of legal cannabis laws (LCL) (e.g., dispensary regulations) may impact the likelihood that youth will use alternative methods and the age at which they first try the method, potentially magnifying or mitigating the developmental harms of cannabis use. This study examined associations between LCL provisions and how youth consume cannabis. An online cannabis use survey was distributed using Facebook advertising, and data were collected from 2630 cannabis-using youth (ages 14-18). U.S. states were coded for LCL status and various LCL provisions. Regression analyses tested associations among lifetime use and age of onset of cannabis vaping and edibles and LCL provisions. Longer LCL duration (OR for vaping: 2.82, 95% CI: 2.24, 3.55; OR for edibles: 3.82, 95% CI: 2.96, 4.94) and higher dispensary density (OR for vaping: 2.68, 95% CI: 2.12, 3.38; OR for edibles: 3.31, 95% CI: 2.56, 4.26) were related to higher likelihood of trying vaping and edibles. Permitting home cultivation was related to higher likelihood (OR: 1.93, 95% CI: 1.50, 2.48) and younger age of onset (β: -0.30, 95% CI: -0.45, -0.15) of edibles. Specific provisions of LCL appear to impact the likelihood, and age at which, youth use alternative methods to consume cannabis. These methods may carry differential risks for initiation and escalation of cannabis use. Understanding associations between LCL provisions and methods of administration can inform the design of effective cannabis regulatory strategies.

  4. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
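
    To make the model-comparison step concrete, here is a simplified Python sketch that fits exponential (Poisson-process) and log-normal renewal models to a set of inter-event times by maximum likelihood and compares them with AIC; unlike the study's methods, it ignores age-dating uncertainty and the open intervals before the first and after the last deposit, and the times shown are invented.

      import numpy as np
      from scipy.stats import expon, lognorm

      # Hypothetical inter-event times (kyr) between dated mass-transport deposits.
      dt = np.array([12.0, 35.0, 8.0, 21.0, 44.0, 15.0, 27.0])

      def aic(log_lik, n_params):
          return 2 * n_params - 2 * log_lik

      # Exponential (Poisson process) renewal model: one free parameter.
      loc_e, scale_e = expon.fit(dt, floc=0)
      aic_exp = aic(np.sum(expon.logpdf(dt, loc=0, scale=scale_e)), 1)

      # Log-normal renewal model: two free parameters.
      s, loc_l, scale_l = lognorm.fit(dt, floc=0)
      aic_ln = aic(np.sum(lognorm.logpdf(dt, s, loc=0, scale=scale_l)), 2)

      print({"mean return time (exp)": scale_e, "AIC exp": aic_exp, "AIC lognormal": aic_ln})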

  5. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.

  6. Method and system for diagnostics of apparatus

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2012-01-01

    Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.

  7. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ˜104 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.

  8. Stabilized and tunable single-longitudinal-mode erbium fiber laser employing ytterbium-doped fiber based interference filter

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Tsai, Ning; Zhuang, Yuan-Hong; Chow, Chi-Wai; Chen, Jing-Heng

    2017-02-01

    In this demonstration, to achieve a stabilized and wavelength-selectable single-longitudinal-mode (SLM) erbium-doped fiber (EDF) laser, a short length of ytterbium-doped fiber (YDF) is utilized to serve as a spatial multi-mode interference (MMI) filter inside a fiber cavity, suppressing multi-longitudinal-mode (MLM) operation significantly. In the measurement, the output powers and optical signal-to-noise ratios (OSNRs) of the proposed EDF ring laser are measured between -9.85 and -5.71 dBm, and 38.03 and 47.95 dB, respectively, over the tuning range of 1530.0-1560.0 nm. In addition, the output SLM and stability performance are also analyzed and discussed experimentally.

  9. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
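
    The kernel method builds on the standard ML-EM update, representing the image through kernel coefficients derived from the anatomical image. The Python sketch below shows only the underlying ML-EM iteration for a toy system matrix; it is not the kernel construction itself, and the geometry and data are not realistic.

      import numpy as np

      def mlem(A, y, n_iter=30):
          """Standard ML-EM update for emission tomography: A is the system matrix
          (detector bins x image voxels), y the measured counts."""
          x = np.ones(A.shape[1])
          sens = A.sum(axis=0)                      # sensitivity image A^T 1
          for _ in range(n_iter):
              proj = np.clip(A @ x, 1e-12, None)    # forward projection
              x *= (A.T @ (y / proj)) / sens        # multiplicative EM update
          return x

      # Tiny toy system: 6 detector bins, 4 voxels (not a realistic scanner geometry).
      rng = np.random.default_rng(4)
      A = rng.uniform(0.0, 1.0, size=(6, 4))
      x_true = np.array([5.0, 0.5, 3.0, 1.0])
      y = rng.poisson(A @ x_true).astype(float)
      print(mlem(A, y))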

  10. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  11. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  12. Optimal Methods for Classification of Digitally Modulated Signals

    DTIC Science & Technology

    2013-03-01

    Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. Blind demodulation was used to develop classification algorithms for a wider set of signal types, including BPSK and BPSK spread-spectrum (CDMA) signals. Two methodologies were used: the likelihood ratio test and the KL divergence.

  13. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  14. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  15. Rate of convergence of k-step Newton estimators to efficient likelihood estimators

    Treesearch

    Steve Verrill

    2007-01-01

    We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...

  16. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356

  17. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
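
    A minimal sketch of fitting such a two-component normal mixture by maximum likelihood via the EM algorithm is given below; the data are simulated rather than the stock-market and rubber-price series analysed in the paper.

      import numpy as np
      from scipy.stats import norm

      def em_two_component(x, n_iter=200):
          """EM algorithm for maximum-likelihood fitting of a two-component normal mixture."""
          # Crude initial values taken from the sample.
          pi, mu1, mu2 = 0.5, x.min(), x.max()
          s1 = s2 = x.std()
          for _ in range(n_iter):
              # E-step: posterior responsibility of component 1 for each observation.
              p1 = pi * norm.pdf(x, mu1, s1)
              p2 = (1 - pi) * norm.pdf(x, mu2, s2)
              r = p1 / (p1 + p2)
              # M-step: weighted ML updates of mixture weight, means, standard deviations.
              pi = r.mean()
              mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
              s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
              s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
          return pi, (mu1, s1), (mu2, s2)

      rng = np.random.default_rng(5)
      data = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])
      print(em_two_component(data))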

  18. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  19. Estimating Divergence Parameters With Small Samples From a Large Number of Loci

    PubMed Central

    Wang, Yong; Hey, Jody

    2010-01-01

    Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765

  20. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    PubMed

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.

  1. Safe semi-supervised learning based on weighted likelihood.

    PubMed

    Kawakita, Masanori; Takeuchi, Jun'ichi

    2014-05-01

    We are interested in developing a safe semi-supervised learning method that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods may perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′→∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wider range of situations as long as n≤n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′

  2. Detecting the contagion effect in mass killings; a constructive example of the statistical advantages of unbinned likelihood methods

    PubMed Central

    Mubayi, Anuj; Castillo-Chavez, Carlos

    2018-01-01

    Background When attempting to statistically distinguish between a null and an alternative hypothesis, many researchers in the life and social sciences turn to binned statistical analysis methods, or methods that are simply based on the moments of a distribution (such as the mean, and variance). These methods have the advantage of simplicity of implementation, and simplicity of explanation. However, when null and alternative hypotheses manifest themselves in subtle differences in patterns in the data, binned analysis methods may be insensitive to these differences, and researchers may erroneously fail to reject the null hypothesis when in fact more sensitive statistical analysis methods might produce a different result when the null hypothesis is actually false. Here, with a focus on two recent conflicting studies of contagion in mass killings as instructive examples, we discuss how the use of unbinned likelihood methods makes optimal use of the information in the data; a fact that has been long known in statistical theory, but perhaps is not as widely appreciated amongst general researchers in the life and social sciences. Methods In 2015, Towers et al published a paper that quantified the long-suspected contagion effect in mass killings. However, in 2017, Lankford & Tomek subsequently published a paper, based upon the same data, that claimed to contradict the results of the earlier study. The former used unbinned likelihood methods, and the latter used binned methods, and comparison of distribution moments. Using these analyses, we also discuss how visualization of the data can aid in determination of the most appropriate statistical analysis methods to distinguish between a null and alternate hypothesis. We also discuss the importance of assessment of the robustness of analysis results to methodological assumptions made (for example, arbitrary choices of number of bins and bin widths when using binned methods); an issue that is widely overlooked in the literature, but is critical to analysis reproducibility and robustness. Conclusions When an analysis cannot distinguish between a null and alternate hypothesis, care must be taken to ensure that the analysis methodology itself maximizes the use of information in the data that can distinguish between the two hypotheses. The use of binned methods by Lankford & Tomek (2017), that examined how many mass killings fell within a 14 day window from a previous mass killing, substantially reduced the sensitivity of their analysis to contagion effects. The unbinned likelihood methods used by Towers et al (2015) did not suffer from this problem. While a binned analysis might be favorable for simplicity and clarity of presentation, unbinned likelihood methods are preferable when effects might be somewhat subtle. PMID:29742115
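
    A toy Python simulation of the central point is sketched below: it compares an unbinned maximum-likelihood estimate of an exponential decay time with an estimate recovered only from a 14-day dichotomization of the same waiting times. The numbers are invented, and the model is far simpler than the contagion model of Towers et al.

      import numpy as np

      # Toy demonstration of the information loss from binning: estimate an exponential
      # decay time from event waiting times, unbinned vs. dichotomized at 14 days.
      rng = np.random.default_rng(6)
      true_tau, n_events, n_trials = 10.0, 60, 2000
      unbinned_est, binned_est = [], []
      for _ in range(n_trials):
          t = rng.exponential(true_tau, size=n_events)
          # Unbinned maximum likelihood: tau_hat is simply the sample mean.
          unbinned_est.append(t.mean())
          # Dichotomized analysis: only "within 14 days or not" is used, so tau is
          # recovered by inverting P(t < 14) = 1 - exp(-14/tau).
          frac = np.clip((t < 14.0).mean(), 1e-3, 1 - 1e-3)
          binned_est.append(-14.0 / np.log(1.0 - frac))

      print("unbinned SD:", np.std(unbinned_est), " dichotomized SD:", np.std(binned_est))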

  3. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).

  4. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
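
    The Python sketch below illustrates the general idea of fixed versus adaptive Gaussian smoothed seismicity, with per-event bandwidths set, for example, from the distance to the k-th nearest epicenter; the catalog, grid, and bandwidth choice are placeholders and do not reproduce the modifications introduced in the study.

      import numpy as np
      from scipy.spatial import cKDTree

      def smoothed_rate(epicenters, grid, bandwidths):
          """2-D Gaussian smoothed seismicity; per-event bandwidths allow fixed
          (all equal) or adaptive (e.g. k-th nearest-neighbour distance) smoothing."""
          rates = np.zeros(len(grid))
          for (ex, ey), h in zip(epicenters, bandwidths):
              d2 = (grid[:, 0] - ex) ** 2 + (grid[:, 1] - ey) ** 2
              rates += np.exp(-0.5 * d2 / h ** 2) / (2 * np.pi * h ** 2)
          return rates / len(epicenters)

      rng = np.random.default_rng(10)
      quakes = rng.normal([0.0, 0.0], [1.0, 1.0], size=(200, 2))   # toy epicenter catalog
      cells = np.array([(x, y) for x in np.linspace(-3, 3, 25)
                               for y in np.linspace(-3, 3, 25)])

      # Adaptive bandwidth: distance to the 5th nearest neighbouring epicenter
      # (k=6 in the query because the nearest "neighbour" is the point itself).
      h_adaptive = cKDTree(quakes).query(quakes, k=6)[0][:, -1]
      print(smoothed_rate(quakes, cells, h_adaptive).max())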

  5. The impact of symptom stability on time frame and recall reliability in CFS.

    PubMed

    Evans, Meredyth; Jason, Leonard A

    This study investigated the potential impact of perceived symptom stability on the recall reliability of symptom severity and frequency as reported by individuals with chronic fatigue syndrome (CFS). Symptoms were recalled using three different recall timeframes (the past week, the past month, and the past six months) at two assessment points, with one week between assessments. Participants were 51 adults (45 women and 6 men) between the ages of 29 and 66 with a current diagnosis of CFS. Multilevel model (MLM) analyses were used to determine the optimal recall timeframe (in terms of test-retest reliability) for reporting symptoms perceived as variable and as stable over time. Headaches were recalled more reliably when they were reported as stable over time. The optimal timeframe for stable symptoms was highly uniform, such that all Fukuda-defined CFS symptoms were more reliably recalled at the six-month timeframe. In contrast, the optimal timeframe for CFS symptoms perceived as variable differed across symptoms. Symptom stability and recall timeframe are important to consider in order to improve the accuracy and reliability of current methods for diagnosing this illness.

  6. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
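
    A sketch of the maximum-likelihood exponent estimator with a varying lower cutoff, applied to a synthetic mixture of two power laws, is given below in Python; the generated data and cutoffs are illustrative and are not the crackling-noise measurements analysed in the paper.

      import numpy as np

      def power_law_mle(x, x_min):
          """Maximum-likelihood exponent for a continuous power law p(x) ~ x^(-alpha), x >= x_min."""
          x = np.asarray(x, dtype=float)
          x = x[x >= x_min]
          alpha = 1.0 + len(x) / np.sum(np.log(x / x_min))
          stderr = (alpha - 1.0) / np.sqrt(len(x))
          return alpha, stderr

      # Synthetic event sizes: a mixture of two power laws, analysed with a sweep of
      # the lower cutoff so the apparent exponent varies with the cutoff.
      rng = np.random.default_rng(7)
      mix = np.concatenate([
          (1 - rng.random(5000)) ** (-1.0 / 0.7),   # exponent 1.7 component
          (1 - rng.random(5000)) ** (-1.0 / 1.5),   # exponent 2.5 component
      ])
      for x_min in (1.0, 3.0, 10.0, 30.0):
          print(x_min, power_law_mle(mix, x_min))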

  7. A survey of kernel-type estimators for copula and their applications

    NASA Astrophysics Data System (ADS)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three approaches to copula estimation: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror-reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.

  8. Detecting the contagion effect in mass killings; a constructive example of the statistical advantages of unbinned likelihood methods.

    PubMed

    Towers, Sherry; Mubayi, Anuj; Castillo-Chavez, Carlos

    2018-01-01

    When attempting to statistically distinguish between a null and an alternative hypothesis, many researchers in the life and social sciences turn to binned statistical analysis methods, or methods that are simply based on the moments of a distribution (such as the mean, and variance). These methods have the advantage of simplicity of implementation, and simplicity of explanation. However, when null and alternative hypotheses manifest themselves in subtle differences in patterns in the data, binned analysis methods may be insensitive to these differences, and researchers may erroneously fail to reject the null hypothesis when in fact more sensitive statistical analysis methods might produce a different result when the null hypothesis is actually false. Here, with a focus on two recent conflicting studies of contagion in mass killings as instructive examples, we discuss how the use of unbinned likelihood methods makes optimal use of the information in the data; a fact that has been long known in statistical theory, but perhaps is not as widely appreciated amongst general researchers in the life and social sciences. In 2015, Towers et al published a paper that quantified the long-suspected contagion effect in mass killings. However, in 2017, Lankford & Tomek subsequently published a paper, based upon the same data, that claimed to contradict the results of the earlier study. The former used unbinned likelihood methods, and the latter used binned methods, and comparison of distribution moments. Using these analyses, we also discuss how visualization of the data can aid in determination of the most appropriate statistical analysis methods to distinguish between a null and alternate hypothesis. We also discuss the importance of assessment of the robustness of analysis results to methodological assumptions made (for example, arbitrary choices of number of bins and bin widths when using binned methods); an issue that is widely overlooked in the literature, but is critical to analysis reproducibility and robustness. When an analysis cannot distinguish between a null and alternate hypothesis, care must be taken to ensure that the analysis methodology itself maximizes the use of information in the data that can distinguish between the two hypotheses. The use of binned methods by Lankford & Tomek (2017), that examined how many mass killings fell within a 14 day window from a previous mass killing, substantially reduced the sensitivity of their analysis to contagion effects. The unbinned likelihood methods used by Towers et al (2015) did not suffer from this problem. While a binned analysis might be favorable for simplicity and clarity of presentation, unbinned likelihood methods are preferable when effects might be somewhat subtle.

  9. Developing a non-point source P loss indicator in R and its parameter uncertainty assessment using GLUE: a case study in northern China.

    PubMed

    Su, Jingjun; Du, Xinzhong; Li, Xuyong

    2018-05-16

    Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes and indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of the parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language, and examined how the subjective choices of likelihood formulation and acceptability threshold in GLUE influence the model outputs. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better at accentuating high-likelihood simulations than the exponential function (L2). (3) A combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths while preserving the goodness of fit of the overall model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance high modeling efficiency against high bracketing efficiency. Results of this study could provide (1) an option for conducting NPS modeling within a single computing platform, (2) references for parameter settings in NPS model development in similar regions, (3) suggestions for applying the GLUE method in studies with different emphases, and (4) insights into watershed P management in similar regions.
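
    Since the indicator itself is not reproduced in the record, the sketch below applies the generic GLUE recipe to a hypothetical two-parameter toy model in Python: sample parameters from uniform priors, score each set with a Nash-Sutcliffe informal likelihood, retain behavioural sets above a 0.55 threshold, and form likelihood-weighted summaries.

      import numpy as np

      # Generic GLUE sketch with a stand-in model; the P indicator of the paper is
      # replaced here by a simple two-parameter toy model purely for illustration.
      def toy_model(theta, x):
          a, b = theta
          return a * x / (b + x)

      def nash_sutcliffe(obs, sim):
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(8)
      x = np.linspace(1, 20, 40)
      obs = toy_model((5.0, 3.0), x) + rng.normal(0, 0.2, x.size)

      n_samples, threshold = 20000, 0.55          # acceptability threshold as in the paper
      thetas = np.column_stack([rng.uniform(1, 10, n_samples), rng.uniform(0.5, 8, n_samples)])
      likelihoods = np.array([nash_sutcliffe(obs, toy_model(t, x)) for t in thetas])

      behavioural = likelihoods > threshold
      weights = likelihoods[behavioural] / likelihoods[behavioural].sum()
      print("behavioural fraction:", behavioural.mean())
      print("weighted parameter means:", weights @ thetas[behavioural])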

  10. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
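
    As a rough illustration of the single-equation variant, the Python sketch below fits a beta regression with a logit mean link and a common precision parameter by maximum likelihood; it does not handle spikes at 0 or 1 or the two-part and Bayesian estimators discussed in the paper, and the data are simulated.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit
      from scipy.stats import beta as beta_dist

      # Minimal single-equation beta regression by maximum likelihood, valid only
      # for outcomes strictly inside the open interval (0, 1).
      def neg_log_lik(params, X, y):
          *coef, log_phi = params
          mu = expit(X @ np.asarray(coef))          # logit link for the mean
          phi = np.exp(log_phi)                     # precision parameter
          return -np.sum(beta_dist.logpdf(y, mu * phi, (1 - mu) * phi))

      rng = np.random.default_rng(9)
      n = 500
      X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
      mu_true = expit(X @ np.array([0.8, -0.5]))
      y = rng.beta(mu_true * 20, (1 - mu_true) * 20)

      res = minimize(neg_log_lik, x0=np.zeros(3), args=(X, y), method="BFGS")
      print("coefficients:", res.x[:-1], " phi:", np.exp(res.x[-1]))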

  11. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
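    The core calculation, combining several correlated, unbiased estimates of the same eigenvalue by maximum likelihood under a multivariate normal model, reduces to a generalized-least-squares weighting by the inverse covariance matrix. In the sketch below the eigenvalue estimates, standard deviations, and correlations are made-up numbers; in practice, as the abstract notes, sample variances and correlation coefficients stand in for the population quantities.

```python
import numpy as np

def ml_combined_estimate(x, cov):
    """Maximum likelihood (minimum variance) combination of correlated,
    unbiased estimates x of the same quantity, assuming multivariate normality."""
    cov_inv = np.linalg.inv(cov)
    ones = np.ones(len(x))
    w = cov_inv @ ones / (ones @ cov_inv @ ones)   # weights sum to 1
    est = w @ x
    var = 1.0 / (ones @ cov_inv @ ones)
    return est, var, w

# Illustrative: three correlated Monte Carlo eigenvalue estimates
x = np.array([1.0012, 1.0025, 1.0018])
sd = np.array([0.0010, 0.0015, 0.0012])
corr = np.array([[1.0, 0.6, 0.5],
                 [0.6, 1.0, 0.7],
                 [0.5, 0.7, 1.0]])
cov = np.outer(sd, sd) * corr

est, var, w = ml_combined_estimate(x, cov)
print(f"combined eigenvalue = {est:.5f}  (std = {var**0.5:.5f})")
print("simple average std  =", (np.ones(3) @ cov @ np.ones(3) / 9) ** 0.5)
```

    Comparing the two printed standard deviations illustrates the variance reduction obtained relative to the simple average.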

  12. Do-it-yourself statistics: A computer-assisted likelihood approach to analysis of data from genetic crosses.

    PubMed Central

    Robbins, L G

    2000-01-01

    Graduate school programs in genetics have become so full that courses in statistics have often been eliminated. In addition, typical introductory statistics courses for the "statistics user" rather than the nascent statistician are laden with methods for analysis of measured variables while genetic data are most often discrete numbers. These courses are often seen by students and genetics professors alike as largely irrelevant cookbook courses. The powerful methods of likelihood analysis, although commonly employed in human genetics, are much less often used in other areas of genetics, even though current computational tools make this approach readily accessible. This article introduces the MLIKELY.PAS computer program and the logic of do-it-yourself maximum-likelihood statistics. The program itself, course materials, and expanded discussions of some examples that are only summarized here are available at http://www.unisi.it/ricerca/dip/bio_evol/sitomlikely/mlikely.html. PMID:10628965

  13. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated and compared by Monte Carlo simulation. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT can be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  14. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  15. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  16. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness, where the probability that a response is missing depends on its own value, is the most difficult missing data problem. Unlike the ignorable case, the statistical literature on non-ignorable missing data is sparse apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missingness probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain constrained maximum empirical likelihood estimators of the parameters in the missingness probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of data from a real AIDS trial shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based only on the observed data is biased.
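    Only Owen's basic empirical likelihood for a mean with fully observed data is sketched here, as background for the constrained estimator the paper develops for non-ignorable missingness; the data are simulated and the missingness model is not shown.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """Owen's empirical log-likelihood ratio statistic for a candidate mean mu.

    Maximizes prod(p_i) subject to sum(p_i) = 1 and sum(p_i * (x_i - mu)) = 0;
    the solution is p_i = 1 / (n * (1 + lam * (x_i - mu))).
    """
    z = x - mu
    n = len(x)
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # mu outside the convex hull of the data
    # lam must keep every 1 + lam*z_i strictly positive
    lo = (-1.0 / z.max()) + 1e-10
    hi = (-1.0 / z.min()) - 1e-10
    score = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(score, lo, hi)             # score is monotone in lam on (lo, hi)
    p = 1.0 / (n * (1.0 + lam * z))
    return -2.0 * np.sum(np.log(n * p))     # ~ chi-square(1) under H0: E[X] = mu

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=80)     # skewed data, true mean 2.0
print("EL ratio at mu=2.0 :", el_log_ratio(x, 2.0))
print("EL ratio at mu=3.0 :", el_log_ratio(x, 3.0))
```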

  17. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  18. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique structure different from traditional survival data, and the nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to them. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and are more efficient than the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  19. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  20. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    PubMed

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

    Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
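    The kernel the abstract refers to is essentially one pruning step of Felsenstein's algorithm: a loop over alignment sites with no branches and no cross-iteration dependencies. The sketch below is a plain NumPy rendering with toy transition matrices and stand-in conditional likelihood vectors, not the MrBayes or FPGA implementation.

```python
import numpy as np

def plf_node(left_cond, right_cond, p_left, p_right):
    """Core Phylogenetic Likelihood Function step (one Felsenstein pruning step).

    For every site, the conditional likelihood vector of a parent node is the
    elementwise product of the two transition-matrix / child-vector products.
    The loop body has no branches or cross-iteration dependencies, which is
    what makes it attractive for FPGA/GPU pipelining.
    """
    n_sites, n_states = left_cond.shape
    parent = np.empty((n_sites, n_states))
    for s in range(n_sites):                       # independent iterations
        parent[s] = (p_left @ left_cond[s]) * (p_right @ right_cond[s])
    return parent

# Illustrative data: 4 nucleotide states, 1000 alignment sites
rng = np.random.default_rng(3)
p_left = np.full((4, 4), 0.05) + np.eye(4) * 0.8   # toy transition matrices
p_right = np.full((4, 4), 0.1) + np.eye(4) * 0.6
tips_l = rng.dirichlet(np.ones(4), size=1000)      # stand-in child conditional vectors
tips_r = rng.dirichlet(np.ones(4), size=1000)

parent = plf_node(tips_l, tips_r, p_left, p_right)
site_lik = parent @ np.full(4, 0.25)               # root with uniform base frequencies
print("log-likelihood:", np.log(site_lik).sum())
```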

  1. Regular endurance training in adolescents impacts atrial and ventricular size and function.

    PubMed

    Rundqvist, Louise; Engvall, Jan; Faresjö, Maria; Carlsson, Emma; Blomstrand, Peter

    2017-06-01

    The aims of the study were to explore the effects of long-term endurance exercise on atrial and ventricular size and function in adolescents and to examine whether these changes are related to maximal oxygen uptake (VO2max). Twenty-seven long-term endurance-trained adolescents aged 13-19 years were individually matched by age and gender with 27 controls. All participants, 22 girls and 32 boys, underwent an echocardiographic examination at rest, including standard and colour tissue Doppler investigation. VO2max was assessed during treadmill exercise. All heart dimensions indexed for body size were larger in the physically active group compared with controls: left ventricular end-diastolic volume 60 vs. 50 mL/m2 (P < 0.001), left atrial volume 27 vs. 19 mL/m2 (P < 0.001), and right ventricular (RV) and right atrial area 15 vs. 13 and 9 vs. 7 cm2/m2, respectively (P < 0.001 for both). There were strong associations between the size of the cardiac chambers and VO2max. Further, we found improved systolic function in the active group compared with controls: left ventricular ejection fraction 61 vs. 59% (P = 0.036), tricuspid annular plane systolic excursion (TAPSE) 12 vs. 10 mm/m2 (P = 0.008), and RV early peak systolic velocity s' 11 vs. 10 cm/s (P = 0.031). Cardiac remodelling in response to long-term endurance exercise in adolescents is manifested by an increase in atrial as well as ventricular dimensions. The physically active group also demonstrated functional remodelling with an increase in TAPSE and systolic RV wall velocity. These findings have practical implications when assessing cardiac enlargement and function in physically active youngsters. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.

  2. Debris- and radiation-induced damage effects on EUV nanolithography source collector mirror optics performance

    NASA Astrophysics Data System (ADS)

    Allain, J. P.; Nieto, M.; Hendricks, M.; Harilal, S. S.; Hassanein, A.

    2007-05-01

    Exposure of collector mirrors facing the hot, dense pinch plasma in plasma-based EUV light sources to debris (fast ions, neutrals, off-band radiation, droplets) remains one of the most critical issues for source component lifetime and the commercial feasibility of nanolithography at 13.5-nm. Typical radiators used at 13.5-nm include Xe and Sn. Fast particles emerging from the pinch region of the lamp are known to induce serious damage to nearby collector mirrors. Candidate collector configurations include either multi-layer mirrors (MLM) or single-layer mirrors (SLM) used at grazing incidence. Studies at Argonne have focused on understanding the underlying mechanisms that hinder collector mirror performance at 13.5-nm under fast Sn or Xe exposure. This is made possible by a new state-of-the-art in-situ EUV reflectometry system that measures real-time relative EUV reflectivity (15-degree incidence and 13.5-nm) variation during fast particle exposure. Intense EUV light and off-band radiation are also known to contribute to mirror damage. For example, off-band radiation can couple to the mirror and induce heating, affecting the mirror's surface properties. In addition, intense EUV light can partially photo-ionize background gas (e.g., Ar or He) used for mitigation in the source device. This can lead to a local weakly ionized plasma creating a sheath and accelerating charged gas particles to the mirror surface, inducing sputtering. In this paper we study several aspects of debris- and radiation-induced damage to candidate EUVL source collector optics materials. The first study concerns the use of IMD simulations to examine the effect of surface roughness on EUV reflectivity. The second examines the effect of fast particles on MLM reflectivity at 13.5-nm. Lastly, the third examines the effect of multiple energetic sources with thermal Sn on 13.5-nm reflectivity. These studies focus on conditions that simulate the EUVL source environment in a controlled way.

  3. Implantation of the Medtronic Harmony Transcatheter Pulmonary Valve Improves Right Ventricular Size and Function in an Ovine Model of Postoperative Chronic Pulmonary Insufficiency.

    PubMed

    Schoonbeek, Rosanne C; Takebayashi, Satoshi; Aoki, Chikashi; Shimaoka, Toru; Harris, Matthew A; Fu, Gregory L; Kim, Timothy S; Dori, Yoav; McGarvey, Jeremy; Litt, Harold; Bouma, Wobbe; Zsido, Gerald; Glatz, Andrew C; Rome, Jonathan J; Gorman, Robert C; Gorman, Joseph H; Gillespie, Matthew J

    2016-10-01

    Pulmonary insufficiency is the nexus of late morbidity and mortality after transannular patch repair of tetralogy of Fallot. This study aimed to establish the feasibility of implantation of the novel Medtronic Harmony transcatheter pulmonary valve (hTPV) and to assess its effect on pulmonary insufficiency and ventricular function in an ovine model of chronic postoperative pulmonary insufficiency. Thirteen sheep underwent baseline cardiac magnetic resonance imaging, surgical pulmonary valvectomy, and transannular patch repair. One month after transannular patch repair, the hTPV was implanted, followed by serial magnetic resonance imaging and computed tomography imaging at 1, 5, and 8 months. hTPV implantation was successful in 11 animals (85%). There were 2 procedural deaths related to ventricular fibrillation. Seven animals survived the entire follow-up protocol, 5 with functioning hTPV devices. Two animals had occlusion of hTPV with aneurysm of main pulmonary artery. A strong decline in pulmonary regurgitant fraction was observed after hTPV implantation (40.5% versus 8.3%; P=0.011). Right ventricular end-diastolic volume increased by 49.4% after transannular patch repair (62.3 to 93.1 mL/m2; P=0.028) but was reversed to baseline values after hTPV implantation (to 65.1 mL/m2 at 8 months, P=0.045). Both right ventricular ejection fraction and left ventricular ejection fraction were preserved after hTPV implantation. hTPV implantation is feasible, significantly reduces pulmonary regurgitant fraction, facilitates right ventricular volume improvements, and preserves biventricular function in an ovine model of chronic pulmonary insufficiency. This percutaneous strategy could potentially offer an alternative for standard surgical pulmonary valve replacement in dilated right ventricular outflow tracts, permitting lower risk, nonsurgical pulmonary valve replacement in previously prohibitive anatomies. © 2016 American Heart Association, Inc.

  4. Right ventricular remodelling after transcatheter pulmonary valve implantation.

    PubMed

    Pagourelias, Efstathios D; Daraban, Ana M; Mada, Razvan O; Duchenne, Jürgen; Mirea, Oana; Cools, Bjorn; Heying, Ruth; Boshoff, Derize; Bogaert, Jan; Budts, Werner; Gewillig, Marc; Voigt, Jens-Uwe

    2017-09-01

    To define the optimal timing for percutaneous pulmonary valve implantation (PPVI) in patients with severe pulmonary regurgitation (PR) after Fallot's Tetralogy (ToF) correction. PPVI among the aforementioned patients is mainly driven by symptoms or by severe right ventricular (RV) dilatation/dysfunction. The optimal timing for PPVI is still disputed. Twenty patients [age 13.9 ± 9.2 years (range 4.3-44.9), male 70%] with severe PR (≥3 grade) secondary to previous correction of ToF underwent Melody valve (Medtronic, Minneapolis, MN) implantation, after a pre-stent placement. Full echocardiographic assessment (traditional and deformation analysis) and cardiovascular magnetic resonance evaluation were performed before and at 3 months after the intervention. 'Favorable remodelling' was considered the upper quartile of RV size decrease (>20% in 3 months). After PPVI, indexed RV effective stroke volume increased from 38.4 ± 9.5 to 51.4 ± 10.7 mL/m2 (P = 0.005), while RV end-diastolic volume and strain indices decreased (123.1 ± 24.1 to 101.5 ± 18.3 mL/m2, P = 0.005, and -23.5 ± 2.5 to -21 ± 2.5%, P = 0.002, respectively). After inserting pre-PPVI clinical, RV volumetric and deformation parameters in a multiple regression model, only time after the last surgical correction causing PR remained a significant regressor of RV remodelling [R2 = 0.60, beta = 0.387, 95% CI 0.07-0.7, P = 0.019]. Volume reduction and functional improvement were more pronounced in patients treated with PPVI earlier than 7 years after the last RV outflow tract (RVOT) correction, reaching close-to-normal values. Early PPVI (<7 years after the last RVOT operation) is associated with more favorable RV reverse remodelling toward the normal range and should be considered before symptoms or RV damage become apparent. © 2017 Wiley Periodicals, Inc.

  5. Right and Left Ventricular Function and Mass in Male Elite Master Athletes: A Controlled Contrast-Enhanced Cardiovascular Magnetic Resonance Study.

    PubMed

    Bohm, Philipp; Schneider, Günther; Linneweber, Lutz; Rentzsch, Axel; Krämer, Nadine; Abdul-Khaliq, Hashim; Kindermann, Wilfried; Meyer, Tim; Scharhag, Jürgen

    2016-05-17

    It is under debate whether the cumulative effects of intensive endurance exercise induce chronic cardiac damage, mainly involving the right heart. The aim of this study was to examine the cardiac structure and function in long-term elite master endurance athletes with special focus on the right ventricle by contrast-enhanced cardiovascular magnetic resonance. Thirty-three healthy white competitive elite male master endurance athletes (age range, 30-60 years) with a training history of 29±8 years, and 33 white control subjects pair-matched for age, height, and weight underwent cardiopulmonary exercise testing, echocardiography including tissue-Doppler imaging and speckle tracking, and cardiovascular magnetic resonance. Indexed left ventricular mass and right ventricular mass (left ventricular mass/body surface area, 96±13 and 62±10 g/m(2); P<0.001; right ventricular mass/body surface area, 36±7 and 24±5 g/m(2); P<0.001) and indexed left ventricular end-diastolic volume and right ventricular end-diastolic volume (left ventricular end-diastolic volume/body surface area, 104±13 and 69±18 mL/m(2); P<0.001; right ventricular end-diastolic volume/body surface area, 110±22 and 66±16 mL/m(2); P<0.001) were significantly increased in athletes in comparison with control subjects. Right ventricular ejection fraction did not differ between athletes and control subjects (52±8 and 54±6%; P=0.26). Pathological late enhancement was detected in 1 athlete. No correlations were found for left ventricular and right ventricular volumes and ejection fraction with N-terminal pro-brain natriuretic peptide, and high-sensitive troponin was negative in all subjects. Based on our results, chronic right ventricular damage in elite endurance master athletes with lifelong high training volumes seems to be unlikely. Thus, the hypothesis of an exercise-induced arrhythmogenic right ventricular cardiomyopathy has to be questioned. © 2016 American Heart Association, Inc.

  6. Flow-gradient patterns in severe aortic stenosis with preserved ejection fraction: clinical characteristics and predictors of survival.

    PubMed

    Eleid, Mackram F; Sorajja, Paul; Michelena, Hector I; Malouf, Joseph F; Scott, Christopher G; Pellikka, Patricia A

    2013-10-15

    Among patients with severe aortic stenosis (AS) and preserved ejection fraction, those with low gradient (LG) and reduced stroke volume may have an adverse prognosis. We investigated the prognostic impact of stroke volume using the recently proposed flow-gradient classification. We examined 1704 consecutive patients with severe AS (aortic valve area <1.0 cm(2)) and preserved ejection fraction (≥50%) using 2-dimensional and Doppler echocardiography. Patients were stratified by stroke volume index (<35 mL/m(2) [low flow, LF] versus ≥35 mL/m(2) [normal flow, NF]) and aortic gradient (<40 mm Hg [LG] versus ≥40 mm Hg [high gradient, HG]) into 4 groups: NF/HG, NF/LG, LF/HG, and LF/LG. NF/LG (n=352, 21%), was associated with favorable survival with medical management (2-year estimate, 82% versus 67% in NF/HG; P<0.0001). LF/LG severe AS (n=53, 3%) was characterized by lower ejection fraction, more prevalent atrial fibrillation and heart failure, reduced arterial compliance, and reduced survival (2-year estimate, 60% versus 82% in NF/HG; P<0.001). In multivariable analysis, the LF/LG pattern was the strongest predictor of mortality (hazard ratio, 3.26; 95% confidence interval, 1.71-6.22; P<0.001 versus NF/LG). Aortic valve replacement was associated with a 69% mortality reduction (hazard ratio, 0.31; 95% confidence interval, 0.25-0.39; P<0.0001) in LF/LG and NF/HG, with no survival benefit associated with aortic valve replacement in NF/LG and LF/HG. NF/LG severe AS with preserved ejection fraction exhibits favorable survival with medical management, and the impact of aortic valve replacement on survival was neutral. LF/LG severe AS is characterized by a high prevalence of atrial fibrillation, heart failure, and reduced survival, and aortic valve replacement was associated with improved survival. These findings have implications for the evaluation and subsequent management of AS severity.

  7. Low Plasma Volume in Normotensive Formerly Preeclamptic Women Predisposes to Hypertension.

    PubMed

    Scholten, Ralph R; Lotgering, Fred K; Hopman, Maria T; Van Dijk, Arie; Van de Vlugt, Maureen; Janssen, Mirian C H; Spaanderman, Marc E A

    2015-11-01

    Formerly preeclamptic women are at risk for cardiovascular disease. Low plasma volume may reflect latent hypertension and potentially links preeclampsia with chronic cardiovascular disease. We hypothesized that low plasma volume in normotensive formerly preeclamptic women predisposes to hypertension. We longitudinally studied n=104 formerly preeclamptic women in whom plasma volume was measured 3 to 30 months after the preeclamptic pregnancy. Cardiovascular variables were assessed at 2 points in time (3-30 months postpartum and 2-5 years thereafter). Study population was divided into low plasma volume (≤1373 mL/m(2)) and normal plasma volume (>1373 mL/m(2)). Primary end point was hypertension at the second visit: defined as ≥140 mm Hg systolic or ≥90 mm Hg diastolic. Secondary outcome of this study was change in traditional cardiovascular risk profile between visits. Variables correlating univariately with change in blood pressure between visits were introduced in regression analysis. Eighteen of 104 (17%) formerly preeclamptic women who were normotensive at first visit had hypertension at second evaluation 2 to 5 years later. Hypertension developed more often in women with low plasma volume (10/35 [29%]) than in women with normal plasma volume (8/69 [12%]; odds ratio, 3.2; 95% confidence interval, 1.4-8.6). After adjustments, relationship between plasma volume status and subsequent hypertension persisted (adjusted odds ratio, 3.0; 95% confidence interval, 1.1-8.5). Mean arterial pressure at second visit correlated inverse linearly with plasma volume (r=-0.49; P<0.01). Initially normotensive formerly preeclamptic women have 17% chance to develop hypertension within 5 years. Women with low plasma volume have higher chance to develop hypertension than women with normal plasma volume. Clinically, follow-up of blood pressure seems warranted in women with history of preeclampsia, even when initially normotensive. © 2015 American Heart Association, Inc.

  8. Efficacy of Aquatain, a Monomolecular Film, for the Control of Malaria Vectors in Rice Paddies

    PubMed Central

    Bukhari, Tullu; Takken, Willem; Githeko, Andrew K.; Koenraadt, Constantianus J. M.

    2011-01-01

    Background Rice paddies harbour a large variety of organisms including larvae of malaria mosquitoes. These paddies are challenging for mosquito control because their large size, slurry and vegetation make it difficult to effectively apply a control agent. Aquatain, a monomolecular surface film, can be considered a suitable mosquito control agent for such breeding habitats due to its physical properties. The properties allow Aquatain to self-spread over a water surface and affect multiple stages of the mosquito life cycle. Methodology/Principal Findings A trial based on a pre-test/post-test control group design evaluated the potential of Aquatain as a mosquito control agent at Ahero rice irrigation scheme in Kenya. After Aquatain application at a dose of 2 ml/m2 on rice paddies, early stage anopheline larvae were reduced by 36%, and late stage anopheline larvae by 16%. However, even at a lower dose of 1 ml/m2 there was a 93.2% reduction in emergence of anopheline adults and 69.5% reduction in emergence of culicine adults. No pupation was observed in treated buckets that were part of a field bio-assay carried out parallel to the trial. Aquatain application saved nearly 1.7 L of water in six days from a water surface of 0.2 m2 under field conditions. Aquatain had no negative effect on rice plants as well as on a variety of non-target organisms, except backswimmers. Conclusions/Significance We demonstrated that Aquatain is an effective agent for the control of anopheline and culicine mosquitoes in irrigated rice paddies. The agent reduced densities of aquatic larval stages and, more importantly, strongly impacted the emergence of adult mosquitoes. Aquatain also reduced water loss due to evaporation. No negative impacts were found on either abundance of non-target organisms, or growth and development of rice plants. Aquatain, therefore, appears a suitable mosquito control tool for use in rice agro-ecosystems. PMID:21738774

  9. Significantly Elevated C-Reactive Protein Levels After Epicardial Clipping of the Left Atrial Appendage.

    PubMed

    Verberkmoes, Niels J; Akca, Ferdi; Vandevenne, Ann-Sofie; Jacobs, Luuk; Soliman Hamad, Mohamed A; van Straten, Albert H M

    Besides mechanical and anatomical changes of the left atrium, epicardial closure of the left atrial appendage has also possible homeostatic effects. The aim of this study was to assess whether epicardial clipping of the left atrial appendage has different biochemical effects compared with complete removal of the left atrial appendage. Eighty-two patients were included and underwent a totally thoracoscopic AF ablation procedure. As part of the procedure, the left atrial appendage was excluded with an epicardial clip (n = 57) or the left atrial appendage was fully amputated with an endoscopic vascular stapler (n = 25). From all patients' preprocedural and postprocedural blood pressure, electrolytes and inflammatory parameters were collected. The mean age and left atrial volume index were comparable between the epicardial clip and stapler group (64 ± 8 years vs. 60 ± 9 years, P = non-significant; 44 ± 15 mL/m vs. 40 ± 13 mL/m, P = non-significant). Patients receiving left atrial appendage clipping had significantly elevated C-reactive protein levels compared with patients who had left atrial appendage stapling at the second, third, and fourth postoperative day (225 ± 84 mg/L vs. 149 ± 76 mg/L, P = 0.002, 244 ± 78 vs. 167 ± 76, P = 0.004, 190 ± 74 vs. 105 ± 48, P < 0.001, respectively). Patients had a significant decrease in sodium levels, systolic, and diastolic blood pressure at 24 and 72 hours after left atrial appendage closure. However, this was comparable for both the left atrial appendage clipping and stapling group. Increased activation of the inflammatory response was observed after left atrial appendage clipping compared with left atrial appendage stapling. Furthermore, a significant decrease in blood pressure was observed after surgical removal of the left atrial appendage. Whether the inflammatory response affects the outcome of arrhythmia surgery needs to be further evaluated.

  10. Using Arden Syntax for the creation of a multi-patient surveillance dashboard.

    PubMed

    Kraus, Stefan; Drescher, Caroline; Sedlmayr, Martin; Castellanos, Ixchel; Prokosch, Hans-Ulrich; Toddenroth, Dennis

    2015-10-09

    Most practically deployed Arden-Syntax-based clinical decision support (CDS) modules process data from individual patients. The specification of Arden Syntax, however, would in principle also support multi-patient CDS. The patient data management system (PDMS) at our local intensive care units does not natively support patient overviews from customizable CDS routines, but local physicians indicated a demand for multi-patient tabular overviews of important clinical parameters such as key laboratory measurements. As our PDMS installation provides Arden Syntax support, we set out to explore the capability of Arden Syntax for multi-patient CDS by implementing a prototypical dashboard for visualizing laboratory findings from patient sets. Our implementation leveraged the object data type, supported by later versions of Arden, which turned out to be serviceable for representing complex input data from several patients. For our prototype, we designed a modularized architecture that separates the definition of technical operations, in particular the control of the patient context, from the actual clinical knowledge. Individual Medical Logic Modules (MLMs) for processing single patient attributes could then be developed according to well-tried Arden Syntax conventions. We successfully implemented a working dashboard prototype entirely in Arden Syntax. The architecture consists of a controller MLM to handle the patient context, a presenter MLM to generate a dashboard view, and a set of traditional MLMs containing the clinical decision logic. Our prototype could be integrated into the graphical user interface of the local PDMS. We observed that with realistic input data the average execution time of about 200 ms for generating dashboard views was adequate for practical use. Our study demonstrated the general feasibility of creating multi-patient CDS routines in Arden Syntax. We believe that our prototypical dashboard also suggests that such implementations can be relatively easy, and may simultaneously hold promise for sharing dashboards between institutions and reusing elementary components for additional dashboards. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach to treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). Estimating marginal likelihoods is therefore crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE, but M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAMzs, can be integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
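    A minimal nested-sampling sketch may help fix ideas: live points shrink the prior volume geometrically while the evidence is accumulated, and the quality of the estimator hinges on the constrained local-sampling step (naive rejection here, where the paper substitutes a DREAMzs-type sampler). The toy Gaussian likelihood and uniform prior are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_like(theta):
    """Toy 2-D Gaussian likelihood standing in for a groundwater model misfit."""
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01

def sample_prior(n):
    return rng.uniform(0.0, 1.0, size=(n, 2))      # uniform prior on the unit square

def nested_sampling(n_live=200, n_iter=1200):
    """Basic nested sampling estimate of the marginal likelihood (evidence)."""
    live = sample_prior(n_live)
    live_ll = np.array([log_like(t) for t in live])
    log_z, log_x = -np.inf, 0.0                     # log evidence, log prior volume
    for _ in range(n_iter):
        worst = np.argmin(live_ll)
        log_w = log_x + np.log(1.0 - np.exp(-1.0 / n_live))
        log_z = np.logaddexp(log_z, log_w + live_ll[worst])
        log_x -= 1.0 / n_live                       # geometric shrinkage of the volume
        # local sampling step: replace the worst point by a prior draw above the threshold
        while True:
            cand = sample_prior(1)[0]
            if log_like(cand) > live_ll[worst]:
                live[worst], live_ll[worst] = cand, log_like(cand)
                break
    # add the remaining live points' contribution
    log_z = np.logaddexp(log_z, log_x - np.log(n_live) + np.logaddexp.reduce(live_ll))
    return log_z

print("log evidence ~", nested_sampling())
# Analytic check for this toy case: Z ~ 2*pi*0.01, i.e. log Z ~ -2.77
```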

  12. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
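    The contrast between marginal-likelihood estimators can be reproduced on a toy conjugate-normal model where power posteriors can be sampled exactly and the true evidence is available by quadrature; the sketch below compares the harmonic mean estimator with a simple stepping-stone estimator. The model, data, and temperature schedule are illustrative assumptions, not the phylogenetic setting of the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(5)
x = rng.normal(loc=1.0, scale=1.0, size=20)        # data; model: x_i ~ N(theta, 1)

def log_lik(theta):
    return np.sum(norm.logpdf(x[:, None], loc=theta, scale=1.0), axis=0)

# True marginal likelihood by quadrature (prior: theta ~ N(0, 1))
true_z, _ = quad(lambda t: np.exp(log_lik(np.array([t]))[0]) * norm.pdf(t), -10, 10)

def power_posterior_samples(beta, n=5000):
    """Exact draws from p_beta(theta), proportional to prior * likelihood**beta."""
    prec = 1.0 + beta * len(x)
    mean = beta * x.sum() / prec
    return rng.normal(mean, prec ** -0.5, size=n)

# harmonic mean estimator (uses posterior samples only, i.e. power 1)
ll_post = log_lik(power_posterior_samples(1.0))
m = ll_post.min()
log_hme = m - np.log(np.mean(np.exp(-(ll_post - m))))

# stepping-stone estimator over a ladder of power posteriors
betas = np.linspace(0.0, 1.0, 33) ** 3             # more stones near the prior
log_ss = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    ll = log_lik(power_posterior_samples(b0))
    log_ss += (b1 - b0) * ll.max() + np.log(np.mean(np.exp((b1 - b0) * (ll - ll.max()))))

print("true log Z          :", np.log(true_z))
print("stepping-stone      :", log_ss)
print("harmonic mean (HME) :", log_hme)            # typically biased upward / unstable
```

    Running this repeatedly shows the pattern reported in the abstract: the harmonic mean estimate sits well above the stepping-stone value, which tracks the quadrature truth closely.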

  13. A simulation study on Bayesian Ridge regression models for several collinearity levels

    NASA Astrophysics Data System (ADS)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, collinearity is usually handled by omitting one or several predictor variables from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why all predictors are important and should be kept in the model. Ridge regression is commonly used in such cases to cope with collinearity: penalty weights on the predictor variables are introduced when estimating the parameters. Estimation can then proceed by likelihood methods, but a Bayesian version offers an alternative. Bayesian estimation has historically been less popular than likelihood estimation because of computational and related difficulties; with recent improvements in computational methodology, however, this is no longer a serious obstacle. This paper presents a simulation study evaluating the characteristics of Bayesian Ridge regression parameter estimates under several settings that vary the collinearity level and the sample size. The results show that the Bayesian method performs better for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
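    A small sketch of the phenomenon under study: with highly collinear predictors and a small sample, ordinary least squares coefficients become unstable while a Bayesian ridge fit shrinks them toward more reasonable values. The simulated design is an assumption for illustration, and scikit-learn's BayesianRidge uses evidence maximization rather than the full MCMC a Bayesian analysis might employ.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(6)
n = 40                                              # relatively small sample
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n)  # three highly collinear predictors
                     for _ in range(3)])
beta_true = np.array([1.0, 0.5, -0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

ols = LinearRegression().fit(X, y)
bayes = BayesianRidge().fit(X, y)                   # gamma hyperpriors on the precisions

print("condition number of X'X:", np.linalg.cond(X.T @ X).round(1))
print("OLS coefficients       :", ols.coef_.round(2))
print("Bayesian ridge coefs   :", bayes.coef_.round(2))
```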

  14. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  15. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  16. Empirical Bayes Gaussian likelihood estimation of exposure distributions from pooled samples in human biomonitoring.

    PubMed

    Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng

    2014-12-10

    Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
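    The central idea, replacing the intractable distribution of lognormal pool averages with a Gaussian likelihood whose mean and variance are implied by the lognormal parameters, can be sketched for a single demographic group as below. The pooling design and parameter values are hypothetical, and the mixed-effect empirical Bayes extension is not shown.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
mu_true, sigma_true, pool_size, n_pools = 1.0, 0.6, 8, 150

# simulate pooled biomonitoring data: only the pool averages are observed
individuals = rng.lognormal(mu_true, sigma_true, size=(n_pools, pool_size))
pool_means = individuals.mean(axis=1)

def neg_gaussian_loglik(params):
    """Gaussian approximation to the intractable distribution of pool averages.

    If X ~ lognormal(mu, sigma), the pool mean of pool_size individuals has
      E = exp(mu + sigma^2/2),
      Var = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) / pool_size.
    """
    mu, log_sigma = params
    s2 = np.exp(log_sigma) ** 2
    m = np.exp(mu + s2 / 2)
    v = (np.exp(s2) - 1) * np.exp(2 * mu + s2) / pool_size
    return -np.sum(norm.logpdf(pool_means, loc=m, scale=np.sqrt(v)))

fit = minimize(neg_gaussian_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"estimated mu = {mu_hat:.3f} (true {mu_true}), sigma = {sigma_hat:.3f} (true {sigma_true})")
```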

  17. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  18. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by increasing model complexity and growing numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis was conducted within the standard GLUE framework. To demonstrate the advantage of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.

  19. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
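    A one-dimensional analogue of the penalized-likelihood idea is sketched below: a Gaussian data-fidelity term plus an edge-preserving Huber penalty on neighbouring differences, with one parameter (lam, a hypothetical name) controlling smoothing strength and the other (delta) controlling how large a jump is preserved. This is not the authors' tomographic reconstruction or their data-driven parameter selection.

```python
import numpy as np
from scipy.optimize import minimize

def huber(t, delta):
    """Edge-preserving Huber penalty: quadratic for small |t|, linear for large |t|."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

def objective(f, y, lam, delta):
    """Penalized (Gaussian) negative log-likelihood for a 1-D attenuation profile.

    lam controls how strongly noisy fluctuations are smoothed; delta controls how
    large a difference between neighbours is still treated as noise rather than a jump.
    """
    data_term = 0.5 * np.sum((f - y) ** 2)
    penalty = np.sum(huber(np.diff(f), delta))
    return data_term + lam * penalty

# synthetic piecewise-constant "residual attenuation" profile with noise
rng = np.random.default_rng(8)
truth = np.concatenate([np.zeros(40), np.full(30, 1.0), np.full(30, 0.3)])
y = truth + rng.normal(scale=0.15, size=truth.size)

lam, delta = 4.0, 0.2                                 # the two tuning parameters
res = minimize(objective, x0=y.copy(), args=(y, lam, delta), method="L-BFGS-B")
recon = res.x
print("RMS error, noisy :", np.sqrt(np.mean((y - truth) ** 2)).round(3))
print("RMS error, Huber :", np.sqrt(np.mean((recon - truth) ** 2)).round(3))
```

    Varying lam and delta on this toy problem mirrors the trade-off the abstract describes between smoothing noisy fluctuations and preserving jumps.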

  20. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades, making better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI) to reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
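    The sampler described here, an affine-invariant ensemble of walkers, is the same algorithm implemented by the emcee package, so a hedged sketch can be written against that library: a toy linear model with a Student's t likelihood stands in for the lithographic model, and the flat priors, bounds, and parameter names are assumptions for illustration.

```python
import numpy as np
import emcee
from scipy.stats import t as student_t

rng = np.random.default_rng(9)
dose = np.linspace(0, 1, 30)                          # stand-in for a lithographic input
cd_obs = 2.0 + 1.5 * dose + 0.05 * rng.standard_t(df=4, size=dose.size)

def log_posterior(theta):
    """Bayes' rule up to a constant: flat priors within bounds plus a Student's t likelihood."""
    a, b, log_scale = theta
    if not (-10 < a < 10 and -10 < b < 10 and -6 < log_scale < 2):
        return -np.inf
    resid = cd_obs - (a + b * dose)
    return np.sum(student_t.logpdf(resid, df=4, scale=np.exp(log_scale)))

ndim, nwalkers = 3, 32
start = np.array([1.0, 1.0, -2.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)   # affine-invariant ensemble
sampler.run_mcmc(start, 3000, progress=False)

chain = sampler.get_chain(discard=1000, flat=True)
for name, samples in zip(["intercept", "slope", "log_scale"], chain.T):
    lo, hi = np.percentile(samples, [5.5, 94.5])      # percentile interval, not a true HDI
    print(f"{name:10s}: {samples.mean():.3f}  [{lo:.3f}, {hi:.3f}]")
```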

  1. MrBayes tgMC3++: A High Performance and Resource-Efficient GPU-Oriented Phylogenetic Analysis Method.

    PubMed

    Ling, Cheng; Hamada, Tsuyoshi; Gao, Jingyang; Zhao, Guoguang; Sun, Donghong; Shi, Weifeng

    2016-01-01

    MrBayes is a widely used phylogenetic inference tool harnessing empirical evolutionary models and Bayesian statistics. However, the computational cost of likelihood estimation is very high, resulting in undesirably long execution times. Although a number of multi-threaded optimizations have been proposed to speed up MrBayes, there are bottlenecks that severely limit the GPU thread-level parallelism of likelihood estimations. This study proposes a high-performance and resource-efficient method for GPU-oriented parallelization of likelihood estimations. Instead of having to rely on empirical programming, the proposed novel decomposition storage model implements high-performance data transfers implicitly. In terms of performance improvement, a speedup factor of up to 178 can be achieved on the analysis of simulated datasets with four Tesla K40 cards. In comparison to the other publicly available GPU-oriented versions of MrBayes, the tgMC3++ method (proposed herein) outperforms the tgMC3 (v1.0), nMC3 (v2.1.1) and oMC3 (v1.00) methods by speedup factors of up to 1.6, 1.9 and 2.9, respectively. Moreover, tgMC3++ supports more evolutionary models and gamma categories, which previous GPU-oriented methods fail to handle.

  2. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even with considerable reductions in sampling rate and/or record length. Small-amplitude pulse maneuvers showed greater degradation of the derivative estimates than large-amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a means of lessening the total computation time required without greatly degrading the quality of the estimates.

  3. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component turbulence model.

  4. Discoveries far from the lamppost with matrix elements and ranking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debnath, Dipsikha; Gainer, James S.; Matchev, Konstantin T.

    2015-04-01

    The prevalence of null results in searches for new physics at the LHC motivates the effort to make these searches as model-independent as possible. We describe procedures for adapting the Matrix Element Method for situations where the signal hypothesis is not known a priori. We also present general and intuitive approaches for performing analyses and presenting results, which involve the flattening of background distributions using likelihood information. The first flattening method involves ranking events by background matrix element, the second involves quantile binning with respect to likelihood (and other) variables, and the third method involves reweighting histograms by the inverse of the background distribution.

  5. Nonlinear phase noise tolerance for coherent optical systems using soft-decision-aided ML carrier phase estimation enhanced with constellation partitioning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen

    2018-02-01

    A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.

  6. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  7. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; and (3) the two-parameter log-normal distribution and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the likelihood-based confidence limits were wider than those from the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
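
    A minimal sketch of the constrained-optimization idea is given below, assuming a Gumbel model for annual maxima and synthetic data (not the Itajai-Acu records). The T-year quantile is held fixed while the remaining parameter is profiled out with Nelder-Mead, and the 95% interval is the set of quantile values whose profile deviance stays within the chi-square cutoff.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r, chi2

def gumbel_negloglik(params, x):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(gumbel_r.logpdf(x, loc=mu, scale=sigma))

def profile_negloglik_for_quantile(xT, x, T):
    """Profile out sigma with the T-year quantile held fixed:
    mu = xT + sigma * log(-log(1 - 1/T))."""
    c = np.log(-np.log(1.0 - 1.0 / T))
    def nll_sigma(s):
        if s[0] <= 0:
            return np.inf
        return gumbel_negloglik((xT + s[0] * c, s[0]), x)
    return minimize(nll_sigma, x0=[np.std(x)], method="Nelder-Mead").fun

# Hypothetical annual-maximum series
rng = np.random.default_rng(1)
ams = gumbel_r.rvs(loc=500.0, scale=120.0, size=40, random_state=rng)
T = 100
fit = minimize(gumbel_negloglik, x0=[np.mean(ams), np.std(ams)],
               method="Nelder-Mead", args=(ams,))
mu_hat, sig_hat = fit.x
xT_hat = mu_hat - sig_hat * np.log(-np.log(1 - 1.0 / T))

# 95% profile-likelihood interval: quantiles whose deviance stays below chi2(1)
cutoff = fit.fun + 0.5 * chi2.ppf(0.95, df=1)
grid = np.linspace(0.8 * xT_hat, 1.6 * xT_hat, 200)
inside = [q for q in grid if profile_negloglik_for_quantile(q, ams, T) <= cutoff]
print(f"x_100 ~ {xT_hat:.0f}, 95% CI ~ [{min(inside):.0f}, {max(inside):.0f}]")
```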

  8. Statistical inference of static analysis rules

    NASA Technical Reports Server (NTRS)

    Engler, Dawson Richards (Inventor)

    2009-01-01

    Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of validity is determined for each code instance as a function of the counted numbers of observances and violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
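
    As an illustration of the counting idea (not the patented implementation), the sketch below scores each inferred rule by the fraction of related code instances that observe it and reports violations of the most plausible rules first; the rule names and counts are hypothetical.

```python
from typing import NamedTuple

class RuleStats(NamedTuple):
    observances: int
    violations: int

def validity_likelihood(stats: RuleStats) -> float:
    """Simple plausibility score: the fraction of related sites that observe the
    inferred rule. High values suggest the rule is real, so its violations are
    likely bugs. (A z-score or binomial test could be substituted here.)"""
    total = stats.observances + stats.violations
    return stats.observances / total if total else 0.0

# Hypothetical counts for inferred rules such as "lock(x) must be followed by unlock(x)"
rules = {
    "lock/unlock pairing": RuleStats(observances=212, violations=3),
    "check return of foo()": RuleStats(observances=5, violations=4),
}
# Report violations of the most believable rules first
for name, stats in sorted(rules.items(), key=lambda kv: validity_likelihood(kv[1]), reverse=True):
    print(f"{name}: validity={validity_likelihood(stats):.2f}, violations={stats.violations}")
```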

  9. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.

  10. Methods, apparatus and system for selective duplication of subtasks

    DOEpatents

    Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2016-03-29

    A method for selective duplication of subtasks in a high-performance computing system includes: monitoring a health status of one or more nodes in a high-performance computing system, where one or more subtasks of a parallel task execute on the one or more nodes; identifying one or more nodes as having a likelihood of failure which exceeds a first prescribed threshold; selectively duplicating the one or more subtasks that execute on the one or more nodes having a likelihood of failure which exceeds the first prescribed threshold; and notifying a messaging library that one or more subtasks were duplicated.
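
    A minimal sketch of the selection step is shown below; the threshold value, node fields, and health metric are hypothetical stand-ins for whatever the monitoring layer provides, and notification of the messaging library is only indicated by a comment.

```python
from dataclasses import dataclass

FAILURE_THRESHOLD = 0.05   # hypothetical "first prescribed threshold"

@dataclass
class Node:
    name: str
    failure_probability: float   # from health monitoring (e.g., ECC error rate)
    subtasks: list

def select_duplications(nodes):
    """Return the subtasks to duplicate because their host node's estimated
    likelihood of failure exceeds the threshold."""
    to_duplicate = []
    for node in nodes:
        if node.failure_probability > FAILURE_THRESHOLD:
            to_duplicate.extend(node.subtasks)
    return to_duplicate

nodes = [Node("n01", 0.01, ["rank-0"]), Node("n02", 0.12, ["rank-1", "rank-2"])]
duplicated = select_duplications(nodes)
print("duplicating:", duplicated)      # the messaging library would be notified here
```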

  11. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  12. Quantum state estimation when qubits are lost: a no-data-left-behind approach

    DOE PAGES

    Williams, Brian P.; Lougovski, Pavel

    2017-04-06

    We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.

  13. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  14. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios, as measured by the log-likelihood-ratio cost (Cllr), in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
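
    The core modelling choice, replacing a kernel density estimate of the between-source distribution with a Gaussian mixture, can be sketched with scikit-learn as below. The data are synthetic stand-ins for per-source mean feature vectors, and held-out log-density is used here only as a simple proxy for the calibration comparison reported in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import train_test_split

# Hypothetical background data: one mean feature vector per known source
rng = np.random.default_rng(0)
means = np.concatenate([rng.normal(0, 1, size=(300, 3)),
                        rng.normal(3, 0.5, size=(200, 3))])   # two "populations"
train, held_out = train_test_split(means, test_size=0.3, random_state=0)

# Two candidate models for the between-source distribution
kde = KernelDensity(bandwidth=0.5).fit(train)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(train)

# The model giving higher held-out log-density fits the between-source variation
# better, which in the paper translates into better-calibrated likelihood ratios.
print("KDE mean log-density:", kde.score_samples(held_out).mean())
print("GMM mean log-density:", gmm.score_samples(held_out).mean())
```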

  15. UPLC/MS MS data of testosterone metabolites in human and zebrafish liver microsomes and whole zebrafish larval microsomes.

    PubMed

    Saad, Moayad; Bijttebier, Sebastiaan; Matheeussen, An; Verbueken, Evy; Pype, Casper; Casteleyn, Christophe; Van Ginneken, Chris; Maes, Louis; Cos, Paul; Van Cruchten, Steven

    2018-02-01

    This article presents data regarding a study published in Toxicology in Vitro entitled "In vitro CYP-mediated drug metabolism in the zebrafish (embryo) using human reference compounds" (Saad et al., 2017) [1]. Data were acquired with ultra-performance liquid chromatography - accurate-mass mass spectrometry (UPLC-amMS). A full-spectrum scan was conducted for the testosterone (TST) metabolites from the microsomal stability assay in zebrafish and humans. The microsomal proteins were extracted from adult zebrafish male (MLM) and female (FLM) livers, whole-body homogenates of 96 h post-fertilization larvae (EM) and a pool of human liver microsomes from 50 donors (HLM). Data are expressed as the abundance from the extracted ion chromatogram of the metabolites.

  16. [A possibility of using increased oxygen consumption as a criterion for mechanical respiration weaning in pediatric practice].

    PubMed

    Grigoliia, G N; Chokhonelidze, I K; Gvelesiani, L G; Sulakvelidze, K R; Tutberidze, K N

    2007-01-01

    The body oxygen consumption and the oxygen cost of breathing (the difference in oxygen consumption measured during controlled ventilation and during spontaneous ventilation) were measured in 46 children with congenital heart diseases after open-heart surgery. There was a significant exponential correlation between the body oxygen consumption (ml/m²/min) and the oxygen cost of breathing as a percentage of total oxygen consumption during spontaneous ventilation and the duration of weaning in minutes (r = +0.882, p < 0.02). Therefore, as the oxygen cost of breathing was correlated with the total weaning time, it may be a useful index of the weaning process (sensitivity 92%, specificity 85%).

  17. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
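
    A toy sketch of the cell-by-cell exploration idea follows: starting from a peak cell, neighbouring grid cells are expanded in order of decreasing log-likelihood, and expansion stops once a cell falls below a threshold relative to the peak. This is a simplified, serial illustration of the strategy, not the parallelized Snake implementation.

```python
import heapq

def snake_explore(loglike, start, ndim, delta_cut):
    """Map a gridded log-likelihood cell by cell in decreasing-likelihood order,
    stopping expansion once a cell drops more than `delta_cut` below the
    starting (peak) cell. `loglike` takes a tuple of integer grid indices."""
    start = tuple(start)
    peak = loglike(start)
    visited = {start: peak}
    frontier = [(-peak, start)]              # max-heap via negated values
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)
        if -neg_ll < peak - delta_cut:
            continue                         # negligible likelihood: do not expand
        for dim in range(ndim):
            for step in (-1, 1):
                nbr = list(cell)
                nbr[dim] += step
                nbr = tuple(nbr)
                if nbr not in visited:
                    visited[nbr] = loglike(nbr)
                    heapq.heappush(frontier, (-visited[nbr], nbr))
    return visited

# Toy 2-parameter Gaussian log-likelihood on a grid of 0.1-wide cells
loglike = lambda idx: -0.5 * ((0.1 * idx[0]) ** 2 + (0.05 * idx[1]) ** 2)
cells = snake_explore(loglike, start=(0, 0), ndim=2, delta_cut=12.5)
print(len(cells), "cells evaluated")
```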

  18. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.

  19. Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2018-01-01

    Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons.

  20. Preexposure Prophylaxis Modality Preferences Among Men Who Have Sex With Men and Use Social Media in the United States.

    PubMed

    Hall, Eric William; Heneine, Walid; Sanchez, Travis; Sineath, Robert Craig; Sullivan, Patrick

    2016-05-19

    Preexposure prophylaxis (PrEP) is available as a daily pill for preventing infection with the human immunodeficiency virus (HIV). Innovative methods of administering PrEP systemically or topically are being discussed and developed. The objective of our study was to assess attitudes toward different experimental modalities of PrEP administration. From April to July 2015, we recruited 1106 HIV-negative men who have sex with men through online social media advertisements and surveyed them about their likelihood of using different PrEP modalities. Participants responded to 5-point Likert-scale items indicating how likely they were to use each of the following PrEP modalities: a daily oral pill, on-demand pills, periodic injection, penile gel (either before or after intercourse), rectal gel (before/after), and rectal suppository (before/after). We used Wilcoxon signed rank tests to determine whether the stated likelihood of using any modality differed from daily oral PrEP. Related items were combined to assess differences in likelihood of use based on tissue or time of administration. Participants also ranked their interest in using each modality, and we used the modified Borda count method to determine consensual rankings. Most participants indicated they would be somewhat likely or very likely to use PrEP as an on-demand pill (685/1105, 61.99%), daily oral pill (528/1036, 50.97%), injection (575/1091, 52.70%), or penile gel (438/755, 58.01% before intercourse; 408/751, 54.33% after). The stated likelihoods of using on-demand pills (median score 4) and of using a penile gel before intercourse (median 4) were both higher than that of using a daily oral pill (median 4, P<.001 and P=.001, respectively). Compared with a daily oral pill, participants reported a significantly lower likelihood of using any of the 4 rectal modalities (Wilcoxon signed rank test, all P<.001). On 10-point Likert scales created by combining application methods, the reported likelihood of using a penile gel (median 7) was higher than that of using a rectal gel (median 6, P<.001), which was higher than the likelihood of using a rectal suppository (median 6, P<.001). The modified Borda count ranked on-demand pills as the most preferred modality. There was no difference in likelihood of use of PrEP (gel or suppository) before or after intercourse. Participants typically prefer systemic PrEP and are less likely to use a modality that is administered rectally. Although most of these modalities are seen as favorable or neutral, attitudes may change as information about efficacy and application becomes available. Further data on modality preference across risk groups will better inform PrEP development.
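
    The consensus-ranking step can be illustrated with a small modified Borda count, in which each respondent's (possibly partial) ranking awards points according to the number of alternatives that respondent actually ranked; the modality names and responses below are hypothetical.

```python
from collections import defaultdict

def borda_rank(rankings, candidates):
    """Modified Borda count: a ranking of m alternatives awards m-1 points to the
    top choice down to 0 for the last ranked; unranked alternatives score zero.
    Returns candidates in consensus order."""
    scores = defaultdict(int)
    for ranking in rankings:                      # ranking: best-to-worst list
        m = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - position
    return sorted(candidates, key=lambda c: scores[c], reverse=True)

# Hypothetical preference data over PrEP modalities
modalities = ["daily pill", "on-demand pill", "injection", "penile gel", "rectal gel"]
responses = [
    ["on-demand pill", "daily pill", "injection"],
    ["on-demand pill", "penile gel"],
    ["injection", "on-demand pill", "daily pill", "penile gel"],
]
print(borda_rank(responses, modalities))
```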

  1. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing amount of research focuses on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE proceeds by gradually searching the parameter space from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, we incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling step of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from considerable instability. In addition, the heavy computational cost of the huge number of model executions is overcome by using adaptive sparse grid surrogates.
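
    The basic nested-sampling recursion behind NSE can be sketched as follows: the lowest-likelihood live point is repeatedly replaced by a prior draw constrained to exceed its likelihood, and the evidence accumulates as likelihood times the shrinking prior volume. The naive rejection step used here is exactly what an M-H or DREAMzs local sampler is meant to replace; the toy problem has a known evidence of 0.1.

```python
import numpy as np

def nested_sampling_evidence(loglike, prior_sample, n_live=100, n_iter=600, rng=None):
    """Minimal nested-sampling estimate of the marginal likelihood (evidence)."""
    rng = rng or np.random.default_rng(0)
    live = [prior_sample(rng) for _ in range(n_live)]
    live_ll = np.array([loglike(t) for t in live])
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_ll))
        X_i = np.exp(-i / n_live)                 # expected remaining prior volume
        Z += np.exp(live_ll[worst]) * (X_prev - X_i)
        X_prev = X_i
        # replacement drawn from the prior subject to L(theta) > L_worst
        while True:
            theta = prior_sample(rng)
            if loglike(theta) > live_ll[worst]:
                break
        live[worst], live_ll[worst] = theta, loglike(theta)
    Z += np.mean(np.exp(live_ll)) * X_prev        # contribution of final live set
    return Z

# Toy problem with a known answer: Uniform(-5, 5) prior, N(0, 1) likelihood
loglike = lambda t: -0.5 * t**2 - 0.5 * np.log(2 * np.pi)
prior_sample = lambda rng: rng.uniform(-5, 5)
print(nested_sampling_evidence(loglike, prior_sample))   # ~ 1/10 = 0.1
```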

  2. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2014-05-01

    lengths in the range of 2^2 to 2^13 and possibly higher. Keywords: DS/CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision…likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS-CDMA signal. As…Theoretical Background: The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. Multiuser detection deals…

  3. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091

  4. Modeling abundance effects in distance sampling

    USGS Publications Warehouse

    Royle, J. Andrew; Dawson, D.K.; Bates, S.

    2004-01-01

    Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
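
    A compact sketch of the integrated-likelihood idea, under simplifying assumptions (point transects, a half-normal detection function, and synthetic data rather than the Ovenbird survey), is given below: summing local abundance out of the site likelihood leaves a Poisson model for the counts with mean equal to average detectability times the covariate-dependent expected abundance.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Synthetic point-transect data: per-site counts, a habitat covariate, and the
# pooled detection distances (truncated at w metres).
w = 100.0
rng = np.random.default_rng(2)
cover = rng.uniform(0, 1, 60)                          # e.g., understory cover
counts = rng.poisson(np.exp(0.5 + 1.0 * cover) * 0.4)  # detected birds per site
distances = np.abs(rng.normal(0, 35, counts.sum()))
distances = distances[distances < w]

def negloglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    lam = np.exp(b0 + b1 * cover)                      # expected local abundance
    # Half-normal detection averaged over the circular plot of radius w
    r = np.linspace(1e-3, w, 400)
    g = np.exp(-r**2 / (2 * sigma**2))
    fr = g * 2 * r / w**2                              # unnormalised distance density
    pbar = np.sum(fr) * (r[1] - r[0])                  # average detection probability
    # Integrating local abundance out: counts ~ Poisson(pbar * lam)
    ll_counts = poisson.logpmf(counts, pbar * lam).sum()
    ll_dist = np.log(np.maximum(np.interp(distances, r, fr) / pbar, 1e-300)).sum()
    return -(ll_counts + ll_dist)

fit = minimize(negloglik, x0=[0.0, 0.0, np.log(30.0)], method="Nelder-Mead")
print("b0, b1, log(sigma):", fit.x.round(2))           # b1 > 0: positive cover effect
```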

  5. A Review of Methods for Missing Data.

    ERIC Educational Resources Information Center

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…

  6. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
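
    The adaptive-bandwidth idea can be sketched as follows, with hypothetical epicenter coordinates already projected to kilometres and no completeness weighting: each event's Gaussian smoothing distance is set to the distance to its n-th nearest neighbour, so kernels narrow inside clusters and widen in sparse regions.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_smoothed_rates(epicenters, grid, n_neighbor=5, d_min=5.0):
    """Smoothed seismicity rate on a grid using an adaptive Gaussian kernel whose
    bandwidth per event is the distance to the n-th nearest epicenter (floored
    at d_min km). Rates are per km^2, unnormalised by catalogue duration."""
    tree = cKDTree(epicenters)
    # distance to the n-th nearest neighbour (index 0 is the event itself)
    d_n, _ = tree.query(epicenters, k=n_neighbor + 1)
    bandwidth = np.maximum(d_n[:, -1], d_min)
    rates = np.zeros(len(grid))
    for (x, y), h in zip(epicenters, bandwidth):
        d2 = (grid[:, 0] - x) ** 2 + (grid[:, 1] - y) ** 2
        rates += np.exp(-d2 / (2 * h**2)) / (2 * np.pi * h**2)
    return rates

# Toy catalogue: a tight cluster plus scattered background events
rng = np.random.default_rng(3)
eq = np.vstack([rng.normal([0, 0], 5, size=(200, 2)),
                rng.uniform(-200, 200, size=(50, 2))])
gx, gy = np.meshgrid(np.linspace(-200, 200, 81), np.linspace(-200, 200, 81))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(adaptive_smoothed_rates(eq, grid).max())
```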

  7. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  8. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.

  9. Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios

    PubMed Central

    Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang

    2014-01-01

    Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into “tear” and “no tear” groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correction rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models than in logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear, using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as determine the probability of the presence of the disease to enhance diagnostic decision making for rotator cuff tears. PMID:24733553
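
    The Bayesian step that Fagan's nomogram performs graphically is simple odds arithmetic, sketched below with hypothetical sensitivity, specificity, and pre-test probability values (not the figures reported in the study).

```python
def post_test_probability(pretest_prob, sensitivity, specificity, positive=True):
    """Bayes via likelihood ratios (the arithmetic behind Fagan's nomogram):
    post-test odds = pre-test odds * LR, with LR+ = sens/(1-spec) and
    LR- = (1-sens)/spec."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical numbers: a model with 0.85 sensitivity / 0.80 specificity and a
# 40% pre-test probability of a rotator cuff tear
print(post_test_probability(0.40, 0.85, 0.80, positive=True))   # ~0.74
print(post_test_probability(0.40, 0.85, 0.80, positive=False))  # ~0.11
```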

  10. Using DNA fingerprints to infer familial relationships within NHANES III households

    PubMed Central

    Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.

    2009-01-01

    Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to the actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, the performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio that relies on allele frequencies, and an Identical By State (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from being unrelated. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713

  11. Application of the Bootstrap Methods in Factor Analysis.

    ERIC Educational Resources Information Center

    Ichikawa, Masanori; Konishi, Sadanori

    1995-01-01

    A Monte Carlo experiment was conducted to investigate the performance of bootstrap methods in normal theory maximum likelihood factor analysis when the distributional assumption was satisfied or unsatisfied. Problems arising with the use of bootstrap methods are highlighted. (SLD)

  12. A Review of System Identification Methods Applied to Aircraft

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1983-01-01

    Airplane identification, equation error method, maximum likelihood method, parameter estimation in frequency domain, extended Kalman filter, aircraft equations of motion, aerodynamic model equations, criteria for the selection of a parsimonious model, and online aircraft identification are addressed.

  13. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons.

  14. INFERRING THE ECCENTRICITY DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed

    2010-12-20

    Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts), so long as the measurements have been communicated as a likelihood function or a posterior sampling.

  15. Order-restricted inference for means with missing values.

    PubMed

    Wang, Heng; Zhong, Ping-Shou

    2017-09-01

    Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing the order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on the jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on missing probabilities and nonparametric imputation. A simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and non-normally distributed data. The proposed method is applied to an Alzheimer's Disease Neuroimaging Initiative data set for finding a biomarker for the diagnosis of Alzheimer's disease.

  16. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  17. Evoked Emotions Predict Food Choice

    PubMed Central

    Dalenberg, Jelle R.; Gutjar, Swetlana; ter Horst, Gert J.; de Graaf, Kees; Renken, Remco J.; Jager, Gerry

    2014-01-01

    In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over merely liking scores. Previous research has shown that liking measures correlate with choice. However, liking is no strong predictor for food choice in real life environments. Therefore, the focus within recent studies shifted towards using emotion-profiling methods that successfully can discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprised of liking and food-evoked emotions scores from 123 participants, who rated 7 unlabeled breakfast drinks. Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotions scores. PMID:25521352
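
    The choice-modelling step can be sketched as a conditional (multinomial) logit fitted by maximum likelihood, in which each product's utility is a weighted sum of that participant's liking, valence, and arousal scores for it; the data below are random stand-ins rather than the breakfast-drink ratings, and the PCA-derived valence/arousal columns are simply simulated features here.

```python
import numpy as np
from scipy.optimize import minimize

def conditional_logit_negloglik(beta, X, choice):
    """X: (n_subjects, n_products, n_features) attribute scores each subject gave
    each product; choice: index of the product the subject actually picked."""
    util = X @ beta                                    # (n_subjects, n_products)
    util -= util.max(axis=1, keepdims=True)            # numerical stability
    logp = util - np.log(np.exp(util).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(choice)), choice].sum()

# Hypothetical stand-in data: liking, valence, arousal for 7 drinks per subject
rng = np.random.default_rng(4)
n_subj, n_prod = 120, 7
X = rng.normal(size=(n_subj, n_prod, 3))
true_beta = np.array([0.8, 0.6, 0.1])                  # liking, valence, arousal
util = X @ true_beta
choice = np.array([rng.choice(n_prod, p=np.exp(u) / np.exp(u).sum()) for u in util])

fit = minimize(conditional_logit_negloglik, x0=np.zeros(3), args=(X, choice),
               method="BFGS")
print("estimated weights (liking, valence, arousal):", fit.x.round(2))
```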

  18. Evoked emotions predict food choice.

    PubMed

    Dalenberg, Jelle R; Gutjar, Swetlana; Ter Horst, Gert J; de Graaf, Kees; Renken, Remco J; Jager, Gerry

    2014-01-01

    In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over merely liking scores. Previous research has shown that liking measures correlate with choice. However, liking is no strong predictor for food choice in real life environments. Therefore, the focus within recent studies shifted towards using emotion-profiling methods that successfully can discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprised of liking and food-evoked emotions scores from 123 participants, who rated 7 unlabeled breakfast drinks. Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotions scores.

  19. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington) but a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
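
    A minimal version of the fit-and-extrapolate step is sketched below with simulated storm maxima standing in for the 1957-2012 -Dst catalogue: a log-normal distribution is fitted by maximum likelihood, and the expected number of Carrington-class exceedances per century follows from the survival function and the observed storm rate.

```python
import numpy as np
from scipy.stats import lognorm

# Simulated stand-in for -Dst storm-time maxima (nT) over a 56-year record;
# the real 1957-2012 catalogue is not reproduced here.
rng = np.random.default_rng(5)
years = 56
storms = lognorm.rvs(s=0.7, scale=100.0, size=840, random_state=rng)

# Maximum-likelihood log-normal fit with the location fixed at zero
shape, loc, scale = lognorm.fit(storms, floc=0)

# Expected Carrington-class exceedances per century
threshold = 850.0
p_exceed = lognorm.sf(threshold, shape, loc=loc, scale=scale)
storms_per_year = len(storms) / years
print("storms per century above %.0f nT: %.2f" % (threshold, 100 * storms_per_year * p_exceed))
```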

  20. Phylogenetic place of guinea pigs: no support of the rodent-polyphyly hypothesis from maximum-likelihood analyses of multiple protein sequences.

    PubMed

    Cao, Y; Adachi, J; Yano, T; Hasegawa, M

    1994-07-01

    Graur et al.'s (1991) hypothesis that the guinea pig-like rodents have an evolutionary origin within mammals that is separate from that of other rodents (the rodent-polyphyly hypothesis) was reexamined by the maximum-likelihood method for protein phylogeny, as well as by the maximum-parsimony and neighbor-joining methods. The overall evidence does not support Graur et al.'s hypothesis, which radically contradicts the traditional view of rodent monophyly. This work demonstrates that we must be careful in choosing a proper method for phylogenetic inference and that an argument based on a small data set (with respect to the length of the sequence and especially the number of species) may be unstable.

  1. A parametric method for determining the number of signals in narrow-band direction finding

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Fuhrmann, Daniel R.

    1991-08-01

    A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).

  2. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
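
    A stripped-down version of the calibration/estimation cycle is sketched below with synthetic data: ln(load) is regressed on centred ln(streamflow), seasonal terms, and decimal time, and loads for unsampled days are back-transformed with the usual log-space bias correction. Ordinary least squares stands in for the MLE here; AMLE and LAD, which handle censored and non-normal residuals, are not reproduced.

```python
import numpy as np

# Synthetic calibration data: decimal time t (years), ln(streamflow) and the
# observed ln(load) on sampled days (stand-ins for a real gauging record).
rng = np.random.default_rng(6)
n = 150
t = np.sort(rng.uniform(0, 5, n))
lnQ = rng.normal(3.0, 0.8, n)
lnL = 0.5 + 1.1 * (lnQ - lnQ.mean()) + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.4, n)

# Calibration: regress ln(load) on centred ln(Q), seasonal terms and decimal time
X = np.column_stack([np.ones(n), lnQ - lnQ.mean(),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t - t.mean()])
beta, *_ = np.linalg.lstsq(X, lnL, rcond=None)
resid_var = np.mean((lnL - X @ beta) ** 2)

# Estimation: predict the load for an unsampled day, applying the usual
# exp(sigma^2 / 2) bias correction when back-transforming from log space.
t_new, lnQ_new = 2.3, 3.5
x_new = np.array([1.0, lnQ_new - lnQ.mean(), np.sin(2 * np.pi * t_new),
                  np.cos(2 * np.pi * t_new), t_new - t.mean()])
print("estimated load:", np.exp(x_new @ beta + resid_var / 2.0))
```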

  3. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.

  4. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157

  5. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer generated data.
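
    The two-stage idea can be sketched compactly for an uncensored toy case: fit the margins first, then fix them and maximize the copula likelihood over the dependence parameter. The Clayton copula and exponential margins below are illustrative choices rather than the families studied in this work, and censoring and the Kaplan-Meier-based third procedure are omitted.

        # Hedged sketch of two-stage ML for a bivariate copula model
        # (Clayton copula, exponential margins, no censoring).
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        n, theta_true, rate1, rate2 = 2000, 1.5, 0.8, 1.2

        # simulate Clayton-dependent uniforms, then invert the exponential margins
        u = rng.uniform(size=n)
        w = rng.uniform(size=n)
        v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
        t1 = -np.log(1 - u) / rate1
        t2 = -np.log(1 - v) / rate2

        # stage 1: marginal MLEs for the exponential rates
        r1_hat, r2_hat = 1 / t1.mean(), 1 / t2.mean()
        u_hat = 1 - np.exp(-r1_hat * t1)
        v_hat = 1 - np.exp(-r2_hat * t2)

        # stage 2: maximize the Clayton copula log-likelihood with margins fixed
        def neg_loglik(theta):
            c = (1 + theta) * (u_hat * v_hat) ** (-theta - 1) \
                * (u_hat ** -theta + v_hat ** -theta - 1) ** (-2 - 1 / theta)
            return -np.sum(np.log(c))

        theta_hat = minimize_scalar(neg_loglik, bounds=(0.05, 10), method="bounded").x
        print("two-stage estimates (rate1, rate2, theta):", r1_hat, r2_hat, theta_hat)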

  6. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  7. Effects of time-shifted data on flight determined stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Steers, S. T.; Iliff, K. W.

    1975-01-01

    Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.

  8. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
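
    A bare-bones sketch of the conditional-likelihood (self-consistency) fixed-point iteration for the NPMLE under double truncation follows; the closed-form covariance estimator and the confidence-band and goodness-of-fit procedures proposed in the paper are not reproduced, and the simulated truncation scheme is purely illustrative.

        # Sketch: each x_i is observed only when u_i <= x_i <= v_i; the NPMLE puts
        # mass f_j on the observed points and satisfies a self-consistency equation.
        import numpy as np

        rng = np.random.default_rng(6)
        x = rng.exponential(1.0, 1500)
        u = rng.uniform(0.0, 1.5, 1500)
        v = u + rng.uniform(0.5, 2.0, 1500)
        keep = (u <= x) & (x <= v)                       # double truncation
        x, u, v = x[keep], u[keep], v[keep]

        J = ((u[:, None] <= x[None, :]) & (x[None, :] <= v[:, None])).astype(float)
        f = np.full(x.size, 1.0 / x.size)
        for _ in range(200):
            F = J @ f                                    # F_i = sum_j f_j * J_ij
            f_new = 1.0 / (J.T @ (1.0 / F))              # fixed-point update
            f_new /= f_new.sum()
            if np.max(np.abs(f_new - f)) < 1e-10:
                break
            f = f_new

        # NPMLE of the distribution function at the observed points
        order = np.argsort(x)
        F_hat = np.cumsum(f[order])
        print("estimated F at the median observed x:", F_hat[len(F_hat) // 2])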

  9. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared with maximum likelihood and by 34% compared with random classification. We found that the original DRG, coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
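
    As a hedged illustration of the modeling idea (not the authors' implementation), the sketch below fits a logistic regression for the probability of DRG revision with a weakly informative Normal(0, 2.5) prior on the coefficients, estimated by a simple MAP optimization; the predictors are synthetic stand-ins for features such as original DRG, coder, and day of coding.

        # Illustrative MAP logistic regression with a weakly informative prior.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n, p = 500, 3
        X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + features
        beta_true = np.array([-1.0, 0.8, -0.5, 0.3])
        y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))        # 1 = DRG revised

        def neg_log_posterior(beta, prior_sd=2.5):
            eta = X @ beta
            loglik = np.sum(y * eta - np.log1p(np.exp(eta)))         # Bernoulli log-likelihood
            logprior = -0.5 * np.sum((beta / prior_sd) ** 2)         # Normal(0, 2.5) prior
            return -(loglik + logprior)

        map_fit = minimize(neg_log_posterior, np.zeros(p + 1), method="BFGS")
        print("MAP coefficients:", map_fit.x)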

  10. Lack of correlation between left ventricular outflow tract velocity time integral and stroke volume index in mechanically ventilated patients.

    PubMed

    Blancas, R; Martínez-González, Ó; Ballesteros, D; Núñez, A; Luján, J; Rodríguez-Serrano, D; Hernández, A; Martínez-Díaz, C; Parra, C M; Matamala, B L; Alonso, M A; Chana, M

    2018-02-07

    To assess the correlation between left ventricular outflow tract velocity time integral (LVOT VTI) and stroke volume index (SVI) calculated by thermodilution methods in ventilated critically ill patients. A prospective, descriptive, multicenter study was performed. Five intensive care units from university hospitals. Patients older than 17 years needing mechanical ventilation and invasive hemodynamic monitoring were included. LVOT VTI was measured by pulsatile Doppler echocardiography. Calculations of SVI were performed using either a floating pulmonary artery catheter (PAC) or Pulse index Contour Cardiac Output (PiCCO®) thermodilution methods. The relation between LVOT VTI and SVI was tested by linear regression analysis. One hundred and fifty-six paired measurements were compared. Mean LVOT VTI was 20.83 ± 4.86 cm and mean SVI was 41.55 ± 9.55 mL/m². The Pearson correlation coefficient for these variables was r=0.644, p<0.001; the ICC was 0.52 (95% CI 0.4-0.63). When maximum LVOT VTI was correlated with SVI, the Pearson correlation coefficient was r=0.62, p<0.001. Correlation worsened for extreme values, especially for those with higher LVOT VTI. LVOT VTI could be a complementary hemodynamic evaluation in selected patients, but does not eliminate the need for invasive monitoring at the present time. The weak correlation between LVOT VTI and invasive monitoring deserves additional assessment to identify the factors affecting this disagreement. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  11. Matching mammographic regions in mediolateral oblique and cranio caudal views: a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Samulski, Maurice; Karssemeijer, Nico

    2008-03-01

    Most of the current CAD systems detect suspicious mass regions independently in single views. In this paper we present a method to match corresponding regions in mediolateral oblique (MLO) and craniocaudal (CC) mammographic views of the breast. For every possible combination of mass regions in the MLO view and CC view, a number of features are computed, such as the difference in distance of a region to the nipple, a texture similarity measure, the gray scale correlation and the likelihood of malignancy of both regions computed by single-view analysis. In previous research, Linear Discriminant Analysis was used to discriminate between correct and incorrect links. In this paper we investigate if the performance can be improved by employing a statistical method in which four classes are distinguished. These four classes are defined by the combinations of view (MLO/CC) and pathology (TP/FP) labels. We use distance-weighted k-Nearest Neighbor density estimation to estimate the likelihood of a region combination. Next, a correspondence score is calculated as the likelihood that the region combination is a TP-TP link. The method was tested on 412 cases with a malignant lesion visible in at least one of the views. In 82.4% of the cases a correct link could be established between the TP detections in both views. In future work, we will use the framework presented here to develop a context dependent region matching scheme, which takes the number and likelihood of possible alternatives into account. It is expected that more accurate determination of matching probabilities will lead to improved CAD performance.
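
    The matching step can be caricatured as follows, assuming a simplified binary label (correct vs. incorrect link) rather than the paper's four view-by-pathology classes; the feature vector and training data are synthetic, and only the distance-weighted k-NN scoring idea is shown.

        # Rough sketch: score a candidate MLO/CC region pair by the distance-weighted
        # fraction of its nearest training pairs that are true (correct) links.
        import numpy as np

        rng = np.random.default_rng(3)
        # training features, e.g. [distance-to-nipple difference, texture similarity,
        # gray-scale correlation, combined single-view likelihood] -- all synthetic
        X_train = rng.normal(size=(400, 4))
        y_train = rng.integers(0, 2, size=400)           # 1 = correct (TP-TP) link

        def correspondence_score(x_query, k=25, eps=1e-6):
            d = np.linalg.norm(X_train - x_query, axis=1)
            idx = np.argsort(d)[:k]                      # k nearest training pairs
            w = 1.0 / (d[idx] + eps)                     # distance weights
            return np.sum(w * (y_train[idx] == 1)) / np.sum(w)

        print("correspondence score:", correspondence_score(rng.normal(size=4)))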

  12. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  13. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  14. Control of Risks Through the Use of Procedures: A Method for Evaluating the Change in Risk

    NASA Technical Reports Server (NTRS)

    Praino, Gregory T.; Sharit, Joseph

    2010-01-01

    This paper considers how procedures can be used to control risks faced by an organization and proposes a means of recognizing if a particular procedure reduces risk or contributes to the organization's exposure. The proposed method was developed out of the review of work documents and the governing procedures performed in the wake of the Columbia accident by NASA and the Space Shuttle prime contractor, United Space Alliance, LLC. A technique was needed to understand the rules, or procedural controls, in place at the time in the context of how important the role of each rule was. The proposed method assesses procedural risks, the residual risk associated with a hazard after a procedure's influence is accounted for, by considering each clause of a procedure as a unique procedural control that may be beneficial or harmful. For procedural risks with consequences severe enough to threaten the survival of the organization, the method measures the characteristics of each risk on a scale that is an alternative to the traditional consequence/likelihood couple. The dual benefits of the substitute scales are that they eliminate both the need to quantify a relationship between different consequence types and the need for the extensive history a probabilistic risk assessment would require. Control Value is used as an analog for the consequence, where the value of a rule is based on how well the control reduces the severity of the consequence when operating successfully. This value is composed of two parts: the inevitability of the consequence in the absence of the control, and the opportunity to intervene before the consequence is realized. High-value controls will be ones where there is minimal need for intervention but maximum opportunity to actively prevent the outcome. Failure Likelihood is used as the substitute for the conventional likelihood of the outcome. For procedural controls, a failure is considered to be any non-malicious violation of the rule, whether intended or not. The model used for describing the Failure Likelihood considers how well a task was established by evaluating that task on five components. The components selected to define a well-established task are: that it be defined, assigned to someone capable, that they be trained appropriately, that the actions be organized to enable proper completion and that some form of independent monitoring be performed. Validation of the method was based on the information provided by a group of experts in Space Shuttle ground processing when they were presented with 5 scenarios that identified a clause from a procedure. For each scenario, they recorded their perception of how important the associated rule was and how likely it was to fail. They then rated the components of Control Value and Failure Likelihood for all the scenarios. The order in which each reviewer ranked the scenarios' Control Value and Failure Likelihood was compared to the order in which they ranked the scenarios for each of the associated components: inevitability and opportunity for Control Value, and definition, assignment, training, organization, and monitoring for Failure Likelihood. This order comparison showed how the components contributed to a relative relationship to the substitute risk element. With the relationship established for Space Shuttle ground processing, this method can be used to gauge if the introduction or removal of a particular rule will increase or decrease the risk associated with the hazard it is intended to control.

  15. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  16. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  17. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can be also used as a baseline for comparing the fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos, and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems used may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validation of the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. However, these images do not constitute the core data for the validation, unlike the LRs, which are shared.

  18. MCMC multilocus lod scores: application of a new approach.

    PubMed

    George, Andrew W; Wijsman, Ellen M; Thompson, Elizabeth A

    2005-01-01

    On extended pedigrees with extensive missing data, the calculation of multilocus likelihoods for linkage analysis is often beyond the computational bounds of exact methods. Growing interest therefore surrounds the implementation of Monte Carlo estimation methods. In this paper, we demonstrate the speed and accuracy of a new Markov chain Monte Carlo method for the estimation of linkage likelihoods through an analysis of real data from a study of early-onset Alzheimer's disease. For those data sets where comparison with exact analysis is possible, we achieved up to a 100-fold increase in speed. Our approach is implemented in the program lm_bayes within the framework of the freely available MORGAN 2.6 package for Monte Carlo genetic analysis (http://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).

  19. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504

  20. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    PubMed Central

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651

  1. Factors Associated with Young Adults’ Pregnancy Likelihood

    PubMed Central

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  2. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    NASA Astrophysics Data System (ADS)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown to fit a wide range of estimated climate sensitivity PDFs accurately. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
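
    The core operation, multiplying likelihood functions from two independent lines of evidence for a common parameter on a grid, can be sketched as below. The stand-in likelihood shapes are illustrative, and the paper's three-parameter PDF form and noninformative-prior derivation are not reproduced (a uniform prior is used here purely for simplicity).

        # Toy sketch: combine two likelihoods for climate sensitivity S on a grid.
        import numpy as np

        S = np.linspace(0.5, 8.0, 2000)

        # stand-in likelihoods (skewed lognormal-like shapes, purely illustrative)
        def lognorm_like(s, mu, sigma):
            return np.exp(-0.5 * ((np.log(s) - mu) / sigma) ** 2) / s

        L_instrumental = lognorm_like(S, np.log(1.9), 0.35)
        L_paleo = lognorm_like(S, np.log(2.5), 0.5)

        L_combined = L_instrumental * L_paleo          # multiplicative combination
        dx = S[1] - S[0]
        post = L_combined / (L_combined.sum() * dx)    # uniform prior for simplicity

        cdf = np.cumsum(post) * dx
        print("5-95% range:", S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)])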

  3. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
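
    A minimal single-level sketch of the Student's-t maximum likelihood idea for a moderation (interaction) model is shown below; the article's two-level structure, Huber-type M-estimation, and consistent standard-error formulas are not reproduced, and the degrees of freedom and data are illustrative.

        # Sketch: y = b0 + b1*x + b2*m + b3*(x*m) + e, with Student's-t errors,
        # fitted by maximum likelihood as a robust alternative to normal-theory ML.
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(4)
        n = 300
        x, m = rng.normal(size=n), rng.normal(size=n)
        y = 0.5 + 0.4 * x + 0.3 * m + 0.25 * x * m + stats.t.rvs(df=3, size=n, random_state=rng)
        X = np.column_stack([np.ones(n), x, m, x * m])

        def neg_loglik(params, df=4.0):
            beta, log_scale = params[:4], params[4]
            resid = y - X @ beta
            return -np.sum(stats.t.logpdf(resid, df=df, scale=np.exp(log_scale)))

        start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [0.0]])
        fit = optimize.minimize(neg_loglik, start, method="Nelder-Mead")
        print("robust (t-based) coefficient estimates:", fit.x[:4])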

  4. Preserving Flow Variability in Watershed Model Calibrations

    EPA Science Inventory

    Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...

  5. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  6. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
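
    The central step can be sketched as an eigenvalue projection: diagonalize the (possibly unphysical) candidate matrix and replace its spectrum with the closest probability vector in Euclidean distance, keeping the eigenvectors. The compact simplex-projection formulation below is one way of expressing that step, not a line-by-line transcription of the published algorithm.

        # Sketch: nearest physical density matrix (under the 2-norm) to a trace-one
        # Hermitian candidate mu that may have negative eigenvalues.
        import numpy as np

        def nearest_density_matrix(mu):
            evals, evecs = np.linalg.eigh(mu)
            # Euclidean projection of the eigenvalue vector onto the probability simplex
            u = np.sort(evals)[::-1]
            css = np.cumsum(u)
            rho_idx = np.nonzero(u * np.arange(1, len(u) + 1) > (css - 1))[0][-1]
            theta = (css[rho_idx] - 1) / (rho_idx + 1.0)
            lam = np.maximum(evals - theta, 0.0)
            return (evecs * lam) @ evecs.conj().T        # V diag(lam) V^dagger

        # example: a noisy, slightly unphysical candidate for a qubit state
        mu = np.array([[0.8, 0.5], [0.5, 0.2]])          # one negative eigenvalue
        rho = nearest_density_matrix(mu)
        print(np.trace(rho), np.linalg.eigvalsh(rho))    # trace 1, nonnegative spectrum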

  7. Identifying common donors in DNA mixtures, with applications to database searches.

    PubMed

    Slooten, K

    2017-01-01

    Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M₁, M₂) involving two possibly mixed traces M₁ and M₂, where the question is whether there is a donor in common to both traces. In case one of the traces is in fact a single genotype, then this likelihood ratio reduces to the usual LR(M, g). We explain how our method conceptually is a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability to detect a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing

    PubMed Central

    Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.

    2007-01-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837

  9. [Accuracy of three methods for the rapid diagnosis of oral candidiasis].

    PubMed

    Lyu, X; Zhao, C; Yan, Z M; Hua, H

    2016-10-09

    Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratio, consistency, predictive value, and area under the curve (AUC)) of each of the above mentioned three methods was assessed by comparing the results with the gold standard (combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: The Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). The Congo red stained smear of saliva (or concentrated oral rinse) displayed the highest diagnostic efficacy (79.0% sensitivity, 80.6% specificity, 0.60 Youden's index, 4.08 positive likelihood ratio, 0.26 negative likelihood ratio, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.
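
    The quoted accuracy measures all follow from a 2x2 table. The helper below computes them; the counts are chosen only to roughly reproduce the reported Congo red sensitivity and specificity and are not the study's raw data.

        # Generic diagnostic-accuracy helper (sensitivity, specificity, Youden's
        # index, likelihood ratios, predictive values, accuracy/consistency).
        def diagnostic_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return {
                "sensitivity": sens,
                "specificity": spec,
                "youden_index": sens + spec - 1,
                "LR_positive": sens / (1 - spec),
                "LR_negative": (1 - sens) / spec,
                "PPV": tp / (tp + fp),
                "NPV": tn / (tn + fn),
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        # illustrative counts for 124 patients (not the published table)
        print(diagnostic_metrics(tp=49, fp=12, fn=13, tn=50))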

  10. Silence That Can Be Dangerous: A Vignette Study to Assess Healthcare Professionals’ Likelihood of Speaking up about Safety Concerns

    PubMed Central

    Schwappach, David L. B.; Gehring, Katrin

    2014-01-01

    Purpose To investigate the likelihood of speaking up about patient safety in oncology and to clarify the effect of clinical and situational context factors on the likelihood of voicing concerns. Patients and Methods 1013 nurses and doctors in oncology rated four clinical vignettes describing coworkers' errors and rule violations in a self-administered factorial survey (65% response rate). Multiple regression analysis was used to model the likelihood of speaking up as outcome of vignette attributes, responder's evaluations of the situation and personal characteristics. Results Respondents reported a high likelihood of speaking up about patient safety but the variation between and within types of errors and rule violations was substantial. Staff without managerial function reported significantly higher levels of decision difficulty and discomfort with speaking up. Based on the information presented in the vignettes, 74%-96% would speak up towards a supervisor failing to check a prescription, 45%-81% would point a coworker to a missed hand disinfection, 82%-94% would speak up towards nurses who violate a safety rule in medication preparation, and 59%-92% would question a doctor violating a safety rule in lumbar puncture. Several vignette attributes predicted the likelihood of speaking up. Perceived potential harm, anticipated discomfort, and decision difficulty were significant predictors of the likelihood of speaking up. Conclusions Clinicians' willingness to speak up about patient safety is considerably affected by contextual factors. Physicians and nurses without managerial function report substantial discomfort with speaking up. Oncology departments should provide staff with clear guidance and training on when and how to voice safety concerns.

  11. Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty

    PubMed Central

    Baele, Guy; Lemey, Philippe; Suchard, Marc A.

    2016-01-01

    Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428

  12. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1; whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making.
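
    The underlying relationship, that the likelihood ratio at a specific RNFL thickness equals the ratio of the test-value densities in glaucomatous versus healthy eyes (the local slope of the ROC curve), can be sketched with illustrative Gaussian densities. The parameters below are not the study's estimates, and the ROC-tangent estimation procedure itself is not reproduced.

        # Sketch: LR(x) = f_glaucoma(x) / f_healthy(x), then post-test probability
        # via the Fagan-nomogram relation (post-test odds = pre-test odds * LR).
        import numpy as np
        from scipy import stats

        glaucoma = stats.norm(loc=70, scale=14)   # illustrative RNFL thickness, um
        healthy = stats.norm(loc=97, scale=10)

        def likelihood_ratio(rnfl_um):
            return glaucoma.pdf(rnfl_um) / healthy.pdf(rnfl_um)

        for x in (60, 80, 100):
            print(f"RNFL {x} um: LR = {likelihood_ratio(x):.2f}")

        pretest_p = 0.20
        posttest_odds = pretest_p / (1 - pretest_p) * likelihood_ratio(60)
        print("post-test probability:", posttest_odds / (1 + posttest_odds))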

  13. A comprehensive assessment of collision likelihood in Geosynchronous Earth Orbit

    NASA Astrophysics Data System (ADS)

    Oltrogge, D. L.; Alfano, S.; Law, C.; Cacioni, A.; Kelso, T. S.

    2018-06-01

    Knowing the likelihood of collision for satellites operating in Geosynchronous Earth Orbit (GEO) is of extreme importance and interest to the global community and the operators of GEO spacecraft. Yet for all of its importance, a comprehensive assessment of GEO collision likelihood is difficult to do and has never been done. In this paper, we employ six independent and diverse assessment methods to estimate GEO collision likelihood. Taken in aggregate, this comprehensive assessment offers new insight into GEO collision likelihood, with the individual estimates falling within a factor of 3.5 of each other. These results are then compared to four collision and seven encounter rate estimates previously published. Collectively, these new findings indicate that collision likelihood in GEO is as much as four orders of magnitude higher than previously published by other researchers. Results indicate that a collision is likely to occur every 4 years for one satellite out of the entire GEO active satellite population against a 1 cm RSO catalogue, and every 50 years against a 20 cm RSO catalogue. Further, previous assertions that collision relative velocities are low (i.e., <1 km/s) in GEO are disproven, with some GEO relative velocities as high as 4 km/s identified. These new findings indicate that unless operators successfully mitigate this collision risk, the GEO orbital arc is and will remain at high risk of collision, with the potential for serious follow-on collision threats from post-collision debris when a substantial GEO collision occurs.

  14. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

    Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to explore them easily with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need of evaluating the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.

  15. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
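
    A rough sketch of the construction follows: map past model errors to normal scores through their empirical CDF, estimate a lag-1 correlation, and score a new error sequence with the Gaussian copula density. The semiparametric fitting used in the study is richer than this; the pairing of consecutive errors, the AR(1)-style simulation, and all variable names here are assumptions made for the illustration.

        # Hedged sketch of a Gaussian copula built from "past" model errors.
        import numpy as np
        from scipy import stats
        from scipy.signal import lfilter

        rng = np.random.default_rng(5)
        raw = rng.gamma(2.0, 1.0, 1000) - 2.0
        past_errors = lfilter([1.0], [1.0, -0.6], raw)       # skewed, autocorrelated errors

        def normal_scores(e, reference):
            # empirical CDF of the reference errors, then probit transform
            ranks = np.searchsorted(np.sort(reference), e, side="right")
            u = np.clip(ranks / (len(reference) + 1), 1e-6, 1 - 1e-6)
            return stats.norm.ppf(u)

        z = normal_scores(past_errors, past_errors)
        r = np.corrcoef(z[:-1], z[1:])[0, 1]                 # lag-1 dependence

        def copula_loglik(new_errors):
            zn = normal_scores(new_errors, past_errors)
            z1, z2 = zn[:-1], zn[1:]
            quad = (r**2 * (z1**2 + z2**2) - 2 * r * z1 * z2) / (2 * (1 - r**2))
            return np.sum(-0.5 * np.log(1 - r**2) - quad)    # Gaussian copula density terms

        new_errors = lfilter([1.0], [1.0, -0.6], rng.gamma(2.0, 1.0, 60) - 2.0)
        print("Gaussian-copula log-likelihood of the new error series:", copula_loglik(new_errors))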

  16. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  17. A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Xia, Mingliang; Xuan, Li

    2013-10-01

    The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (age-related macular degeneration and diabetic retinopathy, for example). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels with different likelihoods. Firstly, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edge of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and the intensities of vessel images exactly represent the likelihood of the vessels.
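
    The matched-filtering stage can be sketched with a classic Chaudhuri-style oriented Gaussian-profile kernel bank, taking the maximum response over scales and orientations as a per-pixel vessel-likelihood proxy. The preprocessing (adaptive histogram equalization) and the gradient-vector-flow edge localization described above are omitted, and the kernel parameters are illustrative rather than the authors' exact filter bank.

        # Multi-scale, multi-orientation matched filtering for dark vessels on a
        # bright background (sketch only).
        import numpy as np
        from scipy import ndimage

        def matched_filter_kernel(sigma, length, angle_deg, size=15):
            half = size // 2
            yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
            a = np.deg2rad(angle_deg)
            u = xx * np.cos(a) + yy * np.sin(a)        # coordinate along the vessel
            v = -xx * np.sin(a) + yy * np.cos(a)       # coordinate across the vessel
            support = np.abs(u) <= length / 2
            k = np.where(support, -np.exp(-(v ** 2) / (2 * sigma ** 2)), 0.0)
            k[support] -= k[support].mean()            # zero-mean within the support
            return k

        def vessel_response(img, sigmas=(1.0, 1.5, 2.0), angles=range(0, 180, 15)):
            resp = np.full(img.shape, -np.inf)
            for s in sigmas:
                for ang in angles:
                    r = ndimage.convolve(img.astype(float), matched_filter_kernel(s, 9, ang))
                    resp = np.maximum(resp, r)         # best response over scale/orientation
            return resp

        demo = np.ones((64, 64))
        demo[30:34, :] = 0.0                           # a synthetic dark horizontal "vessel"
        print("peak response (vessel-likelihood proxy):", vessel_response(demo).max())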

  18. Topics in inference and decision-making with partial knowledge

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.

  19. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary innovation in this paper is a detailed analysis of existing state-of-the-art likelihood functions and of a binary-hypothesis likelihood function previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5 m Raven-class telescope and a twenty-degree field of view, high-frame-rate CMOS sensor. In particular, a data set from an extended pass of the Hitomi (Astro-H) satellite approximately 3 days after loss of communication and potential breakup is examined.

  20. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
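
    The likelihood-ratio (score-function) estimator referred to above can be illustrated on a one-parameter toy problem; this is a generic sketch with a single exponential waiting time, not the authors' KMC implementation:

      import numpy as np

      rng = np.random.default_rng(0)
      k = 2.0                                      # rate constant of a single first-order event
      x = rng.exponential(1.0 / k, size=200_000)   # waiting times ~ Exp(k)

      # Likelihood-ratio estimator: d/dk E[f(X)] = E[ f(X) * d log p(X; k)/dk ],
      # and for the exponential density d log p/dk = 1/k - x.
      f = x                                        # observable: the waiting time itself
      score = 1.0 / k - x
      lr_sensitivity = np.mean(f * score)

      print(lr_sensitivity, -1.0 / k**2)           # Monte Carlo estimate vs exact -1/k^2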

  1. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis

    PubMed Central

    van de Schoot, Rens; Hox, Joop

    2014-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher than in the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827

  2. Subjective global assessment of nutritional status in children.

    PubMed

    Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool

    2010-10-01

    This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio) comparing SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, positive and negative predictive value of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. Accuracy, positive and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
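
    For reference, the quoted likelihood ratios and odds ratio follow directly from the sensitivity and specificity. The short sketch below re-derives them from a 2x2 table; the integer counts are reconstructed from the percentages in the abstract, not taken from the paper itself.

      # SGA vs. objective assessment (reference standard), counts inferred from
      # the reported prevalences, sensitivity and specificity:
      #                       objective: undernourished   objective: normal
      # SGA positive                    tp = 60                 fp = 39
      # SGA negative                    fn = 8                  tn = 33
      tp, fp, fn, tn = 60, 39, 8, 33

      sens = tp / (tp + fn)
      spec = tn / (tn + fp)
      ppv  = tp / (tp + fp)
      npv  = tn / (tn + fn)
      lr_pos = sens / (1 - spec)          # positive likelihood ratio
      lr_neg = (1 - sens) / spec          # negative likelihood ratio
      dor    = lr_pos / lr_neg            # diagnostic odds ratio = (tp*tn)/(fp*fn)

      print(f"sens={sens:.3f} spec={spec:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
      print(f"LR+={lr_pos:.3f} LR-={lr_neg:.3f} OR={dor:.2f}")
      # -> roughly 0.882 / 0.458 / 0.606 / 0.805 and 1.63 / 0.26 / 6.3, matching
      #    the values quoted in the abstract up to rounding.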

  3. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BC a ) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BC a bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.

  4. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  5. Statistical methods for the beta-binomial model in teratology.

    PubMed Central

    Yamamoto, E; Yanagimoto, T

    1994-01-01

    The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on this model. For statistical inference on the parameters of the beta-binomial distribution, separation of the likelihood leads to a likelihood-based inference that reduces the biases of the estimators and improves the accuracy of the empirical significance levels of tests. Separate inference on the parameters can be conducted in a unified way. PMID:8187716

  6. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  7. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus, the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  8. Fuzzy Arden Syntax: A fuzzy programming language for medicine.

    PubMed

    Vetterlein, Thomas; Mandl, Harald; Adlassnig, Klaus-Peter

    2010-05-01

    The programming language Arden Syntax has been optimised for use in clinical decision support systems. We describe an extension of this language named Fuzzy Arden Syntax, whose original version was introduced in S. Tiffe's dissertation on "Fuzzy Arden Syntax: Representation and Interpretation of Vague Medical Knowledge by Fuzzified Arden Syntax" (Vienna University of Technology, 2003). The primary aim is to provide an easy means of processing vague or uncertain data, which frequently occur in medicine. For both the propositional and number data types, fuzzy equivalents have been added to Arden Syntax. The Boolean data type was generalised to represent any truth degree between the two extremes 0 (falsity) and 1 (truth); fuzzy data types were introduced to represent fuzzy sets. The operations on truth values and real numbers were generalised accordingly. As the conditions deciding whether a certain programme unit is executed or not may be indeterminate, a Fuzzy Arden Syntax programme may split; the data in the different branches may optionally be aggregated subsequently. Fuzzy Arden Syntax thus offers the possibility of conveniently formulating Medical Logic Modules (MLMs) based on the principle of a continuously graded applicability of statements. Furthermore, ad hoc decisions about sharp value boundaries can be avoided. As an illustrative example shows, an MLM making use of the features of Fuzzy Arden Syntax is not significantly more complex than its Arden Syntax equivalent; in the ideal case, a programme handling crisp data remains practically unchanged when compared to its fuzzified version. In the latter case, the output data, which can be a set of weighted alternatives, typically depend continuously on the input data. In typical applications an Arden Syntax MLM can produce a different output after only slight changes of the input; discontinuities are in fact unavoidable when the input varies continuously but the output is taken from a discrete set of possibilities. This inconvenience can, however, be attenuated by means of the mechanisms on which the programme flow under Fuzzy Arden Syntax is based. Writing a programme that makes use of these possibilities is not significantly more difficult than writing one according to the usual practice. © 2010 Elsevier B.V. All rights reserved.
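
    The graded truth values described above can be illustrated with ordinary Python; this is generic fuzzy logic (minimum/maximum connectives and a ramp membership function), not the Fuzzy Arden Syntax runtime, and the clinical thresholds are invented for the example.

      # Truth degrees live in [0, 1]; Boolean logic is generalised accordingly.
      def f_and(a, b): return min(a, b)
      def f_or(a, b):  return max(a, b)
      def f_not(a):    return 1.0 - a

      def fever_degree(temp_c, low=37.0, high=38.5):
          """Graded 'patient has fever' instead of a sharp cut-off at one value."""
          return min(1.0, max(0.0, (temp_c - low) / (high - low)))

      tachycardia = 0.7                               # degree to which heart rate is 'high'
      applicability = f_and(fever_degree(38.0), tachycardia)
      print(applicability)                            # ~0.67: the rule applies to a degree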

  9. The measurement of the intrinsic impurities of molybdenum and carbon in the Alcator C-Mod tokamak plasma using low resolution spectroscopy

    NASA Astrophysics Data System (ADS)

    May, M. J.; Finkenthal, M.; Regan, S. P.; Moos, H. W.; Terry, J. L.; Goetz, J. A.; Graf, M. A.; Rice, J. E.; Marmar, E. S.; Fournier, K. B.; Goldstein, W. H.

    1997-06-01

    The intrinsic impurity content of molybdenum and carbon was measured in the Alcator C-Mod tokamak using low resolution, multilayer mirror (MLM) spectroscopy (Δλ ≈ 1-10 Å). Molybdenum was the dominant high-Z impurity and originated from the molybdenum armour tiles covering all of the plasma facing surfaces (including the inner column, the poloidal divertor plates and the ion cyclotron resonant frequency (ICRF) limiter) at Alcator C-Mod. Despite the all-metal first wall, a carbon concentration of 1 to 2% existed in the plasma and was the major low-Z impurity in Alcator C-Mod. Thus, the behaviour of intrinsic molybdenum and carbon penetrating into the main plasma and their effect on the plasma must be measured and characterized during the various modes of Alcator C-Mod operation. To this end, soft X-ray and extreme ultraviolet (XUV) emission lines of charge states, ranging from hydrogen-like to helium-like lines of carbon (radius/minor radius, r/a ~ 1) at the plasma edge to potassium-like to chlorine-like (0.4

  10. 99 mTc-MIBI washout as a complementary factor in the evaluation of idiopathic dilated cardiomyopathy (IDCM) using myocardial perfusion imaging.

    PubMed

    Shiroodi, Mohammad Kazem; Shafiei, Babak; Baharfard, Nastaran; Gheidari, Mohammad Esmail; Nazari, Babak; Pirayesh, Elaheh; Kiasat, Ali; Hoseinzadeh, Samaneh; Hashemi, Abolghassem; Akbarzadeh, Mohammad Ali; Javadi, Hamid; Nabipour, Iraj; Assadi, Majid

    2012-01-01

    Rapid technetium-99m methoxyisobutylisonitrile (99mTc-MIBI) washout has been shown to occur in impaired myocardia. This study is based on the hypothesis that scintigraphy can be applied to calculate the myocardial 99mTc-MIBI washout rate (WR) to diagnose and evaluate heart failure severity and other left ventricular functional parameters, specifically in idiopathic dilated cardiomyopathy (IDCM) patients. Patients with IDCM (n = 17; 52.65 ± 11.47 years) and normal subjects (n = 6; 49.67 ± 10.15 years) were intravenously administered 99mTc-hexakis-2-methoxyisobutylisonitrile (99mTc-MIBI). Next, early and delayed planar data were acquired (at 3.5-h intervals), and electrocardiogram (ECG)-gated myocardial perfusion single photon emission computed tomography (SPECT) was performed. The 99mTc-MIBI WR was calculated using the early and delayed planar images. Left ventricular functional parameters were also analyzed using quantitative gated SPECT (QGS) data. In the target group, myocardial WRs (29.13 ± 6.68%) were significantly higher than those of control subjects (14.17 ± 3.31%; P < 0.001). The 99mTc-MIBI WR increased with increasing severity of the NYHA functional class (23.16 ± 1.72% for class I, 30.25 ± 0.95% for class II, 32.60 ± 6.73% for class III, and 37.50 ± 7.77% for class IV; P = 0.02). The WR was positively correlated with the end-diastolic volume (EDV) index (r² = 0.216; β = 0.464; P = 0.02 [ml/m²]), the end-systolic volume (ESV) index (r² = 0.234; β = 0.484; P = 0.01 [ml/m²]), the summed motion score (SMS) (r² = 0.544; β = 0.738; P = 0.00), and the summed thickening score (STS) (r² = 0.656; β = 0.810; P = 0.00); it was negatively correlated with the left ventricular ejection fraction (LVEF) (r² = 0.679; β = -0.824; P = 0.00). It can be concluded that 99mTc-MIBI scintigraphy might be a valuable molecular imaging tool for the diagnosis and evaluation of myocardial damage or dysfunction severity.

  11. Pulmonary arterial stiffening in COPD and its implications for right ventricular remodelling.

    PubMed

    Weir-McCall, Jonathan R; Liu-Shiu-Cheong, Patrick Sk; Struthers, Allan D; Lipworth, Brian J; Houston, J Graeme

    2018-02-27

    Pulmonary pulse wave velocity (PWV) allows the non-invasive measurement of pulmonary arterial stiffening, but has not previously been assessed in COPD. The aim of the current study was to assess PWV in COPD and its association with right ventricular (RV) remodelling. Fifty-eight participants with COPD underwent pulmonary function tests, a 6-min walk test and cardiac MRI, while 21 healthy controls (HCs) underwent cardiac MRI. Thirty-two COPD patients underwent a follow-up MRI to assess longitudinal changes in RV metrics. Cardiac MRI was used to quantify RV mass, volumes and PWV. Differences in continuous variables between the COPD and HC groups were tested using an independent t-test, and associations between PWV and right ventricular parameters were examined using Pearson's correlation coefficient. Those with COPD had reduced pulsatility (COPD (mean ± SD): 24.88 ± 8.84% vs. HC: 30.55 ± 11.28%, p = 0.021), shorter pulmonary acceleration time (COPD: 104.0 ± 22.9 ms vs. HC: 128.1 ± 32.2 ms, p < 0.001), higher PWV (COPD: 2.62 ± 1.29 m/s vs. HC: 1.78 ± 0.72 m/s, p = 0.001), lower RV end-diastolic volume (COPD: 53.6 ± 11.1 ml vs. HC: 59.9 ± 13.0 ml, p = 0.037) and lower RV stroke volume (COPD: 31.9 ± 6.9 ml/m² vs. HC: 37.1 ± 6.2 ml/m², p = 0.003), with no difference in mass (p = 0.53). PWV was not associated with right ventricular parameters. While pulmonary vascular remodelling is present in COPD, cardiac remodelling favours reduced filling rather than increased afterload. Treatment of obstructive lung disease may have a greater effect on cardiac function than treatment of pulmonary vascular disease in most COPD patients. KEY POINTS: • Pulmonary pulse wave velocity (PWV) is elevated in COPD. • Pulmonary PWV is not associated with right ventricular remodelling. • Right ventricular remodelling is more in keeping with reduced filling.

  12. Coronary artery bypass surgery with or without mitral valve annuloplasty in moderate functional ischemic mitral regurgitation: final results of the Randomized Ischemic Mitral Evaluation (RIME) trial.

    PubMed

    Chan, K M John; Punjabi, Prakash P; Flather, Marcus; Wage, Riccardo; Symmonds, Karen; Roussin, Isabelle; Rahman-Haley, Shelley; Pennell, Dudley J; Kilner, Philip J; Dreyfus, Gilles D; Pepper, John R

    2012-11-20

    The role of mitral valve repair (MVR) during coronary artery bypass grafting (CABG) in patients with moderate ischemic mitral regurgitation (MR) is uncertain. We conducted a randomized, controlled trial to determine whether repairing the mitral valve during CABG may improve functional capacity and left ventricular reverse remodeling compared with CABG alone. Seventy-three patients referred for CABG with moderate ischemic MR and an ejection fraction >30% were randomized to receive CABG plus MVR (34 patients) or CABG only (39 patients). The study was stopped early after review of interim data. At 1 year, there was a greater improvement in the primary end point of peak oxygen consumption in the CABG plus MVR group compared with the CABG group (3.3 mL/kg/min versus 0.8 mL/kg/min; P<0.001). There was also a greater improvement in the secondary end points in the CABG plus MVR group compared with the CABG group: left ventricular end-systolic volume index, MR volume, and plasma B-type natriuretic peptide reduction of 22.2 mL/m², 28.2 mL/beat, and 557.4 pg/mL, respectively, versus 4.4 mL/m² (P=0.002), 9.2 mL/beat (P=0.001), and 394.7 pg/mL (P=0.003), respectively. Operation duration, blood transfusion, intubation duration, and hospital stay duration were greater in the CABG plus MVR group. Deaths at 30 days and 1 year were similar in both groups: 3% and 9%, respectively, in the CABG plus MVR group, versus 3% (P=1.00) and 5% (P=0.66), respectively, in the CABG group. Adding mitral annuloplasty to CABG in patients with moderate ischemic MR may improve functional capacity, left ventricular reverse remodeling, MR severity, and B-type natriuretic peptide levels, compared with CABG alone. The impact of these benefits on longer term clinical outcomes remains to be defined.

  13. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series, which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted (TRW), nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as to the aforementioned TRW estimator.

  14. The logistic model for predicting the non-gonoactive Aedes aegypti females.

    PubMed

    Reyes-Villanueva, Filiberto; Rodríguez-Pérez, Mario A

    2004-01-01

    To estimate, using logistic regression, the likelihood that an Aedes aegypti female previously fed human blood remains non-gonoactive, in relation to body size and collection method. This study was conducted in Monterrey, Mexico, between 1994 and 1996. Ten samples of 60 female Ae. aegypti mosquitoes each were collected in three dengue-endemic areas: six of biting females, two of emerging mosquitoes, and two of indoor resting females. Gravid females, as well as those with blood in the gut, were removed. Mosquitoes were taken to the laboratory and engorged on human blood. After 48 hours, ovaries were dissected to record whether they were gonoactive or non-gonoactive. Wing length in mm was used as an indicator of body size. The logistic regression model was used to assess the likelihood of non-gonoactivity, as a binary variable, in relation to wing length and collection method. Of the 600 females, 164 (27%) remained non-gonoactive, with a wing-length range of 1.9-3.2 mm, almost equal to that of all females (1.8-3.3 mm). The logistic regression model showed a significant likelihood of a female remaining non-gonoactive (Y=1). The collection method did not influence the binary response, but there was an inverse relationship between non-gonoactivity and wing length. Dengue vector populations from Monterrey, Mexico display a wide range of body sizes. Logistic regression was a useful tool to estimate the likelihood of an engorged female remaining non-gonoactive. The necessity for a second blood meal is present in any female, but small mosquitoes are more likely to bite again within a 2-day interval in order to attain egg maturation. The English version of this paper is also available at: http://www.insp.mx/salud/index.html.
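
    A sketch of the kind of model described above, fitted with statsmodels on simulated data (the data-generating coefficients and variable names are assumptions; the study data are not reproduced here):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 600
      wing = rng.uniform(1.8, 3.3, n)                        # wing length in mm (body-size proxy)
      method = rng.choice(["biting", "emerging", "resting"], n)
      # Assumed data-generating model: smaller females more likely to stay non-gonoactive.
      p = 1.0 / (1.0 + np.exp(-(6.0 - 2.8 * wing)))
      df = pd.DataFrame({"nongono": rng.binomial(1, p), "wing": wing, "method": method})

      fit = smf.logit("nongono ~ wing + C(method)", data=df).fit(disp=False)
      print(fit.summary())       # expect a negative wing coefficient, method terms near zero
      print(fit.predict(pd.DataFrame({"wing": [2.0, 3.0], "method": ["biting", "biting"]})))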

  15. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  16. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  17. Simpson's paradox - aggregating and partitioning populations in health disparities of lung cancer patients.

    PubMed

    Fu, P; Panneerselvam, A; Clifford, B; Dowlati, A; Ma, P C; Zeng, G; Halmos, B; Leidner, R S

    2015-12-01

    It is well known that non-small cell lung cancer (NSCLC) is a heterogeneous group of diseases. Previous studies have demonstrated genetic variation among different ethnic groups in the epidermal growth factor receptor (EGFR) in NSCLC. Research by our group and others has recently shown a lower frequency of EGFR mutations in African Americans with NSCLC, as compared to their White counterparts. In this study, we use our original study data on EGFR pathway genetics in African American NSCLC as an example to illustrate that univariate analyses based on aggregation versus partition of the data lead to contradictory results, in order to emphasize the importance of controlling statistical confounding. We further investigate analytic approaches in logistic regression for data with separation, as is the case in our example data set, and apply appropriate methods to identify predictors of EGFR mutation. Our simulation shows that, with separated or nearly separated data, penalized maximum likelihood (PML) produces estimates with the smallest bias and approximately maintains the nominal significance level, with statistical power equal to or better than that of maximum likelihood and exact conditional likelihood methods. Application of the PML method to our example data set shows that race and EGFR-FISH are independently significant predictors of EGFR mutation. © The Author(s) 2011.
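
    The penalized maximum likelihood referred to above is commonly Firth's bias-reduction penalty, which keeps logistic-regression estimates finite under complete or quasi-complete separation. The sketch below maximizes a Firth-penalized log-likelihood numerically on a toy separated data set; it assumes the Firth form of the penalty and does not reproduce the study's analysis.

      import numpy as np
      from scipy.optimize import minimize

      def firth_neg_penalized_loglik(beta, X, y):
          """-( l(beta) + 0.5 * log det(X' W X) ) for logistic regression,
          with W = diag(p * (1 - p)) evaluated at beta (Firth penalty)."""
          p = 1.0 / (1.0 + np.exp(-(X @ beta)))
          loglik = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
          W = p * (1 - p)
          _, logdet = np.linalg.slogdet(X.T @ (W[:, None] * X))
          return -(loglik + 0.5 * logdet)

      # Tiny completely separated data set: ordinary ML would push the slope to
      # infinity, while the Firth penalty keeps the estimate finite.
      x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
      y = np.array([0, 0, 0, 1, 1, 1])
      X = np.column_stack([np.ones_like(x), x])

      fit = minimize(firth_neg_penalized_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
      print(fit.x)               # finite intercept and slope despite the separation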

  18. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
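
    The Richardson-Lucy update itself is short; a minimal sketch with a synthetic point source and Gaussian point-spread function is given below (scikit-image also ships an implementation in skimage.restoration.richardson_lucy). The toy image and PSF are assumptions for illustration.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=100, eps=1e-12):
          """Classic Richardson-Lucy / ML-EM deconvolution for Poisson noise."""
          estimate = np.full_like(observed, observed.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same") + eps
              estimate *= fftconvolve(observed / blurred, psf_mirror, mode="same")
          return estimate

      # Toy test: blur a point source with a Gaussian PSF, then restore it.
      truth = np.zeros((65, 65)); truth[32, 32] = 100.0
      yy, xx = np.mgrid[-8:9, -8:9]
      psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
      observed = np.clip(fftconvolve(truth, psf, mode="same"), 0, None)
      restored = richardson_lucy(observed, psf)
      print(float(observed.max()), float(restored.max()))  # peak recovers toward 100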

  19. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.

  20. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
