Sample records for comparison adjustment method

  1. Unbiased Causal Inference from an Observational Study: Results of a Within-Study Comparison

    ERIC Educational Resources Information Center

    Pohl, Steffi; Steiner, Peter M.; Eisermann, Jens; Soellner, Renate; Cook, Thomas D.

    2009-01-01

    Adjustment methods such as propensity scores and analysis of covariance are often used for estimating treatment effects in nonexperimental data. Shadish, Clark, and Steiner used a within-study comparison to test how well these adjustments work in practice. They randomly assigned participating students to a randomized or nonrandomized experiment.…

  2. Cost of Living and Taxation Adjustments in Salary Comparisons. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Zeglen, Marie E.; Tesfagiorgis, Gebre

    This study examined faculty salaries at 50 higher education institutions using methods to adjust salaries for geographic differences, cost of living, and tax burdens so that comparisons were based on real rather than nominal value of salaries. The study sample consisted of one public doctorate granting institution from each state and used salary…

  3. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

    Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNPs) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in scope. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adapted to estimate the false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome is, however, well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
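
    The Poisson de-clumping approximation itself is not reproduced here. As an illustration of why Bonferroni is conservative under linkage disequilibrium, the following minimal sketch (simulated AR(1)-correlated z-scores and hypothetical parameter values) contrasts the Bonferroni per-test threshold with a permutation-style threshold derived from the null distribution of the minimum p-value.

    ```python
    # Illustrative sketch only: Bonferroni vs. permutation (max-statistic)
    # thresholds under correlated tests, not the Poisson de-clumping method.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_snps, n_perm, alpha, rho = 2000, 500, 0.05, 0.9

    def correlated_z(rng, n, rho):
        """AR(1)-correlated z-scores mimicking LD between nearby SNPs."""
        z = np.empty(n)
        z[0] = rng.standard_normal()
        for i in range(1, n):
            z[i] = rho * z[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        return z

    # Bonferroni ignores the correlation and is therefore conservative.
    bonferroni_threshold = alpha / n_snps

    # Permutation-style threshold: alpha-quantile of the minimum p-value
    # over replicated null genomes.
    min_p = np.array([
        2 * stats.norm.sf(np.abs(correlated_z(rng, n_snps, rho))).min()
        for _ in range(n_perm)
    ])
    permutation_threshold = np.quantile(min_p, alpha)

    print(f"Bonferroni per-test threshold: {bonferroni_threshold:.2e}")
    print(f"Permutation-based threshold:   {permutation_threshold:.2e}")  # less strict
    ```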

  4. Comparison of Methods for Adjusting Incorrect Assignments of Items to Subtests: Oblique Multiple Group Method versus Confirmatory Common Factor Method

    ERIC Educational Resources Information Center

    Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.

    2009-01-01

    A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…

  5. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, so comparing two CADe systems involves multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both the FWER and power.
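
    The correlation-adjusted step-up method is specific to the study above and is not reproduced here. As a point of reference, the sketch below applies the two standard comparators it is evaluated against, Bonferroni and a Hochberg-type step-up procedure, to a hypothetical vector of p-values from sensitivity comparisons at several operating points.

    ```python
    # Sketch: Bonferroni vs. Hochberg step-up control of the FWER for a set
    # of p-values, one per operating-point comparison (hypothetical values).
    import numpy as np

    def bonferroni_reject(p, alpha=0.05):
        p = np.asarray(p)
        return p <= alpha / len(p)

    def hochberg_reject(p, alpha=0.05):
        p = np.asarray(p)
        m = len(p)
        order = np.argsort(p)          # ascending p-values
        reject = np.zeros(m, dtype=bool)
        # Find the largest i (1-based) with p_(i) <= alpha / (m - i + 1);
        # reject that hypothesis and all hypotheses with smaller p-values.
        for i in range(m, 0, -1):
            if p[order[i - 1]] <= alpha / (m - i + 1):
                reject[order[:i]] = True
                break
        return reject

    p_values = [0.004, 0.012, 0.021, 0.048, 0.20]
    print("Bonferroni:", bonferroni_reject(p_values))
    print("Hochberg:  ", hochberg_reject(p_values))
    ```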

  6. Method and apparatus for analog pulse pile-up rejection

    DOEpatents

    De Geronimo, Gianluigi

    2013-12-31

    A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
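
    The accept/reject rule quoted in the claim can be restated schematically in a few lines. The sketch below is a digital paraphrase with hypothetical variable names (the adjustable value is assumed to be the threshold-crossing time plus the delay), not the patented analog circuit.

    ```python
    # Schematic restatement of the claimed accept/reject rule; variable
    # names and the additive form of the adjustable value are assumptions.
    def accept_pulse(threshold_crossing_time: float,
                     peak_occurrence_time: float,
                     delay_value: float) -> bool:
        """Accept the pulse only if its peak occurs no later than the
        adjustable value formed from the threshold-crossing time and delay."""
        adjustable_value = threshold_crossing_time + delay_value
        return peak_occurrence_time <= adjustable_value

    print(accept_pulse(threshold_crossing_time=10.0, peak_occurrence_time=10.8, delay_value=1.0))  # True: accepted
    print(accept_pulse(threshold_crossing_time=10.0, peak_occurrence_time=11.5, delay_value=1.0))  # False: pile-up rejected
    ```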

  7. Method and apparatus for analog pulse pile-up rejection

    DOEpatents

    De Geronimo, Gianluigi

    2014-11-18

    A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.

  8. Two Models of Caregiver Strain and Bereavement Adjustment: A Comparison of Husband and Daughter Caregivers of Breast Cancer Hospice Patients

    ERIC Educational Resources Information Center

    Bernard, Lori L.; Guarnaccia, Charles A.

    2003-01-01

    Purpose: Caregiver bereavement adjustment literature suggests opposite models of impact of role strain on bereavement adjustment after care-recipient death--a Complicated Grief Model and a Relief Model. This study tests these competing models for husband and adult-daughter caregivers of breast cancer hospice patients. Design and Methods: This…

  9. (abstract) A Comparison Between Measurements of the F-layer Critical Frequency and Values Derived from the PRISM Adjustment Algorithm Applied to Total Electron Content Data in the Equatorial Region

    NASA Technical Reports Server (NTRS)

    Mannucci, A. J.; Anderson, D. N.; Abdu, A. M.

    1994-01-01

    The Parametrized Real-Time Ionosphere Specification Model (PRISM) is a global ionospheric specification model that can incorporate real-time data to compute accurate electron density profiles. Time series of computed and measured data are compared in this paper. This comparison can be used to suggest methods of optimizing the PRISM adjustment algorithm for TEC data obtained at low latitudes.

  10. Improved Conjugate Gradient Bundle Adjustment of Dunhuang Wall Painting Images

    NASA Astrophysics Data System (ADS)

    Hu, K.; Huang, X.; You, H.

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the coefficient matrix of the reduced normal equation has a banded-bordered structure, which makes solving the bundle adjustment complex. In this paper, a Conjugate Gradient Bundle Adjustment (CGBA) method is derived by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix and to accelerate the convergence of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method effectively overcomes the ill-conditioning of the normal equation and considerably improves the computational efficiency of bundle adjustment with additional parameters, while maintaining accuracy.
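
    The paper's improved incomplete Cholesky preconditioner is not reproduced here. The sketch below only shows the general shape of a preconditioned conjugate gradient solver for a normal-equation system, using a simple Jacobi (diagonal) preconditioner and a small synthetic ill-conditioned system as a stand-in for the reduced normal equation.

    ```python
    # Sketch: preconditioned conjugate gradient for N x = b. A Jacobi
    # preconditioner stands in for the improved incomplete Cholesky
    # factorization used in the paper.
    import numpy as np

    def preconditioned_cg(N, b, tol=1e-10, max_iter=1000):
        M_inv = 1.0 / np.diag(N)              # Jacobi preconditioner
        x = np.zeros_like(b)
        r = b - N @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Np = N @ p
            alpha = rz / (p @ Np)
            x += alpha * p
            r -= alpha * Np
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Small synthetic, nearly singular SPD system.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    N = A.T @ A + 1e-6 * np.eye(50)
    b = rng.standard_normal(50)
    x = preconditioned_cg(N, b)
    print("residual norm:", np.linalg.norm(N @ x - b))
    ```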

  11. Marginal Mean Weighting through Stratification: Adjustment for Selection Bias in Multilevel Data

    ERIC Educational Resources Information Center

    Hong, Guanglei

    2010-01-01

    Defining causal effects as comparisons between marginal population means, this article introduces marginal mean weighting through stratification (MMW-S) to adjust for selection bias in multilevel educational data. The article formally shows the inherent connections among the MMW-S method, propensity score stratification, and…

  12. Methods to Identify Changes in Background Water-Quality Conditions Using Dissolved-Solids Concentrations and Loads as Indicators, Arkansas River and Fountain Creek, in the Vicinity of Pueblo, Colorado

    USGS Publications Warehouse

    Ortiz, Roderick F.

    2004-01-01

    Effective management of existing water-storage capacity in the Arkansas River Basin is anticipated to help satisfy the need for water in southeastern Colorado. A strategy to meet these needs has been developed, but implementation could affect the water quality of the Arkansas River and Fountain Creek in the vicinity of Pueblo, Colorado. Because no known methods are available to determine what effects future changes in operations will have on water quality, the U.S. Geological Survey, in cooperation with the Southeastern Colorado Water Activity Enterprise, began a study in 2002 to develop methods that could identify if future water-quality conditions have changed significantly from background (preexisting) water-quality conditions. A method was developed to identify when significant departures from background (preexisting) water-quality conditions occur in the lower Arkansas River and Fountain Creek in the vicinity of Pueblo, Colorado. Additionally, the methods described in this report provide information that can be used by various water-resource agencies for an internet-based decision-support tool. Estimated dissolved-solids concentrations at five sites in the study area were evaluated to designate historical background conditions and to calculate tolerance limits used to identify statistical departures from background conditions. This method provided a tool that could be applied with defined statistical probabilities associated with specific tolerance limits. Drought data from 2002 were used to test the method. Dissolved-solids concentrations exceeded the tolerance limits at all four sites on the Arkansas River at some point during 2002. The number of exceedances was particularly evident when streamflow from Pueblo Reservoir was reduced, and return flows and ground-water influences to the river were more prevalent. No exceedances were observed at the site on Fountain Creek. These comparisons illustrated the need to adjust the concentration data to account for varying streamflow. As such, similar comparisons between flow-adjusted data were done. At the site Arkansas River near Avondale, nearly all the 2002 flow-adjusted concentration data were less than the flow-adjusted tolerance limit which illustrated the effects of using flow-adjusted concentrations. Numerous exceedances of the flow-adjusted tolerance limits, however, were observed at the sites Arkansas River above Pueblo and Arkansas River at Pueblo. These results indicated that the method was able to identify a change in the ratio of source waters under drought conditions. Additionally, tolerance limits were calculated for daily dissolved-solids load and evaluated in a similar manner. Several other mass-load approaches were presented to help identify long-term changes in water quality. These included comparisons of cumulative mass load at selected sites and comparisons of mass load contributed at the Arkansas River near Avondale site by measured and unmeasured sources.
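
    The report's exact tolerance-limit computation (with stated coverage and confidence levels) is not reproduced here. The sketch below only illustrates the general idea on hypothetical data: regress log concentration on log streamflow over a background period, then flag new observations whose flow-adjusted residuals exceed a simple upper bound.

    ```python
    # Illustrative sketch of flow adjustment and a simple upper bound on
    # background dissolved-solids concentrations; data and the bound's
    # k-factor are hypothetical, not the report's tolerance limits.
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical background period: log-log relation between streamflow
    # and dissolved-solids concentration plus noise.
    flow = rng.uniform(50, 500, size=300)                            # cfs
    conc = 4000 * flow ** -0.35 * np.exp(rng.normal(0, 0.08, 300))   # mg/L

    # Flow adjustment: regress log(conc) on log(flow) for the background period.
    slope, intercept = np.polyfit(np.log(flow), np.log(conc), 1)
    resid = np.log(conc) - (intercept + slope * np.log(flow))

    # Simple upper bound on flow-adjusted residuals (mean + k * sd); in a
    # real application k would come from a tolerance-limit table.
    k = 2.0
    upper_bound = resid.mean() + k * resid.std(ddof=1)

    def exceeds_background(new_flow, new_conc):
        new_resid = np.log(new_conc) - (intercept + slope * np.log(new_flow))
        return new_resid > upper_bound

    print(exceeds_background(new_flow=80.0, new_conc=1200.0))
    ```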

  13. Mode-Stirred Method Implementation for HIRF Susceptibility Testing and Results Comparison with Anechoic Method

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.

    2001-01-01

    This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with standard anechoic test results, are presented. The comparison shows experimentally that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with a recent statistical analysis by NIST finding that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the agreement with the anechoic results is excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.

  14. Application of adjusted data in calculating fission-product decay energies and spectra

    NASA Astrophysics Data System (ADS)

    George, D. C.; Labauve, R. J.; England, T. R.

    1982-06-01

    The code ADENA, which approximately calculates fission-product beta and gamma decay energies and spectra in 19 or fewer energy groups from a mixture of U235 and Pu239 fuels, is described. The calculation uses aggregate, adjusted data derived from a combination of several experiments and summation results based on the ENDF/B-V fission product file. The method used to obtain these adjusted data and the method used by ADENA to calculate fission-product decay energy with an absorption correction are described, and an estimate of the uncertainty of the ADENA results is given. Comparisons of this approximate method are made to experimental measurements, to the ANSI/ANS 5.1-1979 standard, and to other calculational methods. A listing of the complete computer code (ADENA) is contained in an appendix. Included in the listing are data statements containing the adjusted data in the form of parameters to be used in simple analytic functions.

  15. Comparison of a Full Food-Frequency Questionnaire with the Three-Day Unweighted Food Records in Young Polish Adult Women: Implications for Dietary Assessment

    PubMed Central

    Kowalkowska, Joanna; Slowinska, Malgorzata A.; Slowinski, Dariusz; Dlugosz, Anna; Niedzwiedzka, Ewa; Wadolowska, Lidia

    2013-01-01

    The food frequency questionnaire (FFQ) and the food record (FR) are among the most common methods used in dietary research. It is important to know whether it is possible to use both methods simultaneously in dietary assessment and to prepare a single, comprehensive interpretation. The aim of this study was to compare the energy and nutritional value of diets, determined by the FFQ and by the three-day food records of young women. The study involved 84 female students aged 21–26 years (mean of 22.2 ± 0.8 years). Completing the FFQ was preceded by obtaining unweighted food records covering three consecutive days. The energy and nutritional value of diets was assessed for both methods (FFQ-crude, FR-crude). Data obtained for FFQ-crude were adjusted with a beta-coefficient of 0.5915 (FFQ-adjusted) and by regression analysis (FFQ-regressive). The FFQ-adjusted values were calculated using the FR-crude/FFQ-crude ratio of mean daily energy intake. FFQ-regressive values were calculated for energy and each nutrient separately using a regression equation including FFQ-crude and FR-crude as covariates. For FR-crude and FFQ-crude the energy value of diets was standardized to 2000 kcal (FR-standardized, FFQ-standardized). Methods of statistical comparison included a dependent samples t-test, a chi-square test, and the Bland-Altman method. The mean energy intake in FFQ-crude was significantly higher than in FR-crude (2740.5 kcal vs. 1621.0 kcal, respectively). For FR-standardized and FFQ-standardized, significant differences were found in the mean intake of 18 out of 31 nutrients; for FR-crude and FFQ-adjusted, in 13 out of 31 nutrients; and for FR-crude and FFQ-regressive, in 11 out of 31 nutrients. The Bland-Altman method showed an overestimation of energy and nutrient intake by FFQ-crude in comparison to FR-crude, e.g., total protein was overestimated by 34.7 g/day (95% Confidence Interval, CI: −29.6, 99.0 g/day) and fat by 48.6 g/day (95% CI: −36.4, 133.6 g/day). After regressive transformation of the FFQ, the absolute difference between FFQ-regressive and FR-crude equaled 0.0 g/day and the 95% CIs were narrower (e.g., for total protein 95% CI: −32.7, 32.7 g/day, for fat 95% CI: −49.6, 49.6 g/day). In conclusion, differences in nutritional value of diets resulted from overestimation of energy intake by the FFQ in comparison to the three-day unweighted food records. Adjustment of the FFQ energy and nutrient intakes using various methods, particularly regression equations, significantly improved the agreement between the results obtained by the two methods. To obtain the most accurate results in future studies using this FFQ, energy and nutrient intake should be adjusted by the regression equations presented in this paper. PMID:23877089
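
    Two of the steps described above, the ratio-based energy adjustment and the Bland-Altman summary, are simple enough to sketch on hypothetical intake data; the regression-calibration step (FFQ-regressive) is not shown.

    ```python
    # Sketch on hypothetical data: ratio-based adjustment of FFQ intakes and
    # a Bland-Altman bias with 95% limits of agreement.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 84
    fr_energy = rng.normal(1620, 300, n)     # kcal/day, food record
    ffq_energy = rng.normal(2740, 600, n)    # kcal/day, FFQ (overestimates)

    # FFQ-adjusted: scale FFQ values by the ratio of mean FR to mean FFQ energy.
    beta = fr_energy.mean() / ffq_energy.mean()
    ffq_adjusted = ffq_energy * beta

    def bland_altman(a, b):
        """Mean difference (bias) and 95% limits of agreement between methods."""
        diff = a - b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    bias, loa = bland_altman(ffq_energy, fr_energy)
    print(f"adjustment factor beta = {beta:.4f}")
    print(f"Bland-Altman bias = {bias:.0f} kcal/day, 95% LoA = ({loa[0]:.0f}, {loa[1]:.0f})")
    ```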

  16. Inter-provider comparison of patient-reported outcomes: developing an adjustment to account for differences in patient case mix.

    PubMed

    Nuttall, David; Parkin, David; Devlin, Nancy

    2015-01-01

    This paper describes the development of a methodology for the case-mix adjustment of patient-reported outcome measures (PROMs) data permitting the comparison of outcomes between providers on a like-for-like basis. Statistical models that take account of provider-specific effects form the basis of the proposed case-mix adjustment methodology. Indirect standardisation provides a transparent means of case mix adjusting the PROMs data, which are updated on a monthly basis. Recently published PROMs data for patients undergoing unilateral knee replacement are used to estimate empirical models and to demonstrate the application of the proposed case-mix adjustment methodology in practice. The results are illustrative and are used to highlight a number of theoretical and empirical issues that warrant further exploration. For example, because of differences between PROMs instruments, case-mix adjustment methodologies may require instrument-specific approaches. A number of key assumptions are made in estimating the empirical models, which could be open to challenge. The covariates of post-operative health status could be expanded, and alternative econometric methods could be employed. © 2013 Crown copyright.
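
    The empirical models in the paper are not reproduced here. The sketch below illustrates indirect standardisation in a generic form on hypothetical data and column names: fit a case-mix model to all patients, compute each provider's expected score, and report observed minus expected plus the national average.

    ```python
    # Generic sketch of indirect standardisation of a provider-level PROM
    # score; data, covariates and model form are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 3000
    df = pd.DataFrame({
        "provider": rng.integers(0, 20, n),
        "age": rng.normal(70, 8, n),
        "preop_score": rng.normal(0.45, 0.15, n),
    })
    # Hypothetical post-operative health gain, partly explained by case mix.
    df["gain"] = (0.3 + 0.004 * (df["age"] - 70)
                  + 0.5 * df["preop_score"] + rng.normal(0, 0.1, n))

    # Case-mix model fitted to all providers' patients.
    X = sm.add_constant(df[["age", "preop_score"]])
    model = sm.OLS(df["gain"], X).fit()
    df["expected"] = model.predict(X)

    national_mean = df["gain"].mean()
    by_provider = df.groupby("provider").agg(observed=("gain", "mean"),
                                             expected=("expected", "mean"))
    # Indirectly standardised (case-mix-adjusted) score per provider.
    by_provider["adjusted"] = national_mean + by_provider["observed"] - by_provider["expected"]
    print(by_provider.head())
    ```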

  17. Are comparisons of patient experiences across hospitals fair? A study in Veterans Health Administration hospitals.

    PubMed

    Cleary, Paul D; Meterko, Mark; Wright, Steven M; Zaslavsky, Alan M

    2014-07-01

    Surveys are increasingly used to assess patient experiences with health care. Comparisons of hospital scores based on patient experience surveys should be adjusted for patient characteristics that might affect survey results. Such characteristics are commonly drawn from patient surveys that collect little, if any, clinical information. Consequently some hospitals, especially those treating particularly complex patients, have been concerned that standard adjustment methods do not adequately reflect the challenges of treating their patients. To compare scores for different types of hospitals after making adjustments using only survey-reported patient characteristics and using more complete clinical and hospital information. We used clinical and survey data from a national sample of 1858 veterans hospitalized for an initial acute myocardial infarction (AMI) in a Department of Veterans Affairs (VA) medical center during fiscal years 2003 and 2004. We used VA administrative data to characterize hospitals. The survey asked patients about their experiences with hospital care. The clinical data included 14 measures abstracted from medical records that are predictive of survival after an AMI. Comparisons of scores across hospitals adjusted only for patient-reported health status and sociodemographic characteristics were similar to those that also adjusted for patient clinical characteristics; the Spearman rank-order correlations between the 2 sets of adjusted scores were >0.97 across 9 dimensions of inpatient experience. This study did not support concerns that measures of patient care experiences are unfair because commonly used models do not adjust adequately for potentially confounding patient clinical characteristics.

  18. Advantages of high-dose rate (HDR) brachytherapy in treatment of prostate cancer

    NASA Astrophysics Data System (ADS)

    Molokov, A. A.; Vanina, E. A.; Tseluyko, S. S.

    2017-09-01

    Brachytherapy is one of the modern organ-preserving methods of radiation treatment. This article analyzes the results of prostate brachytherapy. These studies of the advantages of high-dose-rate brachytherapy lead to the conclusion that this method of radiation treatment for prostate cancer compares favorably with external-beam (remote) radiation methods and is a competitive, organ-preserving alternative to surgical treatment. The use of polyfocal transperineal biopsy during the brachytherapy session provides information on the volumetric spread of prostate cancer and allows the dosimetry plan to be adjusted to take the obtained data into account.

  19. Adjusting Quality index Log Values to Represent Local and Regional Commercial Sawlog Product Values

    Treesearch

    Orris D. McCauley; Joseph J. Mendel; Joseph J. Mendel

    1969-01-01

    The primary purpose of this paper is not only to report the results of a comparative analysis of how well the Q.I. method predicts log product values relative to commercial sawmill log output values, but also to develop a methodology that facilitates the comparison and provides the adjustments needed by the sawmill operator.

  20. Method for adjusting warp measurements to a different board dimension

    Treesearch

    William T. Simpson; John R. Shelly

    2000-01-01

    Warp in lumber is a common problem that occurs while lumber is being dried. In research or other testing programs, it is sometimes necessary to compare warp of different species or warp caused by different process variables. If lumber dimensions are not the same, then direct comparisons are not possible, and adjusting warp to a common dimension would be desirable so...

  1. Comparative Efficacy of Daratumumab Monotherapy and Pomalidomide Plus Low-Dose Dexamethasone in the Treatment of Multiple Myeloma: A Matching Adjusted Indirect Comparison.

    PubMed

    Van Sanden, Suzy; Ito, Tetsuro; Diels, Joris; Vogel, Martin; Belch, Andrew; Oriol, Albert

    2018-03-01

    Daratumumab (a human CD38-directed monoclonal antibody) and pomalidomide (an immunomodulatory drug) plus dexamethasone are both relatively new treatment options for patients with heavily pretreated multiple myeloma. A matching adjusted indirect comparison (MAIC) was used to compare absolute treatment effects of daratumumab versus pomalidomide + low-dose dexamethasone (LoDex; 40 mg) on overall survival (OS), while adjusting for differences between the trial populations. The MAIC method reduces the risk of bias associated with naïve indirect comparisons. Data from 148 patients receiving daratumumab (16 mg/kg), pooled from the GEN501 and SIRIUS studies, were compared separately with data from patients receiving pomalidomide + LoDex in the MM-003 and STRATUS studies. The MAIC-adjusted hazard ratio (HR) for OS of daratumumab versus pomalidomide + LoDex was 0.56 (95% confidence interval [CI], 0.38-0.83; p  = .0041) for MM-003 and 0.51 (95% CI, 0.37-0.69; p  < .0001) for STRATUS. The treatment benefit was even more pronounced when the daratumumab population was restricted to pomalidomide-naïve patients (MM-003: HR, 0.33; 95% CI, 0.17-0.66; p  = .0017; STRATUS: HR, 0.41; 95% CI, 0.21-0.79; p  = .0082). An additional analysis indicated a consistent trend of the OS benefit across subgroups based on M-protein level reduction (≥50%, ≥25%, and <25%). The MAIC results suggest that daratumumab improves OS compared with pomalidomide + LoDex in patients with heavily pretreated multiple myeloma. This matching adjusted indirect comparison of clinical trial data from four studies analyzes the survival outcomes of patients with heavily pretreated, relapsed/refractory multiple myeloma who received either daratumumab monotherapy or pomalidomide plus low-dose dexamethasone. Using this method, daratumumab conferred a significant overall survival benefit compared with pomalidomide plus low-dose dexamethasone. In the absence of head-to-head trials, these indirect comparisons provide useful insights to clinicians and reimbursement authorities around the relative efficacy of treatments. © AlphaMed Press 2017.
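
    The sketch below illustrates how MAIC weights are commonly estimated in the Signorovitch formulation, using hypothetical covariates rather than the trial data: centre the individual patient data at the comparator trial's published means and choose weights of the form exp(x'a) so that the weighted covariate means match the target.

    ```python
    # Sketch of MAIC weight estimation (Signorovitch-style method of
    # moments) on hypothetical IPD covariates and target means.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    n = 148
    ipd = np.column_stack([rng.normal(65, 9, n),       # age
                           rng.binomial(1, 0.4, n)])   # prior exposure (0/1)
    target_means = np.array([68.0, 0.55])              # comparator-trial aggregate means (hypothetical)

    X = ipd - target_means                             # centre at target means

    def objective(a):
        # Gradient of sum(exp(X a)) is zero exactly when the weighted means
        # of the centred covariates are zero, i.e. the means are matched.
        return np.exp(X @ a).sum()

    res = minimize(objective, x0=np.zeros(X.shape[1]), method="BFGS")
    weights = np.exp(X @ res.x)

    print("weighted covariate means:", (ipd * weights[:, None]).sum(0) / weights.sum())
    print("effective sample size:", weights.sum() ** 2 / (weights ** 2).sum())
    ```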

  2. A methodological approach to identify external factors for indicator-based risk adjustment illustrated by a cataract surgery register

    PubMed Central

    2014-01-01

    Background Risk adjustment is crucial for comparison of outcome in medical care. Knowledge of the external factors that impact measured outcome but that cannot be influenced by the physician is a prerequisite for this adjustment. To date, a universal and reproducible method for identification of the relevant external factors has not been published. The selection of external factors in current quality assurance programmes is mainly based on expert opinion. We propose and demonstrate a methodology for identification of external factors requiring risk adjustment of outcome indicators and we apply it to a cataract surgery register. Methods Defined test criteria to determine the relevance for risk adjustment are “clinical relevance” and “statistical significance”. Clinical relevance of the association is presumed when observed success rates of the indicator in the presence and absence of the external factor exceed a pre-specified range of 10%. Statistical significance of the association between the external factor and outcome indicators is assessed by univariate stratification and multivariate logistic regression adjustment. The cataract surgery register was set up as part of a German multi-centre register trial for out-patient cataract surgery in three high-volume surgical sites. A total of 14,924 patient follow-ups have been documented since 2005. Eight external factors potentially relevant for risk adjustment were related to the outcome indicators “refractive accuracy” and “visual rehabilitation” 2–5 weeks after surgery. Results The clinical relevance criterion confirmed 2 (“refractive accuracy”) and 5 (“visual rehabilitation”) external factors. The significance criterion was verified in two ways. Univariate and multivariate analyses revealed almost identical external factors: 4 were related to “refractive accuracy” and 7 (6) to “visual rehabilitation”. Two (“refractive accuracy”) and 5 (“visual rehabilitation”) factors conformed to both criteria and were therefore relevant for risk adjustment. Conclusion In a practical application, the proposed method to identify relevant external factors for risk adjustment for comparison of outcome in healthcare proved to be feasible and comprehensive. The method can also be adapted to other quality assurance programmes. However, the cut-off score for clinical relevance needs to be individually assessed when applying the proposed method to other indications or indicators. PMID:24965949
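
    The register data are not public, so the sketch below only illustrates the two test criteria on hypothetical data for a single candidate external factor: a difference in indicator success rates of more than 10 percentage points (clinical relevance) and a significant logistic-regression association (statistical significance).

    ```python
    # Sketch of the two test criteria for one candidate external factor;
    # data, effect sizes and the factor itself are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 5000
    df = pd.DataFrame({"factor": rng.binomial(1, 0.3, n)})   # e.g. a comorbidity flag
    p_success = np.where(df["factor"] == 1, 0.72, 0.88)      # indicator success probability
    df["success"] = rng.binomial(1, p_success)

    # Criterion 1: clinical relevance -- success rates differ by > 10 points.
    rate_with = df.loc[df["factor"] == 1, "success"].mean()
    rate_without = df.loc[df["factor"] == 0, "success"].mean()
    clinically_relevant = abs(rate_with - rate_without) > 0.10

    # Criterion 2: statistical significance -- logistic regression of the
    # outcome indicator on the external factor.
    X = sm.add_constant(df[["factor"]])
    fit = sm.Logit(df["success"], X).fit(disp=0)
    statistically_significant = fit.pvalues["factor"] < 0.05

    print(f"rates: {rate_with:.3f} vs {rate_without:.3f}")
    print("relevant for risk adjustment:",
          clinically_relevant and statistically_significant)
    ```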

  3. Are Comparisons of Patient Experiences Across Hospitals Fair? A Study in Veterans Health Administration Hospitals

    PubMed Central

    Cleary, Paul D.; Meterko, Mark; Wright, Steven M.; Zaslavsky, Alan M.

    2015-01-01

    Background Surveys are increasingly used to assess patient experiences with health care. Comparisons of hospital scores based on patient experience surveys should be adjusted for patient characteristics that might affect survey results. Such characteristics are commonly drawn from patient surveys that collect little, if any, clinical information. Consequently some hospitals, especially those treating particularly complex patients, have been concerned that standard adjustment methods do not adequately reflect the challenges of treating their patients. Objectives To compare scores for different types of hospitals after making adjustments using only survey-reported patient characteristics and using more complete clinical and hospital information. Research Design We used clinical and survey data from a national sample of 1858 veterans hospitalized for an initial acute myocardial infarction (AMI) in a Department of Veterans Affairs (VA) medical center during fiscal years 2003 and 2004. We used VA administrative data to characterize hospitals. The survey asked patients about their experiences with hospital care. The clinical data included 14 measures abstracted from medical records that are predictive of survival after an AMI. Results Comparisons of scores across hospitals adjusted only for patient-reported health status and sociodemographic characteristics were similar to those that also adjusted for patient clinical characteristics; the Spearman rank-order correlations between the 2 sets of adjusted scores were >0.97 across 9 dimensions of inpatient experience. Conclusions This study did not support concerns that measures of patient care experiences are unfair because commonly used models do not adjust adequately for potentially confounding patient clinical characteristics. PMID:24926709

  4. Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions.

    PubMed

    Thomas, Jaime; Avellar, Sarah A; Deke, John; Gleason, Philip

    2017-06-01

    Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods-matching and regression adjustment-to account for preexisting differences between intervention and comparison groups. A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.

  5. A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Woodell, Glenn A.; Jobson, Daniel J.

    1997-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
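
    The MSRCR itself is not reproduced here. The sketch below implements generic versions of two of the comparison techniques named in the abstract, gamma correction and histogram equalization, on a synthetic grayscale image.

    ```python
    # Sketch: gamma correction and histogram equalization, two of the
    # comparison techniques; the synthetic image is illustrative only.
    import numpy as np

    rng = np.random.default_rng(7)
    img = np.clip(rng.normal(0.25, 0.1, (64, 64)), 0.0, 1.0)   # dark, low-contrast image

    def gamma_correction(image, gamma=0.5):
        """Brightens (gamma < 1) or darkens (gamma > 1) an image in [0, 1]."""
        return np.power(image, gamma)

    def histogram_equalization(image, bins=256):
        """Map intensities through the empirical CDF to flatten the histogram."""
        hist, edges = np.histogram(image.ravel(), bins=bins, range=(0.0, 1.0))
        cdf = hist.cumsum() / hist.sum()
        return np.interp(image.ravel(), edges[:-1], cdf).reshape(image.shape)

    for name, out in [("original", img),
                      ("gamma 0.5", gamma_correction(img)),
                      ("hist. equalized", histogram_equalization(img))]:
        print(f"{name:16s} mean={out.mean():.3f} std={out.std():.3f}")
    ```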

  6. The burden of mental disorders: a comparison of methods between the Australian burden of disease studies and the Global Burden of Disease study.

    PubMed Central

    Vos, T.; Mathers, C. D.

    2000-01-01

    The national and Victorian burden of disease studies in Australia set out to examine critically the methods used in the Global Burden of Disease study to estimate the burden of mental disorders. The main differences include the use of a different set of disability weights allowing estimates in greater detail by level of severity, adjustments for comorbidity between mental disorders, a greater number of mental disorders measured, and modelling of substance use disorders, anxiety disorders and bipolar disorder as chronic conditions. Uniform age-weighting in the Australian studies produces considerably lower estimates of the burden due to mental disorders in comparison with age-weighted disability-adjusted life years. A lack of follow-up data on people with mental disorders who are identified in cross-sectional surveys poses the greatest challenge in determining the burden of mental disorders more accurately. PMID:10885161

  7. Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates. NCEE 2012-4019

    ERIC Educational Resources Information Center

    Fortson, Kenneth; Verbitsky-Savitz, Natalya; Kopa, Emma; Gleason, Philip

    2012-01-01

    Randomized controlled trials (RCTs) are widely considered to be the gold standard in evaluating the impacts of a social program. When an RCT is infeasible, researchers often estimate program impacts by comparing outcomes of program participants with those of a nonexperimental comparison group, adjusting for observable differences between the two…

  8. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview.

    PubMed

    Skinner, Jeff; Kotliarov, Yuri; Varma, Sudhir; Mine, Karina L; Yambartsev, Anatoly; Simon, Richard; Huyen, Yentram; Morgun, Andrey

    2011-07-14

    DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods and multiple comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. DAPfinder is a new user-friendly tool for reconstruction and comparison of biological networks.

  10. Impact of case-mix on comparisons of patient-reported experience in NHS acute hospital trusts in England.

    PubMed

    Raleigh, Veena; Sizmur, Steve; Tian, Yang; Thompson, James

    2015-04-01

    To examine the impact of patient-mix on National Health Service (NHS) acute hospital trust scores in two national NHS patient surveys. Secondary analysis of 2012 patient survey data for 57,915 adult inpatients at 142 NHS acute hospital trusts and 45,263 adult emergency department attendees at 146 NHS acute hospital trusts in England. Changes in trust scores for selected questions, ranks, inter-trust variance and score-based performance bands were examined using three methods: no adjustment for case-mix; the current standardization method with weighting for age, sex and, for inpatients only, admission method; and a regression model adjusting in addition for ethnicity, presence of a long-term condition, proxy response (inpatients only) and previous emergency attendances (emergency department survey only). For both surveys, all the variables examined were associated with patients' responses and affected inter-trust variance in scores, although the direction and strength of impact differed between variables. Inter-trust variance was generally greatest for the unadjusted scores and lowest for scores derived from the full regression model. Although trust scores derived from the three methods were highly correlated (Kendall's tau coefficients 0.70-0.94), up to 14% of trusts had discordant ranks when the standardization and regression methods were compared. Depending on the survey and question, up to 14 trusts changed performance bands when the regression model with its fuller case-mix adjustment was used rather than the current standardization method. More comprehensive case-mix adjustment of patient survey data than the current limited adjustment reduces performance variation between NHS acute hospital trusts and alters the comparative performance bands of some trusts. Given the use of these data for high-impact purposes such as performance assessment, regulation, commissioning, quality improvement and patient choice, a review of the long-standing method for analysing patient survey data would be timely, and could improve rigour and comparability across the NHS. Performance comparisons need to be perceived as fair and scientifically robust to maintain confidence in publicly reported data, and to support their use by both the public and the NHS. © The Author(s) 2014.

  11. A scoping review of indirect comparison methods and applications using individual patient data.

    PubMed

    Veroniki, Areti Angeliki; Straus, Sharon E; Soobiah, Charlene; Elliott, Meghan J; Tricco, Andrea C

    2016-04-27

    Several indirect comparison methods, including network meta-analyses (NMAs), using individual patient data (IPD) have been developed to synthesize evidence from a network of trials. Although IPD indirect comparisons are published with increasing frequency in health care literature, there is no guidance on selecting the appropriate methodology and on reporting the methods and results. In this paper we examine the methods and reporting of indirect comparison methods using IPD. We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until October 2014. We included published and unpublished studies reporting a method, application, or review of indirect comparisons using IPD and at least three interventions. We identified 37 papers, including a total of 33 empirical networks. Of these, only 9 (27 %) IPD-NMAs reported the existence of a study protocol, whereas 3 (9 %) studies mentioned that protocols existed without providing a reference. The 33 empirical networks included 24 (73 %) IPD-NMAs and 9 (27 %) matching adjusted indirect comparisons (MAICs). Of the 21 (64 %) networks with at least one closed loop, 19 (90 %) were IPD-NMAs, 13 (68 %) of which evaluated the prerequisite consistency assumption, and only 5 (38 %) of the 13 IPD-NMAs used statistical approaches. The median number of trials included per network was 10 (IQR 4-19) (IPD-NMA: 15 [IQR 8-20]; MAIC: 2 [IQR 3-5]), and the median number of IPD trials included in a network was 3 (IQR 1-9) (IPD-NMA: 6 [IQR 2-11]; MAIC: 2 [IQR 1-2]). Half of the networks (17; 52 %) applied Bayesian hierarchical models (14 one-stage, 1 two-stage, 1 used IPD as an informative prior, 1 unclear-stage), including either IPD alone or with aggregated data (AD). Models for dichotomous and continuous outcomes were available (IPD alone or combined with AD), as were models for time-to-event data (IPD combined with AD). One in three indirect comparison methods modeling IPD adjusted results from different trials to estimate effects as if they had come from the same, randomized, population. Key methodological and reporting elements (e.g., evaluation of consistency, existence of study protocol) were often missing from an indirect comparison paper.

  12. Methods of Costing in Universities. Brief Comparison Between the NCHEMS Approach and the Approach Used by the French-Speaking Research Group Associated with the IMHE Programme.

    ERIC Educational Resources Information Center

    Cossu, Claude

    1975-01-01

    A group of French universities modified the NCHEMS accounting method for use in a study of its budget control procedures and cost-evaluation methods. The conceptual differences in French university education (as compared to American higher education) are keyed to the adjustments in the accounting method. French universities, rather than being…

  13. Matching-adjusted indirect comparison of efficacy in patients with moderate-to-severe plaque psoriasis treated with ixekizumab vs. secukinumab.

    PubMed

    Warren, R B; Brnabic, A; Saure, D; Langley, R G; See, K; Wu, J J; Schacht, A; Mallbris, L; Nast, A

    2018-05-01

    Head-to-head randomized studies comparing ixekizumab and secukinumab in the treatment of psoriasis are not available. To assess efficacy and quality of life using matching-adjusted indirect comparisons for treatment with ixekizumab vs. secukinumab. Psoriasis Area and Severity Index (PASI) improvement of at least 75%, 90% and 100% and Dermatology Life Quality Index (DLQI) 0/1 response rates for approved dosages of ixekizumab (160 mg at Week 0, then 80 mg every two weeks for the first 12 weeks) and secukinumab (300 mg at Weeks 0, 1, 2, 3 and 4, then 300 mg every 4 weeks) treatment were compared using data from active (etanercept and ustekinumab) and placebo-controlled studies. Comparisons were made using the Bucher (BU) method and two modified versions of the Signorovitch (SG) method (SG total and SG separate). Subsequently, results based on active treatment common comparators were combined using generic inverse-variance meta-analysis. In the meta-analysis of studies with active comparators, PASI 90 response rates were 12·7% [95% confidence interval (CI) 5·5-19·8, P = 0·0005], 10·0% (95% CI 2·1-18·0, P = 0·01) and 11·2% (95% CI 3·2-19·1, P = 0·006) higher and PASI 100 response rates were 11·7% (95% CI 5·9-17·5, P < 0·001), 12·7% (95% CI 6·0-19·4, P < 0·001) and 13·1% (95% CI 6·3-19·9, P < 0·001) higher for ixekizumab compared with secukinumab using BU, SG total and SG separate methods. PASI 75 results were comparable when SG methods were used and favoured ixekizumab when the BU method was used. Week 12 DLQI 0/1 response rates did not differ significantly. Ixekizumab had higher PASI 90 and PASI 100 responses at week 12 compared with secukinumab using adjusted indirect comparisons. © 2017 The Authors. British Journal of Dermatology published by John Wiley & Sons Ltd on behalf of British Association of Dermatologists.
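
    The modified Signorovitch (MAIC-style) variants are study-specific and not reproduced here. The Bucher adjusted indirect comparison, however, has a simple closed form, sketched below on hypothetical log odds ratios for two treatments that share a common comparator.

    ```python
    # Sketch: Bucher adjusted indirect comparison of A vs. B through a
    # common comparator C; the inputs are hypothetical log odds ratios.
    import numpy as np
    from scipy import stats

    def bucher(log_or_ac, se_ac, log_or_bc, se_bc, alpha=0.05):
        """Indirect A-vs-B effect: difference of the two direct effects,
        with variance equal to the sum of their variances."""
        log_or_ab = log_or_ac - log_or_bc
        se_ab = np.sqrt(se_ac**2 + se_bc**2)
        z = stats.norm.ppf(1 - alpha / 2)
        ci = (np.exp(log_or_ab - z * se_ab), np.exp(log_or_ab + z * se_ab))
        p = 2 * stats.norm.sf(abs(log_or_ab) / se_ab)
        return np.exp(log_or_ab), ci, p

    # Hypothetical inputs: treatment A vs. common comparator, B vs. common comparator.
    or_ab, ci, p = bucher(log_or_ac=np.log(3.0), se_ac=0.15,
                          log_or_bc=np.log(2.2), se_bc=0.18)
    print(f"indirect OR = {or_ab:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.3f}")
    ```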

  14. Effects of using a posteriori methods for the conservation of integral invariants. [for weather forecasting

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.

    1988-01-01

    The nature and effect of using a posteriori adjustments to nonconservative finite-difference schemes to enforce integral invariants of the corresponding analytic system are examined. The method of a posteriori integral constraint restoration is analyzed for the case of linear advection, and the harmonic response associated with the a posteriori adjustments is examined in detail. The conservative properties of the shallow water system are reviewed, and the constraint restoration algorithm applied to the shallow water equations are described. A comparison is made between forecasts obtained using implicit and a posteriori methods for the conservation of mass, energy, and potential enstrophy in the complete nonlinear shallow-water system.
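
    The shallow-water constraint-restoration algorithm is not reproduced here. The sketch below illustrates the basic idea of an a posteriori adjustment on the simpler case of linear advection: the first-order upstream scheme preserves the mean but damps the quadratic invariant, which is restored after each step by rescaling deviations from the mean.

    ```python
    # Sketch of a posteriori restoration of a quadratic integral invariant
    # after a dissipative advection step; the full shallow-water treatment
    # of mass, energy and potential enstrophy is more involved.
    import numpy as np

    def upstream_step(q, courant):
        """First-order upstream scheme for dq/dt + u dq/dx = 0 (periodic grid)."""
        return q - courant * (q - np.roll(q, 1))

    def restore_energy(q, target_energy):
        """A posteriori adjustment: rescale deviations from the mean so that
        sum(q**2) matches its analytic (conserved) value."""
        mean = q.mean()
        dev = q - mean
        scale = np.sqrt((target_energy - q.size * mean**2) / (dev**2).sum())
        return mean + scale * dev

    nx, courant = 128, 0.4
    x = np.arange(nx) / nx
    q = 1.0 + 0.5 * np.sin(2 * np.pi * x)
    target_energy = (q**2).sum()

    for _ in range(400):
        q = restore_energy(upstream_step(q, courant), target_energy)

    print("relative energy error:", abs((q**2).sum() - target_energy) / target_energy)
    ```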

  15. Parental HIV/AIDS and Psychosocial Adjustment among Rural Chinese Children

    PubMed Central

    Fang, Xiaoyi; Stanton, Bonita; Hong, Yan; Zhang, Liying; Zhao, Guoxiang; Zhao, Junfeng; Lin, Xiuyun; Lin, Danhua

    2009-01-01

    Objective To assess the relationship between parental HIV/AIDS and psychosocial adjustment of children in rural central China. Methods Participants included 296 double AIDS orphans (children who had lost both their parents to AIDS), 459 single orphans (children who had lost one parent to AIDS), 466 vulnerable children who lived with HIV-infected parents, and 404 comparison children who did not experience HIV/AIDS-related illness and death in their families. The measures included depressive symptoms, loneliness, self-esteem, future expectations, hopefulness about the future, and perceived control over the future. Results AIDS orphans and vulnerable children consistently demonstrated poorer psychosocial adjustment than comparison children in the same community. The level of psychosocial adjustment was similar between single orphans and double orphans, but differed by care arrangement among double orphans. Conclusion The findings underscore the urgency and importance of culturally and developmentally appropriate intervention efforts targeting psychosocial problems among children affected by AIDS and call for more exploration of risk and resilience factors, both individual and contextual, affecting the psychosocial wellbeing of these children. PMID:19208701

  16. Method and apparatus for timing of laser beams in a multiple laser beam fusion system

    DOEpatents

    Eastman, Jay M.; Miller, Theodore L.

    1981-01-01

    The optical path lengths of a plurality of comparison laser beams directed to impinge upon a common target from different directions are compared to that of a master laser beam by using an optical heterodyne interferometric detection technique. The technique consists of frequency shifting the master laser beam and combining the master beam with a first one of the comparison laser beams to produce a time-varying heterodyne interference pattern which is detected by a photo-detector to produce an AC electrical signal indicative of the difference in the optical path lengths of the two beams which were combined. The optical path length of this first comparison laser beam is adjusted to compensate for the detected difference in the optical path lengths of the two beams. The optical path lengths of all of the comparison laser beams are made equal to the optical path length of the master laser beam by repeating the optical path length adjustment process for each of the comparison laser beams. In this manner, the comparison laser beams are synchronized or timed to arrive at the target within ±1×10⁻¹² second of each other.

  17. Measurement Equivalence in ADL and IADL Difficulty Across International Surveys of Aging: Findings From the HRS, SHARE, and ELSA

    PubMed Central

    Kasper, Judith D.; Brandt, Jason; Pezzin, Liliana E.

    2012-01-01

    Objective. To examine the measurement equivalence of items on disability across three international surveys of aging. Method. Data for persons aged 65 and older were drawn from the Health and Retirement Survey (HRS, n = 10,905), English Longitudinal Study of Aging (ELSA, n = 5,437), and Survey of Health, Ageing and Retirement in Europe (SHARE, n = 13,408). Differential item functioning (DIF) was assessed using item response theory (IRT) methods for activities of daily living (ADL) and instrumental activities of daily living (IADL) items. Results. HRS and SHARE exhibited measurement equivalence, but 6 of 11 items in ELSA demonstrated meaningful DIF. At the scale level, this item-level DIF affected scores reflecting greater disability. IRT methods also spread out score distributions and shifted scores higher (toward greater disability). Results for mean disability differences by demographic characteristics, using original and DIF-adjusted scores, were the same overall but differed for some subgroup comparisons involving ELSA. Discussion. Testing and adjusting for DIF is one means of minimizing measurement error in cross-national survey comparisons. IRT methods were used to evaluate potential measurement bias in disability comparisons across three international surveys of aging. The analysis also suggested DIF was mitigated for scales including both ADL and IADL and that summary indexes (counts of limitations) likely underestimate mean disability in these international populations. PMID:22156662

  18. Occurrence of Conotruncal Heart Birth Defects in Texas: A Comparison of Urban/Rural Classifications

    ERIC Educational Resources Information Center

    Langlois, Peter H.; Jandle, Leigh; Scheuerle, Angela; Horel, Scott A.; Carozza, Susan E.

    2010-01-01

    Purpose: (1) Determine if there is an association between 3 conotruncal heart birth defects and urban/rural residence of mother. (2) Compare results using different methods of measuring urban/rural status. Methods: Data were taken from the Texas Birth Defects Registry, 1999-2003. Poisson regression was used to compare crude and adjusted birth…

  19. On assessing bioequivalence and interchangeability between generics based on indirect comparisons.

    PubMed

    Zheng, Jiayin; Chow, Shein-Chung; Yuan, Mengdie

    2017-08-30

    As more and more generics become available in the marketplace, safety/efficacy concerns may arise as a result of the interchangeable use of approved generics. However, bioequivalence assessment among generics of the innovative drug product is not required for regulatory approval. In practice, approved generics are often used interchangeably without any mechanism of safety monitoring. In this article, based on indirect comparisons, we proposed several methods for assessing bioequivalence and interchangeability between generics. The applicability of the methods and the similarity assumptions were discussed, as well as the inappropriateness of directly adopting the adjusted indirect comparison to the comparison of generics. In addition, some extensions were given to address important topics in clinical trials for bioequivalence assessment, for example, multiple comparisons and simultaneously testing bioequivalence among three generics. Extensive simulation studies were conducted to investigate the performance of the proposed methods. Studies of malaria generics and HIV/AIDS generics prequalified by the WHO were used as real examples to demonstrate the use of the methods. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models

    PubMed Central

    2013-01-01

    Background Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as “practically impossible”, and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Methods Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Results Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Conclusions Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons. PMID:24168424
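
    The sketch below illustrates direct risk standardisation in a generic form on hypothetical data: individual risks from a casemix model are binned into risk categories, category-specific event rates are computed for each centre, and a common set of weights is applied.

    ```python
    # Generic sketch of direct risk standardisation (DRS); the predicted
    # risks, bins and weights are hypothetical stand-ins for a real
    # casemix model and admission data.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(8)
    n = 20000
    df = pd.DataFrame({
        "centre": rng.integers(0, 9, n),
        "risk": rng.beta(2, 18, n),           # predicted risk from a casemix model
    })
    df["event"] = rng.binomial(1, np.clip(df["risk"] * (1 + 0.02 * df["centre"]), 0, 1))

    # Risk categories and common weights (overall share of admissions per category).
    df["risk_cat"] = pd.cut(df["risk"], bins=[0, 0.05, 0.1, 0.15, 0.25, 1.0])
    weights = df.groupby("risk_cat", observed=False).size() / len(df)

    # Weighted sum of category-specific event rates for each centre.
    rates = df.groupby(["centre", "risk_cat"], observed=False)["event"].mean().unstack()
    drs = (rates * weights).sum(axis=1)
    print(drs.round(4))
    ```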

  1. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes.

    PubMed

    Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan

    2006-08-01

    We previously applied non-risk-adjusted cumulative sum methods to analyze coronary bypass outcomes. The objective of this study was to assess the incremental advantage of risk-adjusted cumulative sum methods in this setting. Prospective data were collected in 793 consecutive patients who underwent coronary bypass grafting performed by a single surgeon during a period of 5 years. The composite occurrence of an "adverse outcome" included mortality or any of 10 major complications. An institutional logistic regression model for adverse outcome was developed by using 2608 contemporaneous patients undergoing coronary bypass. The predicted risk of adverse outcome in each of the surgeon's 793 patients was then calculated. A risk-adjusted cumulative sum curve was then generated after specifying control limits and odds ratio. This risk-adjusted curve was compared with the non-risk-adjusted cumulative sum curve, and the clinical significance of this difference was assessed. The surgeon's adverse outcome rate was 96 of 793 (12.1%) versus 270 of 1815 (14.9%) for all the other institution's surgeons combined (P = .06). The non-risk-adjusted curve reached below the lower control limit, signifying excellent outcomes between cases 164 and 313, 323 and 407, and 667 and 793, but transgressed the upper limit between cases 461 and 478. The risk-adjusted cumulative sum curve never transgressed the upper control limit, signifying that cases preceding and including 461 to 478 were at an increased predicted risk. Furthermore, if the risk-adjusted cumulative sum curve was reset to zero whenever a control limit was reached, it still signaled a decrease in adverse outcome at 166, 653, and 782 cases. Risk-adjusted cumulative sum techniques provide incremental advantages over non-risk-adjusted methods by not signaling a decrement in performance when preoperative patient risk is high.
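
    The sketch below illustrates a risk-adjusted CUSUM of the type described by Steiner and colleagues, on hypothetical data: each case contributes a log-likelihood-ratio weight based on its predicted risk and observed outcome, and the chart signals when the cumulative sum crosses a control limit. The odds ratio to detect and the control limit are illustrative choices, not the study's values.

    ```python
    # Sketch: risk-adjusted CUSUM for a composite adverse outcome, using a
    # Steiner-type log-likelihood-ratio weight; risks, odds ratio and
    # control limit are hypothetical.
    import numpy as np

    def risk_adjusted_cusum(outcomes, predicted_risks, odds_ratio=2.0, limit=4.0):
        """Return the CUSUM path and the indices of cases where it signals."""
        c, path, signals = 0.0, [], []
        for t, (y, p) in enumerate(zip(outcomes, predicted_risks)):
            # Weight for testing OR = 1 (in control) vs. OR = odds_ratio.
            w = y * np.log(odds_ratio) - np.log(1 - p + odds_ratio * p)
            c = max(0.0, c + w)
            if c > limit:
                signals.append(t)
                c = 0.0                       # reset after a signal
            path.append(c)
        return np.array(path), signals

    rng = np.random.default_rng(9)
    n = 793
    risks = rng.beta(2, 14, n)                # predicted risk of adverse outcome
    outcomes = rng.binomial(1, risks)         # in-control performance
    path, signals = risk_adjusted_cusum(outcomes, risks)
    print("max CUSUM value:", path.max().round(2), "signals at cases:", signals)
    ```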

  2. Adjusting for multiple prognostic factors in the analysis of randomised trials

    PubMed Central

    2013-01-01

    Background When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method. Methods We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal numbers of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not generally need to depend on the method of randomisation used. Most methods of analysis work well with large sample sizes; however, treating strata as random effects should be the analysis method of choice with binary or time-to-event outcomes and a small sample size. PMID:23898993
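
The contrast between adjusting for the main effects of the prognostic factors and accounting for every stratum can be sketched as below. The simulated trial, variable names, and effect sizes are hypothetical; the paper's preferred small-sample option uses random effects for strata, whereas this sketch shows the simpler fixed-effects stratum term for comparison.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial: two binary prognostic factors define four strata.
rng = np.random.default_rng(0)
n = 2000
x1, x2 = rng.integers(0, 2, n), rng.integers(0, 2, n)
treat = rng.integers(0, 2, n)
stratum = 2 * x1 + x2
logit_p = -1.0 + 0.4 * treat + 0.8 * x1 + 0.5 * x2   # no interaction between factors here
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame(dict(y=y, treat=treat, x1=x1, x2=x2, stratum=stratum))

# (a) adjust for the main effects of the prognostic factors
fit_cov = smf.logit("y ~ treat + x1 + x2", data=df).fit(disp=0)
# (b) adjust for each stratum (fixed effects); a random-effects version of this
#     term is what the abstract recommends for small samples
fit_strat = smf.logit("y ~ treat + C(stratum)", data=df).fit(disp=0)
print(fit_cov.params["treat"], fit_strat.params["treat"])
```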

  3. The flaws in the detail of an observational study on transcatheter aortic valve implantation versus surgical aortic valve replacement in intermediate-risk patients.

    PubMed

    Barili, Fabio; Freemantle, Nick; Folliguet, Thierry; Muneretto, Claudio; De Bonis, Michele; Czerny, Martin; Obadia, Jean Francois; Al-Attar, Nawwar; Bonaros, Nikolaos; Kluin, Jolanda; Lorusso, Roberto; Punjabi, Prakash; Sadaba, Rafael; Suwalski, Piotr; Benedetto, Umberto; Böning, Andreas; Falk, Volkmar; Sousa-Uva, Miguel; Kappetein, Pieter A; Menicanti, Lorenzo

    2017-06-01

    The PARTNER group recently published a comparison between the latest-generation SAPIEN 3 transcatheter aortic valve implantation (TAVI) system (Edwards Lifesciences, Irvine, CA, USA) and surgical aortic valve replacement (SAVR) in intermediate-risk patients, apparently demonstrating superiority of TAVI and suggesting that TAVI might be the preferred treatment method in this risk class of patients. Nonetheless, assessment of the non-randomized methodology used in this comparison reveals challenges that should be addressed in order to establish the validity of the results. The study by Thourani and colleagues showed several major methodological concerns: suboptimal methods in the propensity score analysis, with evident misspecification of the propensity scores (PS; no adjustment for the most significantly different covariates: left ventricular ejection fraction, moderate-severe mitral regurgitation and associated procedures); use of PS quintiles rather than matching; inference on unadjusted Kaplan-Meier curves, although the authors correctly noted the need for balancing-score adjustment for confounding factors in order to obtain unbiased estimates of the treatment effect; evidence of poor fit; and lack of data on valve-related death. These methodological flaws invalidate direct comparison between treatments and cannot support the authors' conclusions that TAVI with SAPIEN 3 in intermediate-risk patients is superior to surgery and might be the preferred treatment alternative to surgery. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  4. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    PubMed

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p < 0.05). In a real dataset, SAN had the lowest SDM and Kolmogorov-Smirnov values for blood urea nitrogen, hematocrit, hemoglobin, and serum potassium, and the lowest SDM for serum creatinine (p < 0.05). Subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
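
A rough sketch of the idea behind subgroup-based normalization (not the published SAN implementation): within each subgroup defined by clinico-epidemiologic characteristics such as age band and sex, each source's values are rescaled to the pooled subgroup mean and standard deviation. The column names are assumptions made for illustration.

```python
import pandas as pd

def subgroup_normalize(df, value="result", source="site", keys=("age_band", "sex")):
    """Illustrative subgroup-based normalization (not the published SAN code).

    Within each subgroup defined by `keys`, values from each source are
    z-scored against that source's subgroup mean/SD and rescaled to the
    pooled subgroup mean/SD, so subgroup-level distributions line up.
    """
    grp_keys = list(keys)
    pooled = df.groupby(grp_keys)[value].agg(["mean", "std"]).rename(
        columns={"mean": "m_ref", "std": "s_ref"})
    by_src = df.groupby(grp_keys + [source])[value].agg(["mean", "std"]).rename(
        columns={"mean": "m_src", "std": "s_src"})
    out = df.join(pooled, on=grp_keys).join(by_src, on=grp_keys + [source])
    out[value + "_norm"] = (out[value] - out["m_src"]) / out["s_src"] * out["s_ref"] + out["m_ref"]
    return out.drop(columns=["m_ref", "s_ref", "m_src", "s_src"])
```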

  5. Flexible Endian Adjustment for Cross Architecture Binary Translation

    NASA Astrophysics Data System (ADS)

    Zhu, Tong; Liu, Bo; Guan, Haibing; Liang, Alei

    Different architectures and/or ISA (Instruction Set Architecture) representations use different data arrangement formats in memory. Therefore, the adjustment of byte packing order (endianness) is indispensable in cross-architecture binary translation if the source and target machines are of heterogeneous endianness, which may otherwise cause system failure. The issue is inconspicuous but may lead to a significant performance bottleneck. This paper investigates the key aspects of endianness and presents several solutions for endian adjustment in cross-architecture binary translation. In particular, it considers the two principal methods in this field, byte swapping and address swizzling, and compares them in our DBT (Dynamic Binary Translator), CrossBit.
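
Byte swapping, the first of the two methods named above, can be illustrated in a few lines; the 32-bit word width and the little-to-big-endian direction are arbitrary choices for the example.

```python
import struct

def swap32(value: int) -> int:
    """Byte-swap a 32-bit word, e.g. when a value laid out by a little-endian
    guest must be consumed by a big-endian host (or vice versa)."""
    return int.from_bytes(value.to_bytes(4, "little"), "big")

# 0x12345678 stored little-endian reads back as 0x78563412 on a big-endian view
assert swap32(0x12345678) == 0x78563412

# struct offers the same conversion when unpacking raw guest memory:
raw = struct.pack("<I", 0x12345678)          # guest (little-endian) layout
host_value = struct.unpack(">I", raw)[0]     # interpreted big-endian
assert host_value == 0x78563412
```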

  6. A Novel Adjustment Method for Shearer Traction Speed through Integration of T-S Cloud Inference Network and Improved PSO

    PubMed Central

    Si, Lei; Wang, Zhongbin; Yang, Yinwei

    2014-01-01

    In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on a Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built by combining a cloud model with a T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of solutions, and the flowchart of the proposed approach is designed. Furthermore, simulation examples are carried out, and the comparison results indicate that the proposed method is feasible, efficient, and outperforms the alternatives. Finally, an industrial application example from a coal mining face demonstrates the effectiveness of the proposed system. PMID:25506358

  7. Comparison of hurricane exposure methods and associations with county fetal death rates, adjusting for environmental quality

    EPA Science Inventory

    Adverse effects of hurricanes are increasing as coastal populations grow and events become more severe. Hurricane exposure during pregnancy can influence fetal death rates through mechanisms related to healthcare, infrastructure disruption, nutrition, and injury. Estimation of hu...

  8. Adjustable patella grapple versus cannulated screw and cable technique for treatment of transverse patellar fractures.

    PubMed

    Yan, Ning; Yang, Anli; Liu, Xiaodong; Cai, Feng; Liu, Liang; Chang, Shimin

    2014-03-01

    Although the cannulated screw and cable (CSC) tension band technique is an effective method for fixation of transverse patellar fractures, it has shortcomings, such as extensive soft tissue damage, osseous substance damage, and complex manipulation. We conducted a retrospective comparison of the adjustable patella grapple (APG) technique and the CSC tension band technique. We retrospectively reviewed 78 patients with transverse patellar fractures (45 in the APG group and 33 in the CSC group). Follow-up was 18 months. Comparison criteria were operation time, fracture reduction, fracture healing time, the knee injury and osteoarthritis outcome score for knee function, and complications. The APG group showed shorter operation time and equal fracture reduction, fracture healing time, and knee function compared with the CSC group. Eleven patients in the APG group experienced skin irritation generated by implants. There was no complication in the CSC group. The APG technique should be considered as an alternative method for treatment of transverse patellar fractures.

  9. Effects of delay and probability combinations on discounting in humans

    PubMed Central

    Cox, David J.; Dallery, Jesse

    2017-01-01

    To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073

  10. Exploring methods for comparing the real-world effectiveness of treatments for osteoporosis: adjusted direct comparisons versus using patients as their own control.

    PubMed

    Karlsson, Linda; Mesterton, Johan; Tepie, Maurille Feudjo; Intorcia, Michele; Overbeek, Jetty; Ström, Oskar

    2017-09-21

    Using Swedish and Dutch registry data for women initiating bisphosphonates, we evaluated two methods of comparing the real-world effectiveness of osteoporosis treatments that attempt to adjust for differences in patient baseline characteristics. Each method has advantages and disadvantages; both are potential complements to clinical trial analyses. We evaluated methods of comparing the real-world effectiveness of osteoporosis treatments that attempt to adjust for both observed and unobserved confounding. Swedish and Dutch registry data for women initiating zoledronate or oral bisphosphonates (OBPs; alendronate/risedronate) were used; the primary outcome was fracture. In adjusted direct comparisons (ADCs), regression and matching techniques were used to account for baseline differences in known risk factors for fracture (e.g., age, previous fracture, comorbidities). In an own-control analysis (OCA), for each treatment, fracture incidence in the first 90 days following treatment initiation (the baseline risk period) was compared with fracture incidence in the 1-year period starting 91 days after treatment initiation (the treatment exposure period). In total, 1196 and 149 women initiating zoledronate and 14,764 and 25,058 initiating OBPs were eligible in the Swedish and Dutch registries, respectively. Owing to the small Dutch zoledronate sample, only the Swedish data were used to compare fracture incidences between treatment groups. ADCs showed a numerically higher fracture incidence in the zoledronate than in the OBPs group (hazard ratio 1.09-1.21; not statistically significant, p > 0.05). For both treatment groups, OCA showed a higher fracture incidence in the baseline risk period than in the treatment exposure period, indicating a treatment effect. OCA showed a similar or greater effect in the zoledronate group compared with the OBPs group. ADC and OCA each possesses advantages and disadvantages. Combining both methods may provide an estimate of real-world treatment efficacy that could potentially complement clinical trial findings.
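
The own-control analysis described above can be sketched roughly as follows. The single-row-per-patient layout, the column name, and the simplified person-time calculation (no censoring) are assumptions made for illustration, not the study's actual analysis.

```python
import pandas as pd

def own_control_ratio(df, frac_day_col="days_to_fracture"):
    """Own-control analysis sketch: each patient contributes a 90-day baseline
    risk period and a subsequent 365-day treatment exposure period.

    Assumes one row per patient, with the day of first fracture after
    treatment initiation in `frac_day_col` (NaN if none) -- a hypothetical
    layout; person-time is simplified and ignores censoring.
    """
    days = df[frac_day_col]
    base_events = ((days >= 0) & (days <= 90)).sum()
    exp_events = ((days >= 91) & (days <= 455)).sum()
    base_rate = base_events / (len(df) * 90 / 365.25)   # events per person-year
    exp_rate = exp_events / (len(df) * 365 / 365.25)
    return base_rate, exp_rate, exp_rate / base_rate    # ratio < 1 suggests a treatment effect

# hypothetical data: fractures on days 12, 200 and 45; three patients fracture-free
df = pd.DataFrame({"days_to_fracture": [12, None, 200, None, 45, None]})
print(own_control_ratio(df))
```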

  11. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinct merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold for the same error. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method can also rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
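
The paper's ASA mechanism is specific to the APA equations, but the general idea of automatic step adjustment can be illustrated with a generic step-doubling integrator: a step is accepted only when a full step and two half steps agree within a tolerance, and the step size grows or shrinks accordingly. This is a generic sketch under that assumption, not the authors' algorithm.

```python
import numpy as np

def integrate_adaptive(f, y0, z0, z1, tol=1e-6, h0=1e-3):
    """Generic automatic step adjustment by step doubling (illustration only).

    f(z, y) returns dy/dz. A step is accepted when one full Euler step and two
    half steps agree to within `tol`; the step size is then grown, otherwise
    it is halved and the step retried.
    """
    z, y, h = z0, np.asarray(y0, dtype=float), h0
    while z1 - z > 1e-12:
        h = min(h, z1 - z)
        full = y + h * f(z, y)                         # one full step
        half = y + 0.5 * h * f(z, y)                   # two half steps
        two_half = half + 0.5 * h * f(z + 0.5 * h, half)
        if np.max(np.abs(two_half - full)) < tol:
            z, y = z + h, two_half                     # accept and grow the step
            h *= 1.5
        else:
            h *= 0.5                                   # reject, shrink, retry
    return y

# dy/dz = -y on [0, 1]: the result should be close to exp(-1) ≈ 0.368
print(integrate_adaptive(lambda z, y: -y, [1.0], 0.0, 1.0))
```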

  12. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    PubMed

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
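
A compact sketch of the DRS calculation as described in the abstract: predicted risks are binned into categories, category-specific event rates are computed per centre, and a weighted sum is taken. The cut points and the use of the overall admission distribution as the standard weights are illustrative assumptions; the paper defines its own categories and weights.

```python
import pandas as pd

def direct_risk_standardised_rate(risk, event, centre, bins=(0, .01, .05, .1, .2, .5, 1)):
    """Sketch of direct risk standardisation (DRS).

    risk   : predicted risk for each admission from the casemix model
    event  : observed outcome (1 = event occurred)
    centre : centre identifier for each admission
    bins   : illustrative risk-category cut points (an assumption, not the paper's)
    """
    df = pd.DataFrame({"risk": risk, "event": event, "centre": centre})
    df["cat"] = pd.cut(df["risk"], bins=list(bins), include_lowest=True)
    # standard weights: share of all admissions falling in each risk category
    weights = df["cat"].value_counts(normalize=True).sort_index()
    rates = df.groupby(["centre", "cat"], observed=False)["event"].mean().unstack("cat")
    # weighted sum of the category-specific event rates per centre
    # (categories empty for a centre are skipped in this simplified sketch)
    return (rates * weights).sum(axis=1)
```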

  13. Multi-College Bystander Intervention Evaluation for Violence Prevention

    PubMed Central

    Coker, Ann L.; Bush, Heather M.; Fisher, Bonnie S.; Swan, Suzanne C.; Williams, Corrine M.; Clear, Emily R.; DeGue, Sarah

    2015-01-01

    Introduction The 2013 Campus Sexual Violence Elimination Act requires U.S. colleges to provide bystander-based training to reduce sexual violence, but little is known about the efficacy of such programs for preventing violent behavior. This study provides the first multiyear evaluation of a bystander intervention’s campus-level impact on reducing interpersonal violence victimization and perpetration behavior on college campuses. Methods First-year students attending three similarly sized public university campuses were randomly selected and invited to complete online surveys in the spring terms of 2010–2013. On one campus, the Green Dot bystander intervention had been implemented since 2008 (Intervention, n=2,979) and two Comparison campuses had no bystander programming at baseline (Comparison, n=4,132). Data analyses conducted in 2014–2015 compared violence rates by condition over the four survey periods. Multivariable logistic regression was used to estimate violence risk on Intervention relative to Comparison campuses adjusting for demographic factors and time (2010–2013). Results Interpersonal violence victimization rates (measured in the past academic year) were 17% lower among students attending the Intervention (46.4%) relative to Comparison (55.7%) campuses (adjusted rate ratio, 0.83; 95% CI=0.79, 0.88); a similar pattern held for interpersonal violence perpetration (25.5% in Intervention; 32.2% in Comparison; adjusted rate ratio, 0.79; 95% CI=0.71, 0.86). Violence rates were lower on Intervention versus Comparison campuses for unwanted sexual victimization, sexual harassment, stalking, and psychological dating violence victimization and perpetration (p<0.01). Conclusions Green Dot may be an efficacious intervention to reduce violence at the community-level and meet Campus Sexual Violence Elimination Act bystander training requirements. PMID:26541099

  14. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information from genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference in scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934

  15. Redrawing the baseline: a method for adjusting biased historical forest estimates using a spatial and temporally representative plot network

    Treesearch

    Sara A. Goeking; Paul L. Patterson

    2015-01-01

    Users of Forest Inventory and Analysis (FIA) data sometimes compare historic and current forest inventory estimates, despite warnings that such comparisons may be tenuous. The purpose of this study was to demonstrate a method for obtaining a more accurate and representative reference dataset using data collected at co-located plots (i.e., plots that were measured...

  16. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, N.B.; Walker, J.F.

    1990-01-01

    The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  17. Analysis of Observational Studies in the Presence of Treatment Selection Bias: Effects of Invasive Cardiac Management on AMI Survival Using Propensity Score and Instrumental Variable Methods

    PubMed Central

    Stukel, Thérèse A.; Fisher, Elliott S; Wennberg, David E.; Alter, David A.; Gottlieb, Daniel J.; Vermeulen, Marian J.

    2007-01-01

    Context Comparisons of outcomes between patients treated and untreated in observational studies may be biased due to differences in patient prognosis between groups, often because of unobserved treatment selection biases. Objective To compare 4 analytic methods for removing the effects of selection bias in observational studies: multivariable model risk adjustment, propensity score risk adjustment, propensity-based matching, and instrumental variable analysis. Design, Setting, and Patients A national cohort of 122 124 patients who were elderly (aged 65–84 years), receiving Medicare, and hospitalized with acute myocardial infarction (AMI) in 1994–1995, and who were eligible for cardiac catheterization. Baseline chart reviews were taken from the Cooperative Cardiovascular Project and linked to Medicare health administrative data to provide a rich set of prognostic variables. Patients were followed up for 7 years through December 31, 2001, to assess the association between long-term survival and cardiac catheterization within 30 days of hospital admission. Main Outcome Measure Risk-adjusted relative mortality rate using each of the analytic methods. Results Patients who received cardiac catheterization (n=73 238) were younger and had lower AMI severity than those who did not. After adjustment for prognostic factors by using standard statistical risk-adjustment methods, cardiac catheterization was associated with a 50% relative decrease in mortality (for multivariable model risk adjustment: adjusted relative risk [RR], 0.51; 95% confidence interval [CI], 0.50–0.52; for propensity score risk adjustment: adjusted RR, 0.54; 95% CI, 0.53–0.55; and for propensity-based matching: adjusted RR, 0.54; 95% CI, 0.52–0.56). Using regional catheterization rate as an instrument, instrumental variable analysis showed a 16% relative decrease in mortality (adjusted RR, 0.84; 95% CI, 0.79–0.90). The survival benefits of routine invasive care from randomized clinical trials are between 8% and 21 %. Conclusions Estimates of the observational association of cardiac catheterization with long-term AMI mortality are highly sensitive to analytic method. All standard risk-adjustment methods have the same limitations regarding removal of unmeasured treatment selection biases. Compared with standard modeling, instrumental variable analysis may produce less biased estimates of treatment effects, but is more suited to answering policy questions than specific clinical questions. PMID:17227979
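
The contrast between propensity-score risk adjustment and instrumental-variable analysis can be sketched as below. The arrays, the linear-probability outcome model, and the hand-rolled two-stage least squares (point estimate only) are simplifications for illustration, not the study's actual models.

```python
import numpy as np
import statsmodels.api as sm

def ps_and_iv_estimates(X, treat, y, instrument):
    """Contrast two of the approaches above on (hypothetical) arrays.

    X          : matrix of observed prognostic covariates
    treat      : 1 if the patient received the treatment of interest
    y          : outcome (treated here as a linear-probability sketch)
    instrument : e.g. regional treatment rate, assumed unrelated to unmeasured risk
    """
    # propensity score from observed covariates, then an outcome model adjusting for it
    ps = sm.Logit(treat, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))
    ps_model = sm.OLS(y, sm.add_constant(np.column_stack([treat, ps]))).fit()
    ps_effect = ps_model.params[1]

    # two-stage least squares by hand (point estimate only; a dedicated IV routine
    # is needed for correct standard errors)
    stage1 = sm.OLS(treat, sm.add_constant(np.column_stack([instrument, X]))).fit()
    treat_hat = stage1.fittedvalues
    stage2 = sm.OLS(y, sm.add_constant(np.column_stack([treat_hat, X]))).fit()
    iv_effect = stage2.params[1]
    return ps_effect, iv_effect
```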

  18. Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus.

    PubMed

    Cohen, Mark E; Ko, Clifford Y; Bilimoria, Karl Y; Zhou, Lynn; Huffman, Kristopher; Wang, Xue; Liu, Yaoming; Kraemer, Kari; Meng, Xiangju; Merkow, Ryan; Chow, Warren; Matel, Brian; Richards, Karen; Hart, Amy J; Dimick, Justin B; Hall, Bruce L

    2013-08-01

    The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) collects detailed clinical data from participating hospitals using standardized data definitions, analyzes these data, and provides participating hospitals with reports that permit risk-adjusted comparisons with a surgical quality standard. Since its inception, the ACS NSQIP has worked to refine surgical outcomes measurements and enhance statistical methods to improve the reliability and validity of this hospital profiling. From an original focus on controlling for between-hospital differences in patient risk factors with logistic regression, ACS NSQIP has added a variable to better adjust for the complexity and risk profile of surgical procedures (procedure mix adjustment) and stabilized estimates derived from small samples by using a hierarchical model with shrinkage adjustment. New models have been developed focusing on specific surgical procedures (eg, "Procedure Targeted" models), which provide opportunities to incorporate indication and other procedure-specific variables and outcomes to improve risk adjustment. In addition, comparative benchmark reports given to participating hospitals have been expanded considerably to allow more detailed evaluations of performance. Finally, procedures have been developed to estimate surgical risk for individual patients. This article describes the development of, and justification for, these new statistical methods and reporting strategies in ACS NSQIP. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
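
The purpose of the shrinkage adjustment, pulling estimates from small hospitals toward the overall rate, can be illustrated with a simple empirical-Bayes-style calculation; this mimics the intent of the hierarchical model, not ACS NSQIP's actual implementation.

```python
import numpy as np

def shrink_rates(events, n, prior_strength=None):
    """Illustrative shrinkage adjustment: pull small-hospital event rates
    toward the overall rate, with less shrinkage for larger hospitals."""
    events, n = np.asarray(events, float), np.asarray(n, float)
    overall = events.sum() / n.sum()
    raw = events / n
    if prior_strength is None:
        # crude method-of-moments choice for the prior weight (an assumption)
        between_var = max(np.var(raw) - np.mean(raw * (1 - raw) / n), 1e-9)
        prior_strength = overall * (1 - overall) / between_var
    w = n / (n + prior_strength)            # weight on the hospital's own data
    return w * raw + (1 - w) * overall

# the 20-case hospital's high raw rate is pulled strongly toward the overall rate,
# while the larger hospitals' estimates barely move
print(shrink_rates(events=[5, 24, 100], n=[20, 300, 2000]))
```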

  19. Comparison of methods for determining volatile compounds in milk, cheese, and whey powder

    USDA-ARS?s Scientific Manuscript database

    Solid phase microextraction (SPME) and gas chromatography-mass spectrometry (GC-MS) are commonly used for qualitative and quantitative analysis of volatile compounds in various dairy products, but conditions have to be adjusted for optimal SPME release while not generating new compounds that are abs...

  20. Analysis of Developmental Data: Comparison Among Alternative Methods

    ERIC Educational Resources Information Center

    Wilson, Ronald S.

    1975-01-01

    To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)

  1. INTERANNUAL VARIATION IN METEOROLOGICALLY ADJUSTED OZONE LEVELS IN THE EASTERN UNITED STATES: A COMPARISON OF TWO APPROACHES

    EPA Science Inventory

    Assessing the influence of abatement efforts and other human activities on ozone levels is complicated by the atmosphere's changeable nature. Two statistical methods, the dynamic linear model(DLM) and the generalized additive model (GAM), are used to estimate ozone trends in the...

  2. The association between short interpregnancy interval and preterm birth in Louisiana: a comparison of methods.

    PubMed

    Howard, Elizabeth J; Harville, Emily; Kissinger, Patricia; Xiong, Xu

    2013-07-01

    There is growing interest in the application of propensity scores (PS) in epidemiologic studies, especially within the field of reproductive epidemiology. This retrospective cohort study assesses the impact of a short interpregnancy interval (IPI) on preterm birth and compares the results of the conventional logistic regression analysis with analyses utilizing a PS. The study included 96,378 singleton infants from Louisiana birth certificate data (1995-2007). Five regression models designed for methods comparison are presented. Ten percent (10.17 %) of all births were preterm; 26.83 % of births were from a short IPI. The PS-adjusted model produced a more conservative estimate of the exposure variable compared to the conventional logistic regression method (β-coefficient: 0.21 vs. 0.43), as well as a smaller standard error (0.024 vs. 0.028), odds ratio and 95 % confidence intervals [1.15 (1.09, 1.20) vs. 1.23 (1.17, 1.30)]. The inclusion of more covariate and interaction terms in the PS did not change the estimates of the exposure variable. This analysis indicates that PS-adjusted regression may be appropriate for validation of conventional methods in a large dataset with a fairly common outcome. PS's may be beneficial in producing more precise estimates, especially for models with many confounders and effect modifiers and where conventional adjustment with logistic regression is unsatisfactory. Short intervals between pregnancies are associated with preterm birth in this population, according to either technique. Birth spacing is an issue that women have some control over. Educational interventions, including birth control, should be applied during prenatal visits and following delivery.

  3. A 5-trial adjusting delay discounting task: Accurate discount rates in less than 60 seconds

    PubMed Central

    Koffarnus, Mikhail N.; Bickel, Warren K.

    2014-01-01

    Individuals who discount delayed rewards at a high rate are more likely to engage in substance abuse, overeating, or problem gambling. Findings such as these suggest the value of methods to obtain an accurate and fast measurement of discount rate that can be easily deployed in a variety of settings. In the present study, we developed and evaluated the 5-trial adjusting delay task, a novel method of obtaining a discount rate in less than one minute. We hypothesized that discount rates from the 5-trial adjusting delay task would be similar to and correlated with discount rates from a lengthier task we have used previously, and that four known effects relating to delay discounting would be replicable with this novel task. To test these hypotheses, the 5-trial adjusting delay task was administered to 111 college students six times to obtain discount rates for six different commodities, along with a lengthier adjusting amount discounting task. We found that discount rates were similar and correlated between the 5-trial adjusting delay task and the adjusting amount task. Each of the four known effects relating to delay discounting was replicated with the 5-trial adjusting delay task to varying degrees. First, discount rates were inversely correlated with amount. Second, discount rates between past and future outcomes were correlated. Third, discount rates were greater for consumable rewards than for money, although we did not control for amount in this comparison. Fourth, discount rates were lower when zero amounts opposing the chosen time point were explicitly described. Results indicate that the 5-trial adjusting delay task is a viable, rapid method to assess discount rate. PMID:24708144
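
The task logic can be sketched as a short binary search over a menu of delays; the delay menu, amounts, and the hyperbolic chooser used in the example are illustrative assumptions, not the published procedure with its fixed list of 31 delays.

```python
def five_trial_adjusting_delay(choose, delays_days=(1, 7, 30, 182, 365, 730, 1825, 3650, 9125)):
    """Sketch of a 5-trial adjusting delay task (delay menu is illustrative).

    `choose(immediate, delayed, delay_days)` returns True if the participant
    prefers the immediate amount ($500 now) over the delayed amount ($1000
    after `delay_days`). The delay is adjusted by binary search; the final
    index approximates the delay at which $1000 loses half its value (ED50).
    """
    lo, hi = 0, len(delays_days) - 1
    idx = (lo + hi) // 2
    for _ in range(5):
        if choose(500, 1000, delays_days[idx]):
            hi = idx            # immediate chosen: indifference lies at a shorter delay
        else:
            lo = idx            # delayed chosen: indifference lies at a longer delay
        idx = (lo + hi) // 2
    return delays_days[idx]     # ED50 estimate; hyperbolic discount rate k ≈ 1 / ED50

# a hypothetical hyperbolic discounter with k = 0.01 per day (true ED50 = 100 days):
print(five_trial_adjusting_delay(lambda i, d, t: i > d / (1 + 0.01 * t)))
```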

  4. A 5-trial adjusting delay discounting task: accurate discount rates in less than one minute.

    PubMed

    Koffarnus, Mikhail N; Bickel, Warren K

    2014-06-01

    Individuals who discount delayed rewards at a high rate are more likely to engage in substance abuse, overeating, or problem gambling. Such findings suggest the value of methods to obtain an accurate and fast measurement of discount rate that can be easily deployed in a variety of settings. In the present study, we developed and evaluated the 5-trial adjusting delay task, a novel method of obtaining a discount rate in less than 1 min. We hypothesized that discount rates from the 5-trial adjusting delay task would be similar and would correlate with discount rates from a lengthier task we have used previously, and that 4 known effects relating to delay discounting would be replicable with this novel task. To test these hypotheses, the 5-trial adjusting delay task was administered to 111 college students 6 times to obtain discount rates for 6 different commodities, along with a lengthier adjusting amount discounting task. We found that discount rates were similar and correlated between the 5-trial adjusting delay task and the adjusting amount task. Each of the 4 known effects relating to delay discounting was replicated with the 5-trial adjusting delay task to varying degrees. First, discount rates were inversely correlated with amount. Second, discount rates between past and future outcomes were correlated. Third, discount rates were greater for consumable rewards than for money, although we did not control for amount in this comparison. Fourth, discount rates were lower when $0 amounts opposing the chosen time point were explicitly described. Results indicate that the 5-trial adjusting delay task is a viable, rapid method to assess discount rate. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  5. Simulation-based coefficients for adjusting climate impact on energy consumption of commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Makhmalbaf, Atefe; Srivastava, Viraj

    This paper presents a new technique for, and the results of, normalizing building energy consumption to enable a fair comparison among various types of buildings located near different weather stations across the U.S. The method was developed for the U.S. Building Energy Asset Score, a whole-building energy efficiency rating system focusing on building envelope, mechanical systems, and lighting systems. The Asset Score is calculated based on simulated energy use under standard operating conditions. Existing weather normalization methods such as those based on heating and cooling degree days are not robust enough to adjust for all climatic factors such as humidity and solar radiation. In this work, over 1000 sets of climate coefficients were developed to separately adjust building heating, cooling, and fan energy use at each weather station in the United States. This paper also presents a robust, standardized weather station mapping based on climate similarity rather than choosing the closest weather station. This proposed simulation-based climate adjustment was validated through testing on several hundred thousand modeled buildings. Results indicated the developed climate coefficients can isolate and adjust for the impacts of local climate for asset rating.

  6. Effects of delay and probability combinations on discounting in humans.

    PubMed

    Cox, David J; Dallery, Jesse

    2016-10-01

    To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n=212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n=98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Baseline estimation from simultaneous satellite laser tracking

    NASA Technical Reports Server (NTRS)

    Dedes, George C.

    1987-01-01

    Simultaneous Range Differences (SRDs) to Lageos are obtained by dividing the observing stations into pairs with quasi-simultaneous observations. For each of those pairs the station with the least number of observations is identified, and at its observing epochs interpolated ranges for the alternate station are generated. The SRD observables are obtained by subtracting the actually observed laser range of the station having the least number of observations from the interpolated ranges of the alternate station. On the basis of these observables, semidynamic single-baseline solutions were performed. The aim of these solutions is to further develop and implement the SRD method in the real data environment and to assess its accuracy and its advantages and disadvantages relative to the range dynamic mode methods, when the baselines are the only parameters of interest. Baselines, using simultaneous laser range observations to Lageos, were also estimated through the purely geometric method. These baselines formed the standards of comparison in the accuracy assessment of the SRD method against the range dynamic mode methods. On the basis of this comparison it was concluded that for baselines of regional extent the SRD method is very effective, efficient, and at least as accurate as the range dynamic mode methods, even with simple orbital modeling and a limited orbit adjustment. The SRD method is insensitive to the inconsistencies affecting the terrestrial reference frame, and simultaneous adjustment of the Earth Rotation Parameters (ERPs) is not necessary.

  8. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews.

    PubMed

    Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G

    2009-04-03

    To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head-to-head comparison trials was not systematically searched for, or was not included, in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of the basic assumptions underlying indirect and mixed treatment comparisons is crucial to resolving these methodological problems.
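
The "classic frequentist" adjusted indirect comparison referred to above is usually the Bucher method, which reduces to a difference of log odds ratios with their variances added; a minimal sketch follows, with hypothetical input values.

```python
import numpy as np
from scipy import stats

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison (Bucher method): treatments A and B are each
    compared with a common control C, and the A-vs-B effect is the difference
    of the two log odds ratios, with standard errors combined in quadrature."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = np.sqrt(se_ac**2 + se_bc**2)
    z = log_or_ab / se_ab
    ci = (np.exp(log_or_ab - 1.96 * se_ab), np.exp(log_or_ab + 1.96 * se_ab))
    return np.exp(log_or_ab), ci, 2 * (1 - stats.norm.cdf(abs(z)))

# e.g. OR(A vs C) = 0.70 (SE of log OR 0.15) and OR(B vs C) = 0.85 (SE 0.20)
print(bucher_indirect(np.log(0.70), 0.15, np.log(0.85), 0.20))
```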

  9. Comparison of Adjustable and Fixed Oral Appliances for the Treatment of Obstructive Sleep Apnea

    PubMed Central

    Lettieri, Christopher J.; Paolino, Nathalie; Eliasson, Arn H.; Shah, Anita A.; Holley, Aaron B.

    2011-01-01

    Study Objectives: To compare the efficacy of adjustable and fixed oral appliances for the treatment of OSA. Methods: Retrospective review of consecutive patients with OSA treated with either adjustable or fixed oral appliances. Polysomnography was conducted before and during therapy. Effective treatment was defined as an apnea-hypopnea index (AHI) < 5 events/h or < 10 events/h with resolution of sleepiness (Epworth < 10). We compared efficacy rates between fixed and adjustable appliances and sought to identify factors associated with greater success. Results: We included 805 patients, 602 (74.8%) treated with an adjustable and 203 (25.2%) a fixed oral appliances. Among the cohort, 86.4% were men; mean age was 41.3 ± 9.2 years. Mean AHI was 30.7 ± 25.6, with 34.1% having mild (AHI 5-14.9), 29.2% moderate (AHI 15-29.9), and 36.8% severe (AHI ≥ 30) OSA. Successful therapy was significantly more common with adjustable appliances. Obstructive events were reduced to < 5/h in 56.8% with adjustable compared to 47.0% with fixed appliances (p = 0.02). Similarly, a reduction of events to < 10 with resolution of sleepiness occurred in 66.4% with adjustable appliances versus 44.9% with fixed appliances (p < 0.001). For both devices, success was more common in younger patients, with lower BMI and less severe disease. Conclusions: Adjustable devices produced greater reductions in obstructive events and were more likely to provide successful therapy, especially in moderate-severe OSA. Fixed appliances were effective in mild disease, but were less successful in those with higher AHIs. Given these findings, the baseline AHI should be considered when selecting the type of oral appliance. Citation: Lettieri CJ; Paolino N; Eliasson AH; Shah AA; Holley AB. Comparison of adjustable and fixed oral appliances for the treatment of obstructive sleep apnea. J Clin Sleep Med 2011;7(5):439-445. PMID:22003337

  10. A preliminary investigation of Stroop-related intrinsic connectivity in cocaine dependence: Associations with treatment outcomes

    PubMed Central

    Mitchell, Marci R.; Balodis, Iris M.; DeVito, Elise E.; Lacadie, Cheryl M.; Yeston, Jon; Scheinost, Dustin; Constable, R. Todd; Carroll, Kathleen M.; Potenza, Marc N.

    2013-01-01

    Background Cocaine-dependent individuals demonstrate neural and behavioral differences compared to healthy comparison subjects when performing the Stroop color-word interference test. Stroop measures also relate to treatment outcome for cocaine dependence. Intrinsic connectivity analyses assess the extent to which task-related regional brain activations are related to each other in the absence of defining a priori regions-of-interest. Objective This study examined: 1) the extent to which cocaine-dependent and non-addicted individuals differed on measures of intrinsic connectivity during fMRI Stroop performance; and 2) the relationships between fMRI Stroop intrinsic connectivity and treatment outcome in cocaine dependence. Methods Sixteen treatment-seeking cocaine-dependent patients and matched non-addicted comparison subjects completed an fMRI Stroop task. Between-group differences in intrinsic connectivity were assessed and related to self-reported and urine-toxicology-based cocaine-abstinence measures. Results Cocaine-dependent patients vs. comparison subjects showed less intrinsic connectivity in cortical and sub-cortical regions. When adjusting for individual degree of intrinsic connectivity, cocaine-dependent vs. comparison subjects showed relatively greater intrinsic connectivity in the ventral striatum, putamen, inferior frontal gyrus, anterior insula, thalamus, and substantia nigra. Non-mean-adjusted intrinsic-connectivity measures in the midbrain, thalamus, ventral striatum, substantia nigra, insula, and hippocampus negatively correlated with measures of cocaine abstinence. Conclusion The diminished intrinsic connectivity in cocaine-dependent vs. comparison subjects suggests poorer communication across brain regions during cognitive-control processes. In mean-adjusted analyses, the cocaine-dependent group displayed relatively greater Stroop-related connectivity in regions implicated in motivational processes in addictions. The relationships between treatment outcomes and connectivity in the midbrain and basal ganglia suggest that connectivity represents a potential treatment target. PMID:24200209

  11. Zolpidem Use and the Risk of Injury: A Population-Based Follow-Up Study

    PubMed Central

    Lin, Ching-Chun; Wang, Li-Hsuan; Kang, Jiunn-Horng

    2013-01-01

    Background While an association between zolpidem use and fracture and road accident was previously proposed, this study aimed to further explore the frequency and risk of a wide spectrum of injuries in subjects prescribed with zolpidem in Taiwan. Methods We identified 77,036 subjects who received Zolpidem treatment between 2005 and 2007. We randomly selected 77,036 comparison subjects who were frequency-matched based-on their demographic profiles. We individually tracked each subject for a 90-day period to identify those who subsequently suffered an injury. Cox proportional hazards regressions were performed to calculate the hazard ratio of injury between the two groups. Results The incidence rate of injury during the 90-day follow-up period for the total subjects was 18.11 (95% CI = 17.69–18.54) per 100 person-years; this was 24.35 (95% CI = 23.66–25.05) and 11.86 (95% CI = 11.39–12.36) for the study and comparison cohort, respectively. After adjusting for demographic variables, the hazard ratio (HR) of injury during the 90-day follow-up period for study subjects was 1.83 (95% CI = 1.73–1.94) that of comparison subjects. Additionally, compared to comparison subjects, the adjusted HR of injury during the 90-day follow-up period for study subjects who were prescribed Zolpidem for >30 days was as high as 2.17 (95% CI = 2.05–2.32). The adjusted HR of injury to blood vessels for study subjects was particularly high when compared to comparison subjects (HR = 6.34; 95% CI = 1.37–29.38). Conclusions We found that patients prescribed with Zolpidem were at a higher risk for a wide range of injuries. PMID:23826304

  12. Comparison of Passive Microwave-Derived Early Melt Onset Records on Arctic Sea Ice

    NASA Technical Reports Server (NTRS)

    Bliss, Angela C.; Miller, Jeffrey A.; Meier, Walter N.

    2017-01-01

    Two long records of melt onset (MO) on Arctic sea ice from passive microwave brightness temperatures (Tbs) obtained by a series of satellite-borne instruments are compared. The Passive Microwave (PMW) method and Advanced Horizontal Range Algorithm (AHRA) detect the increase in emissivity that occurs when liquid water develops around snow grains at the onset of early melting on sea ice. The timing of MO on Arctic sea ice influences the amount of solar radiation absorbed by the ice-ocean system throughout the melt season by reducing surface albedos in the early spring. This work presents a thorough comparison of these two methods for the time series of MO dates from 1979 through 2012. The methods are first compared using the published data as a baseline comparison of the publicly available data products. A second comparison is performed on adjusted MO dates we produced to remove known differences in inter-sensor calibration of Tbs and masking techniques used to develop the original MO date products. These adjustments result in a more consistent set of input Tbs for the algorithms. Tests of significance indicate that the trends in the time series of annual mean MO dates for the PMW and AHRA are statistically different for the majority of the Arctic Ocean, including the Laptev, E. Siberian, Chukchi, Beaufort, and central Arctic regions, with mean differences as large as 38.3 days in the Barents Sea. Trend agreement improves for our more consistent MO dates for nearly all regions. Mean differences remain large, primarily due to differing sensitivity of in-algorithm thresholds and larger uncertainties in thin-ice regions.

  13. An experimental detrending approach to attributing change of pan evaporation in comparison with the traditional partial differential method

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang

    2017-04-01

    In predicting how droughts and hydrological cycles will change in a warming climate, the change in atmospheric evaporative demand, measured by pan evaporation (Epan), is one crucial element to be understood. Over the last decade, the partial differential (PD) form of the PenPan equation has been the prevailing approach to attributing changes in Epan worldwide. However, the independence among climatic variables required by the PD approach cannot be met using long-term observations. Here we designed a series of numerical experiments that attribute changes in Epan over China by detrending each climatic variable in turn, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and compared it with the traditional PD method. The results show that the detrending approach is superior to the traditional PD method, not only for a complicated system with multiple variables and a mixing algorithm, such as the aerodynamic component (Ep,A) and Epan, but also for a simple case such as the radiative component (Ep,R). The major reason for this is the strong and significant inter-correlation of the input meteorological forcing. Very similar attribution results were achieved with the detrending approach and the PD method after eliminating the inter-correlation of the input through a randomization approach. The contributions of Rh and Ta to net radiation, and thus to Ep,R, which are overlooked by the PD method but successfully detected by the detrending approach, provide some explanation for these comparison results. We adopted the control run from the detrending approach and used it to adjust the PD method. This adjustment yielded a marked improvement and is thus an effective way of attributing changes in Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models to better understand and predict the water and energy cycles.

  14. Comparison of two on-orbit attitude sensor alignment methods

    NASA Technical Reports Server (NTRS)

    Krack, Kenneth; Lambertson, Michael; Markley, F. Landis

    1990-01-01

    Compared here are two methods of on-orbit alignment of vector attitude sensors. The first method uses the angular difference between simultaneous measurements from two or more sensors. These angles are compared to the angular differences between the respective reference positions of the sensed objects. The alignments of the sensors are adjusted to minimize the difference between the two sets of angles. In the second method, the sensor alignment is part of a state vector that includes the attitude. The alignments are adjusted along with the attitude to minimize all observation residuals. It is shown that the latter method can result in much less alignment uncertainty when gyroscopes are used for attitude propagation during the alignment estimation. The additional information for this increased accuracy comes from knowledge of relative attitude obtained from the spacecraft gyroscopes. The theoretical calculations of this difference in accuracy are presented. Also presented are numerical estimates of the alignment uncertainties of the fixed-head star trackers on the Extreme Ultraviolet Explorer spacecraft using both methods.

  15. A Simple Illustration for the Need of Multiple Comparison Procedures

    ERIC Educational Resources Information Center

    Carter, Rickey E.

    2010-01-01

    Statistical adjustments to accommodate multiple comparisons are routinely covered in introductory statistical courses. The fundamental rationale for such adjustments, however, may not be readily understood. This article presents a simple illustration to help remedy this.
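
The rationale is easy to show numerically: with many unadjusted tests the chance of at least one false positive grows quickly, and a Bonferroni-style adjustment restores it to roughly the nominal level. The numbers below are a worked illustration, not taken from the article.

```python
# Family-wise error with m independent tests at alpha = 0.05,
# and the Bonferroni correction that motivates such adjustments.
alpha, m = 0.05, 20
p_any_false_positive = 1 - (1 - alpha) ** m            # ≈ 0.64 with no adjustment
bonferroni_alpha = alpha / m                            # 0.0025 per test
p_any_after = 1 - (1 - bonferroni_alpha) ** m           # ≈ 0.049 after adjustment
print(p_any_false_positive, bonferroni_alpha, p_any_after)
```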

  16. Estimating pregnancy-related mortality from census data: experience in Latin America

    PubMed Central

    Queiroz, Bernardo L; Wong, Laura; Plata, Jorge; Del Popolo, Fabiana; Rosales, Jimmy; Stanton, Cynthia

    2009-01-01

    Abstract Objective To assess the feasibility of measuring maternal mortality in countries lacking accurate birth and death registration through national population censuses by a detailed evaluation of such data for three Latin American countries. Methods We used established demographic techniques, including the general growth balance method, to evaluate the completeness and coverage of the household death data obtained through population censuses. We also compared parity to cumulative fertility data to evaluate the coverage of recent household births. After evaluating the data and adjusting it as necessary, we calculated pregnancy-related mortality ratios (PRMRs) per 100 000 live births and used them to estimate maternal mortality. Findings The PRMRs for Honduras (2001), Nicaragua (2005) and Paraguay (2002) were 168, 95 and 178 per 100 000 live births, respectively. Surprisingly, evaluation of the data for Nicaragua and Paraguay showed overreporting of adult deaths, so a downward adjustment of 20% to 30% was required. In Honduras, the number of adult female deaths required substantial upward adjustment. The number of live births needed minimal adjustment. The adjusted PRMR estimates are broadly consistent with existing estimates of maternal mortality from various data sources, though the comparison varies by source. Conclusion Census data can be used to measure pregnancy-related mortality as a proxy for maternal mortality in countries with poor death registration. However, because our data were obtained from countries with reasonably good statistical systems and literate populations, we cannot be certain the methods employed in the study will be equally useful in more challenging environments. Our data evaluation and adjustment methods worked, but with considerable uncertainty. Ways of quantifying this uncertainty are needed. PMID:19551237

  17. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    PubMed

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, whether observed directly or obtained by categorizing an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed-form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed-form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
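
    The redistribution idea can be illustrated with a short sketch: missing outcome counts are assigned to the favorable and unfavorable categories under a range of assumed splits, and the adjusted proportions are recomputed for each split. Only the general idea is shown, with hypothetical counts; the paper's closed-form covariance estimates, Mantel-Haenszel stratification, and randomization-based covariate adjustment are not reproduced.

```python
# Sketch of redistributing missing dichotomous outcomes between favorable and
# unfavorable categories under varying MNAR assumptions (hypothetical counts).
# The paper's closed-form covariance and randomization-based covariate
# adjustment are not reproduced here.

def adjusted_proportion(n_favorable, n_unfavorable, n_missing, p_missing_favorable):
    """Proportion favorable after assigning a fraction p_missing_favorable of
    the missing counts to the favorable outcome."""
    favorable = n_favorable + p_missing_favorable * n_missing
    total = n_favorable + n_unfavorable + n_missing
    return favorable / total

# Sensitivity grid: assume 0% to 100% of the missing outcomes were favorable.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    treatment = adjusted_proportion(60, 30, 10, p)
    control = adjusted_proportion(50, 40, 10, p)
    print(f"p={p:.2f}  treatment={treatment:.3f}  control={control:.3f}  "
          f"difference={treatment - control:.3f}")
```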

  18. Comparison of the antibacterial activity of chelating agents using the agar diffusion method

    USDA-ARS?s Scientific Manuscript database

    The agar diffusion assay was used to examine antibacterial activity of 2 metal chelators. Concentrations of 0 to 40 mM of ethylenediaminetetraacetic acid (EDTA) and ethylenediamine-N,N’-disuccinic acid (EDDS) were prepared in 1.0 M potassium hydroxide (KOH). The pH of the solutions was adjusted to 1...

  19. A Cross-Cultural Comparison of Belgian and Vietnamese Children's Social Competence and Behavior

    ERIC Educational Resources Information Center

    Roskam, Isabelle; Hoang, Thi Vân; Schelstraete, Marie-Anne

    2017-01-01

    Children's social competence and behavioral adjustment are key issues for child development, education, and clinical research. Cross-cultural analyses are necessary to provide relevant methods of assessing them for cross-cultural research. The aim of the current study was to contribute to this important line of research by validating the 3-factor…

  20. Drug affordability-potential tool for comparing illicit drug markets.

    PubMed

    Groshkova, Teodora; Cunningham, Andrew; Royuela, Luis; Singleton, Nicola; Saggers, Tony; Sedefov, Roumen

    2018-06-01

    The importance of illicit drug price data and making appropriate adjustments for purity has been repeatedly highlighted for understanding illicit drug markets. The European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) has been collecting retail price data for a number of drug types alongside drug-specific purity information for over 15 years. While these data are useful for a number of monitoring and analytical purposes, they are not without their limitations and there are circumstances where additional adjustment needs to be considered. This paper reviews some conceptual issues and measurement challenges relevant to the interpretation of price data. It also highlights the issues with between-country comparisons of drug prices and introduces the concept of affordability of drugs, going beyond purity-adjustment to account for varying national economies. Based on a 2015 European data set of price and purity data across the heroin and cocaine retail markets, the paper demonstrates a new model for drug market comparative analysis; calculation of drug affordability is achieved by applying to purity-adjusted prices 2015 Price Level Indices (PLI, Eurostat). Available data allowed retail heroin and cocaine market comparison for 27 European countries. The lowest and highest unadjusted prices per gram were observed for heroin: in Estonia, Belgium, Greece and Bulgaria (lowest) and Finland, Ireland, Sweden and Latvia (highest); for cocaine: the Netherlands, Belgium and the United Kingdom (lowest) and Turkey, Finland, Estonia and Romania (highest). The affordability per gram of heroin and cocaine when taking into account adjustment for both purity and economy demonstrates different patterns. It is argued that purity-adjusted price alone provides an incomplete comparison of retail price across countries. The proposed new method takes account of the differing economic conditions within European countries, thus providing a more sophisticated tool for cross-national comparisons of retail drug markets in Europe. Future work will need to examine other potential uses of the drug affordability tool. The limitations of this measure reflect primarily the limitations of the constituent data; in addition to issues inherent in collecting accurate data on illicit markets, analysis that relies on data collected from multiple countries is susceptible to discrepancies in data collection practices from country to country. Copyright © 2018 Elsevier B.V. All rights reserved.
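
    A minimal sketch of the two adjustment steps follows, assuming hypothetical prices, purities, and price level indices (PLIs), and assuming that affordability is obtained by deflating the purity-adjusted price by the national PLI (EU average = 100); the exact EMCDDA formula may differ.

```python
# Sketch of the two adjustment steps with hypothetical prices, purities, and
# price level indices (PLIs, EU average = 100). Deflating the purity-adjusted
# price by the PLI is an assumed reading of the affordability calculation.

def purity_adjusted_price(price_per_gram, purity):
    """Price per gram of 100%-pure drug (purity expressed as a fraction)."""
    return price_per_gram / purity

def affordability_adjusted_price(price_per_gram, purity, pli):
    """Purity-adjusted price further deflated by the national price level index."""
    return purity_adjusted_price(price_per_gram, purity) / (pli / 100.0)

# Hypothetical retail data for two countries: price EUR/g, purity, PLI.
countries = {"country A": (40.0, 0.20, 70.0), "country B": (60.0, 0.40, 130.0)}
for name, (price, purity, pli) in countries.items():
    print(name,
          round(purity_adjusted_price(price, purity), 1),
          round(affordability_adjusted_price(price, purity, pli), 1))
```

    In this invented example the cheaper-looking country ends up with the less affordable (higher) adjusted price once purity and the national price level are both taken into account, which is the kind of reversal the affordability measure is designed to expose.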

  1. Pixel Color Clustering of Multi-Temporally Acquired Digital Photographs of a Rice Canopy by Luminosity-Normalization and Pseudo-Red-Green-Blue Color Imaging

    PubMed Central

    Doi, Ryoichi; Arif, Chusnul

    2014-01-01

    Red-green-blue (RGB) channels of RGB digital photographs were loaded with luminosity-adjusted R, G, and completely white grayscale images, respectively (RGwhtB method), or R, G, and R + G (RGB yellow) grayscale images, respectively (RGrgbyB method), to adjust the brightness of the entire area of multi-temporally acquired color digital photographs of a rice canopy. From the RGwhtB or RGrgbyB pseudocolor image, cyan, magenta, CMYK yellow, black, L*, a*, and b* grayscale images were prepared. Using these grayscale images and R, G, and RGB yellow grayscale images, the luminosity-adjusted pixels of the canopy photographs were statistically clustered. With the RGrgbyB and the RGwhtB methods, seven and five major color clusters were given, respectively. The RGrgbyB method showed clear differences among three rice growth stages, and the vegetative stage was further divided into two substages. The RGwhtB method could not clearly discriminate between the second vegetative and midseason stages. The relative advantages of the RGrgbyB method were attributed to the R, G, B, magenta, yellow, L*, and a* grayscale images that contained richer information to show the colorimetrical differences among objects than those of the RGwhtB method. The comparison of rice canopy colors at different time points was enabled by the pseudocolor imaging method. PMID:25302325

  2. A comparison of methods of estimating potential evapotranspiration from climatological data in arid and subhumid environments

    USGS Publications Warehouse

    Cruff, R.W.; Thompson, T.H.

    1967-01-01

    This study compared potential evapotranspiration, computed from climatological data by each of six empirical methods, with pan evaporation adjusted to equivalent lake evaporation by regional coefficients. The six methods tested were the Thornthwaite, U.S. Weather Bureau (a modification of the Penman method), Lowry-Johnson, Blaney-Criddle, Lane, and Hamon methods. The test was limited to 25 sites in the arid and subhumid parts of Arizona, California, and Nevada, where pan evaporation and concurrent climatological data were available. However, some of the sites lacked complete climatological data for the application of all six methods. Average values of adjusted pan evaporation and computed potential evapotranspiration were compared for two periods: the calendar year and the 6-month period from May 1 through October 31. The 25 sites sampled a wide range of climatic conditions. Ten sites (group 1) were in a highly arid environment and four (group 2) were in an arid environment that was modified by extensive irrigation. The remaining 11 sites (group 3) were in a subhumid environment. Only the Weather Bureau method gave estimates of potential evapotranspiration that closely agreed with the adjusted pan evaporation at all sites where the method was used. However, lack of climatological data restricted the use of the Weather Bureau method to seven sites. Results obtained by use of the Thornthwaite, Lowry-Johnson, and Hamon methods were consistently low. Results obtained by use of the Lane method agreed with adjusted pan evaporation at the group 1 sites but were consistently high at the group 2 and 3 sites. During the analysis it became apparent that adjusted pan evaporation in an arid environment (group 1 sites) was a spurious standard for evaluating the reliability of the methods that were tested. Group 1 data were accordingly not considered when making conclusions as to which of the six methods tested was best. The results of this study for group 2 and 3 data indicated that the Blaney-Criddle method, which uses climatological data that can be readily obtained or deduced, was the most practical of the six methods for estimating potential evapotranspiration. At all 15 sites in the two environments, potential evapotranspiration computed by the Blaney-Criddle method checked the adjusted pan evaporation within ±22 percent. This percentage range is generally considered to be the range of reliability for estimating lake evaporation from evaporation pans.

  3. Cost-effectiveness of a new short-stay unit to "rule out" acute myocardial infarction in low risk patients.

    PubMed

    Gaspoz, J M; Lee, T H; Weinstein, M C; Cook, E F; Goldman, P; Komaroff, A L; Goldman, L

    1994-11-01

    This study attempted to determine the safety and costs of a new short-stay unit for low risk patients who may be admitted to a hospital to rule out myocardial infarction or ischemia. One strategy to reduce the costs of ruling out acute myocardial infarction in low risk patients is to develop alternatives to coronary care units. The short-term and 6-month clinical outcomes and costs for 592 patients admitted to a short-stay coronary observation unit at Brigham and Women's Hospital with a low (< or = 10%) probability of acute myocardial infarction were compared with those for 924 consecutive comparison patients who were eligible for the same unit but were admitted to other hospital settings or discharged home. Actual costs were calculated using detailed cost-accounting methods that incorporated nursing intensity weights. The rate of major complications, recurrent myocardial infarction or cardiac death during 6 months after the initial presentation of the 592 patients admitted to the coronary observation unit was similar to that of the 924 comparison patients before and after adjustment for clinical factors influencing triage and initial diagnoses (adjusted relative risk 0.86, 95% confidence interval 0.49 to 1.53). Their median total costs (25th, 75th percentile) at 6 months ($1,927; 1,455, 3,650) were significantly lower than for comparison patients admitted to the wards ($4,712; 1,868, 11,187), to stepdown or intermediate care units ($4,031; 2,069, 9,169) or to the coronary care unit ($9,201; 3,171, 20,011) but were higher than for comparison patients discharged home from the emergency department ($403; 403, 927) before and after the same adjustments (all adjusted p < 0.0001). These data suggest that the coronary observation unit may be a safe and cost-saving alternative to current triage strategies for patients with a low risk of acute myocardial infarction admitted from the emergency department. Its replication in other hospitals should be tested.

  4. The efficacy of inhibiting tumour necrosis factor alpha and interleukin 1 in patients with rheumatoid arthritis: a meta-analysis and adjusted indirect comparisons.

    PubMed

    Nixon, R; Bansback, N; Brennan, A

    2007-07-01

    New treatments that inhibit the cytokines tumour necrosis factor alpha (TNFalpha) and interleukin 1 (IL-1) in the treatment of rheumatoid arthritis have proven clinical effect against placebo and methotrexate (MTX) in several clinical trials in early and late-stage disease and different severity groups. Since there are no head-to-head randomized controlled trials directly comparing the currently available treatments, etanercept, adalimumab, infliximab or anakinra, we perform a meta-analysis that adjusts for differences between study characteristics, and allows indirect comparisons between treatments. Thirteen trials of cytokine antagonists were included from a systematic review of the literature. They reported the primary outcome of American College of Rheumatology (ACR) response criteria at 6 months or beyond. Meta-analytical methods are used to quantify relative treatment effects, using the log odds ratio of an ACR20 or ACR50 response at 6 months, whilst adjusting for study-level variables. In each of the trials, cytokine treatment was efficacious in comparison with placebo or MTX. For each treatment, the inclusion of MTX in combination improved the response. After adjustment for study-level variables, we found TNFalpha antagonists to be more efficacious compared with anakinra (P < 0.05). Indirect comparisons between the three TNFalpha antagonists indicated no difference in efficacy. Sensitivity analysis using a different statistical model structure confirmed these results. When the outcome of interest is the probability of an ACR20 or ACR50 response at 6 months we found: (i) treatment with the IL-1 antagonist anakinra is better than placebo; (ii) for each treatment, the use of combination MTX improves the probability of response; (iii) treatment with any of the TNFalpha antagonists is better than with the IL-1 antagonist anakinra; and (iv) all drugs in the TNFalpha antagonist class are no different from each other.
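
    The core of an adjusted indirect comparison can be sketched as a Bucher-style contrast on the log odds-ratio scale, using a common comparator arm; the standard error combines the uncertainty of both direct comparisons. The inputs below are hypothetical, and the paper's meta-regression adjustment for study-level covariates is not reproduced.

```python
# Sketch of a Bucher-style adjusted indirect comparison on the log odds-ratio
# scale via a common comparator; inputs are hypothetical, and the paper's
# meta-regression adjustment for study-level covariates is not reproduced.
import math

def indirect_comparison(log_or_a_vs_c, se_a, log_or_b_vs_c, se_b):
    """Indirect OR of A vs B via comparator C, with a 95% confidence interval."""
    log_or_ab = log_or_a_vs_c - log_or_b_vs_c
    se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
    lo, hi = log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab
    return math.exp(log_or_ab), (math.exp(lo), math.exp(hi))

# Hypothetical pooled ACR20 results: TNF-alpha antagonist vs placebo and
# anakinra vs placebo, each with its standard error on the log OR scale.
or_ab, ci = indirect_comparison(math.log(3.5), 0.15, math.log(2.0), 0.20)
print(round(or_ab, 2), tuple(round(x, 2) for x in ci))
```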

  5. A comparison of methods for adjusting biomarkers of iron, zinc, and selenium status for the effect of inflammation in an older population: a case for interleukin 6.

    PubMed

    MacDonell, Sue O; Miller, Jody C; Harper, Michelle J; Reid, Malcolm R; Haszard, Jillian J; Gibson, Rosalind S; Houghton, Lisa A

    2018-05-14

    Older people are at risk of micronutrient deficiencies, which can be under- or overestimated in the presence of inflammation. Several methods have been proposed to adjust for the effect of inflammation; however, to our knowledge, none have been investigated in older adults in whom chronic inflammation is common. We investigated the influence of various inflammation-adjustment methods on micronutrient biomarkers associated with anemia in older people living in aged-care facilities in New Zealand. Blood samples were collected from 289 New Zealand aged-care residents aged >65 y. Serum ferritin, soluble transferrin receptor (sTfR), total body iron (TBI), plasma zinc, and selenium as well as the inflammatory markers high-sensitivity C-reactive protein (CRP), α1-acid glycoprotein (AGP), and interleukin 6 (IL-6) were measured. Four adjustment methods were applied to micronutrient concentrations: 1) internal correction factors based on stages of inflammation defined by CRP and AGP, 2) external correction factors derived from the literature, 3) a regression correction model in which reference CRP and AGP were set to the maximum of the lowest decile, and 4) a regression correction model in which reference IL-6 was set to the maximum of the lowest decile. Forty percent of participants had elevated concentrations of CRP, AGP, or both, and 37% of participants had higher than normal concentrations of IL-6. Adjusted geometric mean values for serum ferritin, sTfR, and TBI were significantly lower (P < 0.001), and plasma zinc and selenium were significantly higher (P < 0.001), than the unadjusted values regardless of the method applied. The greatest inflammation adjustment was observed with the regression correction that used IL-6. Subsequently, the prevalence of zinc and selenium deficiency decreased (-13% and -14%, respectively; P < 0.001), whereas iron deficiency remained unaffected. Adjustment for inflammation should be considered when evaluating micronutrient status in this aging population group; however, the approaches used require further investigation, particularly the influence of adjustment for IL-6.
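
    The regression-correction approach (methods 3 and 4 above) can be sketched on simulated data: regress the log biomarker on the log inflammatory marker and subtract the estimated inflammation effect above a reference value set at the maximum of the lowest decile. The log-log linear form and the simulated values are assumptions for illustration, not the paper's exact specification.

```python
# Minimal sketch of a regression-correction adjustment on simulated data:
# regress the log biomarker on the log inflammatory marker and subtract the
# estimated inflammation effect above a reference value (maximum of the
# lowest decile). The log-log linear form is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 289
log_il6 = rng.normal(0.5, 0.8, n)                             # simulated ln(IL-6)
log_ferritin = 4.0 + 0.35 * log_il6 + rng.normal(0, 0.5, n)   # inflammation inflates ferritin

ref = np.quantile(log_il6, 0.10)                  # maximum of the lowest decile
slope, intercept = np.polyfit(log_il6, log_ferritin, 1)

# Subtract the estimated inflammation effect only above the reference level.
adjusted = log_ferritin - slope * np.clip(log_il6 - ref, 0, None)

print("unadjusted geometric mean:", round(float(np.exp(log_ferritin.mean())), 1))
print("adjusted geometric mean:  ", round(float(np.exp(adjusted.mean())), 1))
```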

  6. Risk adjustment in the American College of Surgeons National Surgical Quality Improvement Program: a comparison of logistic versus hierarchical modeling.

    PubMed

    Cohen, Mark E; Dimick, Justin B; Bilimoria, Karl Y; Ko, Clifford Y; Richards, Karen; Hall, Bruce Lee

    2009-12-01

    Although logistic regression has commonly been used to adjust for risk differences in patient and case mix to permit quality comparisons across hospitals, hierarchical modeling has been advocated as the preferred methodology, because it accounts for clustering of patients within hospitals. It is unclear whether hierarchical models would yield important differences in quality assessments compared with logistic models when applied to American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) data. Our objective was to evaluate differences in logistic versus hierarchical modeling for identifying hospitals with outlying outcomes in the ACS-NSQIP. Data from ACS-NSQIP patients who underwent colorectal operations in 2008 at hospitals that reported at least 100 operations were used to generate logistic and hierarchical prediction models for 30-day morbidity and mortality. Differences in risk-adjusted performance (ratio of observed-to-expected events) and outlier detections from the two models were compared. Logistic and hierarchical models identified the same 25 hospitals as morbidity outliers (14 low and 11 high outliers), but the hierarchical model identified 2 additional high outliers. Both models identified the same eight hospitals as mortality outliers (five low and three high outliers). The values of observed-to-expected events ratios and p values from the two models were highly correlated. Results were similar when data were permitted from hospitals providing < 100 patients. When applied to ACS-NSQIP data, logistic and hierarchical models provided nearly identical results with respect to identification of hospitals' observed-to-expected events ratio outliers. As hierarchical models are prone to implementation problems, logistic regression will remain an accurate and efficient method for performing risk adjustment of hospital quality comparisons.
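
    A minimal sketch of the logistic (non-hierarchical) side of this comparison: fit a patient-level risk model, sum predicted probabilities by hospital to obtain expected events, and flag hospitals whose observed-to-expected (O/E) ratio is extreme. The data, cutoffs, and hospital count below are simulated and arbitrary, not ACS-NSQIP values.

```python
# Minimal sketch of risk-adjusted observed-to-expected (O/E) outlier screening
# with a (non-hierarchical) logistic model; data are simulated, not ACS-NSQIP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 3))                       # patient-level risk factors
hospital = rng.integers(0, 20, n)                 # 20 hospitals
logit = -2.0 + X @ np.array([0.8, 0.5, -0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 30-day morbidity indicator

risk_model = LogisticRegression().fit(X, y)       # risk model without hospital terms
expected = risk_model.predict_proba(X)[:, 1]

# Flag hospitals with extreme O/E ratios (arbitrary cutoffs; no hospital effect
# was simulated, so few or no outliers are expected here).
for h in range(20):
    mask = hospital == h
    oe = y[mask].sum() / expected[mask].sum()
    if oe < 0.5 or oe > 1.5:
        print(f"hospital {h}: O/E = {oe:.2f} (potential outlier)")
```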

  7. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification.

    PubMed

    Jiang, Wenyu; Simon, Richard

    2007-12-20

    This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ln. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate for the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
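
    The repeated leave-one-out bootstrap can be sketched roughly as follows: each specimen is predicted by classifiers trained on bootstrap learning sets drawn from the remaining specimens, and the errors are averaged. The classifier, learning-set size, and data below are arbitrary choices for illustration, and the ABS learning-curve step is omitted.

```python
# Rough sketch of a repeated leave-one-out bootstrap (RLOOB) error estimate:
# each specimen is predicted by classifiers trained on bootstrap learning sets
# drawn from the remaining specimens. The ABS learning-curve step is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rloob_error(X, y, learning_set_size, n_boot=20, seed=0):
    rng = np.random.default_rng(seed)
    errors = []
    for i in range(len(y)):
        rest = np.delete(np.arange(len(y)), i)        # indices excluding specimen i
        preds = []
        for _ in range(n_boot):
            idx = rng.choice(rest, size=learning_set_size, replace=True)
            if len(np.unique(y[idx])) < 2:             # need both classes to fit
                continue
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            preds.append(clf.predict(X[i:i + 1])[0])
        if preds:
            errors.append(np.mean(np.array(preds) != y[i]))
    return float(np.mean(errors))

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 50))                          # small n, many "genes"
y = (X[:, 0] + rng.normal(scale=2.0, size=40) > 0).astype(int)
print(round(rloob_error(X, y, learning_set_size=30), 3))
```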

  8. Investigating Subtypes of Child Development: A Comparison of Cluster Analysis and Latent Class Cluster Analysis in Typology Creation

    ERIC Educational Resources Information Center

    DiStefano, Christine; Kamphaus, R. W.

    2006-01-01

    Two classification methods, latent class cluster analysis and cluster analysis, are used to identify groups of child behavioral adjustment underlying a sample of elementary school children aged 6 to 11 years. Behavioral rating information across 14 subscales was obtained from classroom teachers and used as input for analyses. Both the procedures…

  9. "How We Know What We Know": A Systematic Comparison of Research Methods Employed in Higher Education Journals, 1996-2000 v. 2006-2010

    ERIC Educational Resources Information Center

    Wells, Ryan S.; Kolek, Ethan A.; Williams, Elizabeth A.; Saunders, Daniel B.

    2015-01-01

    This study replicates and extends a 2004 content analysis of three major higher education journals. The original study examined the methodological characteristics of all published research in these journals from 1996 to 2000, recommending that higher education programs adjust their graduate training to better match the heavily quantitative and…

  10. A Comparison of Various Estimators for Updating Forest Area Coverage Using AVHRR and Forest Inventory Data

    Treesearch

    Francis A. Roesch; Paul C. van Deusen; Zhiliang Zhu

    1995-01-01

    Various methods of adjusting low-cost and possibly biased estimates of percent forest coverage from AVHRR data with a subsample of higher-cost estimates from the USDA Forest Service's Forest Inventory and Analysis plots were investigated. Two ratio and two regression estimators were evaluated. Previous work (Zhu and Teuber, 1991) finding that the estimates from...

  11. A Comparison of the Performance of Advanced Statistical Techniques for the Refinement of Day-ahead and Longer NWP-based Wind Power Forecasts

    NASA Astrophysics Data System (ADS)

    Zack, J. W.

    2015-12-01

    Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the model's representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques often referred to as "machine learning methods", a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day ahead forecasts of the hourly wind-based generation. The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression and (6) analog ensemble, which is a case-matching scheme. The presentation will provide (1) an overview of each method and the experimental design, (2) performance comparisons based on standard metrics such as bias, MAE and RMSE, (3) a summary of the performance characteristics of each approach and (4) a preview of further experiments to be conducted.
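
    The baseline MOS step, screening multiple linear regression, amounts to fitting a regression from raw NWP predictor variables to observed generation over a training period and applying it to new forecasts, as sketched below on simulated data; the predictors, sample sizes, and error structure are assumptions for illustration only.

```python
# Sketch of a MOS-style correction on simulated data: fit a regression from raw
# NWP forecast variables to observed generation on a training period and apply
# it to new forecasts. Predictors, sample sizes, and error structure are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_train, n_test = 2000, 200
X_train = rng.normal(size=(n_train, 3))           # e.g. forecast speed, direction components
X_test = rng.normal(size=(n_test, 3))

def observed_generation(X, rng):
    # "Truth" with a systematic bias relative to the raw forecast (first predictor).
    return 0.7 * X[:, 0] - 0.2 * X[:, 1] + 0.3 + rng.normal(scale=0.5, size=len(X))

y_train = observed_generation(X_train, rng)
y_test = observed_generation(X_test, rng)

mos = LinearRegression().fit(X_train, y_train)
raw_mae = np.mean(np.abs(X_test[:, 0] - y_test))              # raw forecast = first predictor
mos_mae = np.mean(np.abs(mos.predict(X_test) - y_test))
print(f"raw MAE {raw_mae:.3f}   MOS-adjusted MAE {mos_mae:.3f}")
```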

  12. Comparative coronary risks of apixaban, rivaroxaban and dabigatran: a meta-analysis and adjusted indirect comparison

    PubMed Central

    Loke, Yoon K; Pradhan, Shiva; Yeong, Jessica Ka-yan; Kwok, Chun Shing

    2014-01-01

    Aims There are concerns regarding increased risk of acute coronary syndrome with dabigatran. We aimed to assess whether alternative treatment options such as rivaroxaban or apixaban carry a similar risk as compared with dabigatran. Methods We searched MEDLINE and EMBASE for randomized controlled trials of apixaban, dabigatran or rivaroxaban against control (placebo, heparin or vitamin K antagonist). We pooled odds ratios (OR) for adverse coronary events (acute coronary syndrome or myocardial infarction) using fixed effect meta-analysis and assessed heterogeneity with I2. We conducted adjusted indirect comparisons to compare risk of adverse coronary events with apixaban or rivaroxaban vs. dabigatran. Results Twenty-seven randomized controlled trials met the inclusion criteria. Dabigatran was associated with a significantly increased risk of adverse coronary events in pooled analysis of nine trials (OR 1.45, 95% CI 1.14, 1.86). There was no signal for coronary risk with apixaban from nine trials (pooled OR 0.89, 95% CI 0.78, 1.03) or rivaroxaban from nine trials (pooled OR 0.81, 95% CI 0.72, 0.93). Overall, adjusted indirect comparison suggested that both apixaban (OR 0.61, 95% CI 0.44, 0.85) and rivaroxaban (OR 0.54; 95% CI 0.39, 0.76) were associated with lower coronary risk than dabigatran. Restricting the indirect comparison to a vitamin K antagonist as a common control, yielded similar findings, OR 0.57 (95% CI 0.39, 0.85) for apixaban vs. dabigatran and 0.53 (95% CI 0.37, 0.77) for rivaroxaban vs. dabigatran. Conclusions There are significant differences in the comparative safety of apixaban, rivaroxaban and dabigatran with regards to acute coronary adverse events. PMID:24617578

  13. Using Linear and Non-Linear Temporal Adjustments to Align Multiple Phenology Curves, Making Vegetation Status and Health Directly Comparable

    NASA Astrophysics Data System (ADS)

    Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.

    2017-12-01

    National-scale polar analysis of MODIS NDVI allows quantification of degree of seasonality expressed by local vegetation, and also selects the most optimum start/end of a local "phenological year" that is empirically customized for the vegetation that is growing at each location. Interannual differences in timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in timing of other remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows at every time and every location how many days the adjusted phenology is ahead or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
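
    A rough sketch of both adjustments on synthetic curves follows: a Procrustean linear shift that synchronizes the seasonal peak, and a piecewise-linear "rubber sheet" warp through matched control points whose offset from the reference time axis plays the role of the warping signature. The curves and control-point dates are invented for illustration.

```python
# Synthetic sketch of (a) a linear time shift that synchronizes the seasonal
# peak and (b) a piecewise-linear "rubber sheet" warp through matched control
# points. Curves and control-point dates are invented.
import numpy as np

days = np.arange(0, 365, 8)                                    # 8-day composite dates
reference = np.exp(-0.5 * ((days - 180) / 40.0) ** 2)          # reference NDVI-like curve
observed = np.exp(-0.5 * ((days - 195) / 50.0) ** 2)           # later, longer season

# (a) Linear shift: slide the observed curve so the peaks coincide.
shift = days[np.argmax(observed)] - days[np.argmax(reference)]
shifted = np.interp(days, days - shift, observed)

# (b) Rubber sheeting: matched milestones (e.g. green-up, peak, senescence)
#     map reference time onto observed time; resample on the warped axis.
ref_ctrl = np.array([0.0, 140.0, 180.0, 230.0, 364.0])         # reference milestones (days)
obs_ctrl = np.array([0.0, 150.0, 195.0, 260.0, 364.0])         # matching observed milestones
warped_time = np.interp(days, ref_ctrl, obs_ctrl)              # where to sample the observed curve
warped = np.interp(warped_time, days, observed)

# Warping signature: positive values mean the observed vegetation reaches the
# same stage later than the reference (i.e. it lags the reference).
signature = warped_time - days
print("peak shift (days):", int(shift))
print("max |warp| (days):", float(np.abs(signature).max()))
```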

  14. Patterns of Unit and Item Nonresponse in the CAHPS® Hospital Survey

    PubMed Central

    Elliott, Marc N; Edwards, Carol; Angeles, January; Hambarsoomians, Katrin; Hays, Ron D

    2005-01-01

    Objective To examine the predictors of unit and item nonresponse, the magnitude of nonresponse bias, and the need for nonresponse weights in the Consumer Assessment of Health Care Providers and Systems (CAHPS®) Hospital Survey. Methods A common set of 11 administrative variables (41 degrees of freedom) was used to predict unit nonresponse and the rate of item nonresponse in multivariate models. Descriptive statistics were used to examine the impact of nonresponse on CAHPS Hospital Survey ratings and reports. Results Unit nonresponse was highest for younger patients and patients other than non-Hispanic whites (p<.001); item nonresponse increased steadily with age (p<.001). Fourteen of 20 reports of ratings of care had significant (p<.05) but small negative correlations with nonresponse weights (median −0.06; maximum −0.09). Nonresponse weights do not improve overall precision below sample sizes of 300–1,000, and are unlikely to improve the precision of hospital comparisons. In some contexts, case-mix adjustment eliminates most observed nonresponse bias. Conclusions Nonresponse weights should not be used for between-hospital comparisons of the CAHPS Hospital Survey, but may make small contributions to overall estimates or demographic comparisons, especially in the absence of case-mix adjustment. PMID:16316440

  15. Real medical benefit assessed by indirect comparison.

    PubMed

    Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François

    2009-01-01

    Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models, Bayesian network analyses pool results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and effect model based on expert opinions. It is intended to translate the efficacy observed in the trials into effectiveness expected in day-to-day clinical practice in France.

  16. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  17. A comparison of methods for organ-weight data adjustment in chicks.

    PubMed

    Brown, D R; Southern, L L; Baker, D H

    1985-02-01

    An experiment was conducted with 168 Arbor Acre X Peterson unsexed, crossbred broiler chicks to compare methods of expressing organ-weight data and to assess changes in organ weights and physiological parameters as body weight (97 to 791 g) and age (5 to 26 days) increased. Actual wet weight of liver, heart, intestine, spleen, and pancreas and percent bone ash increased (P less than .01) as age and body weight increased. Tibia length-to-width ratio decreased (P less than .01) as age and body weight increased. Blood hemoglobin, hematocrit, and plasma protein were not affected (P greater than .1) by age or by body weight. Liver, heart, and intestinal weight decreased (P less than .01) and spleen weight increased (P less than .01) as body weight and age increased when these tissue weights were expressed as percent of body weight. Liver weight adjusted for body weight by covariance analysis, however, remained constant; adjusted heart and intestinal weights decreased (P less than .01), and adjusted spleen weights increased (P less than .01) with increasing age and body weight. The covariate, body weight, was not significant (P greater than .1) for pancreas weight, tibia length-to-width ratio, and percent bone ash. Except for spleen, adjustment by covariance analysis more effectively reduced variation due to body weight than did expression as percent of body weight.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. Effects on Parental Mental Health of an Education and Skills Training Program for Parents of Young Children with Autism: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Tonge, Bruce; Brereton, Avril; Kiomall, Melissa; MacKinnon, Andrew; King, Neville; Rinehart, Nicole

    2006-01-01

    Objective: To determine the impact of a parent education and behavior management intervention (PEBM) on the mental health and adjustment of parents with preschool children with autism. Method: A randomized, group-comparison design involving a parent education and counseling intervention to control for nonspecific therapist effects and a control…

  19. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    PubMed

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
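
    The contrast between the two strategies can be sketched as below: a chi-square statistic from the full sample is rescaled to a smaller nominal sample size and compared with the statistic recomputed on an actual random subsample of that size. The rescaling formula chi2 * (n_target - 1) / (n - 1) is one commonly used form and is an assumption here, as are the simulated data.

```python
# Sketch contrasting (a) rescaling a chi-square statistic to a smaller nominal
# sample size with (b) recomputing it on an actual random subsample. The
# rescaling chi2 * (n_target - 1) / (n - 1) is one commonly used form and is an
# assumption here; data are simulated.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(4)
n = 21_000
x = rng.integers(0, 2, n)
y = (x + (rng.random(n) < 0.45)) % 2               # weak association with x

def chi2_of(a, b):
    table = np.array([[np.sum((a == 0) & (b == 0)), np.sum((a == 0) & (b == 1))],
                      [np.sum((a == 1) & (b == 0)), np.sum((a == 1) & (b == 1))]])
    return chi2_contingency(table, correction=False)[0]

chi2_full = chi2_of(x, y)
for n_target in (5_000, 1_000, 200):
    adjusted = chi2_full * (n_target - 1) / (n - 1)
    idx = rng.choice(n, size=n_target, replace=False)
    subsample = chi2_of(x[idx], y[idx])
    print(f"n={n_target:>5}  adjusted={adjusted:8.2f}  random subsample={subsample:8.2f}")
```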

  20. A zero-augmented generalized gamma regression calibration to adjust for covariate measurement error: A case of an episodically consumed dietary intake

    PubMed Central

    Agogo, George O.

    2017-01-01

    Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method. PMID:27704599
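
    The standard comparator method, ordinary regression calibration, can be sketched on simulated data: regress 24HR intake on FFQ intake in a calibration substudy and use the predicted intake in the outcome model, which moves the estimated slope from its attenuated naive value toward the true value. The zero-augmented generalized gamma model proposed in the paper is not implemented here, and all variable names and values are invented.

```python
# Minimal sketch of standard regression calibration on simulated data: regress
# 24HR intake on FFQ intake in a calibration substudy, then use the predicted
# ("calibrated") intake in the outcome model. The zero-augmented generalized
# gamma model proposed in the paper is not implemented here.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 2000
true_intake = rng.gamma(shape=2.0, scale=1.5, size=n)          # long-term fish intake (oz)
ffq = true_intake + rng.normal(scale=2.0, size=n) + 1.0        # biased, noisy FFQ report
recall24 = true_intake + rng.normal(scale=0.8, size=n)         # 24HR: unbiased but noisy
mercury = 2.0 * true_intake + rng.normal(scale=2.0, size=n)    # outcome: blood mercury

naive = LinearRegression().fit(ffq.reshape(-1, 1), mercury)

# Calibration substudy (first 500 subjects): predict 24HR intake from the FFQ.
cal = LinearRegression().fit(ffq[:500].reshape(-1, 1), recall24[:500])
calibrated_intake = cal.predict(ffq.reshape(-1, 1))
corrected = LinearRegression().fit(calibrated_intake.reshape(-1, 1), mercury)

print("naive FFQ slope:  ", round(float(naive.coef_[0]), 2))      # attenuated
print("calibrated slope: ", round(float(corrected.coef_[0]), 2))  # near the true value of 2.0
```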

  1. Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.

    PubMed

    Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M

    2015-08-01

    Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Comparing and Combining Data across Multiple Sources via Integration of Paired-sample Data to Correct for Measurement Error

    PubMed Central

    Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve

    2014-01-01

    Summary In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditional on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070

  3. Lifetime incidences of traumatic events and mental health among children affected by HIV/AIDS in rural China.

    PubMed

    Li, Xiaoming; Barnett, Douglas; Fang, Xiaoyi; Lin, Xiuyun; Zhao, Guoxiang; Zhao, Junfeng; Hong, Yan; Zhang, Liying; Naar-King, Sylvie; Stanton, Bonita

    2009-09-01

    Cross-sectional data were gathered from 1,625 children (M age = 12.85, SD = 2.21), which included 755 AIDS orphans, 466 vulnerable children, and 404 comparison children. Participants completed self-report measures of exposure to traumatic events and of psychosocial adjustment, including behavior problems, depression, self-esteem, and future orientation. AIDS orphans and vulnerable children reported experiencing a higher total occurrence, density, duration, initial impact and lasting impact of traumatic events compared to comparison children. Scores reflecting adjustment were lower among orphans and vulnerable children than among comparison children. Both orphan status and traumatic events contributed unique variance in the expected direction to the prediction of psychosocial adjustment. The data in the current study suggested that children affected by HIV/AIDS in China are exposed to more trauma and suffer more adjustment problems than children who do not experience HIV/AIDS in their families.

  4. Association between hyperglycaemic crisis and long-term major adverse cardiovascular events: a nationwide population-based, propensity score-matched, cohort study

    PubMed Central

    Chang, Li-Hsin; Lin, Liang-Yu; Tsai, Ming-Tsun; How, Chorng-Kuang; Chiang, Jen-Huai; Hsieh, Vivian Chia-Rong; Hu, Sung-Yuan; Hsieh, Ming-Shun

    2016-01-01

    Objective Hyperglycaemic crisis was associated with significant intrahospital morbidity and mortality. However, the association between hyperglycaemic crisis and long-term cardiovascular outcomes remained unknown. This study aimed to investigate the association between hyperglycaemic crisis and subsequent long-term major adverse cardiovascular events (MACEs). Participants and methods This population-based cohort study was conducted using data from Taiwan's National Health Insurance Research Database for the period of 1996–2012. A total of 2171 diabetic patients with hyperglycaemic crisis fit the inclusion criteria. Propensity score matching was used to match the baseline characteristics of the study cohort to construct a comparison cohort which comprised 8684 diabetic patients without hyperglycaemic crisis. The risk of long-term MACEs was compared between the two cohorts. Results Six hundred and seventy-six MACEs occurred in the study cohort and the event rate was higher than that in the comparison cohort (31.1% vs 24.1%, p<0.001). Patients with hyperglycaemic crisis were associated with a higher risk of long-term MACEs even after adjusting for all baseline characteristics and medications (adjusted HR=1.76, 95% CI 1.62 to 1.92, p<0.001). Acute myocardial infarction had the highest adjusted HR (adjusted HR=2.19, 95% CI 1.75 to 2.75, p<0.001) in the four types of MACEs, followed by congestive heart failure (adjusted HR=1.97, 95% CI 1.70 to 2.28, p<0.001). Younger patients with hyperglycaemic crisis had a higher risk of MACEs than older patients (adjusted HR=2.69 for patients aged 20–39 years vs adjusted HR=1.58 for patients aged >65 years). Conclusions Hyperglycaemic crisis was significantly associated with long-term MACEs, especially in the young population. Further prospective longitudinal study should be conducted for validation. PMID:27554106
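
    A minimal sketch of the propensity-score step on simulated data follows: estimate each subject's probability of exposure from baseline covariates and greedily match exposed subjects to the nearest unexposed subjects on that score. The covariates are invented, and 1:1 matching is used for brevity, whereas the study matched 1:4.

```python
# Sketch of propensity-score matching on simulated data: estimate the
# probability of exposure from baseline covariates, then greedily match each
# exposed subject to the nearest unexposed subject on the score (1:1 here,
# whereas the study matched 1:4).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 4000
X = rng.normal(size=(n, 4))                                   # e.g. age, comorbidity scores
exposure_logit = X @ np.array([0.6, 0.3, 0.0, -0.4]) - 1.5
exposed = rng.binomial(1, 1 / (1 + np.exp(-exposure_logit)))

ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

treated = np.where(exposed == 1)[0]
controls = list(np.where(exposed == 0)[0])
pairs = []
for t in treated:
    j = int(np.argmin(np.abs(ps[controls] - ps[t])))          # nearest remaining control
    pairs.append((t, controls.pop(j)))

matched = [c for _, c in pairs]
print("treated:", len(treated), " matched controls:", len(matched))
print("mean PS treated:", round(float(ps[treated].mean()), 3),
      " matched controls:", round(float(ps[matched].mean()), 3))
```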

  5. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest but the mobility of the C-arm causes the need for a time-consuming manual adjustment. In this article, we present an automatic plane adjustment at the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) leads also to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (standard setting of device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we developed the first steps for a fully automatic assistance system for the assessment of C-arm CT images.

  6. Concurrent validity of the College Adjustment scales using comparison with the MMPI College Maladjustment Scale.

    PubMed

    Campbell, Michael H; Palmieri, Michael; Lasch, Brandi

    2006-12-01

    The concurrent validity of the College Adjustment Scales was assessed using comparison to the College Maladjustment Scale of the Minnesota Multiphasic Inventory-2. Undergraduate students (N=56, 40 women, M age = 21.3 yr., 87.5% white, non-Hispanic) completed both tests. Analysis indicated scores on 8 of 9 College Adjustment Scales correlated significantly in the predicted direction with those on the College Maladjustment Scale, thereby providing some additional support for convergent validity. While the conclusions are limited significantly by the small sample, this report provides an incremental contribution to the validity of the College Adjustment Scales.

  7. Public Reporting of Primary Care Clinic Quality: Accounting for Sociodemographic Factors in Risk Adjustment and Performance Comparison.

    PubMed

    Wholey, Douglas R; Finch, Michael; Kreiger, Rob; Reeves, David

    2018-01-03

    Performance measurement and public reporting are increasingly being used to compare clinic performance. Intended consequences include quality improvement, value-based payment, and consumer choice. Unintended consequences include reducing access for riskier patients and inappropriately labeling some clinics as poor performers, resulting in tampering with stable care processes. Two analytic steps are used to maximize intended and minimize unintended consequences. First, risk adjustment is used to reduce the impact of factors outside providers' control. Second, performance categorization is used to compare clinic performance using risk-adjusted measures. This paper examines the effects of methodological choices, such as including sociodemographic factors in risk adjustment and accounting for the clustering of patients within clinics in performance categorization, on clinic performance comparison for diabetes care, vascular care, asthma, and colorectal cancer screening. The population includes all patients with commercial and public insurance served by clinics in Minnesota. Although risk adjusting for sociodemographic factors has a significant effect on quality, it does not explain much of the variation in quality. In contrast, taking into account the nesting of patients within clinics in performance categorization has a substantial effect on performance comparison.

  8. Detection of QT prolongation using a novel ECG analysis algorithm applying intelligent automation: Prospective blinded evaluation using the Cardiac Safety Research Consortium ECG database

    PubMed Central

    Green, Cynthia L.; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W.

    2013-01-01

    Background The Cardiac Safety Research Consortium (CSRC) provides both “learning” and blinded “testing” digital ECG datasets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This manuscript reports the first results from a blinded “testing” dataset that examines Developer re-analysis of original Sponsor-reported core laboratory data. Methods 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 191 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Results Developer and Sponsor-reported baseline-adjusted data were similar with average differences less than 1 millisecond (ms) for all intervals. Both Developer and Sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject standard deviation for triplicate QTcF measurements was significantly lower for Developer than Sponsor-reported data (5.4 ms and 7.2 ms, respectively; p<0.001). Conclusion The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared to the Sponsor-reported study, without the use of a manual core laboratory. These findings indicate CSRC ECG datasets can be useful for evaluating novel methods and algorithms for determining QT/QTc prolongation by drugs. While the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. PMID:22424006

  9. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken within either Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. Because the random effect is explicitly modeled as a latent variable in SEM, analysts can flexibly specify complex random effect structures and impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. The dataset contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) technique can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjusted factor, the UWLS model with a unique variance adjusted factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjusted factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.

  10. Risk adjustment as basis for rational benchmarking: the example of colon carcinoma.

    PubMed

    Ptok, Henry; Marusch, Frank; Schmidt, Uwe; Gastinger, Ingo; Wenisch, Hubertus J C; Lippert, Hans

    2011-01-01

    The results of resection of colorectal carcinoma can vary greatly from one hospital to another. However, this does not necessarily reflect differences in the quality of treatment. The purpose of this study was to compare various tools for the risk-adjusted assessment of treatment results after resection of colorectal carcinoma within the context of hospital benchmarking. On the basis of a data pool provided by a multicentric observation study of patients with colon cancer, the postoperative in-hospital mortality rates at two high-volume hospitals ("A" and "B") were compared. After univariate comparison, risk-adjusted comparison of postoperative mortality was performed by logistic regression analysis (LReA), propensity-score analysis (PScA), and the CR-POSSUM score. Postoperative complications were compared by LReA and PScA. Although postoperative mortality differed significantly (P = 0.041) in univariate comparison of hospitals A and B (2.9% vs. 6.4%), no significant difference was found by LReA or PScA. Similarly, the observed mortality at these hospitals did not differ significantly from the mortality estimated by the CR-POSSUM score (hospital A, 2.9%/4.9%, P = 0.298; hospital B, 6.4%/6.5%, P = 1.000). Significant differences were seen in risk-adjusted comparison of most postoperative complications (by both LReA and PScA), but there were no differences in the rates of relaparotomy or anastomotic leakage that required surgery. For the hard outcome variable "postoperative mortality," none of the three risk adjustment procedures showed any difference between the hospitals. The CR-POSSUM score can be regarded as the most practicable tool for risk-adjusted comparison of the outcome of colon-carcinoma resection in clinical benchmarking.

  11. Comparison of enzyme-linked immunosorbent assay and gas chromatography procedures for the detection of cyanazine and metolachlor in surface water samples

    USGS Publications Warehouse

    Schraer, S.M.; Shaw, D.R.; Boyette, M.; Coupe, R.H.; Thurman, E.M.

    2000-01-01

    Enzyme-linked immunosorbent assay (ELISA) data from surface water reconnaissance were compared to data from samples analyzed by gas chromatography for the pesticide residues cyanazine (2-[[4-chloro-6-(ethylamino)-1,3,5-triazin-2-yl]amino]-2-methylpropanenitrile) and metolachlor (2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide). When ELISA analyses were duplicated, cyanazine and metolachlor detection was found to have highly reproducible results; adjusted R2s were 0.97 and 0.94, respectively. When ELISA results for cyanazine were regressed against gas chromatography results, the models effectively predicted cyanazine concentrations from ELISA analyses (adjusted R2s ranging from 0.76 to 0.81). The intercepts and slopes for these models were not different from 0 and 1, respectively. This indicates that cyanazine analysis by ELISA is expected to give the same results as analysis by gas chromatography. However, regressing ELISA analyses for metolachlor against gas chromatography data provided more variable results (adjusted R2s ranged from 0.67 to 0.94). Regression models for metolachlor analyses had two of three intercepts that were not different from 0. Slopes for all metolachlor regression models were significantly different from 1. This indicates that as metolachlor concentrations increase, ELISA will over- or under-estimate metolachlor concentration, depending on the method of comparison. ELISA can be effectively used to detect cyanazine and metolachlor in surface water samples. However, when detections of metolachlor have significant consequences or implications it may be necessary to use other analytical methods.
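
    The regression comparison described above can be sketched with simulated concentrations: regress ELISA results on gas chromatography results and test whether the intercept differs from 0 and the slope from 1. The data, noise level, and bias below are hypothetical.

```python
# Sketch of the regression comparison with simulated concentrations: regress
# ELISA on gas chromatography (GC) results and test intercept = 0 and slope = 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
gc = rng.uniform(0.05, 5.0, 60)                              # GC concentrations (ug/L)
elisa = 0.02 + 1.05 * gc + rng.normal(scale=0.2, size=60)    # ELISA with slight proportional bias

res = stats.linregress(gc, elisa)
df = len(gc) - 2
t_intercept = res.intercept / res.intercept_stderr           # H0: intercept = 0
t_slope = (res.slope - 1.0) / res.stderr                     # H0: slope = 1
print(f"R^2 = {res.rvalue ** 2:.2f}")
print(f"intercept {res.intercept:.3f}, p = {2 * stats.t.sf(abs(t_intercept), df):.3f}")
print(f"slope {res.slope:.3f}, p = {2 * stats.t.sf(abs(t_slope), df):.3f}")
```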

  12. How firms set prices for medical materials: a multi-country study.

    PubMed

    Ide, Hiroo; Mollahaliloglu, Salih

    2009-09-01

    This study presents a comparison of medical material prices, discusses why differences exist, and examines methods for comparing prices. Market prices for drug-eluting stents (DES), non-drug-eluting stents (non-DES), and percutaneous transluminal coronary angioplasty (PTCA) catheters were collected from five countries: the United States, Japan, Korea, Turkey, and Thailand. To compare prices, three adjustment methods were used: currency exchange rates, purchasing power parity (PPP), and gross domestic product (GDP) per capita. The ratios of medical material prices compared with those in the United States were higher in Japan (from 1.4 for DES to 5.0 for PTCA catheters) and Korea (from 1.2 for DES to 4.0 for PTCA catheters), and lower in Turkey (from 0.8 for non-DES to 1.4 for DES) and Thailand (from 0.5 for non-DES to 1.3 for PTCA catheters). The PPP-adjusted ratios changed slightly for Japan, Korea, and Turkey. When the prices were adjusted by GDP per capita, the ratios were much higher. Comparing prices using currency exchange rates or PPP is applicable only between countries with stable economic relations; adjustment by GDP per capita reflects the actual burden. Further study is needed to fully elucidate the factors influencing the global medical material market.
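
    The arithmetic behind the three adjustments can be shown with made-up numbers. Everything in the snippet below (device prices, exchange rate, PPP factor, GDP figures) is hypothetical and is chosen only to show how the price ratio relative to the United States changes under each adjustment.

# Toy illustration of the three adjustments; all figures are assumptions, not actual prices.
us_price_usd = 2000.0          # hypothetical US price of a device
jp_price_jpy = 900000.0        # hypothetical Japanese price in yen

fx = 110.0                     # yen per USD, market exchange rate (assumed)
ppp = 100.0                    # yen per USD at purchasing power parity (assumed)
gdp_pc_us = 65000.0            # GDP per capita, USD (assumed)
gdp_pc_jp = 40000.0            # GDP per capita, USD (assumed)

ratio_fx = (jp_price_jpy / fx) / us_price_usd
ratio_ppp = (jp_price_jpy / ppp) / us_price_usd
# "Burden" adjustment: price relative to GDP per capita in each country.
ratio_gdp = (jp_price_jpy / fx / gdp_pc_jp) / (us_price_usd / gdp_pc_us)

print(f"exchange-rate ratio:  {ratio_fx:.2f}")
print(f"PPP ratio:            {ratio_ppp:.2f}")
print(f"GDP-per-capita ratio: {ratio_gdp:.2f}")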

  13. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
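
    As a rough illustration of an a priori strategy comparison, the sketch below scores two candidate logistic strategies (essentially unpenalised maximum likelihood versus ridge-type shrinkage) by cross-validated log-loss on simulated development data. It mirrors the general idea of comparing strategies before committing to one; it is not the authors' wrapper framework or their five strategies.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated development data standing in for a clinical data set.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

strategies = {
    "maximum likelihood (C=1e6)": LogisticRegression(C=1e6, max_iter=5000),
    "ridge-type shrinkage (C=0.5)": LogisticRegression(C=0.5, max_iter=5000),
}
for name, model in strategies.items():
    # Mean cross-validated log-loss; lower is better.
    score = -cross_val_score(model, X, y, cv=10, scoring="neg_log_loss").mean()
    print(f"{name:30s} CV log-loss = {score:.3f}")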

  14. Evaluation of an artificial intelligence guided inverse planning system: clinical case study.

    PubMed

    Yan, Hui; Yin, Fang-Fang; Willett, Christopher

    2007-04-01

    An artificial intelligence (AI) guided method for parameter adjustment in inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested and the plans achieved by the automated and manual methods were compared. The parameter adjustment procedure mainly consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Physician-prescribed multiple constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the planning target volume (PTV) and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with an acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the PTV, the dose to 95% of the volume was up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease in plan dose was achieved. However, the automated method could not improve the plan dose for some critical organs because of limitations of the inference rules currently employed. For normal tissue, there was no significant difference between the plan doses achieved by the automated and manual methods. With the application of the AI-guided method, the basic parameter adjustment task can be accomplished automatically, and a plan dose comparable to that achieved by the manual method was obtained. Future improvements incorporating case-specific inference rules are essential to fully automate the inverse planning process.

  15. Poultry Processing Work and Respiratory Health of Latino Men and Women in North Carolina

    PubMed Central

    Mirabelli, Maria C.; Chatterjee, Arjun B.; Arcury, Thomas A.; Mora, Dana C.; Blocker, Jill N.; Grzywacz, Joseph G.; Chen, Haiying; Marín, Antonio J.; Schulz, Mark R.; Quandt, Sara A.

    2015-01-01

    Objective To evaluate associations between poultry processing work and respiratory health among working Latino men and women in North Carolina. Methods Between May 2009 and November 2010, 402 poultry processing workers and 339 workers in a comparison population completed interviewer-administered questionnaires. Of these participants, 279 poultry processing workers and 222 workers in the comparison population also completed spirometry testing to provide measurements of forced expiratory volume in 1 second and forced vital capacity. Results Nine percent of poultry processing workers and 10% of workers in the comparison population reported current asthma. Relative to the comparison population, adjusted mean forced expiratory volume in 1 second and forced vital capacity were lower in the poultry processing population, particularly among men who reported sanitation job activities. Conclusions Despite the low prevalence of respiratory symptoms reported, poultry processing work may affect lung function. PMID:22237034

  16. Modelling Comparative Efficacy of Drugs with Different Survival Profiles: Ipilimumab, Vemurafenib and Dacarbazine in Advanced Melanoma.

    PubMed

    Lee, D; Porter, J; Hertel, N; Hatswell, A J; Briggs, A

    2016-08-01

    In the absence of head-to-head data, a common method for modelling comparative survival for cost-effectiveness analysis is estimating hazard ratios from trial publications. This assumes that the hazards of mortality are proportional between treatments and that outcomes are not polluted by subsequent therapy use. Newer techniques that compare treatments where the proportional hazards assumption is violated and adjust for use of subsequent therapies often require patient-level data, which are rarely available for all treatments. The objective of this study was to provide a comparison of overall survival data for ipilimumab, vemurafenib and dacarbazine using data from three trials lacking a common comparator arm and confounded by the use of subsequent treatment. We compared three estimated overall survival curves for vemurafenib, and the resulting survival differences relative to ipilimumab and dacarbazine. We performed a naïve comparison and then adjusted it for heterogeneity between the ipilimumab and vemurafenib trials, including differences in prognostic characteristics and in subsequent therapy, using a published hazard function for the impact of prognostic characteristics in melanoma and trial data on the impact of second-line use of ipilimumab. The mean incremental life-years gained for patients receiving ipilimumab compared with vemurafenib was 0.34 (95 % confidence interval [CI] -0.24 to 0.84) using the naïve comparison and 0.51 (95 % CI -0.08 to 0.99) using the covariate-adjusted survival curve. The analyses estimated the comparative efficacy of ipilimumab and vemurafenib in the absence of head-to-head patient-level data for all trials and proportional hazards in overall survival.

  17. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
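
    For reference, two of the conventional baselines mentioned (gamma correction and global histogram equalisation) are easy to express directly; the sketch below applies both to a synthetic grayscale image using plain numpy. The MSRCR itself is not reproduced here, and the gamma value and image are arbitrary choices for illustration.

import numpy as np

def gamma_correct(img, gamma=0.5):
    """Pointwise gamma correction of a uint8 grayscale image."""
    x = img.astype(np.float64) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)

def hist_equalize(img):
    """Global histogram equalisation of a uint8 grayscale image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # map the CDF to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

# A deliberately dark synthetic image to make the effect visible.
demo = (np.random.default_rng(0).random((64, 64)) ** 3 * 255).astype(np.uint8)
print("mean before:", demo.mean())
print("mean after gamma correction:", gamma_correct(demo).mean())
print("mean after equalisation:", hist_equalize(demo).mean())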

  18. Comparison of Memory Function and MMPI-2 Profile between Post-traumatic Stress Disorder and Adjustment Disorder after a Traffic Accident

    PubMed Central

    Bae, Sung-Man; Hyun, Myoung-Ho

    2014-01-01

    Objective Differential diagnosis between post-traumatic stress disorder (PTSD) and adjustment disorder (AD) is rather difficult, but very important to the assignment of appropriate treatment and prognosis. This study investigated methods to differentiate PTSD and AD. Methods Twenty-five people with PTSD and 24 people with AD were recruited. Memory tests, the Minnesota Multiphasic Personality Inventory 2 (MMPI-2), and Beck's Depression Inventory were administered. Results There were significant decreases in immediate verbal recall and delayed verbal recognition in the participants with PTSD. The reduced memory functions of participants with PTSD were significantly influenced by depressive symptoms. The hypochondriasis, hysteria, psychopathic deviate, paranoia, schizophrenia, and post-traumatic stress disorder scales of the MMPI-2 significantly discriminated between the PTSD and AD groups. Conclusion Our results suggest that verbal memory assessments and the MMPI-2 could be useful for discriminating between PTSD and AD. PMID:24851120

  19. Boundary condition at a two-phase interface in the lattice Boltzmann method for the convection-diffusion equation.

    PubMed

    Yoshida, Hiroaki; Kobayashi, Takayuki; Hayashi, Hidemitsu; Kinjo, Tomoyuki; Washizu, Hitoshi; Fukuzawa, Kenji

    2014-07-01

    A boundary scheme in the lattice Boltzmann method (LBM) for the convection-diffusion equation, which correctly realizes the internal boundary condition at the interface between two phases with different transport properties, is presented. The difficulty in satisfying the continuity of flux at the interface in a transient analysis, which is inherent in the conventional LBM, is overcome by modifying the collision operator and the streaming process of the LBM. An asymptotic analysis of the scheme is carried out in order to clarify the role played by the adjustable parameters involved in the scheme. As a result, the internal boundary condition is shown to be satisfied with second-order accuracy with respect to the lattice interval, if we assign appropriate values to the adjustable parameters. In addition, two specific problems are numerically analyzed, and comparison with the analytical solutions of the problems numerically validates the proposed scheme.

  20. Inflation Adjustments for Defense Acquisition

    DTIC Science & Technology

    2014-10-01

    remaining for attribution to the growth of other factors that are captured by the price deflator. However, we lacked access to the BEA and BLS data needed ...the consideration of hedonic methods, which are based on the system’s characteristics rather than the cost of components. The current BEA index, for...engineering change orders. A hedonic price index, by comparison, would be based on the aircraft’s “quality” variables—its physical and operational

  1. Methods for network meta-analysis of continuous outcomes using individual patient data: a case study in acupuncture for chronic pain.

    PubMed

    Saramago, Pedro; Woods, Beth; Weatherly, Helen; Manca, Andrea; Sculpher, Mark; Khan, Kamran; Vickers, Andrew J; MacPherson, Hugh

    2016-10-06

    Network meta-analysis methods, which are an extension of the standard pair-wise synthesis framework, allow for the simultaneous comparison of multiple interventions and consideration of the entire body of evidence in a single statistical model. There are well-established advantages to using individual patient data to perform network meta-analysis and methods for network meta-analysis of individual patient data have already been developed for dichotomous and time-to-event data. This paper describes appropriate methods for the network meta-analysis of individual patient data on continuous outcomes. This paper introduces and describes network meta-analysis of individual patient data models for continuous outcomes using the analysis of covariance framework. Comparisons are made between this approach and change score and final score only approaches, which are frequently used and have been proposed in the methodological literature. A motivating example on the effectiveness of acupuncture for chronic pain is used to demonstrate the methods. Individual patient data on 28 randomised controlled trials were synthesised. Consistency of endpoints across the evidence base was obtained through standardisation and mapping exercises. Individual patient data availability avoided the use of non-baseline-adjusted models, allowing instead for analysis of covariance models to be applied and thus improving the precision of treatment effect estimates while adjusting for baseline imbalance. The network meta-analysis of individual patient data using the analysis of covariance approach is advocated to be the most appropriate modelling approach for network meta-analysis of continuous outcomes, particularly in the presence of baseline imbalance. Further methods developments are required to address the challenge of analysing aggregate level data in the presence of baseline imbalance.
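
    The modelling distinction discussed above (final-score, change-score, and analysis-of-covariance approaches) can be illustrated in the single-trial case. The sketch below fits all three to simulated baseline and final scores; it is not the network meta-analysis model itself, and the variable names and effect sizes are placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
baseline = rng.normal(60, 10, n)                    # e.g. baseline pain score
treat = rng.integers(0, 2, n)
final = 0.6 * baseline - 5.0 * treat + rng.normal(0, 8, n)
df = pd.DataFrame({"baseline": baseline, "treat": treat,
                   "final": final, "change": final - baseline})

# ANCOVA adjusts the final score for baseline, which typically tightens the
# treatment-effect standard error relative to the other two models.
for label, formula in [("final score only", "final ~ treat"),
                       ("change score", "change ~ treat"),
                       ("ANCOVA", "final ~ treat + baseline")]:
    fit = smf.ols(formula, data=df).fit()
    print(f"{label:17s} effect = {fit.params['treat']:6.2f} "
          f"(SE = {fit.bse['treat']:.2f})")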

  2. A Comparison of Fabrication Techniques for Hollow Retroreflectors

    NASA Technical Reports Server (NTRS)

    Preston, Alix; Merkowitz, Stephen

    2014-01-01

    Despite the wide usage of hollow retroreflectors, there is limited literature involving their fabrication techniques and only two documented construction methods could be found. One consists of an adjustable fixture that allows for the independent alignment of each mirror, while the other consists of a modified solid retroreflector that is used as a mandrel. Although both methods were shown to produce hollow retroreflectors with arcsecond dihedral angle errors, a comparison and analysis of each method could not be found which makes it difficult to ascertain which method would be better suited to use for precision-aligned retroreflectors. Although epoxy bonding is generally the preferred method to adhere the three mirrors, a relatively new method known as hydroxide-catalysis bonding (HCB) presents several potential advantages over epoxy bonding. HCB has been used to bond several optical components for space-based missions, but has never been applied for construction of hollow retroreflectors. In this paper we examine the benefits and limitations of each bonding fixture as well as present results and analysis of hollow retroreflectors made using both epoxy and HCB techniques.

  3. Monthly stumpage prices for the Pacific Northwest.

    Treesearch

    Richard W. Haynes

    1991-01-01

    Seasonal variation is found in monthly stumpage price data. Seasonal adjustment improves the utility of estimates of monthly stumpage prices. Comparisons of adjusted and unadjusted prices suggest that the unadjusted price series are reasonably robust.

  4. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative (NMQ/Q2) based on the WSR-88D Next-generation Radar (Nexrad) network over Continental United States (CONUS) is nearly completed for the period covering from 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution for a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and to merge with the radar only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we are facing are the difficulties incorporating differing resolution and quality surface measurements to adjust gridded estimates of precipitation. Another challenge is the type of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized Kriging (CBPK) among others. In addition we hope to generate estimates of uncertainty for the gridded estimate. In this work the methodology is presented as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales from the sub-hourly, to annual. In addition, comparisons over the same period with a suite of lower resolution QPEs derived from ground based radar measurements (Stage IV) and satellite products (TMPA, CMORPH, PERSIANN) are provided in order to give a detailed picture of the improvements and remaining challenges.

  5. Effects of past and recent blood pressure and cholesterol level on coronary heart disease and stroke mortality, accounting for measurement error.

    PubMed

    Boshuizen, Hendriek C; Lanti, Mariapaola; Menotti, Alessandro; Moschandreas, Joanna; Tolonen, Hanna; Nissinen, Aulikki; Nedeljkovic, Srecko; Kafatos, Anthony; Kromhout, Daan

    2007-02-15

    The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality due to coronary heart disease and stroke among subjects aged 65 years or more from nine cohorts of the Seven Countries Study. The two-step method of Tsiatis et al. (J Am Stat Assoc 1995;90:27-37) was used to adjust for regression dilution bias, and results were compared with those obtained using more commonly applied methods of adjustment for regression dilution bias. It was found that the commonly used univariate adjustment for regression dilution bias overestimates the effects of both SBP and cholesterol compared with multivariate methods. Also, the two-step method makes better use of the information available, resulting in smaller confidence intervals. Results comparing recent and past exposure indicated that past SBP is more important than recent SBP in terms of its effect on coronary heart disease mortality, while both recent and past values seem to be important for effects of cholesterol on coronary heart disease mortality and effects of SBP on stroke mortality. Associations between serum cholesterol concentration and risk of stroke mortality are weak.
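
    The simplest version of a regression dilution correction, the univariate adjustment that the paper compares against, divides the observed slope by a reliability ratio estimated from replicate measurements. The sketch below illustrates that idea on simulated data with a continuous outcome for simplicity; it is not the two-step method of Tsiatis et al. used in the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
usual_sbp = rng.normal(140, 20, n)               # long-term "usual" SBP (unobserved)
sbp_visit1 = usual_sbp + rng.normal(0, 10, n)    # two error-prone measurements
sbp_visit2 = usual_sbp + rng.normal(0, 10, n)
outcome = 0.03 * (usual_sbp - 140) + rng.normal(0, 1, n)   # continuous stand-in outcome

# Naive slope based on a single error-prone measurement (attenuated).
naive = sm.OLS(outcome, sm.add_constant(sbp_visit1)).fit().params[1]

# Reliability ratio estimated by regressing one replicate on the other.
reliability = sm.OLS(sbp_visit2, sm.add_constant(sbp_visit1)).fit().params[1]

corrected = naive / reliability
print(f"naive {naive:.4f}  reliability {reliability:.2f}  corrected {corrected:.4f}")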

  6. Logistic Regression Likelihood Ratio Test Analysis for Detecting Signals of Adverse Events in Post-market Safety Surveillance.

    PubMed

    Nam, Kijoeng; Henderson, Nicholas C; Rohan, Patricia; Woo, Emily Jane; Russek-Cohen, Estelle

    2017-01-01

    The Vaccine Adverse Event Reporting System (VAERS) and other product surveillance systems compile reports of product-associated adverse events (AEs), and these reports may include a wide range of information including age, gender, and concomitant vaccines. Controlling for possible confounding variables such as these is an important task when utilizing surveillance systems to monitor post-market product safety. A common method for handling possible confounders is to compare observed product-AE combinations with adjusted baseline frequencies where the adjustments are made by stratifying on observable characteristics. Though approaches such as these have proven to be useful, in this article we propose a more flexible logistic regression approach which allows for covariates of all types rather than relying solely on stratification. Indeed, a main advantage of our approach is that the general regression framework provides flexibility to incorporate additional information such as demographic factors and concomitant vaccines. As part of our covariate-adjusted method, we outline a procedure for signal detection that accounts for multiple comparisons and controls the overall Type 1 error rate. To demonstrate the effectiveness of our approach, we illustrate our method with an example involving febrile convulsion, and we further evaluate its performance in a series of simulation studies.
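
    The general covariate-adjusted idea can be sketched as follows: fit a logistic regression of each adverse-event indicator on product exposure plus covariates, then adjust the resulting exposure p-values for multiple comparisons. The snippet below uses simulated reports and a Holm adjustment; it is only an illustration of the concept, not the authors' likelihood-ratio-test procedure or their exact Type 1 error control.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
n = 5000
reports = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),        # report mentions the product of interest
    "age_group": rng.integers(0, 3, n),
    "female": rng.integers(0, 2, n),
})

pvals = []
for k in range(10):                           # ten candidate adverse events
    beta = 0.8 if k == 0 else 0.0             # only AE 0 carries a true signal
    lp = -3 + beta * reports["exposed"] + 0.1 * reports["age_group"]
    reports["ae"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))
    fit = smf.logit("ae ~ exposed + C(age_group) + female", data=reports).fit(disp=0)
    pvals.append(fit.pvalues["exposed"])

# Family-wise error control across the ten product-AE tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for k, (p, r) in enumerate(zip(p_adj, reject)):
    print(f"AE {k}: adjusted p = {p:.4f}, signal = {r}")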

  7. A comparison between standard methods and structural nested modelling when bias from a healthy worker survivor effect is suspected: an iron-ore mining cohort study.

    PubMed

    Björ, Ove; Damber, Lena; Jonsson, Håkan; Nilsson, Tohr

    2015-07-01

    Iron-ore miners are exposed to extremely dusty and physically arduous work environments. The demanding activities of mining select healthier workers with longer work histories (ie, the Healthy Worker Survivor Effect (HWSE)), and could have a reversing effect on the exposure-response association. The objective of this study was to evaluate an iron-ore mining cohort to determine whether the effect of respirable dust was confounded by the presence of an HWSE. When an HWSE exists, standard modelling methods, such as Cox regression analysis, produce biased results. We compared results from g-estimation of accelerated failure-time modelling adjusted for HWSE with corresponding unadjusted Cox regression modelling results. For all-cause mortality when adjusting for the HWSE, cumulative exposure from respirable dust was associated with a 6% decrease of life expectancy if exposed ≥15 years, compared with never being exposed. Respirable dust continued to be associated with mortality after censoring outcomes known to be associated with dust when adjusting for the HWSE. In contrast, results based on Cox regression analysis did not support that an association was present. The adjustment for the HWSE made a difference when estimating the risk of mortality from respirable dust. The results of this study, therefore, support the recommendation that standard methods of analysis should be complemented with structural modelling analysis techniques, such as g-estimation of accelerated failure-time modelling, to adjust for the HWSE. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. Principal Component Analysis for pulse-shape discrimination of scintillation radiation detectors

    NASA Astrophysics Data System (ADS)

    Alharbi, T.

    2016-01-01

    In this paper, we report on the application of Principal Component analysis (PCA) for pulse-shape discrimination (PSD) of scintillation radiation detectors. The details of the method are described and the performance of the method is experimentally examined by discriminating between neutrons and gamma-rays with a liquid scintillation detector in a mixed radiation field. The performance of the method is also compared against that of the conventional charge-comparison method, demonstrating the superior performance of the method particularly at low light output range. PCA analysis has the important advantage of automatic extraction of the pulse-shape characteristics which makes the PSD method directly applicable to various scintillation detectors without the need for the adjustment of a PSD parameter.
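
    A minimal sketch of the two discrimination approaches compared above: the conventional charge-comparison (tail-to-total) ratio and a PCA projection of the normalised pulses. The pulses are synthetic two-exponential waveforms with assumed decay constants and noise levels, not real detector data, and the tail window is an arbitrary illustrative choice.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
t = np.arange(200)

def pulse(slow_fraction):
    """Synthetic scintillation pulse: fast + slow decay plus noise (assumed constants)."""
    shape = (1 - slow_fraction) * np.exp(-t / 8.0) + slow_fraction * np.exp(-t / 80.0)
    return shape + rng.normal(0, 0.01, t.size)

gammas = np.array([pulse(0.05) for _ in range(300)])     # gamma-like pulses
neutrons = np.array([pulse(0.20) for _ in range(300)])   # neutron-like pulses
pulses = np.vstack([gammas, neutrons])

# Conventional charge comparison: tail integral divided by total integral.
tail_ratio = pulses[:, 40:].sum(axis=1) / pulses.sum(axis=1)

# PCA: project amplitude-normalised pulses onto the leading components.
normalised = pulses / pulses.sum(axis=1, keepdims=True)
scores = PCA(n_components=2).fit_transform(normalised)

print("mean tail ratio (gamma, neutron):", tail_ratio[:300].mean(), tail_ratio[300:].mean())
print("mean PC1 score (gamma, neutron):", scores[:300, 0].mean(), scores[300:, 0].mean())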

  9. Adjusting data to body size: a comparison of methods as applied to quantitative trait loci analysis of musculoskeletal phenotypes.

    PubMed

    Lang, Dean H; Sharkey, Neil A; Lionikas, Arimantas; Mack, Holly A; Larsson, Lars; Vogler, George P; Vandenbergh, David J; Blizard, David A; Stout, Joseph T; Stitt, Joseph P; McClearn, Gerald E

    2005-05-01

    The aim of this study was to compare three methods of adjusting skeletal data for body size and examine their use in QTL analyses. It was found that dividing skeletal phenotypes by body mass index induced erroneous QTL results. The preferred method of body size adjustment was multiple regression. Many skeletal studies have reported strong correlations between phenotypes for muscle, bone, and body size, and these correlations add to the difficulty in identifying genetic influence on skeletal traits that are not mediated through overall body size. Quantitative trait loci (QTL) identified for skeletal phenotypes often map to the same chromosome regions as QTLs for body size. The actions of a QTL identified as influencing BMD could therefore be mediated through the generalized actions of growth on body size or muscle mass. Three methods of adjusting skeletal phenotypes to body size were performed on morphologic, structural, and compositional measurements of the femur and tibia in 200-day-old C57BL/6J x DBA/2 (BXD) second generation (F(2)) mice (n = 400). A common method of removing the size effect has been through the use of ratios. This technique and two alternative techniques using simple and multiple regression were performed on muscle and skeletal data before QTL analyses, and the differences in QTL results were examined. The use of ratios to remove the size effect was shown to increase the size effect by inducing spurious correlations, thereby leading to inaccurate QTL results. Adjustments for body size using multiple regression eliminated these problems. Multiple regression should be used to remove the variance of co-factors related to skeletal phenotypes to allow for the study of genetic influence independent of correlated phenotypes. However, to better understand the genetic influence, adjusted and unadjusted skeletal QTL results should be compared. Additional insight can be gained by observing the difference in LOD score between the adjusted and nonadjusted phenotypes. Identifying QTLs that exert their effects on skeletal phenotypes through body size-related pathways as well as those having a more direct and independent influence on bone are equally important in deciphering the complex physiologic pathways responsible for the maintenance of bone health.
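
    The contrast between ratio adjustment and regression adjustment is easy to demonstrate. In the sketch below, a simulated skeletal phenotype is adjusted for body size either by dividing by body mass or by taking residuals from a multiple regression on body mass and muscle mass; the residual approach removes the correlation with body size, whereas the ratio does not. Variable names and effect sizes are illustrative only and do not come from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
body_mass = rng.normal(25, 3, n)                        # grams, simulated
muscle = 0.3 * body_mass + rng.normal(0, 0.5, n)
bmd = 0.02 * body_mass + 0.01 * muscle + rng.normal(0, 0.05, n)
df = pd.DataFrame({"body_mass": body_mass, "muscle": muscle, "bmd": bmd})

# Ratio adjustment (can induce a spurious correlation with body size).
df["bmd_ratio"] = df["bmd"] / df["body_mass"]

# Regression adjustment: residuals after removing body mass and muscle effects.
df["bmd_resid"] = smf.ols("bmd ~ body_mass + muscle", data=df).fit().resid

print("corr(ratio-adjusted, body mass):   ", round(df["bmd_ratio"].corr(df["body_mass"]), 3))
print("corr(residual-adjusted, body mass):", round(df["bmd_resid"].corr(df["body_mass"]), 3))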

  10. Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra

    NASA Astrophysics Data System (ADS)

    Khamatnurova, Marina; Gribanov, Konstantin

    2016-04-01

    The Levenberg-Marquardt method [1] with an iteratively adjusted parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. The retrieved methane vertical profiles are then used to calculate the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. The modified software package FIRE-ARMS [3] was used for the numerical experiments. To adjust parameters and validate the method, we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V.184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its application to IMG/ADEOS data // JQSRT. 2001. V.68. № 4. P. 435-451. 4. http://www.ecmwf.int/

  11. Variable-image videoconfrontation as a method of assessing body image: a technical report and comparison with similar techniques.

    PubMed

    McCrea, C; Neil, W J; Flanigan, J W; Summerfield, A B

    1988-08-01

    In this study a new modified videosystem, designed for measuring body-image, was evaluated alongside the major size-estimation measure, namely, the visual size-estimation apparatus. The advantages afforded by a videosystem which allows independent adjustment of size and height/width proportions were highlighted, and its validity and reliability were examined, based on estimates made by obese, normal weight, and pregnant groups.

  12. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    PubMed

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.

  13. Network meta-analysis combining individual patient and aggregate data from a mixture of study designs with an application to pulmonary arterial hypertension.

    PubMed

    Thom, Howard H Z; Capkun, Gorana; Cerulli, Annamaria; Nixon, Richard M; Howard, Luke S

    2015-04-12

    Network meta-analysis (NMA) is a methodology for indirectly comparing, and strengthening direct comparisons of two or more treatments for the management of disease by combining evidence from multiple studies. It is sometimes not possible to perform treatment comparisons as evidence networks restricted to randomized controlled trials (RCTs) may be disconnected. We propose a Bayesian NMA model that allows to include single-arm, before-and-after, observational studies to complete these disconnected networks. We illustrate the method with an indirect comparison of treatments for pulmonary arterial hypertension (PAH). Our method uses a random effects model for placebo improvements to include single-arm observational studies into a general NMA. Building on recent research for binary outcomes, we develop a covariate-adjusted continuous-outcome NMA model that combines individual patient data (IPD) and aggregate data from two-arm RCTs with the single-arm observational studies. We apply this model to a complex comparison of therapies for PAH combining IPD from a phase-III RCT of imatinib as add-on therapy for PAH and aggregate data from RCTs and single-arm observational studies, both identified by a systematic review. Through the inclusion of observational studies, our method allowed the comparison of imatinib as add-on therapy for PAH with other treatments. This comparison had not been previously possible due to the limited RCT evidence available. However, the credible intervals of our posterior estimates were wide so the overall results were inconclusive. The comparison should be treated as exploratory and should not be used to guide clinical practice. Our method for the inclusion of single-arm observational studies allows the performance of indirect comparisons that had previously not been possible due to incomplete networks composed solely of available RCTs. We also built on many recent innovations to enable researchers to use both aggregate data and IPD. This method could be used in similar situations where treatment comparisons have not been possible due to restrictions to RCT evidence and where a mixture of aggregate data and IPD are available.

  14. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as a system of nonlinear differential equations. When such models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This makes the outcome of optimization hard to predict, because most numerical optimization methods are stochastic and convergence of the objective function to the global optimum is difficult to anticipate. Choosing a suitable optimization method and the necessary optimization duration becomes critical when evaluating a large number of combinations of adjustable parameters or when working with large dynamic models. The task is complex owing to the variety of optimization methods and software tools and to the nonlinearity of models in different parameter spaces. The software tool ConvAn is developed to analyze the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable parameters of the model. Convergence curves can be normalized automatically so that different methods and models can be compared on the same scale. With the biochemistry-adapted graphical user interface of ConvAn, different optimization methods can be compared in terms of their ability to find the global optimum, or values close to it, and the computational time needed to reach them. The optimization performance can also be estimated for different numbers of adjustable parameters. The functionality of ConvAn further enables statistical assessment of the optimization time required for a given optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they show poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  15. An estimation of carrying capacity for sea otters along the California coast

    USGS Publications Warehouse

    Laidre, K.L.; Jameson, R.J.; DeMaster, D.P.

    2001-01-01

    Eggs of wild birds collected for the purpose of measuring concentrations of pesticides or other pollutants vary from nearly fresh to nearly dry so that objective comparisons cannot be made on the basis of weight of the contents at the time of collection. Residue concentrations in the nearly dry eggs can be greatly exaggerated by this artifact. Valid interpretation of residue data depends upon compensation for these losses. A method is presented for making adjustments on the basis of volume of the egg, and formulas are derived for estimating the volume of eggs of eagles, ospreys, and pelicans from egg measurements. The possibility of adjustments on the basis of percentage of moisture, solids, or fat in fresh eggs is discussed also.

  16. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  17. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
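
    The reference-distribution idea can be sketched with the simple fully parameterised case discussed above: compare each network's observed statistic to Erdos-Renyi graphs of the same size and density and report a z-score. The snippet below does this for the average clustering coefficient using networkx; the networks, the number of reference draws, and the choice of statistic are illustrative, and the mixture-model reference proposed in the paper is not reproduced.

import networkx as nx
import numpy as np

def normalized_clustering(g, n_ref=200, seed=0):
    """Z-score of average clustering relative to same-size, same-density random graphs."""
    rng = np.random.default_rng(seed)
    n, density = g.number_of_nodes(), nx.density(g)
    observed = nx.average_clustering(g)
    reference = [nx.average_clustering(nx.gnp_random_graph(n, density, seed=int(s)))
                 for s in rng.integers(0, 10**6, n_ref)]
    return (observed - np.mean(reference)) / np.std(reference)

small = nx.karate_club_graph()                   # 34-node empirical network
large = nx.gnp_random_graph(300, 0.05, seed=1)   # larger random network
print("small network z-score:", round(normalized_clustering(small), 2))
print("large network z-score:", round(normalized_clustering(large), 2))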

  18. Interhospital differences and case-mix in a nationwide prevalence survey.

    PubMed

    Kanerva, M; Ollgren, J; Lyytikäinen, O

    2010-10-01

    A prevalence survey is a time-saving and useful tool for obtaining an overview of healthcare-associated infection (HCAI) either in a single hospital or nationally. Direct comparison of prevalence rates is difficult. We evaluated the impact of case-mix adjustment on hospital-specific prevalences. All five tertiary care, all 15 secondary care and 10 (25% of 40) other acute care hospitals took part in the first national prevalence survey in Finland in 2005. US Centers for Disease Control and Prevention criteria served to define HCAI. The information collected included demographic characteristics, severity of the underlying disease, use of catheters and a respirator, and previous surgery. Patients with HCAI related to another hospital were excluded. Case-mix-adjusted HCAI prevalences were calculated by using a multivariate logistic regression model for HCAI risk and an indirect standardisation method. Altogether, 587 (7.2%) of 8118 adult patients had at least one infection; hospital-specific prevalences ranged between 1.9% and 12.6%. Risk factors for HCAI that were previously known or identified by univariate analysis (age, male gender, intensive care, high Charlson comorbidity and McCabe indices, respirator, central venous or urinary catheters, and surgery during stay) were included in the multivariate analysis for standardisation. Case-mix-adjusted prevalences varied between 2.6% and 17.0%, and ranked the hospitals differently from the observed rates. In 11 (38%) hospitals, the observed prevalence rank was lower than predicted by the case-mix-adjusted figure. Case-mix should be taken into consideration in the interhospital comparison of prevalence rates. Copyright 2010 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.
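
    A compact sketch of case-mix adjustment by indirect standardisation: a patient-level logistic model supplies each hospital's expected number of infections, and the adjusted prevalence is the overall prevalence scaled by the observed-to-expected ratio. Patients, covariates, and effect sizes below are simulated placeholders, not the survey data or its exact risk model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 8000
df = pd.DataFrame({
    "hospital": rng.integers(0, 30, n),
    "age": rng.normal(62, 15, n),
    "icu": rng.integers(0, 2, n),
    "surgery": rng.integers(0, 2, n),
})
lp = -4 + 0.02 * df["age"] + 1.2 * df["icu"] + 0.6 * df["surgery"]
df["hcai"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# Patient-level risk model pooled over hospitals, then expected risk per patient.
risk_model = smf.logit("hcai ~ age + icu + surgery", data=df).fit(disp=0)
df["expected"] = risk_model.predict(df)

overall = df["hcai"].mean()
by_hosp = df.groupby("hospital").agg(observed=("hcai", "sum"),
                                     expected=("expected", "sum"),
                                     patients=("hcai", "size"))
by_hosp["crude"] = by_hosp["observed"] / by_hosp["patients"]
# Indirect standardisation: overall prevalence times observed/expected ratio.
by_hosp["adjusted"] = overall * by_hosp["observed"] / by_hosp["expected"]
print(by_hosp[["crude", "adjusted"]].head())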

  19. A perturbative approach for enhancing the performance of time series forecasting.

    PubMed

    de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C

    2017-04-01

    This paper proposes a method for time series prediction based on perturbation theory. The approach continuously adjusts an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing, in a perturbative sense, the outputs of all trained predictive models. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods tested in the experiments and with ten results reported in the literature. Results show that not only is the performance of the initial model significantly improved, but the proposed method also outperforms the previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Is social class standardisation appropriate in occupational studies?

    PubMed Central

    Brisson, C; Loomis, D; Pearce, N

    1987-01-01

    Social class standardisation has been proposed as a method for separating the effects of occupation and "social" or "lifestyle" factors in epidemiological studies, by comparing workers in a particular occupation with other workers in the same social class. The validity of this method rests upon two assumptions: (1) that social factors have the same effect in all occupational groups in the same social class, and (2) that other workers in the same social class as the workers being studied are free of occupational risk factors for the disease of interest. These assumptions will not always be satisfied. In particular, the effect of occupation will be underestimated when the comparison group also has job-related exposures which cause the disease under study. Thus, although adjustment for social class may minimise bias due to social factors, it may introduce bias due to unmeasured occupational factors. This difficulty may be magnified when occupational category is used as the measure of social class. Because of this potential bias, adjustment for social class should be done only after careful consideration of the exposures and disease involved and should be based on an appropriate definition of social class. Both crude and standardised results should be presented when such adjustments are made. PMID:3455422

  1. Cost comparison of transcatheter and operative closures of ostium secundum atrial septal defects

    PubMed Central

    O’Byrne, Michael L.; Gillespie, Matthew J.; Shinohara, Russell T.; Dori, Yoav; Rome, Jonathan J.; Glatz, Andrew C.

    2015-01-01

    Background Clinical outcomes for transcatheter and operative closures of atrial septal defects (ASDs) are similar. Economic cost for each method has not been well described. Methods A single-center retrospective cohort study of children and adults <30 years of age undergoing closure for single secundum ASD from January 1, 2007, to April 1, 2012, was performed to measure differences in inflation-adjusted cost of operative and transcatheter closures of ASD. A propensity score weight-adjusted multivariate regression model was used in an intention-to-treat analysis. Costs for reintervention and crossover admissions were included in primary analysis. Results A total of 244 subjects were included in the study (64% transcatheter and 36% operative), of which 2% (n = 5) were ≥18 years. Crossover rate from transcatheter to operative group was 3%. Risk of reintervention (P = .66) and 30-day mortality (P = .37) were not significantly different. In a multivariate model, adjusted cost of operative closure was 2012 US $60,992 versus 2012 US $55,841 for transcatheter closure (P < .001). Components of total cost favoring transcatheter closure were length of stay, medications, and follow-up radiologic and laboratory testing, overcoming higher costs of procedure and echocardiography. Professional costs did not differ. The rate of 30-day readmission was greater in the operative cohort, further increasing the cost advantage of transcatheter closure. Sensitivity analyses demonstrated that costs of follow-up visits influenced relative cost but that device closure remained favorable over a broad range of crossover and reintervention rates. Conclusion For single secundum ASD, cost comparison analysis favors transcatheter closure over the short term. The cost of follow-up regimens influences the cost advantage of transcatheter closure. PMID:25965721

  2. Risk-adjusted monitoring of survival times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.

    2009-02-26

    We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.
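
    For orientation, the comparator chart mentioned above, the risk-adjusted Bernoulli CUSUM, is straightforward to sketch: each patient contributes a log-likelihood-ratio score for a prespecified shift in the odds of death given that patient's predicted risk. The snippet below uses simulated risks and outcomes and an assumed control limit; the RAST CUSUM itself (for censored survival times) is not reproduced here.

import numpy as np

def risk_adjusted_bernoulli_cusum(deaths, risks, odds_ratio=2.0, limit=4.5):
    """Return the CUSUM path and the first index exceeding the control limit."""
    s, path = 0.0, []
    for y, p in zip(deaths, risks):
        # Log-likelihood ratio for outcome y: odds-ratio-shifted risk vs baseline risk.
        p_alt = odds_ratio * p / (1 - p + odds_ratio * p)
        w = np.log(p_alt / p) if y == 1 else np.log((1 - p_alt) / (1 - p))
        s = max(0.0, s + w)
        path.append(s)
    signal = next((i for i, v in enumerate(path) if v > limit), None)
    return np.array(path), signal

rng = np.random.default_rng(8)
risks = rng.uniform(0.02, 0.3, 500)                     # pre-operative predicted risks
deaths = rng.binomial(1, np.minimum(1.0, 1.6 * risks))  # worse-than-expected outcomes
path, signal = risk_adjusted_bernoulli_cusum(deaths, risks)
print("chart signals at observation:", signal)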

  3. Functional form and risk adjustment of hospital costs: Bayesian analysis of a Box-Cox random coefficients model.

    PubMed

    Hollenbeak, Christopher S

    2005-10-15

    While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.

  4. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve this goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  5. The method of a joint intraday security check system based on cloud computing

    NASA Astrophysics Data System (ADS)

    Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng

    2017-01-01

    The intraday security check is the core application in the dispatching control system. The existing security check calculation uses only the dispatch center's local model and data within its own functional scope. This paper introduces the design and implementation of an all-grid intraday joint security check system based on cloud computing. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment against the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.

  6. Future Orientation, Social Support, and Psychological Adjustment among Left-behind Children in Rural China: A Longitudinal Study.

    PubMed

    Su, Shaobing; Li, Xiaoming; Lin, Danhua; Zhu, Maoling

    2017-01-01

    Existing research has found that parental migration may negatively impact the psychological adjustment of left-behind children. However, limited longitudinal research has examined if and how future orientation (an individual protective factor) and social support (a contextual protective factor) are associated with indicators of psychological adjustment (i.e., life satisfaction, school satisfaction, happiness, and loneliness) among left-behind children. In the current longitudinal study, we examined the differences in psychological adjustment between left-behind children and non-left-behind children (comparison children) in rural areas, and explored the protective roles of future orientation and social support in the immediate (cross-sectional effects) and subsequent (lagged effects) status of psychological adjustment for both groups of children. The sample included 897 rural children (M age = 14.09, SD = 1.40) who participated in two waves of surveys across six months. Among the participants, 227 were left-behind children with two parents migrating, 176 were left-behind children with one parent migrating, and 485 were comparison children. Results showed that (1) left-behind children reported lower levels of life satisfaction, school satisfaction, and happiness, as well as a higher level of loneliness, in both waves; and (2) after controlling for several demographics and characteristics of parental migration among left-behind children, future orientation significantly predicted life satisfaction, school satisfaction, and happiness in both cross-sectional and longitudinal regression models, as well as loneliness in the longitudinal regression analysis. Social support predicted immediate life satisfaction, school satisfaction, and happiness, as well as subsequent school satisfaction. Similar to left-behind children, comparison children who reported higher scores in future orientation, especially future expectation, were likely to have higher scores on most indicators of psychological adjustment measured at the same time and subsequently. However, social support did not appear to be as important for the immediate psychological adjustment of comparison children as it was for left-behind children. Findings, implications, and limitations of the present study are discussed.

  7. Determination of small amounts of molybdenum in tungsten and molybdenum ores

    USGS Publications Warehouse

    Grimaldi, F.S.; Wells, R.C.

    1943-01-01

    A rapid method has been developed for the determination of small amounts of molybdenum in tungsten and molybdenum ores. After removing iron and other major constituents the molybdenum thiocyanate color is developed in water-acetone solutions, using ammonium citrate to eliminate the interference of tungsten. Comparison is made by titrating a blank with a standard molybdenum solution. Aliquots are adjusted to deal with amounts of molybdenum ranging from 0.01 to 1.30 mg.

  8. Marital Adjustment of Graduate Student Couples.

    ERIC Educational Resources Information Center

    McRoy, Sue; Fisher, Virginia L.

    1982-01-01

    Comparisons of graduate student couples indicated lower levels of marital adjustment (consensus and affection) for couples where only the husband was a student. Suggests variables other than student status may relate to marital adjustment. When only the wife was a student, family income was higher and couples were older. (Author)

  9. Depression and Social Adjustment in Siblings of Boys with Autism.

    ERIC Educational Resources Information Center

    Gold, Nora

    1993-01-01

    Twenty-two siblings of autistic boys and 34 other siblings were compared on measures of depression, social adjustment, and family responsibilities. Results showed that siblings of autistic boys scored significantly higher on depression than the comparison group, but not on problems of social adjustment. There were no significant gender…

  10. A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham Heart Study.

    PubMed

    Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne

    2016-11-03

    Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow-up. The Cox regression model has been used widely in analyses of time to diagnosis or death from disease. The associations between the survival outcome and time-dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time-to-event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time-dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time-adjusted CSP provided identical results to the TDCM, while the PLR showed larger parameter estimates compared to the time-adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time-adjusted PLR had a positive bias in the time-dependent Age effect, with reduced bias when the event rate was low. The PLR methods showed a negative bias in the Sex effect, a subject-level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time-dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. We present results from the Framingham Heart Study, in which lipid measurements and myocardial infarction events were collected over a period of 26 years.
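
    As a rough illustration of the pooled-logistic approach compared above, the sketch below builds a person-period data set from hypothetical subject-level follow-up records and fits a time-adjusted pooled logistic regression with statsmodels. The column names, simulated data, and one-row-per-exam layout are assumptions for illustration, not the study's actual Framingham setup.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # illustrative subject-level data (not Framingham data): follow-up in discrete exam periods
        rng = np.random.default_rng(5)
        n = 500
        subjects = pd.DataFrame({
            "id": np.arange(n),
            "sex": rng.integers(0, 2, n),
            "periods_followed": rng.integers(1, 6, n),   # number of exams until event or censoring
            "event": rng.integers(0, 2, n),              # 1 = event at the last observed period
        })

        # expand to one row per subject per period (person-period file)
        rows = []
        for r in subjects.itertuples(index=False):
            for t in range(1, r.periods_followed + 1):
                rows.append({"id": r.id, "period": t, "sex": r.sex,
                             "y": int(r.event and t == r.periods_followed)})
        pp = pd.DataFrame(rows)

        # time-adjusted pooled logistic regression: include the period itself as a covariate
        X = sm.add_constant(pp[["period", "sex"]])
        fit = sm.Logit(pp["y"], X).fit(disp=0)
        print(np.exp(fit.params))   # odds ratios; with rare events these approximate hazard ratios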

  11. Comparison of the performance of the CMS Hierarchical Condition Category (CMS-HCC) risk adjuster with the Charlson and Elixhauser comorbidity measures in predicting mortality.

    PubMed

    Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A

    2010-08-20

    The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC model had a significantly higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods, with the largest drop for the CMS-HCC method. However, the CMS-HCC still performed as well as or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods if information about the patient's diagnoses prior to the index hospitalization is available and used to code the risk adjusters. However, caution should be exercised in studies evaluating inpatient processes of care and where data on pre-index admission diagnoses are unavailable.

  12. Adjustment for reporting bias in network meta-analysis of antidepressant trials

    PubMed Central

    2012-01-01

    Background Network meta-analysis (NMA), a generalization of conventional MA, allows for assessing the relative effectiveness of multiple interventions. Reporting bias is a major threat to the validity of MA and NMA. Numerous methods are available to assess the robustness of MA results to reporting bias. We aimed to extend such methods to NMA. Methods We introduced 2 adjustment models for Bayesian NMA. First, we extended a meta-regression model that allows the effect size to depend on its standard error. Second, we used a selection model that estimates the propensity of trial results being published and in which trials with lower propensity are weighted up in the NMA model. Both models rely on the assumption that biases are exchangeable across the network. We applied the models to 2 networks of placebo-controlled trials of 12 antidepressants, with 74 trials in the US Food and Drug Administration (FDA) database but only 51 with published results. NMA and adjustment models were used to estimate the effects of the 12 drugs relative to placebo, the 66 effect sizes for all possible pair-wise comparisons between drugs, probabilities of being the best drug and ranking of drugs. We compared the results from the 2 adjustment models applied to published data with those from NMAs of published data and NMAs of FDA data, the latter considered as representing the totality of the data. Results Both adjustment models showed reduced estimated effects for the 12 drugs relative to the placebo as compared with the NMA of published data. Pair-wise effect sizes between drugs, probabilities of being the best drug and ranking of drugs were modified. Estimated drug effects relative to the placebo from both adjustment models were corrected (i.e., similar to those from the NMA of FDA data) for some drugs but not others, which resulted in differences in pair-wise effect sizes between drugs and ranking. Conclusions In this case study, the adjustment models showed that the NMA of published data was not robust to reporting bias and provided estimates closer to those of the NMA of FDA data, although not optimal. The validity of such methods depends on the number of trials in the network and the assumption that conventional MAs in the network share a common mean bias mechanism. PMID:23016799

  13. EDTA analysis on the Roche MODULAR analyser.

    PubMed

    Davidson, D F

    2007-05-01

    Patient specimens can be subject to subtle interference from cross contamination by liquid-based, potassium-containing EDTA anticoagulant, leading to misinterpretation of results. A rapid method for EDTA analysis to detect such contamination is described. An in-house EDTA assay on the Roche MODULAR analyser was assessed for accuracy and precision by comparison with an adjusted calcium difference measurement (atomic absorption and o-cresolphthalein complexone colorimetry). EDTA method versus adjusted calcium difference showed: slope = 1.038 (95% confidence interval [CI] 0.949-1.131); intercept = 0.073 (95% CI 0.018-0.132) mmol/L; r = 0.914; n = 94. However, inter-assay precision of the calcium difference method was estimated to be poorer (coefficient of variation 24.8% versus 3.4% for the automated colorimetric method at an EDTA concentration of 0.25 mmol/L). Unequivocal contamination was observed at an EDTA concentration of > or =0.2 mmol/L. The automated method showed positive interference from haemolysis and negative interference from oxalate. The method was unaffected by lipaemia (triglycerides <20 mmol/L), icterus (bilirubin <500 micromol/L), glucose (<100 mmol/L), iron (<100 micromol/L), and citrate, phosphate or fluoride (all <2.5 mmol/L). The automated colorimetric assay described is an accurate, precise and rapid (3 min) means of detecting EDTA contamination of unhaemolysed biochemistry specimens.

  14. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679

  15. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. By doing assimilation, not only shall we be able to partially predict the secular variation of the core field, we could also use observations to further our understanding of dynamical states in the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occur in atmospheric dynamics, for example), means that short-time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface have been substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended into the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a simulation anchored to the observations is created that can be compared directly with them. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.

  16. Pharmacoeconomics

    PubMed Central

    Hughes, Dyfrig A

    2012-01-01

    Pharmacoeconomics is an essential component of health technology assessment and the appraisal of medicines for use by UK National Health Service (NHS) patients. As a comparatively young discipline, its methods continue to evolve. Priority research areas for development include methods for synthesizing indirect comparisons when head-to-head trials have not been performed, synthesizing qualitative evidence (for example, stakeholder views), addressing the limitations of the EQ-5D tool for assessing quality of life, including benefits not captured in quality-adjusted life years (QALYs), ways of assessing valuation methods (for determining utility scores), extrapolation of costs and benefits beyond those observed in trials, early estimation of cost-effectiveness (including mechanism-based economic evaluation), methods for incorporating the impact of non-adherence and the role of behavioural economics in influencing patients and prescribers. PMID:22360714

  17. Ares I-X Launch Abort System, Crew Module, and Upper Stage Simulator Vibroacoustic Flight Data Evaluation, Comparison to Predictions, and Recommendations for Adjustments to Prediction Methodology and Assumptions

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; Harrison, Phil

    2010-01-01

    The National Aeronautics and Space Administration (NASA) Constellation Program (CxP) has identified a series of tests to provide insight into the design and development of the Crew Launch Vehicle (CLV) and Crew Exploration Vehicle (CEV). Ares I-X was selected as the first suborbital development flight test to help meet CxP objectives. The Ares I-X flight test vehicle (FTV) is an early operational model of CLV, with specific emphasis on CLV and ground operation characteristics necessary to meet Ares I-X flight test objectives. The in-flight part of the test includes a trajectory to simulate maximum dynamic pressure during flight and perform a stage separation of the Upper Stage Simulator (USS) from the First Stage (FS). The in-flight test also includes recovery of the FS. The random vibration response from the Ares I-X flight will be reconstructed for a few specific locations that were instrumented with accelerometers. These recorded data will be helpful in validating and refining vibration prediction tools and methodology. Measured vibroacoustic environments associated with liftoff and ascent phases of the Ares I-X mission will be compared with pre-flight vibration predictions. The measured flight data were given as time histories, which will be converted into power spectral density plots for comparison with the maximum predicted environments. The maximum predicted environments are documented in the Vibroacoustics and Shock Environment Data Book, AI1-SYS-ACOv4.10. Vibration predictions made using the statistical energy analysis (SEA) computer program VAOne will also be incorporated in the comparisons. Measured ascent and liftoff acoustics will also be compared to predictions to assess whether any discrepancies between the predicted vibration levels and measured vibration levels are attributable to inaccurate acoustic predictions. These comparisons will also be helpful in assessing whether adjustments to prediction methodologies are needed to improve agreement between the predicted and measured flight data. Future assessment will incorporate hybrid methods in the VAOne analysis (i.e., boundary element methods (BEM) and finite element methods (FEM)). These hybrid methods will make it possible to import NASTRAN models, providing much more detailed modeling of the underlying beams and support structure of the Ares I-X test vehicle. Measured acoustic data will be incorporated into these analyses to improve correlation for additional post-flight analysis.

  18. Multiple comparisons permutation test for image based data mining in radiotherapy.

    PubMed

    Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel

    2013-12-23

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value, and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy.
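
    A minimal sketch of the max-statistic idea described above, assuming a simple setup in which each patient's dose distribution is flattened to a vector of voxel doses and the outcome is binary. The per-voxel statistic here is a plain mean difference rather than the paper's exact Tmax definition, and all data are simulated.

        import numpy as np

        def tmax_permutation_test(doses, outcome, n_perm=2000, seed=0):
            """Max-statistic permutation test over voxel-wise dose differences.

            doses   : (n_patients, n_voxels) array of dose values
            outcome : (n_patients,) binary array (1 = event, 0 = no event)
            Returns per-voxel mean-difference statistics and multiplicity-adjusted p-values.
            """
            rng = np.random.default_rng(seed)
            outcome = np.asarray(outcome, bool)

            def voxel_stat(labels):
                # difference in mean dose between the two outcome groups, per voxel
                return doses[labels].mean(axis=0) - doses[~labels].mean(axis=0)

            observed = voxel_stat(outcome)
            tmax_null = np.empty(n_perm)
            for b in range(n_perm):
                perm = rng.permutation(outcome)                 # shuffle outcome labels
                tmax_null[b] = np.abs(voxel_stat(perm)).max()   # one summary statistic per permutation

            # adjusted p-value: how often the null max statistic exceeds each observed voxel statistic
            p_adj = (1 + (tmax_null[:, None] >= np.abs(observed)[None, :]).sum(axis=0)) / (n_perm + 1)
            return observed, p_adj

        # toy usage on simulated data
        doses = np.random.default_rng(1).normal(50, 5, size=(40, 200))
        outcome = np.r_[np.ones(15, int), np.zeros(25, int)]
        stat, p = tmax_permutation_test(doses, outcome)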

  19. Comparison of the Performance of Noise Metrics as Predictions of the Annoyance of Stage 2 and Stage 3 Aircraft Overflights

    NASA Technical Reports Server (NTRS)

    Pearsons, Karl S.; Howe, Richard R.; Sneddon, Matthew D.; Fidell, Sanford

    1996-01-01

    Thirty audiometrically screened test participants judged the relative annoyance of two comparison (variable level) and thirty-four standard (fixed level) signals in an adaptive paired comparison psychoacoustic study. The signal ensemble included both FAR Part 36 Stage 2 and 3 aircraft overflights, as well as synthesized aircraft noise signatures and other non-aircraft signals. All test signals were presented for judgment as heard indoors, in the presence of continuous background noise, under free-field listening conditions in an anechoic chamber. Analyses of the performance of 30 noise metrics as predictors of these annoyance judgments confirmed that the more complex metrics were generally more accurate and precise predictors than the simpler methods. EPNL was somewhat less accurate and precise as a predictor of the annoyance judgments than a duration-adjusted variant of Zwicker's Loudness Level.

  20. [Risk-adjusted assessment: late-onset infection in neonates].

    PubMed

    Gmyrek, Dieter; Koch, Rainer; Vogtmann, Christoph; Kaiser, Annette; Friedrich, Annette

    2011-01-01

    The weak point of the countrywide perinatal/neonatal quality surveillance is that it ignores interhospital differences in the case mix of patients. As a result, this approach does not produce reliable benchmarking. The objective of this study was to adjust the late-onset infection incidence of different hospitals according to their patients' risk profiles by means of multivariate analysis. The perinatal/neonatal database of 41,055 newborns from the Saxonian quality surveillance from 1998 to 2004 was analysed. Based on 18 possible risk factors, a logistic regression model was used to develop a specific risk predictor for the quality indicator "late-onset infection". The developed risk predictor for the incidence of late-onset infection could be described by 4 of the 18 analysed risk factors, namely gestational age, admission from home, hypoxic-ischemic encephalopathy and B-streptococcal infection. The AUC(ROC) value of this quality indicator was 83.3%, which demonstrates its reliability. The hospital ranking based on the adjusted risk assessment was very different from the ranking before adjustment; the average correction of ranking position was 4.96 across the 35 clinics. The application of the risk adjustment method proposed here allows for a more objective comparison of the incidence of the quality indicator "late-onset infection" among different hospitals. Copyright © 2011. Published by Elsevier GmbH.
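
    The sketch below illustrates the general logic of such risk adjustment on invented newborn-level data: fit a logistic risk predictor from patient-level risk factors, then rank hospitals by observed over expected (risk-adjusted) infection counts. The variable names and coefficients are assumptions for illustration, not the Saxonian model.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # illustrative newborn-level data (not the Saxonian registry)
        rng = np.random.default_rng(6)
        n = 4000
        df = pd.DataFrame({
            "hospital": rng.integers(0, 35, n),
            "gestational_age": rng.normal(38.0, 3.0, n),
            "admitted_from_home": rng.integers(0, 2, n),
        })
        true_logit = -2.0 - 0.3 * (df["gestational_age"] - 38.0) + 0.5 * df["admitted_from_home"]
        df["infection"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

        # risk predictor: logistic regression on the selected risk factors
        X = sm.add_constant(df[["gestational_age", "admitted_from_home"]])
        fit = sm.Logit(df["infection"], X).fit(disp=0)
        df["expected"] = fit.predict(X)

        # risk-adjusted comparison: observed over expected infections per hospital
        totals = df.groupby("hospital")[["infection", "expected"]].sum()
        ranking = (totals["infection"] / totals["expected"]).sort_values()
        print(ranking.head())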

  1. Interconnection: A qualitative analysis of adjusting to living with renal cell carcinoma

    PubMed Central

    LEAL, ISABEL; MILBURY, KATHRIN; ENGEBRETSON, JOAN; MATIN, SURENA; JONASCH, ERIC; TANNIR, NIZAR; WOOD, CHRISTOPHER G.; COHEN, LORENZO

    2017-01-01

    Objective Adjusting to cancer is an ongoing process, yet few studies explore this adjustment from a qualitative perspective. The aim of our qualitative study was to understand how patients construct their experience of adjusting to living with cancer. Method Qualitative analysis was conducted of written narratives collected from four separate writing sessions as part of a larger expressive writing clinical trial with renal cell carcinoma patients. Thematic analysis and constant comparison were employed to code the primary patterns in the data into themes until thematic saturation was reached at 37 participants. A social constructivist perspective informed data interpretation. Results Interconnection described the overarching theme underlying the process of adjusting to cancer and involved four interrelated themes: (1) discontinuity—feelings of disconnection and loss following diagnosis; (2) reorientation—to the reality of cancer psychologically and physically; (3) rebuilding—struggling through existential distress to reconnect; and (4) expansion—finding meaning in interconnections with others. Participants related a dialectical movement in which disruption and loss catalyzed an ongoing process of finding meaning. Significance of results Our findings suggest that adjusting to living with cancer is an ongoing, iterative, nonlinear process. The dynamic interactions between the different themes in this process describe the transformation of meaning as participants move through and revisit prior themes in response to fluctuating symptoms and medical news. It is important that clinicians recognize the dynamic and ongoing process of adjusting to cancer to support patients in addressing their unmet psychosocial needs throughout the changing illness trajectory. PMID:28262086

  2. Bone accretion in adolescents using the combined estrogen and progestin transdermal contraceptive method Ortho Evra: a pilot study.

    PubMed

    Harel, Zeev; Riggs, Suzanne; Vaz, Rosalind; Flanagan, Patricia; Harel, Dalia; Machan, Jason T

    2010-02-01

    To date, there are no data regarding the effect of the transdermal combined estrogen and progestin contraceptive Ortho Evra on bone mineral content (BMC) and bone mineral density (BMD). We examined the effects of transdermally delivered ethinyl estradiol and norelgestromin on whole body (WB) BMC and BMD of the hip and lumbar spine (LS) of adolescent girls. In a matched case-control study, girls (n = 5) who applied Ortho Evra for days 1-21 followed by days 22-28 free of medication for 13 cycles (about 12 months) were compared with 5 age- and ethnicity-matched control girls. Evaluations of calcium intake; bone-protective physical activity; bone densitometry (DXA, QDR 4500A, Hologic); bone formation markers serum osteocalcin (OC) and bone-specific alkaline phosphatase (BAP); bone resorption marker urinary N-telopeptide (uNTX); insulin growth factor-1 (IGF-1); and sex hormone binding globulin (SHBG) were carried out at initiation, 6 months, and 12 months. Changes from baseline were compared using mixed models, adjusting for follow-up comparisons using the Holm Test (sequential Bonferroni). There were no significant differences (SD) between groups at baseline in age, gynecologic age, WBBMC, hip BMD, and LSBMD. Girls on Ortho Evra did not change significantly in WBBMC (12-month mean increase 0.2% +/- 0.8%), whereas controls did (3.9% +/- 1.8%, P < or = .001, adjusted P = .002), with SD between the 2 groups (P = .007, adjusted P = .036). Adolescents on Ortho Evra did not change significantly in hip BMD (12-month mean increase 0.5% +/- 0.6%), whereas controls did (2.7% +/- 0.6%, P < or = .001, adjusted P = .004), with SD between the 2 groups (P = .024) prior to adjustment for multiple comparisons, but no SD after adjustment (P = .096). Similarly, although the increase in LSBMD within the control group after 12 months (mean increase 2.8% +/- 1.0%) was statistically significant (P = .009, adjusted P = .044), the change within the treatment group (12-month mean increase 0.8% +/- 0.8%) was not. However, percent LSBMD changes after 12 months did not significantly differ between the 2 groups before or after adjustment for multiple comparisons. Calcium intake and bone-protective physical activity did not significantly predict BMC and BMD changes of study participants. There was a significantly greater increase in SHBG levels in the treatment group after 6 months (P = .003, adjusted P = .013) and 12 months (P < or = .001, adjusted P < or = .001) than in controls. Changes in levels of OC, BAP, uNTX, and IGF-1 were not significantly different between the 2 groups. Ortho Evra use attenuates bone mass acquisition in young women who are still undergoing skeletal maturation. This attenuation may be attributed in part to increased SHBG levels, which reduce the concentrations of free estradiol and free testosterone that are available to interact with receptors on the bone. Clinical implications remain to be determined in studies with a larger number of adolescents. Copyright 2010 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
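
    For readers unfamiliar with the Holm (sequential Bonferroni) correction used for the follow-up comparisons above, a minimal sketch follows; the p-values are invented for illustration and are not the study's.

        import numpy as np

        def holm_adjust(pvals):
            """Holm step-down (sequential Bonferroni) adjusted p-values."""
            p = np.asarray(pvals, float)
            m = len(p)
            order = np.argsort(p)                      # smallest p-value first
            adj = np.empty(m)
            running_max = 0.0
            for rank, idx in enumerate(order):
                adj_p = min((m - rank) * p[idx], 1.0)  # multiply by the number of remaining tests
                running_max = max(running_max, adj_p)  # enforce monotonicity of adjusted p-values
                adj[idx] = running_max
            return adj

        # illustrative p-values (not the study's actual values)
        print(holm_adjust([0.001, 0.024, 0.009, 0.30]))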

  3. Estimating dietary costs of low-income women in California: a comparison of 2 approaches.

    PubMed

    Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam; Townsend, Marilyn S

    2013-04-01

    Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. The objective was to compare 2 approaches for estimating individual daily diet costs in a population of low-income women in California. Cost estimates based on time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made by using less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared by using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Energy and nutrient intakes derived by the 2 methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than did the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) to estimate daily dietary costs with a method that requires minimal time from the participant and moderate time from the researcher.
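
    A minimal sketch of the agreement statistics named above (Pearson correlation and Bland-Altman 95% limits of agreement) applied to two vectors of daily diet-cost estimates; the data below are simulated stand-ins, not the California sample.

        import numpy as np
        from scipy import stats

        def bland_altman(cost_recall, cost_ffq):
            """Pearson r and Bland-Altman 95% limits of agreement for two cost estimates."""
            a, b = np.asarray(cost_recall, float), np.asarray(cost_ffq, float)
            r, _ = stats.pearsonr(a, b)
            diff = a - b
            bias = diff.mean()                 # mean difference between the two methods
            loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
            return r, bias, (bias - loa, bias + loa)

        # illustrative daily diet costs in dollars (not study data)
        rng = np.random.default_rng(2)
        recall = rng.normal(6.3, 1.5, 121)
        ffq = recall - 0.4 + rng.normal(0, 1.0, 121)
        print(bland_altman(recall, ffq))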

  4. Increased frequency of chromosome translocations in airline pilots with long-term flying experience

    PubMed Central

    Yong, L C; Sigurdson, A J; Ward, E M; Waters, M A; Whelan, E A; Petersen, M R; Bhatti, P; Ramsey, M J; Ron, E; Tucker, J D

    2008-01-01

    Background Chromosome translocations are an established biomarker of cumulative exposure to external ionising radiation. Airline pilots are exposed to cosmic ionising radiation, but few flight crew studies have examined translocations in relation to flight experience. Methods We determined the frequency of translocations in the peripheral blood lymphocytes of 83 airline pilots and 50 comparison subjects (mean age 47 and 46 years, respectively). Translocations were scored in an average of 1039 cell equivalents (CE) per subject using fluorescence in situ hybridisation (FISH) whole chromosome painting and expressed per 100 CE. Negative binomial regression models were used to assess the relationship between translocation frequency and exposure status and flight years, adjusting for age, diagnostic x-ray procedures, and military flying. Results There was no significant difference in the adjusted mean translocation frequency of pilots and comparison subjects (0.37 (SE 0.04) vs 0.38 (SE 0.06) translocations/100 CE, respectively). However, among pilots, the adjusted translocation frequency was significantly associated with flight years (p = 0.01) with rate ratios of 1.06 (95% CI 1.01 to 1.11) and 1.81 (95% CI 1.16 to 2.82) for a 1- and 10-year incremental increase in flight years, respectively. The adjusted rate ratio for pilots in the highest compared to the lowest quartile of flight years was 2.59 (95% CI 1.26 to 5.33). Conclusions These data suggest that pilots with long-term flying experience may be exposed to biologically significant doses of ionising radiation. Epidemiological studies with longer follow-up of larger cohorts of pilots with a wide range of radiation exposure levels are needed to clarify the relationship between cosmic radiation exposure and cancer risk. PMID:19074211
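
    A minimal sketch of a negative binomial rate model of the kind described above, with the log of scored cell equivalents as an offset so that translocation counts are modeled as rates per CE. The data frame, covariate set, and simulated values are assumptions for illustration, not the study data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # illustrative data frame (not the study data): translocation counts per subject
        rng = np.random.default_rng(3)
        n = 133
        df = pd.DataFrame({
            "translocations": rng.poisson(4, n),            # counts scored per subject
            "cell_equivalents": rng.integers(800, 1300, n),
            "flight_years": rng.uniform(0, 30, n),
            "age": rng.normal(47, 6, n),
        })

        X = sm.add_constant(df[["flight_years", "age"]])
        model = sm.GLM(
            df["translocations"], X,
            family=sm.families.NegativeBinomial(),
            offset=np.log(df["cell_equivalents"]),          # models a rate per cell equivalent
        )
        result = model.fit()
        print(np.exp(result.params["flight_years"]))        # rate ratio per additional flight year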

  5. Psychosocial functioning and risk factors among siblings of children with cancer: An updated systematic review.

    PubMed

    Long, Kristin A; Lehmann, Vicky; Gerhardt, Cynthia A; Carpenter, Aubrey L; Marsland, Anna L; Alderfer, Melissa A

    2018-06-01

    Siblings' psychosocial adjustment to childhood cancer is poorly understood. This systematic review summarizes findings and limitations of the sibling literature since 2008, provides clinical recommendations, and offers future research directions. MEDLINE/Pubmed, Cumulative Index to Nursing and Allied Health Literature, and PsycINFO were searched for articles related to siblings, psychosocial functioning, and pediatric cancer. After systematic screening, studies meeting inclusion criteria were rated for scientific merit, and findings were extracted and synthesized. In total, 102 studies were included (63 quantitative, 35 qualitative, 4 mixed-methods). Methodological limitations are common. Mean levels of anxiety, depression, and general adjustment are similar across siblings and comparisons, but symptoms of cancer-related posttraumatic stress are prevalent. School-aged siblings display poorer academic functioning and more absenteeism but peer relationships similar to those of their peers. Quality of life findings are mixed. Adult siblings engage in higher levels of risky health behaviors and may have poorer health outcomes than comparisons. Risk factors for poor sibling adjustment include lower social support, poorer family functioning, lower income, non-White race, and shorter time since diagnosis, but findings are inconsistent. Qualitative themes include siblings' maturity, compassion, and autonomy, but also strong negative emotions, uncertainty, family disruptions, limited parental support, school problems, altered friendships, and unmet needs. Despite methodological limitations, research indicates a strong need for sibling support. Clinical recommendations include identifying at-risk siblings and developing interventions to facilitate family communication and increase siblings' social support, cancer-related knowledge, and treatment involvement. Future longitudinal studies focusing on mechanisms and moderators of siblings' adjustment would inform timing and targets of psychosocial care. Copyright © 2018 John Wiley & Sons, Ltd.

  6. National Institute of Mental Health Multisite Eban HIV/STD Prevention Intervention for African American HIV Serodiscordant Couples

    PubMed Central

    El-Bassel, Nabila; Jemmott, John B.; Landis, J. Richard; Pequegnat, Willo; Wingood, Gina M.; Wyatt, Gail E.; Bellamy, Scarlett L.

    2014-01-01

    Background Human immunodeficiency virus (HIV) has disproportionately affected African Americans. Couple-level interventions may be a promising intervention strategy. Methods To determine if a behavioral intervention can reduce HIV/sexually transmitted disease (STD) risk behaviors among African American HIV serodiscordant couples, a cluster randomized controlled trial (Eban) was conducted in Atlanta, Georgia; Los Angeles, California; New York, New York; and Philadelphia, Pennsylvania; with African American HIV serodiscordant heterosexual couples who were eligible if both partners were at least 18 years old and reported unprotected intercourse in the previous 90 days and awareness of each other's serostatus. One thousand seventy participants were enrolled (mean age, 43 years; 40% of male participants were HIV positive). Couples were randomized to 1 of 2 interventions: couple-focused Eban HIV/STD risk-reduction intervention or attention-matched individual-focused health promotion comparison. The primary outcomes were the proportion of condom-protected intercourse acts and cumulative incidence of STDs (chlamydia, gonorrhea, or trichomonas). Data were collected preintervention and postintervention, and at 6- and 12-month follow-ups. Results Data were analyzed for 535 randomized couples: 260 in the intervention group and 275 in the comparison group; 81.9% were retained at the 12-month follow-up. Generalized estimating equation analyses revealed that the proportion of condom-protected intercourse acts was larger among couples in the intervention group (0.77) than in the comparison group (0.47; risk ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41; P=.006) when adjusted for the baseline criterion measure. The adjusted percentage of couples using condoms consistently was higher in the intervention group (63%) than in the comparison group (48%; risk ratio, 1.45; 95% CI, 1.24 to 1.70; P<.001). The adjusted mean number of (log) unprotected intercourse acts was lower in the intervention group than in the comparison group (mean difference, –1.52; 95% CI, –2.07 to –0.98; P<.001). The cumulative STD incidence over the 12-month follow-up did not differ between couples in the intervention and comparison groups. Overall, there were 5 HIV seroconversions at the 12-month follow-up (2 in the intervention group, 3 in the comparison group) among 535 individuals, which translates to 935 per 100 000 population. Conclusion To our knowledge, this is the first randomized controlled intervention trial to report significant reductions in HIV/STD risk behaviors among African American HIV serodiscordant couples. PMID:20625011

  7. Comparison of Computational Approaches for Rapid Aerodynamic Assessment of Small UAVs

    NASA Technical Reports Server (NTRS)

    Shafer, Theresa C.; Lynch, C. Eric; Viken, Sally A.; Favaregh, Noah; Zeune, Cale; Williams, Nathan; Dansie, Jonathan

    2014-01-01

    Computational Fluid Dynamic (CFD) methods were used to determine the basic aerodynamic, performance, and stability and control characteristics of the unmanned air vehicle (UAV), Kahu. Accurate and timely prediction of the aerodynamic characteristics of small UAVs is an essential part of military system acquisition and air-worthiness evaluations. The forces and moments of the UAV were predicted using a variety of analytical methods for a range of configurations and conditions. The methods included Navier Stokes (N-S) flow solvers (USM3D, Kestrel and Cobalt) that take days to set up and hours to converge on a single solution; potential flow methods (PMARC, LSAERO, and XFLR5) that take hours to set up and minutes to compute; empirical methods (Datcom) that involve table lookups and produce a solution quickly; and handbook calculations. A preliminary aerodynamic database can be developed very efficiently by using a combination of computational tools. The database can be generated with low-order and empirical methods in linear regions, then replacing or adjusting the data as predictions from higher order methods are obtained. A comparison of results from all the data sources as well as experimental data obtained from a wind-tunnel test will be shown and the methods will be evaluated on their utility during each portion of the flight envelope.

  8. Adjusting for Health Status in Non-Linear Models of Health Care Disparities

    PubMed Central

    Cook, Benjamin L.; McGuire, Thomas G.; Meara, Ellen; Zaslavsky, Alan M.

    2009-01-01

    This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice. PMID:20352070

  9. Two-station comparison of peak flows to improve flood-frequency estimates for seven streamflow-gaging stations in the Salmon and Clearwater River Basins, Central Idaho

    USGS Publications Warehouse

    Berenbrock, Charles

    2003-01-01

    Improved flood-frequency estimates for short-term (10 or fewer years of record) streamflow-gaging stations were needed to support instream flow studies by the U.S. Forest Service, which are focused on quantifying water rights necessary to maintain or restore productive fish habitat. Because peak-flow data for short-term gaging stations can be biased by having been collected during an unusually wet, dry, or otherwise unrepresentative period of record, the data may not represent the full range of potential floods at a site. To test whether peak-flow estimates for short-term gaging stations could be improved, the two-station comparison method was used to adjust the logarithmic mean and logarithmic standard deviation of peak flows for seven short-term gaging stations in the Salmon and Clearwater River Basins, central Idaho. Correlation coefficients determined from regression of peak flows for paired short-term and long-term (more than 10 years of record) gaging stations over a concurrent period of record indicated that the mean and standard deviation of peak flows for all short-term gaging stations would be improved. Flood-frequency estimates for seven short-term gaging stations were determined using the adjusted mean and standard deviation. The original (unadjusted) flood-frequency estimates for three of the seven short-term gaging stations differed from the adjusted estimates by less than 10 percent, probably because the data were collected during periods representing the full range of peak flows. Unadjusted flood-frequency estimates for four short-term gaging stations differed from the adjusted estimates by more than 10 percent; unadjusted estimates for Little Slate Creek and Salmon River near Obsidian differed from adjusted estimates by nearly 30 percent. These large differences probably are attributable to unrepresentative periods of peak-flow data collection.
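
    A simplified sketch of the core two-station idea described above, assuming log-transformed annual peak flows: the short-record station's log mean and log standard deviation are shifted using the regression relation with the concurrent record at a long-term index station. This is a stripped-down version for illustration, not the full procedure applied in the report.

        import numpy as np

        def two_station_adjust(y_short_log, x_concurrent_log, x_full_log):
            """Simplified two-station adjustment of the log mean and log SD of peak flows.

            y_short_log      : log peak flows at the short-record station (concurrent years)
            x_concurrent_log : log peak flows at the long-record index station, same years
            x_full_log       : full log peak-flow record at the index station
            The short-record statistics are shifted by the regression slope times the
            difference between the index station's full-record and concurrent statistics.
            """
            y, xc, xf = (np.asarray(a, float) for a in (y_short_log, x_concurrent_log, x_full_log))
            b = np.cov(y, xc, ddof=1)[0, 1] / np.var(xc, ddof=1)   # regression slope of y on x
            mean_adj = y.mean() + b * (xf.mean() - xc.mean())
            var_adj = np.var(y, ddof=1) + b ** 2 * (np.var(xf, ddof=1) - np.var(xc, ddof=1))
            sd_adj = np.sqrt(max(var_adj, 1e-12))                  # guard against a negative estimate
            return mean_adj, sd_adj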

  10. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  11. A weighted adjustment of a similarity transformation between two point sets containing errors

    NASA Astrophysics Data System (ADS)

    Marx, C.

    2017-10-01

    For an adjustment of a similarity transformation, it is often appropriate to consider that both the source and the target coordinates of the transformation are affected by errors. For the least-squares adjustment of this problem, a direct solution is possible for specific weighting schemes of the coordinates. Such a problem is considered in the present contribution and a direct solution is generally derived for the m-dimensional space. The applied weighting scheme allows (fully populated) point-wise weight matrices for the source and target coordinates; the two weight matrices have to be proportional to each other. Additionally, the solutions of two borderline cases of this weighting scheme are derived, which only consider errors in the source or target coordinates. The investigated solution of the rotation matrix of the adjustment is independent of the scaling between the weight matrices of the source and the target coordinates. The mentioned borderline cases, therefore, have the same solution for the rotation matrix. The direct solution method is successfully tested on an example of a 3D similarity transformation using a comparison with an iterative solution based on the Gauß-Helmert model.
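
    As a point of reference for the transformation being adjusted, the sketch below estimates an m-dimensional similarity transformation (scale, rotation, translation) with the standard unweighted SVD-based least-squares solution, which corresponds to the borderline case of errors in only one coordinate set; it is not the paper's weighted derivation.

        import numpy as np

        def similarity_transform(src, dst):
            """Least-squares similarity transformation dst ≈ s * R @ src + t (unweighted).

            src, dst : (n_points, m) arrays of corresponding coordinates.
            Returns scale s, rotation matrix R, translation vector t.
            """
            src, dst = np.asarray(src, float), np.asarray(dst, float)
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            A, B = src - mu_s, dst - mu_d
            U, S, Vt = np.linalg.svd(B.T @ A)           # cross-covariance decomposition
            D = np.eye(src.shape[1])
            D[-1, -1] = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
            t = mu_d - s * R @ mu_s
            return s, R, t

        # 3D usage on noiseless points recovers the generating transform
        rng = np.random.default_rng(4)
        P = rng.normal(size=(10, 3))
        theta = 0.3
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
        Q = 1.5 * P @ Rz.T + np.array([2.0, -1.0, 0.5])
        print(similarity_transform(P, Q))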

  12. Adjustment of regional regression equations for urban storm-runoff quality using at-site data

    USGS Publications Warehouse

    Barks, C.S.

    1996-01-01

    Regional regression equations have been developed to estimate urban storm-runoff loads and mean concentrations using a national data base. Four statistical methods using at-site data to adjust the regional equation predictions were developed to provide better local estimates. The four adjustment procedures are a single-factor adjustment, a regression of the observed data against the predicted values, a regression of the observed values against the predicted values and additional local independent variables, and a weighted combination of a local regression with the regional prediction. Data collected at five representative storm-runoff sites during 22 storms in Little Rock, Arkansas, were used to verify, and, when appropriate, adjust the regional regression equation predictions. Comparison of observed values of stormrunoff loads and mean concentrations to the predicted values from the regional regression equations for nine constituents (chemical oxygen demand, suspended solids, total nitrogen as N, total ammonia plus organic nitrogen as N, total phosphorus as P, dissolved phosphorus as P, total recoverable copper, total recoverable lead, and total recoverable zinc) showed large prediction errors ranging from 63 percent to more than several thousand percent. Prediction errors for 6 of the 18 regional regression equations were less than 100 percent and could be considered reasonable for water-quality prediction equations. The regression adjustment procedure was used to adjust five of the regional equation predictions to improve the predictive accuracy. For seven of the regional equations the observed and the predicted values are not significantly correlated. Thus neither the unadjusted regional equations nor any of the adjustments were appropriate. The mean of the observed values was used as a simple estimator when the regional equation predictions and adjusted predictions were not appropriate.
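
    A minimal sketch of the first two adjustment procedures listed above (the single-factor adjustment and the regression of observed against predicted values), using invented storm-runoff loads rather than the Little Rock data.

        import numpy as np

        def single_factor_adjust(observed, predicted, new_prediction):
            """Scale a regional-equation prediction by the mean observed/predicted ratio."""
            factor = np.mean(np.asarray(observed, float) / np.asarray(predicted, float))
            return factor * new_prediction

        def regression_adjust(observed, predicted, new_prediction):
            """Adjust a regional-equation prediction by regressing observed on predicted values."""
            slope, intercept = np.polyfit(np.asarray(predicted, float),
                                          np.asarray(observed, float), 1)
            return intercept + slope * new_prediction

        # illustrative storm-runoff loads in kg (not the Little Rock data)
        obs  = np.array([12.0, 30.0, 8.5, 22.0, 15.0])
        pred = np.array([20.0, 55.0, 16.0, 40.0, 24.0])
        print(single_factor_adjust(obs, pred, 35.0), regression_adjust(obs, pred, 35.0))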

  13. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  14. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  15. Use of Life Course Work–Family Profiles to Predict Mortality Risk Among US Women

    PubMed Central

    Guevara, Ivan Mejía; Glymour, M. Maria; Berkman, Lisa F.

    2015-01-01

    Objectives. We examined relationships between US women’s exposure to midlife work–family demands and subsequent mortality risk. Methods. We used data from women born 1935 to 1956 in the Health and Retirement Study to calculate employment, marital, and parenthood statuses for each age between 16 and 50 years. We used sequence analysis to identify 7 prototypical work–family trajectories. We calculated age-standardized mortality rates and hazard ratios (HRs) for mortality associated with work–family sequences, with adjustment for covariates and potentially explanatory later-life factors. Results. Married women staying home with children briefly before reentering the workforce had the lowest mortality rates. In comparison, after adjustment for age, race/ethnicity, and education, HRs for mortality were 2.14 (95% confidence interval [CI] = 1.58, 2.90) among single nonworking mothers, 1.48 (95% CI = 1.06, 1.98) among single working mothers, and 1.36 (95% CI = 1.02, 1.80) among married nonworking mothers. Adjustment for later-life behavioral and economic factors partially attenuated risks. Conclusions. Sequence analysis is a promising exposure assessment tool for life course research. This method permitted identification of certain lifetime work–family profiles associated with mortality risk before age 75 years. PMID:25713976

  16. Indirect Treatment Comparison of Talimogene Laherparepvec Compared with Ipilimumab and Vemurafenib for the Treatment of Patients with Metastatic Melanoma.

    PubMed

    Quinn, Casey; Ma, Qiufei; Kudlac, Amber; Palmer, Stephen; Barber, Beth; Zhao, Zhongyun

    2016-04-01

    Few randomized controlled trials have compared new treatments for metastatic melanoma. We sought to examine the relative treatment effect of talimogene laherparepvec compared with ipilimumab and vemurafenib. A systematic literature review of treatments for metastatic melanoma was undertaken but a valid network of evidence could not be established because of a lack of comparative data or studies with sufficient common comparators. A conventional adjusted indirect treatment comparison via network meta-analysis was, therefore, not feasible. Instead, a meta-analysis of absolute efficacy was undertaken, adjusting overall survival (OS) data for differences in prognostic factors between studies using a published algorithm. Four trials were included in the final indirect treatment comparison: two of ipilimumab, one of vemurafenib, and one of talimogene laherparepvec. Median OS for ipilimumab and vemurafenib increased significantly when adjustment was applied, demonstrating that variation in disease and patient characteristics was biasing OS estimates; adjusting for this made the survival data more comparable. For both ipilimumab and vemurafenib, the adjustments improved Kaplan-Meier OS curves; the observed talimogene laherparepvec OS curve remained above the adjusted OS curves for ipilimumab and vemurafenib, showing that long-term survival could differ from the observed medians. Even with limited data, talimogene laherparepvec, ipilimumab, and vemurafenib could be compared following adjustments, thereby providing a more reliable understanding of the relative effect of treatment on survival in a more comparable patient population. The results of this analysis suggest that OS with talimogene laherparepvec is at least as good as with ipilimumab and vemurafenib and improvement was more pronounced in patients with no bone, brain, lung or other visceral metastases. Amgen Inc.

  17. A comparative study of the costliness of Manitoba hospitals.

    PubMed

    Shanahan, M; Loyd, M; Roos, N P; Brownell, M

    1999-06-01

    In light of ongoing discussions about health care policy, this study offered a method of calculating costs at Manitoba hospitals that compared the relative costliness of inpatient care provided in each hospital. This methodology also allowed comparisons across types of hospitals: teaching, community, major rural, intermediate and small rural, as well as northern isolated facilities. Data used in this project include basic hospital information, both financial and statistical, for each of the Manitoba hospitals, hospital charge information by case from the State of Maryland, and hospital discharge abstract information for Manitoba. The data from Maryland were used to create relative cost weights (RCWs) for refined diagnostic related groups (RDRGs) and were subsequently adjusted for Manitoba length of stay. These case weights were then applied to cases in Manitoba hospitals, and several other adjustments were made for nontypical cases. This case mix system allows cost comparisons across hospitals. In general, hospital case mix costing demonstrated variability in hospital costliness, not only across types of hospitals but also within hospitals of the same type and size. Costs at the teaching hospitals were found to be considerably higher than the average, even after accounting for acuity and case mix.
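
    The basic case-mix costing arithmetic implied above can be sketched as follows: expected cost is the sum of the relative cost weights of a hospital's cases times a standard cost per weighted case, and costliness is the ratio of actual to expected cost. The function name and all numbers are illustrative assumptions, not values from the study.

        import numpy as np

        def costliness_ratio(case_weights, actual_total_cost, cost_per_weighted_case):
            """Actual cost divided by the cost expected from the hospital's case mix.

            case_weights : relative cost weights (RCWs) of the hospital's discharged cases
            A ratio above 1 indicates costlier-than-expected care for the case mix treated.
            """
            expected = np.sum(case_weights) * cost_per_weighted_case
            return actual_total_cost / expected

        # illustrative values only
        weights = np.array([0.8, 1.2, 3.5, 0.9, 1.1])
        print(costliness_ratio(weights, actual_total_cost=27_000.0,
                               cost_per_weighted_case=3_200.0))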

  18. Multivariable confounding adjustment in distributed data networks without sharing of patient-level data.

    PubMed

    Toh, Sengwee; Reichman, Marsha E; Houstoun, Monika; Ding, Xiao; Fireman, Bruce H; Gravel, Eric; Levenson, Mark; Li, Lingling; Moyneur, Erick; Shoaibi, Azadeh; Zornberg, Gwen; Hennessy, Sean

    2013-11-01

    It is increasingly necessary to analyze data from multiple sources when conducting public health safety surveillance or comparative effectiveness research. However, security, privacy, proprietary, and legal concerns often reduce data holders' willingness to share highly granular information. We describe and compare two approaches that do not require sharing of patient-level information to adjust for confounding in multi-site studies. We estimated the risks of angioedema associated with angiotensin-converting enzyme inhibitors (ACEIs), angiotensin receptor blockers (ARBs), and aliskiren in comparison with beta-blockers within Mini-Sentinel, which has created a distributed data system of 18 health plans. To obtain the adjusted hazard ratios (HRs) and 95% confidence intervals (CIs), we performed (i) a propensity score-stratified case-centered logistic regression analysis, a method identical to a stratified Cox regression analysis but needing only aggregated risk set data, and (ii) an inverse variance-weighted meta-analysis, which requires only the site-specific HR and variance. We also performed simulations to further compare the two methods. Compared with beta-blockers, the adjusted HR was 3.04 (95% CI: 2.81, 3.27) for ACEIs, 1.16 (1.00, 1.34) for ARBs, and 2.85 (1.34, 6.04) for aliskiren in the case-centered analysis. The corresponding HRs were 2.98 (2.76, 3.21), 1.15 (1.00, 1.33), and 2.86 (1.35, 6.04) in the meta-analysis. Simulations suggested that the two methods may produce different results under certain analytic scenarios. The case-centered analysis and the meta-analysis produced similar results without the need to share patient-level data across sites in our empirical study, but may provide different results in other study settings. Copyright © 2013 John Wiley & Sons, Ltd.
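
    A minimal sketch of the second approach described above, the inverse variance-weighted meta-analysis of site-specific hazard ratios, assuming each site supplies only its log hazard ratio and variance; the numbers below are illustrative, not Mini-Sentinel estimates.

        import numpy as np

        def inverse_variance_pooled_hr(log_hrs, variances):
            """Fixed-effect inverse-variance pooling of site-specific log hazard ratios."""
            log_hrs, variances = np.asarray(log_hrs, float), np.asarray(variances, float)
            w = 1.0 / variances                       # weight each site by the precision of its estimate
            pooled = np.sum(w * log_hrs) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            hr = np.exp(pooled)
            ci = (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
            return hr, ci

        # illustrative site-specific log hazard ratios and variances (not Mini-Sentinel results)
        print(inverse_variance_pooled_hr([1.10, 1.05, 1.20], [0.02, 0.05, 0.08]))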

  19. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping.

    PubMed

    Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J

    2017-01-01

    Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when they are applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions and the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements, and (4) an estimation of performance along the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need for adjusting a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information acquired in the field throughout crop growth. Moreover, the introduced method has the advantage that it is not limited to growth measurements but can also be applied to other tasks such as identifying weeds, diseases, stress, etc.

  20. Extraction and analysis of the image in the sight field of comparison goniometer to measure IR mirrors assembly

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-shan; Zhao, Yue-jin; Li, Zhuo; Dong, Liquan; Chu, Xuhong; Li, Ping

    2010-11-01

    The comparison goniometer is widely used to measure and inspect small angles, angle differences, and the parallelism of two surfaces. However, the usual way to read a comparison goniometer is for the operator to inspect its ocular with one eye, and reading an old goniometer equipped with only one adjustable ocular is difficult. In the fabrication of an IR reflecting-mirror assembly, a common comparison goniometer is used to measure the angle errors between two neighboring assembled mirrors. In this paper, an image-based quick-reading technique for the comparison goniometer used to inspect the parallelism of mirrors in a mirror assembly is proposed. A digital camera, a comparison goniometer, and a computer are used to construct the reading system; the image of the sight field in the comparison goniometer is extracted and recognized to obtain the angular positions of the reflecting surfaces to be measured. To obtain the interval distance between the scale lines, a particular technique, the left-peak-first method, based on the local peak values of intensity in the true-color image, is proposed. A program written in VC++ 6.0 has been developed to perform the color digital image processing.
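
    As a rough illustration of extracting scale-line spacing from such a sight-field image, the sketch below scans a 1-D intensity profile from left to right and keeps the first local peak in each neighborhood. It is only a generic stand-in for the left-peak-first idea; the profile, minimum separation, and threshold are all assumptions made for illustration.

        import numpy as np

        def scale_line_positions(profile, min_separation=5, threshold=0.0):
            """Return local-peak positions and their mean spacing (in pixels).

            profile: 1-D array of intensity sampled across the scale lines.
            Scanning left to right, a sample is kept as a peak if it exceeds its
            neighbors, clears `threshold`, and lies at least `min_separation`
            pixels to the right of the previously accepted peak.
            """
            profile = np.asarray(profile, dtype=float)
            peaks = []
            for i in range(1, len(profile) - 1):
                is_peak = (profile[i] > profile[i - 1] and
                           profile[i] >= profile[i + 1] and
                           profile[i] > threshold)
                if is_peak and (not peaks or i - peaks[-1] >= min_separation):
                    peaks.append(i)
            spacings = np.diff(peaks)
            mean_spacing = float(spacings.mean()) if spacings.size else None
            return peaks, mean_spacing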

  1. Case-mix adjustment and the comparison of community health center performance on patient experience measures.

    PubMed

    Johnson, M Laura; Rodriguez, Hector P; Solorio, M Rosa

    2010-06-01

    To assess the effect of case-mix adjustment on community health center (CHC) performance on patient experience measures. A Medicaid-managed care plan in Washington State collected patient survey data from 33 CHCs over three fiscal quarters during 2007-2008. The survey included three composite patient experience measures (6-month reports) and two overall ratings of care. The analytic sample includes 2,247 adult patients and 2,859 adults reporting for child patients. We compared the relative importance of patient case-mix adjusters by calculating each adjuster's predictive power and variability across CHCs. We then evaluated the impact of case-mix adjustment on the relative ranking of CHCs. Important case-mix adjusters included adult self-reported health status or parent-reported child health status, adult age, and educational attainment. The effects of case-mix adjustment on patient reports and ratings were different in the adult and child samples. Adjusting for race/ethnicity and language had a greater impact on parent reports than adult reports, but it impacted ratings similarly across the samples. The impact of adjustment on composites and ratings was modest, but it affected the relative ranking of CHCs. To ensure equitable comparison of CHC performance on patient experience measures, reports and ratings should be adjusted for adult self-reported health status or parent-reported child health status, adult age, education, race/ethnicity, and survey language. Because of the differential impact of case-mix adjusters for child and adult surveys, initiatives should consider measuring and reporting adult and child scores separately.
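
    One common regression-based formulation of such case-mix adjustment is sketched below, purely as an illustration: the column names and the linear model are assumptions, not the study's specification. Each patient's score is predicted from the adjusters, and each center's observed mean is shifted by the gap between its expected mean and the overall mean before re-ranking.

        import statsmodels.formula.api as smf

        def case_mix_adjusted_scores(df):
            """Case-mix adjust patient-experience scores and re-rank centers.

            df is assumed to be a pandas DataFrame with columns 'score',
            'chc' (center id), and the adjusters 'health_status', 'age', and
            'education'. Adjusted score = observed mean - expected mean + overall mean.
            """
            model = smf.ols("score ~ health_status + age + C(education)", data=df).fit()
            df = df.assign(expected=model.predict(df))
            overall = df["score"].mean()
            by_chc = df.groupby("chc").agg(observed=("score", "mean"),
                                           expected=("expected", "mean"))
            by_chc["adjusted"] = by_chc["observed"] - by_chc["expected"] + overall
            return by_chc.sort_values("adjusted", ascending=False)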

  2. Geodetic reanalysis of annual glaciological mass balances (2001-2011) of Hintereisferner, Austria

    NASA Astrophysics Data System (ADS)

    Klug, Christoph; Bollmann, Erik; Galos, Stephan Peter; Nicholson, Lindsey; Prinz, Rainer; Rieg, Lorenzo; Sailer, Rudolf; Stötter, Johann; Kaser, Georg

    2018-03-01

    This study presents a reanalysis of the glaciologically obtained annual glacier mass balances at Hintereisferner, Ötztal Alps, Austria, for the period 2001-2011. The reanalysis is accomplished through a comparison with geodetically derived mass changes, using annual high-resolution airborne laser scanning (ALS). The grid-based adjustments for the method-inherent differences are discussed along with associated uncertainties and discrepancies of the two methods of mass balance measurements. A statistical comparison of the two datasets shows no significant difference for seven annual, as well as the cumulative, mass changes over the 10-year record. Yet, the statistical view hides significant differences in the mass balance years 2002/03 (glaciological minus geodetic records = +0.92 m w.e.), 2005/06 (+0.60 m w.e.), and 2006/07 (-0.45 m w.e.). We conclude that exceptional meteorological conditions can render the usual glaciological observational network inadequate. Furthermore, we consider that ALS data reliably reproduce the annual mass balance and can be seen as validation or calibration tools for the glaciological method.

  3. TRASYS form factor matrix normalization

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix so that nodal sums equal unity. The approach is not limited to closed geometries and, in fact, is primarily intended for open geometries; its purpose is to prevent overly optimistic form factors to space. In this method, nodal form factor sums are first calculated to within 0.05 of unity using TRASYS (deviations as large as 0.10 may be acceptable), and a process is then employed to distribute the residual difference among the nodes. A specific example was analyzed with this method and compared with a standard approach for calculating radiation conductors, determining hot- and cold-case temperatures for both. Exterior nodes exhibited temperature differences as large as 7 °C and 3 °C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes showed differences from 0 °C to 5 °C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
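
    The normalization step can be illustrated with a short sketch. This is a simplified proportional redistribution of each node's residual, assuming the form factor matrix is supplied as a 2-D array with one row per node; it is not the TRASYS implementation itself.

        import numpy as np

        def normalize_form_factors(F, tol=0.10):
            """Rescale each node's row of form factors so it sums to unity.

            The residual (1 - row sum) is spread over the row in proportion to
            the existing entries, i.e. a simple proportional redistribution.
            Rows whose sums deviate from unity by more than `tol` are flagged
            rather than silently adjusted.
            """
            F = np.asarray(F, dtype=float)
            sums = F.sum(axis=1)
            bad = np.where(np.abs(sums - 1.0) > tol)[0]
            if bad.size:
                raise ValueError(f"Form factor sums outside tolerance for nodes {bad.tolist()}")
            return F / sums[:, None]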

  4. Lipophilic versus hydrophilic statin therapy for heart failure: a protocol for an adjusted indirect comparison meta-analysis

    PubMed Central

    2013-01-01

    Background Statins are known to reduce cardiovascular morbidity and mortality in primary and secondary prevention studies. Subsequently, a number of nonrandomised studies have shown statins improve clinical outcomes in patients with heart failure (HF). Small randomised controlled trials (RCT) also show improved cardiac function, reduced inflammation and mortality with statins in HF. However, the findings of two large RCTs do not support the evidence provided by previous studies and suggest statins lack beneficial effects in HF. Two meta-analyses have shown statins do not improve survival, whereas two others showed improved cardiac function and reduced inflammation in HF. It appears lipophilic statins produce better survival and other outcome benefits compared to hydrophilic statins. But the two types have not been compared in direct comparison trials in HF. Methods/design We will conduct a systematic review and meta-analysis of lipophilic and hydrophilic statin therapy in patients with HF. Our objectives are: 1. To determine the effects of lipophilic statins on (1) mortality, (2) hospitalisation for worsening HF, (3) cardiac function and (4) inflammation. 2. To determine the effects of hydrophilic statins on (1) mortality, (2) hospitalisation for worsening HF, (3) cardiac function and (4) inflammation. 3. To compare the efficacy of lipophilic and hydrophilic statins on HF outcomes with an adjusted indirect comparison meta-analysis. We will conduct an electronic search of databases for RCTs that evaluate statins in patients with HF. The reference lists of all identified studies will be reviewed. Two independent reviewers will conduct the search. The inclusion criteria include: 1. RCTs comparing statins with placebo or no statin in patients with symptomatic HF. 2. RCTs that employed the intention-to-treat (ITT) principle in data analysis. 3. Symptomatic HF patients of all aetiologies and on standard treatment. 4. Statin of any dose as intervention. 5. Placebo or no statin arm as control. The exclusion criteria include: 1. RCTs involving cerivastatin in HF patients. 2. RCTs with less than 4 weeks of follow-up. Discussion We will perform an adjusted indirect comparison meta-analysis of lipophilic versus hydrophilic statins in patients with HF using placebo or no statin arm as common comparator. PMID:23618535
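
    The adjusted indirect comparison itself reduces to the standard Bucher calculation: subtract the pooled log effect of hydrophilic statins versus the common comparator from that of lipophilic statins, and add the variances. A minimal sketch, with all inputs supplied by the user:

        import math

        def bucher_indirect_comparison(log_rr_a, se_a, log_rr_b, se_b):
            """Adjusted indirect comparison of treatment A vs B via a common comparator.

            Inputs are the pooled log relative-effect estimates and standard errors
            from the A-vs-comparator and B-vs-comparator meta-analyses.
            """
            log_rr_ab = log_rr_a - log_rr_b
            se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
            rr = math.exp(log_rr_ab)
            ci = (math.exp(log_rr_ab - 1.96 * se_ab), math.exp(log_rr_ab + 1.96 * se_ab))
            return rr, ci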

  5. [Development of the scale of strategies for enhancing self-esteem among medical school students].

    PubMed

    Kim, Jin-Ju; Jang, Eun-Young; Park, Yong-Chon

    2013-06-01

    Given that medical students are under pressure to achieve academically and are vulnerable to subjective distress, there is a need to evaluate the strategies they use to enhance self-esteem when they fail academically. This study aimed to develop a scale of strategies for enhancing self-esteem and to confirm its convergent, discriminant, and criterion validity. Data were collected from 279 students at a medical school in Seoul. The scale of strategies for enhancing self-esteem (SSES) comprised comparison with inferior others, doubting the academic failure, accepting the failure, and attribution to incidental factors. To confirm the validities, participants also responded to items measuring self-esteem, narcissism, the five personality factors, depression, and adjustment. Exploratory factor analysis of the SSES yielded three factors (comparison, doubting, and acceptance), and in confirmatory factor analysis the three-dimensional structure showed the best fit. Notably, comparison and doubting strategies were positively associated with depression and negatively associated with adjustment. In contrast, acceptance strategies were negatively associated with depression and positively associated with adjustment. Additionally, comparison and doubting strategies were positively associated with narcissism. The SSES for medical school students after academic failure yields three dimensions reliably and consistently, and it shows satisfactory convergent and concurrent validities.

  6. Evaluation of terrestrial and streamside salamander monitoring techniques at Shenandoah National Park

    USGS Publications Warehouse

    Jung, R.E.; Droege, S.; Sauer, J.R.; Landy, R.B.

    2000-01-01

    In response to concerns about amphibian declines, a study evaluating and validating amphibian monitoring techniques was initiated in Shenandoah and Big Bend National Parks in the spring of 1998. We evaluate precision, bias, and efficiency of several sampling methods for terrestrial and streamside salamanders in Shenandoah National Park and assess salamander abundance in relation to environmental variables, notably soil and water pH. Terrestrial salamanders, primarily redback salamanders (Plethodon cinereus), were sampled by searching under cover objects during the day in square plots (10 to 35 m2). We compared population indices (mean daily and total counts) with adjusted population estimates from capture-recapture. Analyses suggested that the proportion of salamanders detected (p) during sampling varied among plots, necessitating the use of adjusted population estimates. However, adjusted population estimates were less precise than population indices, and may not be efficient in relating salamander populations to environmental variables. In future sampling, strategic use of capture-recapture to verify consistency of p's among sites may be a reasonable compromise between the possibility of bias in estimation of population size and deficiencies due to inefficiency associated with the estimation of p. The streamside two-lined salamander (Eurycea bislineata) was surveyed using four methods: leaf litter refugia bags, 1 m2 quadrats, 50 x 1 m visual encounter transects, and electric shocking. Comparison of survey methods at nine streams revealed congruent patterns of abundance among sites, suggesting that relative bias among the methods is similar, and that choice of survey method should be based on precision and logistical efficiency. Redback and two-lined salamander abundance were not significantly related to soil or water pH, respectively.
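
    As a generic illustration of the kind of capture-recapture adjustment referred to above (the abstract does not specify the study's actual estimator), the Chapman bias-corrected Lincoln-Petersen estimate for a simple two-visit design is:

        def chapman_estimate(n1, n2, m2):
            """Chapman's bias-corrected Lincoln-Petersen estimate of population size.

            n1: animals captured and marked on the first visit
            n2: animals captured on the second visit
            m2: marked animals among the second-visit captures
            Offered only as a generic illustration of capture-recapture adjustment.
            """
            n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
            # Approximate variance (Seber/Chapman form)
            var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
                   / ((m2 + 1) ** 2 * (m2 + 2)))
            return n_hat, var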

  7. Application of a Novel DCPD Adjustment Method for the J-R Curve Characterization: A study based on ORNL and ASTM Interlaboratory Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiang; Sokolov, Mikhail A; Nanstad, Randy K

    Material fracture toughness in the fully ductile region can be described by a J-integral vs. crack growth resistance curve (J-R curve). As a conventional J-R curve measurement method, the elastic unloading compliance (EUC) method becomes impractical for elevated-temperature testing due to relaxation of the material and the friction-induced back-up shape of the J-R curve. One alternative for J-R curve testing applies the direct current potential drop (DCPD) technique to measure crack extension. However, besides crack growth, potential drop can also be influenced by plastic deformation, crack tip blunting, etc., and uncertainties exist in the current DCPD methodology, especially in differentiating potential drop due to stable crack growth from that due to material deformation. Thus, using DCPD for J-R curve determination remains a challenging task. In this study, a new adjustment procedure for applying DCPD to derive the J-R curve has been developed for conventional fracture toughness specimens, including compact tension, three-point bend, and disk-shaped compact specimens. Data analysis has been performed on Oak Ridge National Laboratory (ORNL) and American Society for Testing and Materials (ASTM) interlaboratory results covering different specimen thicknesses, test temperatures, and materials, to evaluate the applicability of the new DCPD adjustment procedure for J-R curve characterization. After applying the newly developed procedure, direct comparison between the DCPD method and the normalization method on the same specimens indicated close agreement for the overall J-R curves, as well as for the provisional values of fracture toughness near the onset of ductile crack extension, Jq, and of the tearing modulus.

  8. Why We (Usually) Don't Have to Worry about Multiple Comparisons

    ERIC Educational Resources Information Center

    Gelman, Andrew; Hill, Jennifer; Yajima, Masanao

    2012-01-01

    Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…

  9. Including quality attributes in efficiency measures consistent with net benefit: creating incentives for evidence based medicine in practice.

    PubMed

    Eckermann, Simon; Coelli, Tim

    2013-01-01

    Evidence based medicine supports net benefit maximising therapies and strategies in processes of health technology assessment (HTA) for reimbursement and subsidy decisions internationally. However, translation of evidence based medicine to practice is impeded by efficiency measures such as cost per case-mix adjusted separation in hospitals, which ignore health effects of care. In this paper we identify a correspondence method that allows quality variables under control of providers to be incorporated in efficiency measures consistent with maximising net benefit. Including effects framed from a disutility bearing (utility reducing) perspective (e.g. mortality, morbidity or reduction in life years) as inputs and minimising quality inclusive costs on the cost-disutility plane is shown to enable efficiency measures consistent with maximising net benefit under a one to one correspondence. The method combines advantages of radial properties with an appropriate objective of maximising net benefit to overcome problems of inappropriate objectives implicit with alternative methods, whether specifying quality variables with utility bearing output (e.g. survival, reduction in morbidity or life years), hyperbolic or exogenous variables. This correspondence approach is illustrated in undertaking efficiency comparison at a clinical activity level for 45 Australian hospitals allowing for their costs and mortality rates per admission. Explicit coverage and comparability conditions of the underlying correspondence method are also shown to provide a robust framework for preventing cost-shifting and cream-skimming incentives, with appropriate qualification of analysis and support for data linkage and risk adjustment where these conditions are not satisfied. Comparison on the cost-disutility plane has previously been shown to have distinct advantages in comparing multiple strategies in HTA, which this paper naturally extends to a robust method and framework for comparing efficiency of health care providers in practice. Consequently, the proposed approach provides a missing link between HTA and practice, to allow active incentives for evidence based net benefit maximisation in practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Estimating dietary costs of low-income women in California: a comparison of 2 approaches123

    PubMed Central

    Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam

    2013-01-01

    Background: Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. Objective: The objective was to compare 2 approaches for estimating individual daily diet costs in a population of low-income women in California. Design: Cost estimates based on time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made by using less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared by using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Results: Energy and nutrient intakes derived by the 2 methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than did the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Conclusions: Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) to estimate daily dietary costs with the use of this minimal time-intensive method for the participant and moderate time-intensive method for the researcher. PMID:23388658
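
    A minimal sketch of the agreement statistics used above, assuming paired per-person daily cost estimates from the two methods are available as arrays:

        import numpy as np

        def agreement_statistics(cost_method1, cost_method2):
            """Pearson correlation, mean difference (bias), and Bland-Altman 95% limits of agreement."""
            a = np.asarray(cost_method1, dtype=float)
            b = np.asarray(cost_method2, dtype=float)
            r = np.corrcoef(a, b)[0, 1]
            diffs = a - b
            bias = diffs.mean()
            sd = diffs.std(ddof=1)
            return r, bias, (bias - 1.96 * sd, bias + 1.96 * sd)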

  11. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between the high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to sample size is large. We derive a minimum set of constants and parameters that is sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency illustrates the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including single-group pre-intervention and post-intervention comparisons, multiple parallel-group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Surface engineering on CeO2 nanorods by chemical redox etching and their enhanced catalytic activity for CO oxidation

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhang, Zhiyun; Li, Jing; Ma, Yuanyuan; Qu, Yongquan

    2015-07-01

    Controllable surface properties of nanocerias are desired for various catalytic processes. There is a lack of efficient approaches to adjust the surface properties of ceria to date. Herein, a redox chemical etching method was developed to controllably engineer the surface properties of ceria nanorods. Ascorbic acid and hydrogen peroxide were used to perform the redox chemical etching process, resulting in a rough surface and/or pores on the surface of ceria nanorods. Increasing the etching cycles induced a steady increase of the specific surface area, oxygen vacancies and surface Ce3+ fractions. As a result, the etched nanorods delivered enhanced catalytic activity for CO oxidation, compared to the non-etched ceria nanorods. Our method provides a novel and facile approach to continuously adjust the surface properties of ceria for practical applications. Electronic supplementary information (ESI) available: Diameter distributions of as-prepared and etched samples, optical images, specific catalytic data of CO oxidation and comparison of CO oxidation. See DOI: 10.1039/c5nr01846c

  13. International drug price comparisons: quality assessment.

    PubMed

    Machado, Márcio; O'Brodovich, Ryan; Krahn, Murray; Einarson, Thomas R

    2011-01-01

    To quantitatively summarize the results (i.e., prices and affordability) reported in international drug price comparison studies and assess their methodological quality. A systematic search of the most relevant databases (Medline, Embase, International Pharmaceutical Abstracts (IPA), and Scopus), from their inception to May 2009, was conducted to identify original research comparing international drug prices. International drug price information was extracted and recorded from accepted papers. Affordability was reported as drug prices adjusted for income. Study quality was assessed using six criteria: use of similar countries, use of a representative sample of drugs, selection of specific types of prices, identification of drug packaging, different weights on price indices, and the type of currency conversion used. Of the 1,828 studies identified, 21 were included. Only one study adequately addressed all quality issues. A large variation in study quality was observed owing to the many methods used to conduct the drug price comparisons, such as different indices, economic parameters, price types, baskets of drugs, and more; thus, the quality of the published studies was considered poor. Results varied across studies, but generally higher-income countries had higher drug prices. However, after adjusting drug prices for affordability, higher-income countries had more affordable prices than lower-income countries. Differences between drug prices and affordability in different countries were found. Low-income countries reported lower affordability of drugs, leaving room for potential problems with drug access and, consequently, a negative impact on health. The quality of the literature on this topic needs improvement.

  14. Detection of QT prolongation using a novel electrocardiographic analysis algorithm applying intelligent automation: prospective blinded evaluation using the Cardiac Safety Research Consortium electrocardiographic database.

    PubMed

    Green, Cynthia L; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W

    2012-03-01

    The Cardiac Safety Research Consortium (CSRC) provides both "learning" and blinded "testing" digital electrocardiographic (ECG) data sets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This article reports the first results from a blinded testing data set that examines developer reanalysis of original sponsor-reported core laboratory data. A total of 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 181 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer-measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Developer and sponsor-reported baseline-adjusted data were similar with average differences <1 ms for all intervals. Both developer- and sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject SD for triplicate QTcF measurements was significantly lower for developer- than sponsor-reported data (5.4 and 7.2 ms, respectively; P < .001). The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared with the sponsor-reported study, without the use of a manual core laboratory. These findings indicate that CSRC ECG data sets can be useful for evaluating novel methods and algorithms for determining drug-induced QT/QTc prolongation. Although the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. Copyright © 2012 Mosby, Inc. All rights reserved.
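
    The baseline- and placebo-adjusted endpoint has a simple form in a parallel-group TQT design. The sketch below computes ddQTcF at a single post-dose time point from group-level arrays; it illustrates the standard definition rather than the CSRC or sponsor analysis code.

        import numpy as np

        def ddqtcf(qtcf_drug, baseline_drug, qtcf_placebo, baseline_placebo):
            """Baseline- and placebo-adjusted QTcF change at one post-dose time point.

            ddQTcF = mean(on-drug change from baseline) - mean(on-placebo change
            from baseline), with each subject contributing one value in their arm.
            """
            d_drug = np.mean(np.asarray(qtcf_drug, float) - np.asarray(baseline_drug, float))
            d_placebo = np.mean(np.asarray(qtcf_placebo, float) - np.asarray(baseline_placebo, float))
            return d_drug - d_placebo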

  15. Comparison of conventional and novel quadrupole drift tube magnets inspired by Klaus Halbach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feinberg, B.

    1995-02-01

    Quadrupole drift tube magnets for a heavy-ion linac provide a demanding application of magnet technology. A comparison is made of three different solutions to the problem of providing an adjustable high-field-strength quadrupole magnet in a small volume. A conventional tape-wound electromagnet quadrupole magnet (conventional) is compared with an adjustable permanent-magnet/iron quadrupole magnet (hybrid) and a laced permanent-magnet/iron/electromagnet (laced). Data is presented from magnets constructed for the SuperHILAC heavy-ion linear accelerator, and conclusions are drawn for various applications.

  16. Parent-Child Communication and Adjustment Among Children With Advanced and Non-Advanced Cancer in the First Year Following Diagnosis or Relapse.

    PubMed

    Keim, Madelaine C; Lehmann, Vicky; Shultz, Emily L; Winning, Adrien M; Rausch, Joseph R; Barrera, Maru; Gilmer, Mary Jo; Murphy, Lexa K; Vannatta, Kathryn A; Compas, Bruce E; Gerhardt, Cynthia A

    2017-09-01

    To examine parent-child communication (i.e., openness, problems) and child adjustment among youth with advanced or non-advanced cancer and comparison children. Families (n = 125) were recruited after a child's diagnosis/relapse and stratified by advanced (n = 55) or non-advanced (n = 70) disease. Comparison children (n = 60) were recruited from local schools. Children (ages 10-17) reported on communication (Parent-Adolescent Communication Scale) with both parents, while mothers reported on child adjustment (Child Behavior Checklist) at enrollment (T1) and one year (T2). Openness/problems in communication did not differ across groups at T1, but problems with fathers were higher among children with non-advanced cancer versus comparisons at T2. Openness declined for all fathers, while changes in problems varied by group for both parents. T1 communication predicted later adjustment only for children with advanced cancer. Communication plays an important role, particularly for children with advanced cancer. Additional research with families affected by life-limiting conditions is needed. © The Author 2017. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  17. Regional comparison of absolute gravimeters, EURAMET.M.G-K2 key comparison

    NASA Astrophysics Data System (ADS)

    Pálinkáš, V.; Francis, O.; Val'ko, M.; Kostelecký, J.; Van Camp, M.; Castelein, S.; Bilker-Koivula, M.; Näränen, J.; Lothhammer, A.; Falk, R.; Schilling, M.; Timmen, L.; Iacovone, D.; Baccaro, F.; Germak, A.; Biolcati, E.; Origlia, C.; Greco, F.; Pistorio, A.; De Plaen, R.; Klein, G.; Seil, M.; Radinovic, R.; Reudink, R.; Dykowski, P.; Sȩkowski, M.; Próchniewicz, D.; Szpunar, R.; Mojzeš, M.; Jańk, J.; Papčo, J.; Engfeldt, A.; Olsson, P. A.; Smith, V.; van Westrum, D.; Ellis, B.; Lucero, B.

    2017-01-01

    In the framework of the regional EURAMET.M.G-K2 comparison of absolute gravimeters, 17 gravimeters were compared in November 2015. Four gravimeters from different NMIs and DIs were used to link the regional comparison to CCM.G-K2 by means of the linking converter. A combined least-squares adjustment with a weighted constraint was used to determine the key comparison reference value (KCRV). Several pilot solutions are presented and compared with the official solution to demonstrate the influence of different choices (e.g., the definition of the weights and of the constraint) on the results of the adjustment. In the official solution, all gravimeters are in equivalence within their declared uncertainties. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org); the final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
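
    In its simplest form, the constrained weighted least-squares adjustment reduces to a weighted mean of the gravimeter results with weights 1/u_i^2. The sketch below computes that reference value, its standard uncertainty, and each instrument's degree of equivalence; the full comparison analysis (linking converter, correlations, alternative weightings) is more involved than this.

        import numpy as np

        def weighted_mean_kcrv(values, uncertainties):
            """Weighted-mean comparison reference value and degrees of equivalence.

            values, uncertainties: per-gravimeter results x_i and standard
            uncertainties u_i. Weights are 1/u_i^2; the degree of equivalence
            of each gravimeter is its offset from the reference value.
            """
            x = np.asarray(values, dtype=float)
            u = np.asarray(uncertainties, dtype=float)
            w = 1.0 / u ** 2
            kcrv = np.sum(w * x) / np.sum(w)
            u_kcrv = np.sqrt(1.0 / np.sum(w))
            degrees_of_equivalence = x - kcrv
            return kcrv, u_kcrv, degrees_of_equivalence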

  18. Improving Cross-Sector Comparisons: Going Beyond the Health-Related QALY.

    PubMed

    Brazier, John; Tsuchiya, Aki

    2015-12-01

    The quality-adjusted life-year (QALY) has become a widely used measure of health outcomes for use in informing decision making in health technology assessment. However, there is growing recognition of outcomes beyond health within the health sector and in related sectors such as social care and public health. This paper presents the advantages and disadvantages of ten possible approaches covering extending the health-related QALY and using well-being and monetary-based methods, in order to address the problem of using multiple outcome measures to inform resource allocation within and between sectors.

  19. FDTD approach to optical forces of tightly focused vector beams on metal particles.

    PubMed

    Qin, Jian-Qi; Wang, Xi-Lin; Jia, Ding; Chen, Jing; Fan, Ya-Xian; Ding, Jianping; Wang, Hui-Tian

    2009-05-11

    We propose an improved FDTD method to calculate the optical forces exerted by tightly focused beams on microscopic metal particles. A comparison of different kinds of tightly focused beams indicates that the trapping efficiency can be altered by adjusting the polarization of the incident field. The results also show the size dependence of the trapping forces exerted on metal particles. Transverse trapping forces produced by different illumination wavelengths are also evaluated. The numerical simulation demonstrates the possibility of trapping moderate-sized metal particles whose radii are comparable to the wavelength.

  20. In search of best fitted composite model to the ALAE data set with transformed Gamma and inversed transformed Gamma families

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu

    2017-05-01

    In this paper, a recently proposed approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE) data. Among all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be the strongest candidate, as it fits the loss data best. The final part applies backtesting to validate the VaR and CTE risk measures.

  1. A liquid chromatographic method for determination of theophylline in serum and capillary blood--a comparison.

    PubMed

    Gartzke, J; Jäger, H; Vins, I

    1991-01-01

    A simple, fast and reliable liquid chromatographic method for the determination of theophylline in serum and capillary blood after solid-phase extraction is described for therapeutic drug monitoring. The use of capillary blood permits the determination of an individual drug profile and other pharmacokinetic studies in neonates and infants. There were no differences between venous and capillary blood levels, but these values compared poorly with those in serum. An adjustment of the results, correcting for the different volumes of serum and blood by means of the haematocrit, was unsuccessful. Differences in the binding of theophylline to erythrocytes could explain the differences between serum and blood levels of theophylline.

  2. Comparison of flume and towing methods for verifying the calibration of a suspended-sediment sampler

    USGS Publications Warehouse

    Beverage, J.P.; Futrell, J.C.

    1986-01-01

    Suspended-sediment samplers must sample isokinetically (at stream velocity) in order to collect representative water samples from rivers. Each sampler sold by the Federal Interagency Sedimentation Project or by the U.S. Geological Survey Hydrologic Instrumentation Facility has been adjusted to sample isokinetically and tested in a flume to verify the calibration. The test program for a modified U.S. P-61 sampler provided an opportunity to compare flume and towing-tank tests. Although the two tests yielded statistically distinct results, the difference between them was quite small. The conclusion is that verifying the calibration of any suspended-sediment sampler by either the flume or the towing method should give acceptable results.

  3. Automatic image acquisition processor and method

    DOEpatents

    Stone, William J.

    1986-01-01

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
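
    The procedure described above can be sketched as a small iterative loop: place points on a circle about the trial center, count how many overlie the object in each quadrant, derive error signals from the quadrant imbalance, and shift the trial center until the counts balance. The predicate `is_on_object` and the step sizes are assumptions made for illustration; this is a sketch of the idea, not the patented implementation.

        import math

        def locate_center(is_on_object, x0, y0, radius, n_points=360,
                          step=1.0, max_iter=100, tol=1):
            """Iteratively move a trial center until circle points overlie the
            object equally in all four quadrants.

            is_on_object(x, y) is assumed to return True where the image pixel
            belongs to the object.
            """
            for _ in range(max_iter):
                counts = [0, 0, 0, 0]  # quadrant hit counts: +x+y, -x+y, -x-y, +x-y
                for k in range(n_points):
                    theta = 2 * math.pi * k / n_points
                    dx, dy = radius * math.cos(theta), radius * math.sin(theta)
                    if is_on_object(x0 + dx, y0 + dy):
                        q = (0 if dx >= 0 else 1) if dy >= 0 else (3 if dx >= 0 else 2)
                        counts[q] += 1
                # Error signals: imbalance between right/left and upper/lower halves.
                ex = (counts[0] + counts[3]) - (counts[1] + counts[2])
                ey = (counts[0] + counts[1]) - (counts[2] + counts[3])
                if abs(ex) <= tol and abs(ey) <= tol:
                    break
                x0 += step * (1 if ex > 0 else -1 if ex < 0 else 0)
                y0 += step * (1 if ey > 0 else -1 if ey < 0 else 0)
            return x0, y0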

  4. Automatic image acquisition processor and method

    DOEpatents

    Stone, W.J.

    1984-01-16

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.

  5. [Comparison of three methods for measuring multiple morbidity according to the use of health resources in primary healthcare].

    PubMed

    Sicras-Mainar, Antoni; Velasco-Velasco, Soledad; Navarro-Artieda, Ruth; Blanca Tamayo, Milagrosa; Aguado Jodar, Alba; Ruíz Torrejón, Amador; Prados-Torres, Alexandra; Violan-Fors, Concepción

    2012-06-01

    To compare three methods of measuring multiple morbidity according to the use of health resources (cost of care) in primary healthcare (PHC). Retrospective study using computerized medical records from thirteen PHC teams in Catalonia (Spain), covering patients assigned to the teams and requiring care in 2008. The variables were socio-demographic characteristics, co-morbidity and costs. The methods compared were: a) the Combined Comorbidity Index (CCI), an index developed from the scores of acute and chronic episodes; b) the Charlson Index (ChI); and c) the Adjusted Clinical Groups case-mix resource use bands (RUB). The cost model was constructed by differentiating between fixed (operational) and variable costs. Three multiple linear regression models were developed to assess the explanatory power of each measure of co-morbidity, compared using the coefficient of determination (R²), p < .05. The study included 227,235 patients. The mean unit cost was €654.2. The CCI explained R² = 50.4%, the ChI R² = 29.2% and the RUB R² = 39.7% of the variability in cost. The behaviour of the CCI is acceptable, albeit with low scores (1 to 3 points) showing inconclusive results. The CCI may be a simple method of predicting PHC costs in routine clinical practice. If confirmed, these results will allow improvements in the comparison of case-mix. Copyright © 2011 Elsevier España, S.L. All rights reserved.

  6. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
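
    As an illustration of one of the small-sample corrections named above, the sketch below implements Firth's bias-reduced logistic regression by modifying the Newton-scoring update with the hat-matrix diagonal. It is a bare-bones stand-in for the correction applied to the binary component models, not the analysis code used in the paper.

        import numpy as np

        def firth_logistic(X, y, max_iter=50, tol=1e-8):
            """Firth's bias-reduced logistic regression via modified Fisher scoring.

            X: design matrix including an intercept column; y: 0/1 outcomes.
            """
            X = np.asarray(X, dtype=float)
            y = np.asarray(y, dtype=float)
            beta = np.zeros(X.shape[1])
            for _ in range(max_iter):
                eta = X @ beta
                p = 1.0 / (1.0 + np.exp(-eta))
                W = p * (1.0 - p)
                XtWX = X.T @ (X * W[:, None])
                XtWX_inv = np.linalg.inv(XtWX)
                # Diagonal of the hat matrix H = W^(1/2) X (X'WX)^(-1) X' W^(1/2)
                h = np.einsum("ij,jk,ik->i", X * W[:, None], XtWX_inv, X)
                # Firth-modified score: X'(y - p + h * (0.5 - p))
                score = X.T @ (y - p + h * (0.5 - p))
                delta = XtWX_inv @ score
                beta += delta
                if np.max(np.abs(delta)) < tol:
                    break
            return beta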

  7. Isolation of osteoprogenitors from human jaw periosteal cells: a comparison of two magnetic separation methods.

    PubMed

    Olbrich, Marcus; Rieger, Melanie; Reinert, Siegmar; Alexander, Dorothea

    2012-01-01

    Human jaw periosteum tissue contains osteoprogenitors that have potential for tissue engineering applications in oral and maxillofacial surgeries. To isolate osteoprogenitor cells from heterogeneous cell populations, we used the specific mesenchymal stem cell antigen-1 (MSCA-1) antibody and compared two magnetic separation methods. We analyzed the obtained MSCA-1(+) and MSCA-1(-) fractions in terms of purity, yield of positive/negative cells and proliferative and mineralization potentials. The analysis of cell viability after separation revealed that the EasySep method yielded higher viability rates, whereas the flow cytometry results showed a higher purity for the MACS-separated cell fractions. The mineralization capacity of the osteogenic induced MSCA-1(+) cells compared with the MSCA-1(-) controls using MACS was 5-fold higher, whereas the same comparison after EasySep showed no significant differences between both fractions. By analyzing cell proliferation, we detected a significant difference between the proliferative potential of the osteogenic cells versus untreated cells after the MACS and EasySep separations. The differentiated cells after MACS separation adjusted their proliferative capacity, whereas the EasySep-separated cells failed to do so. The protein expression analysis showed small differences between the two separation methods. Our findings suggest that MACS is a more suitable separation method to isolate osteoprogenitors from the entire jaw periosteal cell population.

  8. Multiple comparisons permutation test for image based data mining in radiotherapy

    PubMed Central

    2013-01-01

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy. PMID:24365155
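
    The essential computation can be sketched as follows, using the voxel-wise difference in group means as a stand-in test statistic (the paper's exact statistic and any variance normalization are not reproduced here). The single-step maximum-statistic permutation scheme returns family-wise-error adjusted p-values per voxel.

        import numpy as np

        def tmax_permutation_test(images_a, images_b, n_perm=1000, seed=0):
            """Permutation test with the maximum statistic (Tmax) over all voxels.

            images_a, images_b: arrays of shape (n_patients, n_voxels) holding
            the flattened dose distributions of the two outcome groups.
            Returns adjusted p-values, one per voxel.
            """
            rng = np.random.default_rng(seed)
            data = np.vstack([images_a, images_b])
            labels = np.array([0] * len(images_a) + [1] * len(images_b))

            def mean_diff(lab):
                return data[lab == 0].mean(axis=0) - data[lab == 1].mean(axis=0)

            observed = mean_diff(labels)
            tmax_null = np.empty(n_perm)
            for i in range(n_perm):
                perm = rng.permutation(labels)
                tmax_null[i] = np.abs(mean_diff(perm)).max()
            # Adjusted p-value: fraction of permutations whose Tmax exceeds each voxel's statistic.
            p_adj = (tmax_null[None, :] >= np.abs(observed)[:, None]).mean(axis=1)
            return p_adj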

  9. Changes in Initial Expenditures for Benign Prostatic Hyperplasia Evaluation in the Medicare Population: A Comparison to Overall Medicare Inflation

    PubMed Central

    Bellinger, Adam S.; Elliott, Sean P.; Yang, Liu; Wei, John T.; Saigal, Christopher S.; Smith, Alexandria; Wilt, Timothy J.; Strope, Seth A.

    2012-01-01

    Introduction: Benign prostatic hyperplasia (BPH) creates significant expenses for the Medicare program. We sought to determine trends in expenditures for BPH evaluative testing after urologist consultation and to place these trends in the context of overall Medicare expenditures. Methods: Using a 5% national sample of Medicare beneficiaries from 2000 to 2007, we developed a cohort of men with claims for new visits to urologists for diagnoses consistent with symptomatic BPH (n=40,253). We assessed trends in initial expenditures (within 12 months of diagnosis; inflation and geography adjusted) by categories of evaluative tests derived from the 2003 AUA Guideline on the Management of BPH. Using governmental reports on Medicare expenditures, trends in BPH expenditures were compared to overall and imaging-specific Medicare expenditures. Comparisons were assessed by Z-tests and regression analysis for linear trends, as appropriate. Results: Between 2000 and 2007, inflation-adjusted total Medicare expenditure per patient for the initial evaluation of BPH patients seen by urologists increased from $255.44 to $343.98 (p<0.0001). The increase in BPH-related imaging (55%) was significantly less than the increase in overall Medicare expenditures on imaging (104%; p<0.001). The 35% increase in per-patient expenditures for BPH was significantly lower than the increase in overall Medicare expenditure per enrollee (45%; p=0.0015). Conclusion: From 2000 to 2007, inflation-adjusted expenditures on BPH-related evaluations increased. This growth was slower than the overall growth in Medicare expenditures, and increases in imaging expenditures related to BPH were restrained compared with the Medicare program as a whole. PMID:22425128

  10. Comparison of performance-based measures among native Japanese, Japanese-Americans in Hawaii and Caucasian women in the United States, ages 65 years and over: a cross-sectional study

    PubMed Central

    Aoyagi, Kiyoshi; Ross, Philip D; Nevitt, Michael C; Davis, James W; Wasnich, Richard D; Hayashi, Takuo; Takemoto, Tai-ichiro

    2001-01-01

    Background Japanese (both in Japan and Hawaii) have a lower incidence of falls and of hip fracture than North American and European Caucasians, but the reasons for these differences are not clear. Subjects and Methods A cross-sectional study. We compared neuromuscular risk factors for falls using performance-based measures (chair stand time, usual and rapid walking speed, and grip strength) among 163 Japanese women in Japan, 681 Japanese-American women in Hawaii and 9403 Caucasian women in the United States aged 65 years and over. Results After adjusting for age, the Caucasian women required about 40% more time to complete 5 chair stands than either group of Japanese. Walking speed was about 10% slower among Caucasians than native Japanese, whereas Japanese-American women in Hawaii walked about 11% faster than native Japanese. Grip strength was greatest in Japan, which may reflect the rural farming district that this sample was drawn from. Additional adjustment for height, weight or body mass index increased the adjusted means of chair stand time and grip strength among Japanese, but the differences remained significant. Conclusions Both native Japanese and Japanese-American women in Hawaii performed better than Caucasians on chair stand time and walking speed tests, and native Japanese had greater grip strength than Japanese in Hawaii and Caucasians. The biological implications of these differences in performance are uncertain, but may be useful in planning future comparisons between populations. PMID:11696243

  11. Precision mass measurements of some isotopes of tungsten and mercury for an adjustment to the mass table in the region A = 184 to A = 204

    NASA Astrophysics Data System (ADS)

    Barillari, Domenico K.

    This thesis concerns the precise re-measurement of mass values in the region of the mercury isotopes, such that important discrepancies in the high-mass end of the mass table could be resolved. Scope and contents: Four mass spectroscopic doublets involving a comparison between 201Hg, 199Hg and 183W (using a chlorocarbon reference) are reported from measurements made with the upgraded Manitoba II deflection instrument. The measurements address the problem of a mass table mis-adjustment in the region of the valley of β-stability between the tungsten group and the noble metals. The results, forming a well-closed loop of mass differences, support the earlier results of Kozier [Ko(1977)] regarding the (stable) mercury isotope masses and confirm an approximately 20 μu discrepancy in the mass adjustment of Audi et al. [Au(1993)]. A local least-squares re-adjustment conducted using these and existing mass table data suggests that the error originates with mass differences pertaining to one or more other nuclide pairs, perhaps 193Ir-192Ir. The work on upgrading the precision voltage supply and potentiometry system of the Manitoba II instrument is also reported, as is a new assessment of the data processing method. (Abstract shortened by UMI.)

  12. Comparison of nurse staffing based on changes in unit-level workload associated with patient churn.

    PubMed

    Hughes, Ronda G; Bobay, Kathleen L; Jolly, Nicholas A; Suby, Chrysmarie

    2015-04-01

    This analysis compares the staffing implications of three measures of nurse staffing requirements: midnight census, turnover adjustment based on length of stay, and volume of admissions, discharges and transfers. Midnight census is commonly used to determine registered nurse staffing. Unit-level workload increases with patient churn, the movement of patients in and out of the nursing unit. Failure to account for patient churn in staffing allocation impacts nurse workload and may result in adverse patient outcomes. Secondary data analysis of unit-level data from 32 hospitals, where nursing units are grouped into three unit-type categories: intensive care, intermediate care, and medical surgical. Midnight census alone did not account adequately for registered nurse workload intensity associated with patient churn. On average, units were staffed with a mixture of registered nurses and other nursing staff not always to budgeted levels. Adjusting for patient churn increases nurse staffing across all units and shifts. Use of the discharges and transfers adjustment to midnight census may be useful in adjusting RN staffing on a shift basis to account for patient churn. Nurse managers should understand the implications to nurse workload of various methods of calculating registered nurse staff requirements. © 2013 John Wiley & Sons Ltd.

  13. Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

    PubMed Central

    Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.

    2012-01-01

    Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460

  14. Case-Mix Adjustment of the Bereaved Family Survey.

    PubMed

    Kutney-Lee, Ann; Carpenter, Joan; Smith, Dawn; Thorpe, Joshua; Tudose, Alina; Ersek, Mary

    2018-01-01

    Surveys of bereaved family members are increasingly being used to evaluate end-of-life (EOL) care and to measure organizational performance in EOL care quality. The Bereaved Family Survey (BFS) is used to monitor EOL care quality and benchmark performance in the Veterans Affairs (VA) health-care system. The objective of this study was to develop a case-mix adjustment model for the BFS and to examine changes in facility-level scores following adjustment, in order to provide fair comparisons across facilities. We conducted a cross-sectional secondary analysis of medical record and survey data from veterans and their family members across 146 VA medical centers. Following adjustment using model-based propensity weighting, the mean change in the BFS-Performance Measure score across facilities was -0.6 with a range of -2.6 to 0.6. Fifty-five (38%) facilities changed within ±0.5 percentage points of their unadjusted score. On average, facilities that benefited most from adjustment cared for patients with greater comorbidity burden and were located in urban areas in the Northwest and Midwestern regions of the country. Case-mix adjustment results in minor changes to facility-level BFS scores but allows for fairer comparisons of EOL care quality. Case-mix adjustment of the BFS positions this National Quality Forum-endorsed measure for use in public reporting and internal quality dashboards for VA leadership and may inform the development and refinement of case-mix adjustment models for other surveys of bereaved family members.

  15. Comparing historical and modern methods of Sea Surface Temperature measurement - Part 1: Review of methods, field comparisons and dataset adjustments

    NASA Astrophysics Data System (ADS)

    Matthews, J. B. R.

    2012-09-01

    Sea Surface Temperature (SST) measurements have been obtained from a variety of different platforms, instruments and depths over the post-industrial period. Today most measurements come from ships, moored and drifting buoys and satellites. Shipboard methods include temperature measurement of seawater sampled by bucket and in engine cooling water intakes. Engine intake temperatures are generally thought to average a few tenths of a °C warmer than simultaneous bucket temperatures. Here I review SST measurement methods, studies comparing shipboard methods by field experiment and adjustments applied to SST datasets to account for variable methods. In opposition to contemporary thinking, I find average bucket-intake temperature differences reported from field studies inconclusive. Non-zero average differences often have associated standard deviations that are several times larger than the averages themselves. Further, average differences have been found to vary widely between ships and between cruises on the same ship. The cause of non-zero average differences is typically unclear given the general absence of additional temperature observations to those from buckets and engine intakes. Shipboard measurements appear of variable quality, highly dependent upon the accuracy and precision of the thermometer used and the care of the observer where manually read. Methods are generally poorly documented, with written instructions not necessarily reflecting actual practices of merchant mariners. Measurements cannot be expected to be of high quality where obtained by untrained sailors using thermometers of low accuracy and precision.

  16. Parental HIV/AIDS and psychosocial adjustment among rural Chinese children.

    PubMed

    Fang, Xiaoyi; Li, Xiaoming; Stanton, Bonita; Hong, Yan; Zhang, Liying; Zhao, Guoxiang; Zhao, Junfeng; Lin, Xiuyun; Lin, Danhua

    2009-01-01

    To assess the relationship between parental HIV/AIDS and psychosocial adjustment of children in rural central China. Participants included 296 double AIDS orphans (children who had lost both their parents to AIDS), 459 single orphans (children who had lost one parent to AIDS), 466 vulnerable children who lived with HIV-infected parents, and 404 comparison children who did not experience HIV/AIDS-related illness and death in their families. The measures included depressive symptoms, loneliness, self-esteem, future expectations, hopefulness about the future, and perceived control over the future. AIDS orphans and vulnerable children consistently demonstrated poorer psychosocial adjustment than comparison children in the same community. The level of psychosocial adjustment was similar between single orphans and double orphans, but differed by care arrangement among double orphans. The findings underscore the urgency and importance of culturally and developmentally appropriate intervention efforts targeting psychosocial problems among children affected by AIDS and call for more exploration of risk and resilience factors, both individual and contextual, affecting the psychosocial wellbeing of these children.

  17. Suicide Risk Among Holocaust Survivors Following Psychiatric Hospitalizations: A Historic Cohort Study.

    PubMed

    Lurie, Ido; Gur, Adi; Haklai, Ziona; Goldberger, Nehama

    2018-01-01

    The association between Holocaust experience, suicide, and psychiatric hospitalization has not been unequivocally established. The aim of this study was to determine the risk of suicide among 3 Jewish groups with past or current psychiatric hospitalizations: Holocaust survivors (HS), survivors of pre-Holocaust persecution (early HS), and a comparison group of similar European background who did not experience Holocaust persecution. In a retrospective cohort study based on the Israel National Psychiatric Case Register (NPCR) and the database of causes of death, all suicides in the years 1981-2009 were found for HS (n = 16,406), early HS (n = 1,212) and a comparison group (n = 4,286). Age-adjusted suicide rates were calculated for the 3 groups and a logistic regression model was built to assess the suicide risk, controlling for demographic and clinical variables. The numbers of completed suicides in the study period were: HS, 233 (1.4%); early HS, 34 (2.8%); and the comparison group, 64 (1.5%). Age-adjusted rates were 106.7 (95% CI 93.0-120.5) per 100,000 person-years for HS, 231.0 (95% CI 157.0-327.9) for early HS and 150.7 (95% CI 113.2-196.6) for comparisons. The regression models showed significantly higher risk for the early HS versus comparisons (multivariate model adjusted OR = 1.68, 95% CI 1.09-2.60), but not for the HS versus comparisons. These results may indicate higher resilience among the survivors of maximal adversity compared to others who experienced lesser persecution.

  18. Beam-based calibrations of the BPM offset at C-ADS Injector II

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Long; Wang, Zhi-Jun; Feng, Chi; Dou, Wei-Ping; Tao, Yue; Jia, Huan; Wang, Wang-Sheng; Liu, Shu-Hui; He, Yuan

    2016-07-01

    Beam-based BPM offset calibration was carried out for Injector II at the C-ADS demonstration facility at the Institute of Modern Physics (IMP), Chinese Academy of Science (CAS). By using the steering coils integrated in the quadrupoles, the beam orbit can be effectively adjusted and BPM positions recorded at the Medium Energy Beam Transport of the Injector II Linac. The studies were done with a 2 mA, 2.1 MeV proton beam in pulsed mode. During the studies, the “null comparison method” was applied for the calibration. This method is less sensitive to errors compared with the traditional transmission matrix method. In addition, the quadrupole magnet’s center can also be calibrated with this method. Supported by National Natural Science Foundation of China (91426303, 11525523)

  19. Normalized Cut Algorithm for Automated Assignment of Protein Domains

    NASA Technical Reports Server (NTRS)

    Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful for image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly-connected components. Computer implementation of our method tested on the standard comparison set of proteins from the literature shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein and reduced complexity compared to other graph-theory based algorithms, would make it an attractive tool for structural biologists.
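
    As an illustration of the normalized-cut idea the abstract relies on, the sketch below performs a single spectral bipartition of a residue-affinity graph; the affinity definition, threshold search, and recursive stopping rules of the published algorithm are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut_split(W):
    """One normalized-cut bipartition of a symmetric affinity matrix W:
    solve the generalized eigenproblem (D - W) y = lambda * D y and split
    along the second-smallest eigenvector (median threshold as a simple heuristic)."""
    d = W.sum(axis=1)
    D = np.diag(d)
    vals, vecs = eigh(D - W, D)          # generalized symmetric eigenproblem
    fiedler = vecs[:, 1]                 # eigenvector of second-smallest eigenvalue
    return fiedler >= np.median(fiedler)  # boolean mask: candidate domain A vs. B

# Toy example: two blocks of residues with strong internal affinities.
W = np.full((10, 10), 0.01)
W[:5, :5] += 1.0
W[5:, 5:] += 1.0
np.fill_diagonal(W, 0.0)
print(normalized_cut_split(W))           # separates the two blocks
```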

  20. Trends in pesticide concentrations in corn-belt streams, 1996-2006

    USGS Publications Warehouse

    Sullivan, Daniel J.; Vecchia, Aldo V.; Lorenz, David L.; Gilliom, Robert J.; Martin, Jeffrey D.

    2009-01-01

    Trends in the concentrations of commonly occurring pesticides in the Corn Belt of the United States were assessed, and the performance and application of several statistical methods for trend analysis were evaluated. Trends in the concentrations of 11 pesticides with sufficient data for trend assessment were assessed at up to 31 stream sites for two time periods: 1996–2002 and 2000–2006. Pesticides included in the trend analyses were atrazine, acetochlor, metolachlor, alachlor, cyanazine, EPTC, simazine, metribuzin, prometon, chlorpyrifos, and diazinon. The statistical methods applied and compared were (1) a modified version of the nonparametric seasonal Kendall test (SEAKEN), (2) a modified version of the Regional Kendall test, (3) a parametric regression model with seasonal wave (SEAWAVE), and (4) a version of SEAWAVE with adjustment for streamflow (SEAWAVE-Q). The SEAKEN test is a statistical hypothesis test for detecting monotonic trends in seasonal time-series data such as pesticide concentrations at a particular site. Trends across a region, represented by multiple sites, were evaluated using the regional seasonal Kendall test, which computes a test for an overall trend within a region by computing a score for each season at each site and adding the scores to compute the total for the region. The SEAWAVE model is a parametric regression model specifically designed for analyzing seasonal variability and trends in pesticide concentrations. The SEAWAVE-Q model accounts for the effect of changing flow conditions in order to separate changes caused by hydrologic trends from changes caused by other factors, such as pesticide use. There was broad, general agreement between unadjusted trends (no adjustment for streamflow effects) identified by the SEAKEN and SEAWAVE methods, including the regional seasonal Kendall test. Only about 10 percent of the paired comparisons between SEAKEN and SEAWAVE indicated a difference in the direction of trend, and none of these had differences significant at the 10-percent significance level. This consistency of results supports the validity and robustness of all three approaches as trend analysis tools. The SEAWAVE method is favored, however, because it has less restrictive data requirements, enabling analysis for more site/pesticide combinations, and can incorporate adjustment for streamflow (SEAWAVE-Q) with substantially fewer measurements than the flow-adjustment procedure used with SEAKEN. Analysis of flow-adjusted trends is preferable to analysis of non-adjusted trends for evaluating potential effects of changes in pesticide use or management practices because flow-adjusted trends account for the influence of flow-related variability. Analysis of flow-adjusted trends by SEAWAVE-Q showed that all of the pesticides assessed, except simazine and acetochlor, were dominated by varying degrees of concentration downtrends in one or both analysis periods. Atrazine, metolachlor, alachlor, cyanazine, EPTC, and metribuzin (all major corn herbicides), as well as prometon and chlorpyrifos, showed more prevalent concentration downtrends during 1996–2002 compared to 2000–2006. Diazinon had no clear trends during 1996–2002, but had predominantly downward trends during 2000–2006. Acetochlor trends were mixed during 1996–2002 and slightly upward during 2000–2006, but most of the trends were not statistically significant. Simazine concentrations trended upward at most sites during both 1996–2002 and 2000–2006. Comparison of concentration trends to agricultural-use trends indicated similarity in direction and magnitude for acetochlor, metolachlor, alachlor, cyanazine, EPTC, and metribuzin. Concentration downtrends for atrazine, chlorpyrifos, and diazinon were steeper than agricultural-use downtrends at some sites, indicating the possibility that agricultural management practices may have increasingly reduced transport to streams (particularly atrazine) or, for chlorpyrifos and diazinon, that nonagricultural uses declined substantially. Concentration uptrends for simazine generally were steeper than agricultural-use uptrends, indicating the possibility that nonagricultural uses of this herbicide increased during the study period.
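
    For readers unfamiliar with the seasonal Kendall test that SEAKEN builds on, a bare-bones version (ignoring ties, censoring, and the serial-correlation adjustment of the modified USGS implementation) can be sketched as follows.

```python
import numpy as np
from scipy.stats import norm

def seasonal_kendall(conc):
    """conc: 2-D array, rows = years, columns = seasons (NaN = missing value).
    Returns the summed Kendall S statistic and a two-sided p-value."""
    S, varS = 0.0, 0.0
    for season in conc.T:
        x = season[~np.isnan(season)]
        n = len(x)
        if n < 2:
            continue
        # Kendall's S for this season: concordant minus discordant year pairs.
        s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
        S += s
        varS += n * (n - 1) * (2 * n + 5) / 18.0   # no-ties variance
    z = (S - np.sign(S)) / np.sqrt(varS) if varS > 0 else 0.0  # continuity correction
    return S, 2 * norm.sf(abs(z))
```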

  1. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, Norwood B.; Walker, J.F.

    1992-01-01

    Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  2. [Economic impact of nosocomial bacteraemia. A comparison of three calculation methods].

    PubMed

    Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Castells, Xavier; Knobel, Hernando; Cots, Francesc

    2016-12-01

    The excess cost associated with nosocomial bacteraemia (NB) is used as a measurement of the impact of these infections. However, some authors have suggested that traditional methods overestimate the incremental cost due to the presence of various types of bias. The aim of this study was to compare three methods for assessing the incremental cost of NB in order to correct biases in previous analyses. Patients who experienced an episode of NB between 2005 and 2007 were compared with patients grouped within the same All Patient Refined-Diagnosis-Related Group (APR-DRG) without NB. The causative organisms were grouped according to Gram stain and whether the bacteraemia was caused by a single microorganism, multiple microorganisms, or a fungus. Three assessment methods were compared: stratification by disease; econometric multivariate adjustment using a generalised linear model (GLM); and propensity score matching (PSM), used to control for biases in the econometric model. The analysis included 640 admissions with NB and 28,459 without NB. The observed mean cost was €24,515 for admissions with NB and €4,851.6 for controls (without NB). Mean incremental cost was estimated at €14,735 in the stratified analysis. Gram-positive microorganisms had the lowest mean incremental cost, €10,051. In the GLM, the mean incremental cost was estimated as €20,922, and after adjusting with PSM, the mean incremental cost was €11,916. The three estimates showed important differences between groups of microorganisms. Using enhanced methodologies improves the adjustment in this type of study and increases the value of the results. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
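
    The econometric (GLM) approach described above can be sketched roughly as below, with invented column names (cost, nb, age, apr_drg, severity) and recycled predictions used to express the incremental cost; this is a generic illustration, not the authors' model specification.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def incremental_cost_glm(df):
    """Gamma GLM with log link for admission cost; the NB effect is read off
    by predicting every admission with and without bacteraemia and averaging
    the difference. Column names are illustrative assumptions."""
    model = smf.glm(
        "cost ~ nb + age + C(apr_drg) + C(severity)",
        data=df,
        family=sm.families.Gamma(link=sm.families.links.Log()),
    ).fit()
    with_nb = model.predict(df.assign(nb=1))
    without_nb = model.predict(df.assign(nb=0))
    return float(np.mean(with_nb - without_nb))   # mean incremental cost, euros
```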

  3. The perverse impact of external reference pricing (ERP): a comparison of orphan drugs affordability in 12 European countries. A call for policy change

    PubMed Central

    Young, K. E.; Soussi, I.; Toumi, M.

    2017-01-01

    ABSTRACT Objective: The study compared the relative cost differences of similar orphan drugs among high and low GDP countries in Europe: Bulgaria, France, Germany, Greece, Hungary, Italy, Norway, Poland, Romania, Spain, Sweden, UK. Methods: Annual treatment costs per patient were calculated. Relative costs were computed by dividing the costs by each economic parameter: nominal GDP per capita, GDP in PPP per capita, % GDP contributed by the government, government budget per inhabitant, % GDP spent on healthcare, % GDP spent on pharmaceuticals, and average annual salary. An international comparison of the relative costs was done using UK as the reference country and results were analysed descriptively. Results: 120 orphan drugs were included. The median annual costs of orphan drugs in all countries varied minimally (cost ratios: 0.87 to 1.08). When the costs were adjusted using GDP per capita, the EU-5 and Nordic countries maintained minimal difference in median cost. However, the lower GDP countries showed three to six times higher relative costs. The same pattern was evident when costs were adjusted using the other economic parameters. Conclusion: When the country’s ability to pay is taken into consideration, lower GDP countries pay relatively higher costs for similarly available orphan drugs in Europe. PMID:29081920

  4. Who Is the Big Spender? Price Index Effects in Comparisons of Educational Expenditures between Countries and Over Time

    ERIC Educational Resources Information Center

    Arneberg, Marie; Bowitz, Einar

    2006-01-01

    International comparisons of data on expenditure on education use purchasing power parities for currency conversion and adjustment for price differences between countries to allow for volume comparisons. The resulting indicators are commonly interpreted as differences between countries in input volumes to the education sector-teachers, materials,…

  5. Differentiating Challenge Reactivity from Psychomotor Activity in Studies of Children’s Psychophysiology: Considerations for Theory and Measurement

    PubMed Central

    Bush, Nicole R.; Alkon, Abbey; Obradović, Jelena; Stamperdahl, Juliet; Boyce, W. Thomas

    2014-01-01

    Current methods of assessing children’s physiologic “stress reactivity” may be confounded by psychomotor activity, biasing estimates of the relation between reactivity and health. We examine the joint and independent contributions of psychomotor activity and challenge reactivity during a protocol for children ages 5–6 (N=338). Measures of parasympathetic (RSA) and sympathetic (PEP) reactivity were calculated for social, cognitive, sensory, and emotional challenge tasks. Reactivity was calculated relative to both resting and a paired comparison task that accounted for psychomotor activity effects during each challenge. Results indicated that comparison tasks themselves elicited RSA and PEP responses, and reactivity adjusted for psychomotor activity was incongruent with reactivity calculated using rest. Findings demonstrate the importance of accounting for confounding psychomotor activity effects on physiologic reactivity. PMID:21524757

  6. Adjusted comparison of daratumumab monotherapy versus real-world historical control data from the Czech Republic in heavily pretreated and highly refractory multiple myeloma patients.

    PubMed

    Jelínek, Tomáš; Maisnar, Vladimír; Pour, Luděk; Špička, Ivan; Minařík, Jiří; Gregora, Evžen; Kessler, Petr; Sýkora, Michal; Fraňková, Hana; Adamová, Dagmar; Wróbel, Marek; Mikula, Peter; Jarkovský, Jiří; Diels, Joris; Gatopoulou, Xenia; Veselá, Šárka; Besson, Hervé; Brožová, Lucie; Ito, Tetsuro; Hájek, Roman

    2018-05-01

    We conducted an adjusted comparison of progression-free survival (PFS) and overall survival (OS) for daratumumab monotherapy versus standard of care, as observed in a real-world historical cohort of heavily pretreated multiple myeloma patients from the Czech Republic. Using longitudinal chart data from the Registry of Monoclonal Gammopathies (RMG) of the Czech Myeloma Group, patient-level data from the RMG were pooled with pivotal daratumumab monotherapy studies (GEN501 and SIRIUS; 16 mg/kg). From the RMG database, we identified 972 treatment lines in 463 patients previously treated with both a proteasome inhibitor and an immunomodulatory drug. Treatment initiation dates for RMG patients were between March 2006 and March 2015. The most frequently used treatment regimens were lenalidomide-based regimens (33.4%), chemotherapy (18.1%), bortezomib-based regimens (13.6%), thalidomide-based regimens (8.0%), and bortezomib plus thalidomide (5.3%). Few patients were treated with carfilzomib-based regimens (2.5%) and pomalidomide-based regimens (2.4%). Median observed PFS values for daratumumab and the RMG cohort were 4.0 and 5.8 months (unadjusted hazard ratio [HR], 1.14; 95% confidence interval [CI], 0.94-1.39), respectively, and unadjusted median OS values were 20.1 and 11.9 months (unadjusted HR, 0.61; 95% CI, 0.48-0.78), respectively. Statistical adjustments for differences in baseline characteristics were made using patient-level data. The adjusted HRs (95% CI) for PFS and OS for daratumumab versus the RMG cohort were 0.79 (0.56-1.12; p = .192) and 0.33 (0.21-0.52; p < .001), respectively. Adjusted comparisons between trial data and historical cohorts can provide useful insights to clinicians and reimbursement decision makers on relative treatment efficacies in the absence of head-to-head comparison studies for daratumumab monotherapy.

  7. Effect of Warfarin Treatment on Survival of Patients With Pulmonary Arterial Hypertension (PAH) in the Registry to Evaluate Early and Long-Term PAH Disease Management (REVEAL)

    PubMed Central

    Preston, Ioana R.; Roberts, Kari E.; Miller, Dave P.; Sen, Ginny P.; Selej, Mona; Benton, Wade W.; Hill, Nicholas S.

    2015-01-01

    Background— Long-term anticoagulation is recommended in idiopathic pulmonary arterial hypertension (IPAH). In contrast, limited data support anticoagulation in pulmonary arterial hypertension (PAH) associated with systemic sclerosis (SSc-PAH). We assessed the effect of warfarin anticoagulation on survival in IPAH and SSc-PAH patients enrolled in Registry to Evaluate Early and Long-term PAH Disease Management (REVEAL), a longitudinal registry of group I PAH. Methods and Results— Patients who initiated warfarin on study (n=187) were matched 1:1 with patients never on warfarin, by enrollment site, etiology, and diagnosis status. Descriptive analyses were conducted to compare warfarin users and nonusers by etiology. Survival analyses with and without risk adjustment were performed from the time of warfarin initiation or a corresponding quarterly update in matched pairs to avoid immortal time bias. Time-varying covariate models were used as sensitivity analyses. Mean warfarin treatment was 1 year; mean international normalized ratios were 1.9 (IPAH) and 2.0 (SSc-PAH). Two-thirds of patients initiating warfarin discontinued treatment before the last study assessment. There was no survival difference with warfarin in IPAH patients (adjusted hazard ratio, 1.37; P=0.21) or in SSc-PAH patients (adjusted hazard ratio, 1.60; P=0.15) in comparison with matched controls. However, SSc-PAH patients receiving warfarin within the previous year (hazard ratio, 1.57; P=0.031) or any time postbaseline (hazard ratio, 1.49; P=0.046) had increased mortality in comparison with warfarin-naïve patients. Conclusions— No significant survival advantage was observed in IPAH patients who started warfarin. In SSc-PAH patients, long-term warfarin was associated with poorer survival than in patients not receiving warfarin, even after adjusting for confounders. Clinical Trial Registration— URL: http://www.clinicaltrials.gov. Unique identifier: NCT00370214. PMID:26510696

  8. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  9. 48 CFR 17.207 - Exercise of options.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... period specified in the contract. (b) When the contract provides for economic price adjustment and the... consideration such factors as market stability and comparison of the time since award with the usual duration of... that is subject to an economic price adjustment provision; or (5) A specific price that is subject to...

  10. Psychosocial Adjustment of Adolescents and Young Adults with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Mueller, Christian E.; Prout, H. Thompson

    2009-01-01

    Issues with adolescents with intellectual disabilities have received little attention in the research literature. This study compared adolescents with and without intellectual disabilities on several indices of psychosocial adjustment. The participants were selected from a large longitudinal database and comparisons were made at three points in…

  11. Sibling comparison of differential parental treatment in adolescence: gender, self-esteem, and emotionality as mediators of the parenting-adjustment association.

    PubMed

    Feinberg, M E; Neiderhiser, J M; Simmens, S; Reiss, D; Hetherington, E M

    2000-01-01

    This study employs findings from social comparison research to investigate adolescents' comparisons with siblings with regard to parental treatment. The sibling comparison hypothesis was tested on a sample of 516 two-child families by examining whether gender, self-esteem, and emotionality (which have been found in previous research to moderate social comparison) also moderate sibling comparison as reflected by siblings' own evaluations of differential parental treatment. Results supported a moderating effect for self-esteem and emotionality but not gender. The sibling comparison process was further examined by using a structural equation model in which parenting toward each child was associated with the adjustment of that child and of the child's sibling. Evidence of the "sibling barricade" effect (that is, parenting toward one child being linked with opposite results for the child's sibling as for the target child) was found in a limited number of cases and interpreted as reflecting a sibling comparison process. For older siblings, emotionality and self-esteem moderated the sibling barricade effect, but in the opposite direction from that predicted. Results are discussed in terms of older siblings' increased sensitivity to parenting as well as the report of differential parenting reflecting the child's level of comfort and benign understanding of differential parenting, which buffers the child against environmental vicissitudes evoking sibling comparison processes.

  12. Health-Related Quality of Life in Children and Adolescents with Hereditary Bleeding Disorders and in Children and Adolescents with Stroke: Cross-Sectional Comparison to Siblings and Peers

    PubMed Central

    Neuner, Bruno; von Mackensen, Sylvia; Holzhauer, Susanne; Funk, Stephanie; Klamroth, Robert; Kurnik, Karin; Krümpel, Anne; Halimeh, Susan; Reinke, Sarah; Frühwald, Michael; Nowak-Göttl, Ulrike

    2016-01-01

    Objectives. To investigate self-reported health-related quality of life (HrQoL) in children and adolescents with chronic medical conditions compared with siblings/peers. Methods. Group 1 (6 treatment centers) consisted of 74 children/adolescents aged 8–16 years with hereditary bleeding disorders (HBD), 12 siblings, and 34 peers. Group 2 (one treatment center) consisted of 70 children/adolescents with stroke/transient ischemic attack, 14 siblings, and 72 peers. HrQoL was assessed with the “revised KINDer Lebensqualitätsfragebogen” (KINDL-R) questionnaire. Multivariate analyses within groups were done by one-way ANOVA and post hoc pairwise single comparisons by Student's t-tests. Adjusted pairwise comparisons were done by hierarchical linear regressions with individuals nested within treatment centers (group 1) and by linear regressions (group 2), respectively. Results. No differences were found in multivariate analyses of self-reported HrQoL in group 1, while in group 2 differences occurred in overall wellbeing and all subdimensions. These differences were due to differences between patients and peers. After adjusting for age, gender, number of siblings, and treatment center these differences persisted regarding self-worth (p = .0040) and friend-related wellbeing (p < .001). Conclusions. In children with HBD, HrQoL was comparable to siblings and peers. In children with stroke/TIA HrQoL was comparable to siblings while peers, independently of relevant confounder, showed better self-worth and friend-related wellbeing. PMID:27294108

  13. Methods and Issues for the Combined Use of Integral Experiments and Covariance Data: Results of a NEA International Collaborative Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmiotti, Giuseppe; Salvatores, Massimo

    2014-04-01

    The Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD) established a Subgroup (called “Subgroup 33”) in 2009 on “Methods and issues for the combined use of integral experiments and covariance data.” The first stage was devoted to producing the description of different adjustment methodologies and assessing their merits. A detailed document related to this first stage has been issued. Nine leading organizations (often with a long and recognized expertise in the field) have contributed: ANL, CEA, INL, IPPE, JAEA, JSI, NRG, IRSN and ORNL. In the second stage, a practical benchmark exercise was defined in order to test the reliability of the nuclear data adjustment methodology. A comparison of the results obtained by the participants and major lessons learned in the exercise are discussed in the present paper that summarizes individual contributions which often include several original developments not reported separately. The paper provides the analysis of the most important results of the adjustment of the main nuclear data of 11 major isotopes in a 33-group energy structure. This benchmark exercise was based on a set of 20 well defined integral parameters from 7 fast assembly experiments. The exercise showed that using a common shared set of integral experiments but different starting evaluated libraries and/or different covariance matrices, there is a good convergence of trends for adjustments. Moreover, a significant reduction of the original uncertainties is often observed. Using the a posteriori covariance data, there is a strong reduction of the uncertainties of integral parameters for reference reactor designs, mainly due to the new correlations in the a posteriori covariance matrix. Furthermore, criteria have been proposed and applied to verify the consistency of differential and integral data used in the adjustment. Finally, recommendations are given for an appropriate use of sensitivity analysis methods and indications for future work are provided.
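
    As background for the adjustment methodologies being compared, one common generalized linear least-squares (GLLS) update can be written as below; the notation here is generic and is not taken from the Subgroup 33 report.

```latex
% Generic GLLS adjustment: prior data \sigma_0 with covariance M, sensitivity
% matrix S, measured integral parameters E with covariance V_E, calculated
% values C(\sigma_0).
\sigma' = \sigma_0 + M S^{\top} \left( S M S^{\top} + V_E \right)^{-1} \left( E - C(\sigma_0) \right)
\qquad
M' = M - M S^{\top} \left( S M S^{\top} + V_E \right)^{-1} S M
```

    The term subtracted from M in the second expression is what produces the reduction of a posteriori uncertainties on integral parameters noted in the abstract.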

  14. 40 CFR 761.323 - Sample preparation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROHIBITIONS Self-Implementing Alternative Extraction and Chemical Analysis Procedures for Non-liquid PCB Remediation Waste Samples § 761.323 Sample preparation. (a) The comparison study requires analysis of a... of use in this chemical extraction and chemical analysis comparison study, a person may adjust PCB...

  15. PTESFinder: a computational method to identify post-transcriptional exon shuffling (PTES) events.

    PubMed

    Izuogu, Osagie G; Alhasan, Abd A; Alafghani, Hani M; Santibanez-Koref, Mauro; Elliott, David J; Elliot, David J; Jackson, Michael S

    2016-01-13

    Transcripts that have been subject to post-transcriptional exon shuffling (PTES) have an exon order inconsistent with the underlying genomic sequence. These have been identified in a wide variety of tissues and cell types from many eukaryotes, and are now known to be mostly circular, cytoplasmic, and non-coding. Although there is no uniformly ascribed function, several have been shown to be involved in gene regulation. Accurate identification of these transcripts can, however, be difficult due to artefacts from a wide variety of sources. Here, we present a computational method, PTESFinder, to identify these transcripts from high throughput RNAseq data. Uniquely, it systematically excludes potential artefacts emanating from pseudogenes, segmental duplications, and template switching, and outputs both PTES and canonical exon junction counts to facilitate comparative analyses. In comparison with four existing methods, PTESFinder achieves the highest specificity and comparable sensitivity at a variety of read depths. PTESFinder also identifies between 13% and 41.6% more structures, compared to publicly available methods recently used to identify human circular RNAs. With high sensitivity and specificity, user-adjustable filters that target known sources of false positives, and tailored output to facilitate comparison of transcript levels, PTESFinder will facilitate the discovery and analysis of these poorly understood transcripts.

  16. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
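
    The four steps above can be sketched on an illustrative person-period dataset (columns id, period, regime, deviated, event, L0, Lt are invented), using a weighted pooled logistic model as a discrete-time stand-in for the weighted Cox model; robust, cluster-adjusted standard errors and the full confounder set are omitted.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ipw_regime_comparison(pp):
    pp = pp.sort_values(["id", "period"]).copy()
    # Step 2: artificial censoring -- keep person-periods up to and including
    # the first deviation from the assigned regime.
    prior_dev = (pp.groupby("id")["deviated"]
                   .transform(lambda s: s.shift(fill_value=0).cumsum()))
    pp = pp[prior_dev == 0].copy()
    pp["uncensored"] = 1 - pp["deviated"]
    # Step 3: stabilized inverse-probability-of-censoring weights from pooled
    # logistic models (time-varying covariate Lt only in the denominator).
    denom = smf.logit("uncensored ~ L0 + Lt + period", data=pp).fit(disp=0)
    numer = smf.logit("uncensored ~ L0 + period", data=pp).fit(disp=0)
    pp["w"] = (numer.predict(pp) / denom.predict(pp)).groupby(pp["id"]).cumprod()
    # Step 4: weighted discrete-time hazard model among uncensored records;
    # exp(coef on 'regime') approximates the hazard ratio between regimes.
    analysis = pp[pp["uncensored"] == 1]
    return smf.glm("event ~ regime + L0 + period", data=analysis,
                   family=sm.families.Binomial(),
                   freq_weights=analysis["w"].to_numpy()).fit()
```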

  17. Matching-adjusted indirect treatment comparison of ribociclib and palbociclib in HR+, HER2- advanced breast cancer.

    PubMed

    Tremblay, Gabriel; Chandiwana, David; Dolph, Mike; Hearnden, Jaclyn; Forsythe, Anna; Monaco, Mauricio

    2018-01-01

    Ribociclib (RIBO) and palbociclib (PALBO), combined with letrozole (LET), have been evaluated as treatments for hormone receptor-positive, human epidermal growth factor receptor 2-negative advanced breast cancer in separate Phase III randomized controlled trials (RCTs), but not head-to-head. Population differences can lead to biased results by classical indirect treatment comparison (ITC). Matching-adjusted indirect comparison (MAIC) aims to correct these differences. We compared RIBO and PALBO in hormone receptor-positive/human epidermal growth factor receptor 2-negative advanced breast cancer using MAIC. Patient-level data were available for RIBO (MONALEESA-2), while only published summary data were available for PALBO (PALOMA-2). Weights were assigned to MONALEESA-2 patient data such that mean baseline characteristics matched those reported for PALOMA-2; the resulting matched cohort was used in comparisons. Limited by the results reported in PALOMA-2, progression-free survival (PFS) was the primary comparison. Cox regression models were used to calculate adjusted hazard ratios (HRs) for PFS, before indirect treatment comparison (ITC) was performed with 95% confidence intervals. An exploratory analysis was performed similarly for overall survival using earlier PALBO data (PALOMA-1). Grade 3/4 adverse events were also compared. Racial characteristics, prior chemotherapy setting, and the extent of metastasis were the most imbalanced baseline characteristics. The unadjusted PFS HRs were 0.556 (0.429, 0.721) for RIBO+LET versus LET alone and 0.580 (0.460, 0.720) for PALBO+LET versus LET alone. MAIC adjustment resulted in an HR of 0.524 (0.406, 0.676) for RIBO+LET versus LET. PFS ITC using unadjusted trial data produced an HR of 0.959 (0.681, 1.350) for RIBO versus PALBO, or 0.904 (0.644, 1.268) with MAIC. Unadjusted overall survival HR of RIBO versus PALBO was 0.918 (0.492, 1.710); while exploratory MAIC was 0.839 (0.440, 1.598). ITC of grade 3/4 adverse events yielded a risk ratio of 0.806 (0.604, 1.076). MAIC was performed for RIBO and PALBO in the absence of a head-to-head trial: though not statistically significant, the results favored RIBO.
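
    The core MAIC weighting step (a Signorovitch-style method of moments) can be sketched as below; this is a generic illustration rather than the authors' implementation, and the patient-level covariates and published aggregate means are assumed inputs.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, target_means):
    """X_ipd: (n, p) covariates from the trial with patient-level data;
    target_means: length-p aggregate means reported for the comparator trial.
    Returns weights and the effective sample size."""
    Xc = np.asarray(X_ipd, float) - np.asarray(target_means, float)
    # Weights w_i = exp(Xc_i @ a); minimizing sum_i exp(Xc_i @ a) forces the
    # weighted covariate means to equal target_means at the optimum.
    objective = lambda a: np.sum(np.exp(Xc @ a))
    gradient = lambda a: Xc.T @ np.exp(Xc @ a)
    a_hat = minimize(objective, np.zeros(Xc.shape[1]), jac=gradient,
                     method="BFGS").x
    w = np.exp(Xc @ a_hat)
    ess = w.sum() ** 2 / (w ** 2).sum()
    return w, ess
```

    The resulting weights would then feed weighted survival models before the indirect comparison, with the effective sample size indicating how much information the matching discards.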

  18. The Aristotle score: a complexity-adjusted method to evaluate surgical results.

    PubMed

    Lacour-Gayet, F; Clarke, D; Jacobs, J; Comas, J; Daebritz, S; Daenen, W; Gaynor, W; Hamilton, L; Jacobs, M; Maruszsewski, B; Pozzi, M; Spray, T; Stellin, G; Tchervenkov, C; Mavroudis And, C

    2004-06-01

    Quality control is difficult to achieve in Congenital Heart Surgery (CHS) because of the diversity of the procedures. It is particularly needed, considering the potential adverse outcomes associated with complex cases. The aim of this project was to develop a new method based on the complexity of the procedures. The Aristotle project, involving a panel of expert surgeons, started in 1999 and included 50 pediatric surgeons from 23 countries, representing the EACTS, STS, ECHSA and CHSS. The complexity was based on the procedures as defined by the STS/EACTS International Nomenclature and was undertaken in two steps: the first step was establishing the Basic Score, which adjusts only the complexity of the procedures. It is based on three factors: the potential for mortality, the potential for morbidity and the anticipated technical difficulty. A questionnaire was completed by the 50 centers. The second step was the development of the Comprehensive Aristotle Score, which further adjusts the complexity according to the specific patient characteristics. It includes two categories of complexity factors, the procedure dependent and independent factors. After considering the relationship between complexity and performance, the Aristotle Committee is proposing that: Performance = Complexity x Outcome. The Aristotle score, allows precise scoring of the complexity for 145 CHS procedures. One interesting notion coming out of this study is that complexity is a constant value for a given patient regardless of the center where he is operated. The Aristotle complexity score was further applied to 26 centers reporting to the EACTS congenital database. A new display of centers is presented based on the comparison of hospital survival to complexity and to our proposed definition of performance. A complexity-adjusted method named the Aristotle Score, based on the complexity of the surgical procedures has been developed by an international group of experts. The Aristotle score, electronically available, was introduced in the EACTS and STS databases. A validation process evaluating its predictive value is being developed.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tittiranonda, P.; Burastero, S.; Shih, M.

    This study presents an evaluation of the Apple Adjustable Keyboard based on subjective preference and observed joint angles during typing. Thirty-five keyboard users were asked to use the Apple Adjustable Keyboard for 7–14 days and rate the various characteristics of the keyboard. Our findings suggest that the most preferred opening angles range from 11–20°. The mean ulnar deviation on the Apple Adjustable Keyboard is 11°, compared to 16° on the standard keyboard. The mean extension was decreased from 24° to 16° when using the adjustable keyboard. When asked to subjectively rate the adjustable keyboard in comparison to the standard, the average subject felt that the Apple Adjustable Keyboard was more comfortable and easier to use than the standard flat keyboard.


  20. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors

    PubMed Central

    Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC

    2015-01-01

    The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu’s multiple comparisons with the best (MCB) – adapted from the Dunnett’s multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
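
    A conservative, Bonferroni-style stand-in for Hsu's MCB selection step can be sketched as follows, assuming independent predicted margins and standard errors for each hormonal modality (consistent with the independence noted in the abstract); the exact critical values used in the paper differ.

```python
import numpy as np
from scipy.stats import norm

def mcb_candidate_best(margins, ses, alpha=0.05):
    """Return a boolean mask of modalities that cannot be ruled out as best
    (largest margin), using simultaneous upper bounds for
    margin_i - max_{j != i} margin_j."""
    margins, ses = np.asarray(margins, float), np.asarray(ses, float)
    k = len(margins)
    z = norm.ppf(1 - alpha / (k - 1))       # conservative critical value
    keep = np.ones(k, dtype=bool)
    for i in range(k):
        others = np.delete(np.arange(k), i)
        j = others[np.argmax(margins[others])]   # best of the rest
        diff = margins[i] - margins[j]
        half = z * np.sqrt(ses[i] ** 2 + ses[j] ** 2)
        keep[i] = diff + half > 0           # upper confidence bound above zero
    return keep

# Hypothetical predicted margins (probability of successful procurement):
print(mcb_candidate_best([0.62, 0.71, 0.69, 0.55], [0.02, 0.02, 0.03, 0.04]))
# -> roughly [False  True  True  False]: only two modalities remain candidates.
```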

  1. Multi-College Bystander Intervention Evaluation for Violence Prevention.

    PubMed

    Coker, Ann L; Bush, Heather M; Fisher, Bonnie S; Swan, Suzanne C; Williams, Corrine M; Clear, Emily R; DeGue, Sarah

    2016-03-01

    The 2013 Campus Sexual Violence Elimination Act requires U.S. colleges to provide bystander-based training to reduce sexual violence, but little is known about the efficacy of such programs for preventing violent behavior. This study provides the first multiyear evaluation of a bystander intervention's campus-level impact on reducing interpersonal violence victimization and perpetration behavior on college campuses. First-year students attending three similarly sized public university campuses were randomly selected and invited to complete online surveys in the spring terms of 2010-2013. On one campus, the Green Dot bystander intervention was implemented in 2008 (Intervention, n=2,979) and two comparison campuses had no bystander programming at baseline (Comparison, n=4,132). Data analyses conducted in 2014-2015 compared violence rates by condition over the four survey periods. Multivariable logistic regression was used to estimate violence risk on Intervention relative to Comparison campuses, adjusting for demographic factors and time (2010-2013). Interpersonal violence victimization rates (measured in the past academic year) were 17% lower among students attending the Intervention (46.4%) relative to Comparison (55.7%) campuses (adjusted rate ratio=0.83; 95% CI=0.79, 0.88); a similar pattern held for interpersonal violence perpetration (25.5% in Intervention; 32.2% in Comparison; adjusted rate ratio=0.79; 95% CI=0.71, 0.86). Violence rates were lower on Intervention versus Comparison campuses for unwanted sexual victimization, sexual harassment, stalking, and psychological dating violence victimization and perpetration (p<0.01). Green Dot may be an efficacious intervention to reduce violence at the community level and meet Campus Sexual Violence Elimination Act bystander training requirements. Copyright © 2016 American Journal of Preventive Medicine. All rights reserved.

  2. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Isaacson, Jeffrey A.; Canizares, Claude R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.

  3. a Comparison Between Chemically Dependent Mothers and Drug-Free Mothers: Lifestyle during the Perinatal Period

    NASA Astrophysics Data System (ADS)

    Uskokovic, Lila Milica

    This study compared maternal lifestyle variables pertinent to the perinatal period in groups of chemically dependent mothers and drug-free mothers. Twenty-nine cocaine -abusing mothers were compared to 29 drug-free mothers carefully matched on age, race, education, and primipara versus multipara status. The drug history of each chemically dependent woman was explicitly documented. The chemically dependent group was subdivided into two groups, mothers who abused cocaine and those who abused cocaine with concomitant opiate use. Each of these two subgroups was compared to its respective matched drug-free control group. Finally, a comparison was made between the two drug subgroups. All subjects were interviewed within 48 hours after delivery using the following measures: State-Trait Anxiety Inventory (A-State), Center for Epidemiologic Studies - Depression Scale, The Self-Esteem Scale, Maternal Adjustment and Maternal Attitude Questionnaire, The Neonatal Perception Inventory, The Psychiatric Epidemiology Research Interview Life Events Scale, Maternal Social Support Index, and Short Marital Adjustment Test. A t-test analysis revealed significant differences (p <.05) between the total experimental group and its matched control group on state anxiety, depression, self-esteem, maternal adjustment and attitudes, and life events. An analysis of covariance indicated that life events was the only significant variable when the influence of all other variables was removed. Comparisons made between each drug subgroup and its respective matched control group showed similar results, except that those who abused opiates with cocaine did not differ from their controls on depression and maternal adjustment and attitudes. No significant differences were obtained in the drug subgroup comparisons. These results identify increased life events and specific negative affect states that clinical intervention programs should address to assure the best possible outcome for chemically dependent mothers and their infants.

  4. Cost-effectiveness of apixaban vs warfarin for secondary stroke prevention in atrial fibrillation

    PubMed Central

    Easton, J. Donald; Johnston, S. Claiborne; Kim, Anthony S.

    2012-01-01

    Objective: To compare the cost-effectiveness of apixaban vs warfarin for secondary stroke prevention in patients with atrial fibrillation (AF). Methods: Using standard methods, we created a Markov decision model based on the estimated cost of apixaban and data from the Apixaban for Reduction in Stroke and Other Thromboembolic Events in Atrial Fibrillation (ARISTOTLE) trial and other trials of warfarin therapy for AF. We quantified the cost and quality-adjusted life expectancy resulting from apixaban 5 mg twice daily compared with those from warfarin therapy targeted to an international normalized ratio of 2–3. Our base case population was a cohort of 70-year-old patients with no contraindication to anticoagulation and a history of stroke or TIA from nonvalvular AF. Results: Warfarin therapy resulted in a quality-adjusted life expectancy of 3.91 years at a cost of $378,500. In comparison, treatment with apixaban led to a quality-adjusted life expectancy of 4.19 years at a cost of $381,700. Therefore, apixaban provided a gain of 0.28 quality-adjusted life-years (QALYs) at an additional cost of $3,200, resulting in an incremental cost-effectiveness ratio of $11,400 per QALY. Our findings were robust in univariate sensitivity analyses varying model inputs across plausible ranges. In Monte Carlo analysis, apixaban was cost-effective in 62% of simulations using a threshold of $50,000 per QALY and 81% of simulations using a threshold of $100,000 per QALY. Conclusions: Apixaban appears to be cost-effective relative to warfarin for secondary stroke prevention in patients with AF, assuming that it is introduced at a price similar to that of dabigatran. PMID:22993279
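
    The incremental cost-effectiveness arithmetic reported above reduces to a one-line calculation (figures taken from the abstract; the underlying Markov model is not reproduced here).

```python
# Costs and quality-adjusted life expectancies as reported in the abstract.
cost_warfarin, qaly_warfarin = 378_500, 3.91
cost_apixaban, qaly_apixaban = 381_700, 4.19

icer = (cost_apixaban - cost_warfarin) / (qaly_apixaban - qaly_warfarin)
print(f"ICER: ${icer:,.0f} per QALY gained")   # about $11,400 per QALY
```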

  5. Public reporting of cost and quality information in orthopaedics.

    PubMed

    Marjoua, Youssra; Butler, Craig A; Bozic, Kevin J

    2012-04-01

    Public reporting of patient health outcomes offers the potential to incentivize quality improvement by fostering increased accountability among providers. Voluntary reporting of risk-adjusted outcomes in cardiac surgery, for example, is viewed as a "watershed event" in healthcare accountability. However, public reporting of outcomes, cost, and quality information in orthopaedic surgery remains limited by comparison, attributable in part to the lack of standard assessment methods and metrics, provider fear of inadequate adjustment of health outcomes for patient characteristics (risk adjustment), and historically weak market demand for this type of information. We review the origins of public reporting of outcomes in surgical care, identify existing initiatives specific to orthopaedics, outline the challenges and opportunities, and propose recommendations for public reporting of orthopaedic outcomes. We performed a comprehensive review of the literature through a bibliographic search of MEDLINE and Google Scholar databases from January 1990 to December 2010 to identify articles related to public reporting of surgical outcomes. Orthopaedic-specific quality reporting efforts include the early FDA adverse event reporting MedWatch program and the involvement of surgeons in the Physician Quality Reporting Initiative. Issues that require more work include balancing different stakeholder perspectives on quality reporting measures and methods, defining accountability and attribution for outcomes, and appropriately risk-adjusting outcomes. Given the current limitations associated with public reporting of quality and cost in orthopaedic surgery, valuable contributions can be made in developing specialty-specific evidence-based performance measures. We believe through leadership and involvement in policy formulation and development, orthopaedic surgeons are best equipped to accurately and comprehensively inform the quality reporting process and its application to improve the delivery and outcomes of orthopaedic care.

  6. The relationship between effectiveness and costs measured by a risk-adjusted case-mix system: multicentre study of Catalonian population data bases.

    PubMed

    Sicras-Mainar, Antoni; Navarro-Artieda, Ruth; Blanca-Tamayo, Milagrosa; Velasco-Velasco, Soledad; Escribano-Herranz, Esperanza; Llopart-López, Josep Ramon; Violan-Fors, Concepción; Vilaseca-Llobet, Josep Maria; Sánchez-Fontcuberta, Encarna; Benavent-Areu, Jaume; Flor-Serra, Ferran; Aguado-Jodar, Alba; Rodríguez-López, Daniel; Prados-Torres, Alejandra; Estelrich-Bennasar, Jose

    2009-06-25

    The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors associated with inadequate correlations and the opinion of health professionals on these instruments. We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population databases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics], and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost/patient/year will differentiate fixed/semi-fixed (visits) costs of the variables for each patient attended/year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. The effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: percentile 50). The correlation between the efficiency (relative-weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Multiple regression analysis (procedure: enter), ANCOVA (method: Bonferroni's adjustment) and multilevel analysis will be carried out to correct models. The level of statistical significance will be p < 0.05.
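
    The planned efficiency index (observed resource use divided by the use expected under indirect standardization) can be sketched as below, with illustrative column names (acg, centre); the actual protocol derives expected values from the ACG case-mix system and distinguishes several resource types.

```python
def efficiency_index(df, resource_col, group_col="acg", unit_col="centre"):
    """df: pandas DataFrame with one row per patient. The expected value for
    each patient is the whole-population mean for their morbidity group;
    a centre index near 1.0 indicates average efficiency."""
    standard_rate = df.groupby(group_col)[resource_col].mean()
    df = df.assign(expected=df[group_col].map(standard_rate))
    per_unit = df.groupby(unit_col)[[resource_col, "expected"]].sum()
    return per_unit[resource_col] / per_unit["expected"]
```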

  7. Quantifying the indirect impacts of climate on agriculture: an inter-method comparison

    NASA Astrophysics Data System (ADS)

    Calvin, Kate; Fisher-Vanden, Karen

    2017-11-01

    Climate change and increases in CO2 concentration affect the productivity of land, with implications for land use, land cover, and agricultural production. Much of the literature on the effect of climate on agriculture has focused on linking projections of changes in climate to process-based or statistical crop models. However, the changes in productivity have broader economic implications that cannot be quantified in crop models alone. How important are these socio-economic feedbacks to a comprehensive assessment of the impacts of climate change on agriculture? In this paper, we attempt to measure the importance of these interaction effects through an inter-method comparison between process models, statistical models, and integrated assessment model (IAMs). We find the impacts on crop yields vary widely between these three modeling approaches. Yield impacts generated by the IAMs are 20%-40% higher than the yield impacts generated by process-based or statistical crop models, with indirect climate effects adjusting yields by between -12% and +15% (e.g. input substitution and crop switching). The remaining effects are due to technological change.

  8. Fast approach for toner saving

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Kurilin, Ilya V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sangho; Choi, Donchul

    2011-01-01

    Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological impact. Existing toner-saving approaches have two main drawbacks: the appearance of the hardcopy in toner-saving mode is worse than in normal mode, and processing the whole rendered page bitmap carries a significant computational cost. We propose adding small holes of various shapes and sizes at random places inside each character bitmap stored in the font cache. This random perforation scheme is built into the processing pipeline of the RIP for the standard printer languages PostScript and PCL. Processing text characters only, and moreover processing each character only once per font and size, is extremely fast. The approach does not degrade halftoned bitmaps or business graphics and provides toner savings of up to 15-20% for typical office documents. The rate of toner saving is adjustable. The altered characters are almost indistinguishable from solid black text because the small holes are placed randomly inside the character regions. The suggested method automatically skips small fonts to preserve their quality. Text processed by the proposed method remains readable, and OCR programs process scanned hardcopies successfully.

  9. Modeling of dispersed-drug delivery from planar polymeric systems: optimizing analytical solutions.

    PubMed

    Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A; Cabrera, María I; Grau, Ricardo J A

    2010-11-15

    Analytical solutions for controlled dispersed-drug release from planar non-erodible polymeric matrices, based on the Refined Integral Method, are presented. A new adjusting equation is used for the dissolved-drug concentration profile in the depletion zone. The resulting set of equations matches the available exact solution. To illustrate the usefulness of the model, comparisons with experimental profiles reported in the literature are presented. The results show that the model can be employed over a broad range of conditions. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Investigation of factors affecting the synthesis of nano-cadmium sulfide by pulsed laser ablation in liquid environment

    NASA Astrophysics Data System (ADS)

    Darwish, Ayman M.; Eisa, Wael H.; Shabaka, Ali A.; Talaat, Mohamed H.

    2016-01-01

    Pulsed laser ablation in a liquid medium is a promising technique, compared with other synthetic methods, for producing materials in nanoscale form. The laser parameters (e.g., wavelength, pulse width, fluence, and repetition frequency) and the liquid medium (e.g., aqueous/nonaqueous liquid or solution with surfactant) were tightly controlled during and after the ablation process. By optimizing these parameters, the particle size and distribution of the materials can be adjusted. UV-vis absorption spectra and the weight changes of the targets were used for characterization and comparison of the products.

  11. A Naturalistic Examination of Social Comparisons and Disordered Eating Thoughts, Urges, and Behaviors in College Women

    PubMed Central

    Fitzsimmons-Craft, Ellen E.; Ciao, Anna C.; Accurso, Erin C.

    2015-01-01

    Objective We examined the effects of body, eating, and exercise social comparisons on prospective disordered eating thoughts and urges (i.e., restriction thoughts, exercise thoughts, vomiting thoughts, binge eating urges) and behaviors (i.e., restriction attempts, exercising for weight/shape reasons, vomiting, binge eating) among college women using ecological momentary assessment (EMA). Method Participants were 232 college women who completed a two-week EMA protocol, in which they used their personal electronic devices to answer questions three times per day. Generalized estimating equation models were used to assess body, eating, and exercise comparisons as predictors of disordered eating thoughts, urges, and behaviors at the next report, adjusting for body dissatisfaction, negative affect, and the disordered eating thought/urge/behavior at the prior report, as well as body mass index. Results Body comparisons prospectively predicted more intense levels of certain disordered eating thoughts (i.e., thoughts about restriction and exercise). Eating comparisons prospectively predicted an increased likelihood of subsequent engagement in all disordered eating behaviors examined except vomiting. Exercise comparisons prospectively predicted less intense thoughts about exercise and an increased likelihood of subsequent vomiting. Discussion Social comparisons are associated with later disordered eating thoughts and behaviors in the natural environment and may need to be specifically targeted in eating disorder prevention and intervention efforts. Targeting body comparisons may be helpful in terms of reducing disordered eating thoughts, but eating and exercise comparisons are also important and may need to be addressed in order to decrease engagement in actual disordered eating behaviors. PMID:26610301
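
    A minimal sketch of the kind of logistic GEE described above, using statsmodels with an exchangeable working correlation to cluster repeated EMA reports within participants; the file name and all column names (participant_id, binge_urge_next, body_comparison, and so on) are hypothetical placeholders, not the study's variables.

      # Sketch: logistic GEE predicting a next-report behavior from the prior
      # report's body comparison, adjusting for concurrent covariates.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      ema = pd.read_csv("ema_reports.csv")   # hypothetical long-format EMA file

      model = smf.gee(
          "binge_urge_next ~ body_comparison + body_dissatisfaction + "
          "negative_affect + binge_urge_prior + bmi",
          groups="participant_id",
          data=ema,
          family=sm.families.Binomial(),
          cov_struct=sm.cov_struct.Exchangeable(),
      )
      result = model.fit()
      print(result.summary())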

  12. Adjusting survival time estimates to account for treatment switching in randomized controlled trials--an economic evaluation context: methods, limitations, and recommendations.

    PubMed

    Latimer, Nicholas R; Abrams, Keith R; Lambert, Paul C; Crowther, Michael J; Wailoo, Allan J; Morden, James P; Akehurst, Ron L; Campbell, Michael J

    2014-04-01

    Treatment switching commonly occurs in clinical trials of novel interventions in the advanced or metastatic cancer setting. However, methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTAs). We present recommendations on the use of methods to adjust survival estimates in the presence of treatment switching in the context of economic evaluations. We provide background on the treatment switching issue and summarize methods used to adjust for it in HTAs. We discuss the assumptions and limitations associated with adjustment methods and draw on results of a simulation study to make recommendations on their use. We demonstrate that methods used to adjust for treatment switching have important limitations and often produce bias in realistic scenarios. We present an analysis framework that aims to increase the probability that suitable adjustment methods can be identified on a case-by-case basis. We recommend that the characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods. Key assumptions include the "no unmeasured confounders" assumption associated with the inverse probability of censoring weights (IPCW) method and the "common treatment effect" assumption associated with the rank preserving structural failure time model (RPSFTM). The limitations associated with switching adjustment methods such as the RPSFTM and IPCW mean that they are appropriate in different scenarios. In some scenarios, both methods may be prone to bias; "2-stage" methods should be considered, and intention-to-treat analyses may sometimes produce the least bias. The data requirements of adjustment methods also have important implications for clinical trialists.
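
    For orientation, the sketch below shows a heavily simplified inverse probability of censoring weights (IPCW) analysis: control-arm switchers are censored at the switch, the probability of remaining unswitched is modelled from a baseline covariate, and a weighted Cox model compares the arms. Real IPCW analyses use time-varying weights; the file and column names here are hypothetical, and `time` is assumed to already hold time to switch for switchers.

      # Highly simplified IPCW sketch (baseline covariate only; not the authors' code).
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from lifelines import CoxPHFitter

      df = pd.read_csv("trial.csv")  # columns: time, event, arm, switched, prognosis_score

      # 1. Censor control-arm switchers at the time of switch.
      df.loc[(df.arm == "control") & (df.switched == 1), "event"] = 0

      # 2. Model the probability of NOT switching from baseline prognosis,
      #    and build stabilized weights for the control arm.
      ctrl = df[df.arm == "control"]
      p_stay = LogisticRegression().fit(ctrl[["prognosis_score"]], 1 - ctrl["switched"])
      df["ipcw"] = 1.0
      mask = df.arm == "control"
      df.loc[mask, "ipcw"] = (1 - df.loc[mask, "switched"].mean()) / \
          p_stay.predict_proba(df.loc[mask, ["prognosis_score"]])[:, 1]

      # 3. Weighted Cox model comparing arms (robust variance recommended with weights).
      cph = CoxPHFitter()
      cph.fit(df[["time", "event", "ipcw"]]
                .assign(treated=(df.arm == "experimental").astype(int)),
              duration_col="time", event_col="event",
              weights_col="ipcw", robust=True)
      cph.print_summary()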

  13. Peer- and Self-Rated Correlates of a Teacher-Rated Typology of Child Adjustment

    ERIC Educational Resources Information Center

    Lindstrom, William A., Jr.; Lease, A. Michele; Kamphaus, Randy W.

    2007-01-01

    External correlates of a teacher-rated typology of child adjustment developed using the Behavior Assessment System for Children were examined. Participants included 377 elementary school children recruited from 26 classrooms in the southeastern United States. Multivariate analyses of variance and planned comparisons were used to determine whether…

  14. 78 FR 7750 - Summer Food Service Program; 2013 Reimbursement Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-04

    .... Reimbursement is based solely on a ``meals times rates'' calculation, without comparison to actual or budgeted... public of the annual adjustments to the reimbursement rates for meals served in the Summer Food Service... adjustments to the reimbursement rates for meals served in SFSP. In accordance with sections 12(f) and 13, 42...

  15. Somali Women's Reflections on the Adjustment of Their Children in the United States

    ERIC Educational Resources Information Center

    Nilsson, Johanna E.; Barazanji, Danah M.; Heintzelman, Ashley; Siddiqi, Mubeena; Shilla, Yasmine

    2012-01-01

    Somali women were interviewed regarding their children's adjustment. Qualitative analysis revealed 5 themes: cultural comparisons, concerns about children, parents' loss of disciplinary authority, available support, and the future. The women discussed changes in their children, such as loss of respect and threats to use law enforcement against…

  16. Early Mortality Experience in a Large Military Cohort and a Comparison of Mortality Data Sources

    DTIC Science & Technology

    2010-05-24

    were enrolled from 2001 to 2003, represented all armed service branches, and included active-duty, Reserve, and National Guard members. Crude death rates, as well as age- and sex-adjusted overall and age-adjusted, category-specific death rates were calculated and compared for participants (n = 77,047

  17. Use of life course work-family profiles to predict mortality risk among US women.

    PubMed

    Sabbath, Erika L; Guevara, Ivan Mejía; Glymour, M Maria; Berkman, Lisa F

    2015-04-01

    We examined relationships between US women's exposure to midlife work-family demands and subsequent mortality risk. We used data from women born 1935 to 1956 in the Health and Retirement Study to calculate employment, marital, and parenthood statuses for each age between 16 and 50 years. We used sequence analysis to identify 7 prototypical work-family trajectories. We calculated age-standardized mortality rates and hazard ratios (HRs) for mortality associated with work-family sequences, with adjustment for covariates and potentially explanatory later-life factors. Married women staying home with children briefly before reentering the workforce had the lowest mortality rates. In comparison, after adjustment for age, race/ethnicity, and education, HRs for mortality were 2.14 (95% confidence interval [CI] = 1.58, 2.90) among single nonworking mothers, 1.48 (95% CI = 1.06, 1.98) among single working mothers, and 1.36 (95% CI = 1.02, 1.80) among married nonworking mothers. Adjustment for later-life behavioral and economic factors partially attenuated risks. Sequence analysis is a promising exposure assessment tool for life course research. This method permitted identification of certain lifetime work-family profiles associated with mortality risk before age 75 years.
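
    A minimal sketch of the age-standardization step (weighting age-specific death rates by a common standard age distribution so profiles with different age structures can be compared). The profiles, weights and counts below are invented for illustration and are not the Health and Retirement Study data.

      # Direct age standardization of mortality rates for two work-family profiles.
      import numpy as np

      std_weights = np.array([0.35, 0.40, 0.25])           # standard age distribution
      deaths  = {"married, brief break": np.array([4, 12, 30]),
                 "single non-working":   np.array([3, 10, 24])}
      persons = {"married, brief break": np.array([5000, 4000, 2500]),
                 "single non-working":   np.array([1500, 1200,  800])}

      for profile in deaths:
          age_specific = deaths[profile] / persons[profile]     # rate per age band
          asr = np.sum(age_specific * std_weights) * 1000       # per 1,000 women
          print(f"{profile}: age-standardized mortality = {asr:.1f} per 1,000")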

  18. Recognition of emotion from body language among patients with unipolar depression

    PubMed Central

    Loi, Felice; Vaidya, Jatin G.; Paradiso, Sergio

    2013-01-01

    Major depression may be associated with abnormal perception of emotions and impairment in social adaptation. Emotion recognition from body language and its possible implications to social adjustment have not been examined in patients with depression. Three groups of participants (51 with depression; 68 with history of depression in remission; and 69 never depressed healthy volunteers) were compared on static and dynamic tasks of emotion recognition from body language. Psychosocial adjustment was assessed using the Social Adjustment Scale Self-Report (SAS-SR). Participants with current depression showed reduced recognition accuracy for happy stimuli across tasks relative to remission and comparison participants. Participants with depression tended to show poorer psychosocial adaptation relative to remission and comparison groups. Correlations between perception accuracy of happiness and scores on the SAS-SR were largely not significant. These results indicate that depression is associated with reduced ability to appraise positive stimuli of emotional body language but emotion recognition performance is not tied to social adjustment. These alterations do not appear to be present in participants in remission suggesting state-like qualities. PMID:23608159

  19. The challenging use and interpretation of circulating biomarkers of exposure to persistent organic pollutants in environmental health: Comparison of lipid adjustment approaches in a case study related to endometriosis.

    PubMed

    Cano-Sancho, German; Labrune, Léa; Ploteau, Stéphane; Marchand, Philippe; Le Bizec, Bruno; Antignac, Jean-Philippe

    2018-06-01

    The gold-standard matrix for measuring internal levels of persistent organic pollutants (POPs) is adipose tissue; however, in epidemiological studies serum is preferred because of its lower cost and greater accessibility. The interpretation of serum biomarkers is tightly linked to an understanding of the underlying causal structure relating POPs, serum lipids and the disease. Given the practical benefits of serum biomarkers, we aimed to examine whether statistical modelling could improve their use and interpretation in the study of endometriosis. Hence, we conducted a systematic comparison of statistical approaches commonly used to lipid-adjust circulating biomarkers of POPs, based on existing methods, using data from a pilot case-control study focused on severe deep infiltrating endometriosis. The odds ratios (ORs) obtained from unconditional regression for the models based on serum biomarkers were further compared with those obtained from adipose tissue. The results of this exploratory study did not support the use of blood biomarkers as proxy estimates of POPs in adipose tissue in risk models for endometriosis with the available statistical approaches for lipid correction. The statistical approaches commonly used to lipid-adjust circulating POPs do not fully represent the underlying biological complexity between POPs, lipids and disease (especially for compounds directly or indirectly affecting, or affected by, lipid metabolism). Hence, further investigations are warranted to improve the use and interpretation of blood biomarkers under complex scenarios of lipid dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Head-to-Head Comparison and Evaluation of 92 Plasma Protein Biomarkers for Early Detection of Colorectal Cancer in a True Screening Setting.

    PubMed

    Chen, Hongda; Zucknick, Manuela; Werner, Simone; Knebel, Phillip; Brenner, Hermann

    2015-07-15

    Novel noninvasive blood-based screening tests are strongly desirable for early detection of colorectal cancer. We aimed to conduct a head-to-head comparison of the diagnostic performance of 92 plasma-based tumor-associated protein biomarkers for early detection of colorectal cancer in a true screening setting. Among all 35 available carriers of colorectal cancer and a representative sample of 54 men and women free of colorectal neoplasms, recruited from a cohort of screening colonoscopy participants in 2005-2012 (N = 5,516), the plasma levels of 92 protein biomarkers were measured. ROC analyses were conducted to evaluate the diagnostic performance. A multimarker algorithm was developed through the Lasso logistic regression model and validated in an independent validation set. The .632+ bootstrap method was used to adjust for the potential overestimation of diagnostic performance. Seventeen protein markers were identified to show statistically significant differences in plasma levels between colorectal cancer cases and controls. The adjusted area under the ROC curves (AUC) of these 17 individual markers ranged from 0.55 to 0.70. An eight-marker classifier was constructed that increased the adjusted AUC to 0.77 [95% confidence interval (CI), 0.59-0.91]. When validating this algorithm in an independent validation set, the AUC was 0.76 (95% CI, 0.65-0.85), and sensitivities at cutoff levels yielding 80% and 90% specificities were 65% (95% CI, 41-80%) and 44% (95% CI, 24-72%), respectively. The identified profile of protein biomarkers could contribute to the development of a powerful multimarker blood-based test for early detection of colorectal cancer. ©2015 American Association for Cancer Research.
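
    As a rough sketch of the multimarker modelling step, the Python code below fits an L1-penalised ("Lasso") logistic regression, reads the selected markers off the non-zero coefficients and reports a held-out ROC AUC. It uses synthetic data and a simple train/validation split in place of the study's .632+ bootstrap correction.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegressionCV
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in for 92 plasma markers measured on cases and controls
      X, y = make_classification(n_samples=89, n_features=92, n_informative=8,
                                 random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.3, stratify=y, random_state=0)

      lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5,
                                   scoring="roc_auc", random_state=0)
      lasso.fit(X_train, y_train)

      panel = np.flatnonzero(lasso.coef_.ravel())        # selected marker indices
      auc = roc_auc_score(y_test, lasso.predict_proba(X_test)[:, 1])
      print(f"{len(panel)}-marker panel, validation AUC = {auc:.2f}")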

  1. Dental and skeletal changes in patients with mandibular retrognathism following treatment with Herbst and pre-adjusted fixed appliance

    PubMed Central

    Vigorito, Fabio de Abreu; Dominguez, Gladys Cristina; Aidar, Luís Antônio de Arruda

    2014-01-01

    Objective To assess the dentoskeletal changes observed in treatment of Class II, division 1 malocclusion patients with mandibular retrognathism. Treatment was performed with the Herbst orthopedic appliance during 13 months (phase I) and a pre-adjusted orthodontic fixed appliance (phase II). Methods Lateral cephalograms of 17 adolescents were taken at phase I onset (T1) and completion (T2), after the first thirteen months of phase II (T3), and at phase II completion (T4). Differences among the cephalometric variables were statistically analyzed (analysis of variance with Bonferroni-adjusted multiple comparisons). Results From T1 to T4, 42% of overall maxillary growth was observed between T1 and T2 (P < 0.01), 40.3% between T2 and T3 (P < 0.05) and 17.7% between T3 and T4 (n.s.). As for overall mandibular movement, 48.2% was observed between T1 and T2 (P < 0.001) and 51.8% between T2 and T4 (P < 0.01), of which 15.1% was observed between T2 and T3 (n.s.) and 36.7% between T3 and T4 (P < 0.01). Class II molar relationship and overjet were properly corrected. The occlusal plane, which rotated clockwise between T1 and T2, returned to its initial position between T2 and T3 and remained stable until T4. The mandibular plane inclination did not change at any time during treatment. Conclusion Mandibular growth was significantly greater than maxillary growth, allowing sagittal maxillomandibular adjustment. The dentoalveolar changes (upper molar) that overcorrected the malocclusion in phase I partially recurred in phase II, but did not hinder correction of the malocclusion. Facial type was preserved. PMID:24713559

  2. The relationship between fruit and vegetable intake with gastroesophageal reflux disease in Iranian adults

    PubMed Central

    Keshteli, Ammar Hassanzadeh; Shaabani, Pouria; Tabibian, Seyed-Reza; Saneei, Parvane; Esmaillzadeh, Ahmad; Adibi, Peyman

    2017-01-01

    Background: Findings from studies that investigated the relationship between fruit and vegetable intake with gastroesophageal reflux disease (GERD) were inconsistent. We aimed to assess the relationship between fruit and vegetable consumption and GERD among a large group of Iranian adults. Materials and Methods: In this cross-sectional study on 3979 adults, a validated food frequency questionnaire was used to assess usual dietary intakes including fruits and vegetables. Participants who experienced heartburn sometimes or more often during the past 3 months were considered to have GERD. Results: The prevalence of GERD among the study population was 23.9%. After adjustment for potential confounding factors, those with the highest consumption of fruits had 25% lower risk for GERD, in comparison to those with the lowest intake (odds ratio [OR] = 0.75, 95% confidence interval [CI]: 0.59–0.97). Vegetable intake was not significantly related to the risk of GERD in crude or multivariable-adjusted models. However, participants with the highest intake of fruits and vegetables had 33% lower risk of GERD (OR = 0.67, 95% CI: 0.51–0.88), after adjustment for confounders. Women with the highest fruit and vegetable intake had 36% lower risk for GERD (OR = 0.64, 95% CI: 0.45–0.91). Overweight/obese participants in the last tertile of fruit consumption had 42% lower risk for GERD, in comparison to the first category (OR = 0.58, 95% CI: 0.42–0.83). Furthermore, participants with body mass index higher than 25 kg/m2 and higher intake of fruits and vegetables had 53% lower risk for GERD (OR = 0.47, 95% CI: 0.32-0.69). Conclusion: We found inverse associations between fruit intake as well as fruit and vegetable intake and risk of GERD among Iranian adults. PMID:29259636

  3. Evaluation of an in-vehicle monitoring system (IVMS) to reduce risky driving behaviors in commercial drivers: Comparison of in-cab warning lights and supervisory coaching with videos of driving behavior

    PubMed Central

    Bell, Jennifer L.; Taylor, Matthew A.; Chen, Guang-Xiang; Kirk, Rachel D.; Leatherman, Erin R.

    2017-01-01

    Problem Roadway incidents are the leading cause of work-related death in the United States. Methods The objective of this research was to evaluate whether two types of feedback from a commercially available in-vehicle monitoring system (IVMS) would reduce the incidence of risky driving behaviors in drivers from two companies. IVMS were installed in 315 vehicles representing the industries of local truck transportation and oil and gas support operations, and data were collected over an approximate two-year period in intervention and control groups. In one period, intervention group drivers were given feedback from in-cab warning lights from an IVMS that indicated occurrence of harsh vehicle maneuvers. In another period, intervention group drivers viewed video recordings of their risky driving behaviors with supervisors, and were coached by supervisors on safe driving practices. Results Risky driving behaviors declined significantly more during the period with coaching plus instant feedback with lights in comparison to the period with lights-only feedback (ORadj = 0.61 95% CI 0.43–0.86; Holm-adjusted p = 0.035) and the control group (ORadj = 0.52 95% CI 0.33–0.82; Holm-adjusted p = 0.032). Lights-only feedback was not found to be significantly different than the control group's decline from baseline (ORadj = 0.86 95% CI 0.51–1.43; Holm-adjusted p > 0.05). Conclusions The largest decline in the rate of risky driving behaviors occurred when feedback included both supervisory coaching and lights. Practical applications Supervisory coaching is an effective form of feedback to improve driving habits in the workplace. The potential advantages and limitations of this IVMS-based intervention program are discussed. PMID:28160807
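
    The Holm-adjusted p-values quoted above can be reproduced in principle with a step-down correction such as the one sketched below; the raw p-values are illustrative, not the study's.

      # Holm step-down adjustment for a small family of pairwise comparisons.
      from statsmodels.stats.multitest import multipletests

      raw_p = [0.012, 0.011, 0.210]   # e.g. coaching+lights vs lights, vs control, lights vs control
      reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
      for p, ph, r in zip(raw_p, p_holm, reject):
          print(f"raw p = {p:.3f} -> Holm-adjusted p = {ph:.3f}, reject H0: {r}")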

  4. Pregnancy outcome after TNF-α inhibitor therapy during the first trimester: a prospective multicentre cohort study

    PubMed Central

    Weber-Schoendorfer, Corinna; Oppermann, Marc; Wacker, Evelin; Bernard, Nathalie; Beghin, Delphine; Cuppers-Maarschalkerweerd, Benedikte; Richardson, Jonathan L; Rothuizen, Laura E; Pistelli, Alessandra; Malm, Heli; Eleftheriou, Georgios; Kennedy, Debra; Kadioglu Duman, Mine; Meister, Reinhard; Schaefer, Christof

    2015-01-01

    Aims TNF-α inhibitors are considered relatively safe in pregnancy but experience is still limited. The aim of this study was to evaluate the risk of major birth defects, spontaneous abortion, preterm birth and reduced birth weight after first trimester exposure to TNF-α inhibitors. Methods Pregnancy outcomes of women on adalimumab, infliximab, etanercept, certolizumab pegol or golimumab were evaluated in a prospective observational cohort study and compared with outcomes of a non-exposed random sample. The samples were drawn from pregnancies identified by institutes collaborating in the European Network of Teratology Information Services. Results In total, 495 exposed and 1532 comparison pregnancies were contributed from nine countries. The risk of major birth defects was increased in the exposed (5.0%) compared with the non-exposed group (1.5%; adjusted odds ratio (ORadj) 2.2, 95% CI 1.0, 4.8). The risk of preterm birth was increased (17.6%; ORadj 1.69, 95% CI 1.1, 2.5), but not the risk of spontaneous abortion (16.2%; adjusted hazard ratio [HRadj] 1.06, 95% CI 0.7, 1.7). Birth weights adjusted for gestational age and sex were significantly lower in the exposed group compared to the non-exposed cohort (P = 0.02). As a diseased comparison group was not possible to ascertain, the influence of disease and treatment on birth weight and preterm birth could not be differentiated. Conclusions TNF-α inhibitors may carry a risk of adverse pregnancy outcome of moderate clinical relevance. Considering the impact of insufficiently controlled autoimmune disease on the mother and the unborn child, TNF-α inhibitors may nevertheless be a treatment option in women with severe disease refractory to established immunomodulatory drugs. PMID:25808588

  5. Psychosocial adjustment of children affected by HIV/AIDS in Ghana.

    PubMed

    Doku, Paul Narh

    2010-06-01

    The study was conducted to assess the psychosocial adjustment of children affected by HIV/ AIDS in the eastern part of Ghana. Four groups of children (children who lost their parents to AIDS, children who lost their parents through other causes, children living with HIV infected, alive parents and the comparison children who were from the same community but did not have HIV/AIDS-related illness or death in their families) were interviewed on depressive symptoms, prosocial behaviours, hyperactivity, conduct and peer problems using the Strengths and Difficulties Questionnaire (SDQ). Orphans in general and children living with HIV-infected parents consistently demonstrated poorer psychosocial adjustment than comparison children in the same community. The findings underscore the urgency and importance of culturally and developmentally appropriate intervention efforts targeting psychosocial problems among children affected by AIDS and call for more exploration of risk and resilience factors, both individual and contextual, affecting the wellbeing of these children.

  6. Simultaneous observation solutions for NASA-MOTS and SPEOPT station positions on the North American datum

    NASA Technical Reports Server (NTRS)

    Reece, J. S.; Marsh, J.

    1973-01-01

    Simultaneous observations of the GEOS-I and II flashing lamps by the NASA MOTS and SPEOPT cameras on the North American Datum (NAD) were analyzed using geometrical techniques to provide an adjustment of the station coordinates. Two separate adjustments were obtained. An optical data only solution was computed in which the solution scale was provided by the Rosman-Mojave distance obtained from a dynamic station solution. In a second adjustment, scaling was provided by processing simultaneous laser ranging data from Greenbelt and Wallops Island in a combined optical-laser solution. Comparisons of these results with previous GSFC dynamical solutions indicate an rms agreement on the order of 4 meters or better in each coordinate. Comparison with a detailed gravimetric geoid of North America yields agreement of 3 meters or better for mainland U.S. stations and 7 and 3 meters, respectively, for Bermuda and Puerto Rico.

  7. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T² statistic. Using this statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
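
    A small numerical illustration (not taken from the paper) of the mechanism described: with a single covariate, the variance of the adjusted mean difference is MSE * (1/n1 + 1/n2 + (xbar1 - xbar2)^2 / SSx_within), so a covariate mean difference between groups inflates the standard error.

      import numpy as np

      def se_adjusted_diff(mse, n1, n2, xbar1, xbar2, ssx_within):
          # Standard error of the ANCOVA-adjusted mean difference (one covariate)
          return np.sqrt(mse * (1 / n1 + 1 / n2 + (xbar1 - xbar2) ** 2 / ssx_within))

      mse, n1, n2, ssx = 4.0, 20, 20, 150.0
      print("covariate means equal:  SE =",
            round(se_adjusted_diff(mse, n1, n2, 10, 10, ssx), 3))
      print("covariate means differ: SE =",
            round(se_adjusted_diff(mse, n1, n2, 10, 14, ssx), 3))
      # The second SE is larger at the same MSE; if the covariate does not reduce
      # MSE enough, it can exceed the unadjusted SE = sqrt(MSE * (1/n1 + 1/n2)).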

  8. A new method for assessing the risk of accident associated with darkness.

    PubMed

    Johansson, Osten; Wanvik, Per Ole; Elvik, Rune

    2009-07-01

    This paper presents a new method for assessing the risk of accidents associated with darkness. The method estimates the risk of accident associated with darkness in terms of an odds ratio, which is defined as follows: [(number of accidents in darkness in a given hour of the day)/(number of accidents in daylight in the same hour of the day)]/[(number of accidents in a given comparison hour when the case hour is dark)/(number of accidents in a given comparison hour when the case hour is in daylight)]. This estimate of the risk of accident associated with darkness does not require data on exposure, but relies on the count of accidents in the same pair of hours throughout the year. One of the hours is dark during part of the year, but has daylight the rest of the year. The comparison hour, which has daylight the whole year, is used to control for seasonal variations. The aim of relying on the same pair of hours throughout the year is to minimise the influence of potentially confounding factors. Estimates of the risk of injury accidents associated with darkness are developed on the basis of accident data for Norway, Sweden and the Netherlands. It is found that the risk of an injury accident increases by nearly 30% in darkness in urban areas, by nearly 50% in rural areas, and by about 40% for urban and rural areas combined (adjusted estimate).
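
    The odds ratio defined above can be computed directly from four accident counts; the counts in this sketch are made up for illustration and are not the paper's data.

      # Direct computation of the darkness odds ratio from four accident counts.
      case_dark       = 520   # accidents in the case hour when it is dark
      case_daylight   = 400   # accidents in the same case hour when it is in daylight
      comp_when_dark  = 300   # accidents in the comparison hour, on days the case hour is dark
      comp_when_light = 310   # accidents in the comparison hour, on days the case hour is light

      odds_ratio = (case_dark / case_daylight) / (comp_when_dark / comp_when_light)
      print(f"risk of accident associated with darkness: OR = {odds_ratio:.2f}")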

  9. A Comparison of Functional Outcome in Patients Sustaining Major Trauma: A Multicentre, Prospective, International Study

    PubMed Central

    Rainer, Timothy H.; Yeung, Hiu Hung; Gabbe, Belinda J.; Yuen, Kai Y.; Ho, Hiu F.; Kam, Chak W.; Chang, Annice; Poon, Wai S.; Cameron, Peter A.; Graham, Colin A.

    2014-01-01

    Objectives To compare 6 month and 12 month health status and functional outcomes between regional major trauma registries in Hong Kong and Victoria, Australia. Summary Background Data Multiple centres contributing to trauma registries in Hong Kong and the Victorian State Trauma Registry (VSTR). Methods Multicentre, prospective cohort study. Major trauma patients aged ≥18 years were included. The main outcome measures were Extended Glasgow Outcome Scale (GOSE) functional outcome and risk-adjusted Short-Form 12 (SF-12) health status at 6 and 12 months after injury. Results 261 cases from Hong Kong and 1955 cases from VSTR were included. Adjusting for age, sex, ISS, comorbid status, injury mechanism and GCS group, the odds of a better functional outcome for Hong Kong patients relative to Victorian patients were 0.88 (95% CI: 0.66, 1.17) at six months and 0.83 (95% CI: 0.60, 1.12) at 12 months. Adjusting for age, gender, ISS, GCS, injury mechanism and comorbid status, Hong Kong patients demonstrated comparable mean PCS-12 scores at 6 months (adjusted mean difference: 1.2, 95% CI: −1.2, 3.6) and 12 months (adjusted mean difference: −0.4, 95% CI: −3.2, 2.4) compared to Victorian patients. Adjusting for age, gender, ISS, GCS, injury mechanism and comorbid status, there was no difference in the MCS-12 scores of Hong Kong patients compared to Victorian patients at 6 months (adjusted mean difference: 0.4, 95% CI: −2.1, 2.8) or 12 months (adjusted mean difference: 1.8, 95% CI: −0.8, 4.5). Conclusion The unadjusted analyses showed better outcomes for Victorian cases compared to Hong Kong, but after adjusting for key confounders there was no difference in 6-month or 12-month functional outcomes between the jurisdictions. PMID:25157522

  10. Antidepressant use during pregnancy and risk of autism spectrum disorder and attention deficit hyperactivity disorder: systematic review of observational studies and methodological considerations.

    PubMed

    Morales, Daniel R; Slattery, Jim; Evans, Stephen; Kurz, Xavier

    2018-01-15

    Antidepressant exposure during pregnancy has been associated with an increased risk of autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) in several observational studies. We performed a systematic review of these studies to highlight the effect that important methodological limitations have on such analyses and to consider approaches to the conduct, reporting and interpretation of future studies. A review of MEDLINE and EMBASE identified case-control, cohort and sibling studies assessing the risk of ASD and ADHD with antidepressant use during pregnancy. Approaches to confounding adjustment were described. Crude and adjusted effect estimates for comparisons between antidepressant exposure during pregnancy vs. all unexposed women were first meta-analysed using a generic inverse variance method of analysis, followed by effect estimates for alternative pre-selected comparison groups. A total of 15 studies measuring ASD as an outcome (involving 3,585,686 children and 40,585 cases) and seven studies measuring ADHD as an outcome (involving 2,765,723 patients and 52,313 cases) were identified. Variation in confounding adjustment existed between studies. Updated effect estimates for the association between maternal antidepressant exposure during pregnancy vs. all unexposed women remained statistically significant for ASD (adjusted random-effects risk ratio [RaRR] 1.53, 95% confidence interval [CI] 1.31-1.78). Similar significant associations were observed using pre-pregnancy maternal antidepressant exposure (RaRR 1.48, 95% CI 1.29-1.71) and paternal antidepressant exposure during pregnancy (1.29, 95% CI 1.08-1.53), but analyses restricted to using women with a history of affective disorder (1.18, 95% CI 0.91-1.52) and sibling studies (0.96, 95% CI 0.65-1.42) were not statistically significant. Corresponding associations for risk of ADHD with exposure were: RaRR 1.38, 95% CI 1.13-1.69 (during pregnancy), RaRR 1.38, 95% CI 1.14-1.69 (during pre-pregnancy), RaRR 1.71, 95% CI 1.31-2.23 (paternal exposure), RaRR 0.98, 95% CI 0.77-1.24 (women with a history of affective disorder) and RaRR 0.88, 95% CI 0.70-1.11 (sibling studies). Existing observational studies measuring the risk of ASD and ADHD with antidepressant exposure are heterogeneous in their design. Classical comparisons between exposed and unexposed women during pregnancy are at high risk of residual confounding. Alternative comparisons and sibling designs may aid the interpretation of causality and their utility requires further evaluation, including understanding potential limitations of undertaking meta-analyses with such data.
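
    A minimal sketch of the generic inverse-variance pooling step on the log risk-ratio scale, including a DerSimonian-Laird random-effects adjustment; the study effect estimates below are invented, not those of the review.

      import numpy as np

      rr = np.array([1.45, 1.60, 1.30, 1.75])          # study risk ratios (made up)
      ci_upper = np.array([1.90, 2.30, 1.80, 2.60])    # their upper 95% CI limits

      y = np.log(rr)                                   # log risk ratios
      se = (np.log(ci_upper) - y) / 1.96               # back out standard errors
      w = 1 / se**2                                    # fixed-effect inverse-variance weights

      y_fixed = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - y_fixed) ** 2)               # heterogeneity statistic
      C = np.sum(w) - np.sum(w**2) / np.sum(w)
      tau2 = max(0.0, (Q - (len(y) - 1)) / C)          # DerSimonian-Laird tau^2

      w_re = 1 / (se**2 + tau2)                        # random-effects weights
      y_re = np.sum(w_re * y) / np.sum(w_re)
      se_re = 1 / np.sqrt(np.sum(w_re))
      print(f"pooled RR = {np.exp(y_re):.2f} "
            f"(95% CI {np.exp(y_re - 1.96*se_re):.2f}-{np.exp(y_re + 1.96*se_re):.2f})")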

  11. Comparison of methods for the analysis of relatively simple mediation models.

    PubMed

    Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W

    2017-09-01

    Statistical mediation analysis is an often used method in trials, to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least square (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
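
    A minimal product-of-coefficients sketch of the OLS approach on synthetic data: the a path (mediator on exposure), the b path and direct effect (outcome on exposure and mediator), the indirect effect a*b, and the proportion mediated. A confounder-adjusted version would simply add the confounders to each formula.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 546
      x = rng.binomial(1, 0.5, n)                     # e.g. intervention indicator
      m = 0.5 * x + rng.normal(size=n)                # continuous mediator
      y = 0.3 * x + 0.6 * m + rng.normal(size=n)      # continuous outcome
      d = pd.DataFrame({"x": x, "m": m, "y": y})

      a = smf.ols("m ~ x", d).fit().params["x"]           # a path
      fit_y = smf.ols("y ~ x + m", d).fit()
      b, direct = fit_y.params["m"], fit_y.params["x"]    # b path, direct effect c'
      total = smf.ols("y ~ x", d).fit().params["x"]       # total effect c

      print(f"indirect = {a*b:.3f}, direct = {direct:.3f}, "
            f"total = {total:.3f}, proportion mediated = {a*b/total:.2f}")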

  12. Quantitative comparisons of three automated methods for estimating intracranial volume: A study of 270 longitudinal magnetic resonance images.

    PubMed

    Shang, Xiaoyan; Carlson, Michelle C; Tang, Xiaoying

    2018-04-30

    Total intracranial volume (TIV) is often used as a measure of brain size to correct for individual variability in magnetic resonance imaging (MRI) based morphometric studies. An adjustment of TIV can greatly increase the statistical power of brain morphometry methods. As such, an accurate and precise TIV estimation is of great importance in MRI studies. In this paper, we compared three automated TIV estimation methods (multi-atlas likelihood fusion (MALF), Statistical Parametric Mapping 8 (SPM8) and FreeSurfer (FS)) using longitudinal T1-weighted MR images in a cohort of 70 older participants at elevated sociodemographic risk for Alzheimer's disease. Statistical group comparisons in terms of four different metrics were performed. Furthermore, sex, education level, and intervention status were investigated separately for their impacts on the TIV estimation performance of each method. According to our experimental results, MALF was the least susceptible to atrophy, while SPM8 and FS suffered a loss in precision. In group-wise analysis, MALF was the least sensitive method to group variation, whereas SPM8 was particularly sensitive to sex and FS was unstable with respect to education level. In terms of effectiveness, both MALF and SPM8 delivered a user-friendly performance, while FS was relatively computationally intensive. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Principal stratification in causal inference.

    PubMed

    Frangakis, Constantine E; Rubin, Donald B

    2002-03-01

    Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.

  14. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and placebo non-responders are then re-randomized in stage 2. The efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters (the allocation proportion to placebo in stage 1 of SPCD and the weight of the stage 1 data in the overall efficacy test statistic) at an interim analysis.

  15. Role of the combination of FA and T2* parameters as a new diagnostic method in therapeutic evaluation of parkinson's disease.

    PubMed

    Fang, Yuan; Zheng, Tao; Liu, Lanxiang; Gao, Dawei; Shi, Qinglei; Dong, Yanchao; Du, Dan

    2017-11-17

    Simple diffusion delivery (SDD) has attained good effects with only tiny amounts of drugs. Fractional anisotropy (FA) and relaxation time T2*, which indicate the integrity of fiber tracts and iron concentration within brain tissue, were used to evaluate the therapeutic effect of SDD. To evaluate the therapeutic effect of SDD in the Parkinson's disease (PD) rat model with FA and T2* parameters. Prospective case-control animal study. Thirty-two male Sprague Dawley rats (eight normal, eight PD, eight SDD, and eight subcutaneous injection rats). Single-shot spin echo echo-planar imaging and fast low-angle shot T2WI sequences at 3.0T. Parameters of FA and T2* on the treated side of the substantia nigra were measured to evaluate the therapeutic effect of SDD in a PD rat model. The effects of time on FA and T2* values were analyzed by repeated measurement tests. A one-way analysis of variance was conducted, followed by individual comparisons of the mean FA and T2* values at different timepoints. The FA values on the treated side of the substantia nigra in the SDD treatment group and subcutaneous injection treatment group were significantly higher at week 1 and lower at week 6 than those of the PD control group (SDD vs. PD, week 1, adjusted P = 0.012; subcutaneous vs. PD, week 1, adjusted P < 0.001; SDD vs. PD, week 6, adjusted P = 0.004; subcutaneous vs. PD, week 6, adjusted P = 0.024). The T2* parameter in the SDD treatment group and subcutaneous injection treatment group was significantly higher than that in the PD control group at week 6 (SDD vs. PD, adjusted P = 0.032; subcutaneous vs. PD, adjusted P < 0.001). The combination of FA and T2* parameters can potentially serve as a new effective evaluation method of the therapeutic effect of SDD. 1 Technical Efficacy: Stage 4 J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  16. Attrition of U.S. military enlistees with waivers for hearing deficiency, 1995-2004.

    PubMed

    Niebuhr, David W; Li, Yuanzhang; Powers, Timothy E; Krauss, Margot R; Chandler, David; Helfer, Thomas

    2007-01-01

    Hearing deficiency is the condition for which accession medical waivers are most commonly granted. The retention of individuals entering service with a waiver for hearing deficiency has not been previously studied. Military retention among new enlistees with a medical waiver for hearing deficiency was compared with that among a matched comparison group of fully qualified enlistees. Comparisons according to branch of service over the first 3 years of service were performed with the Kaplan-Meier product-limit method and proportional-hazards model. Army subjects had significantly lower retention rates than did their fully qualified counterparts. In the adjusted model, Army and Navy enlistees with a waiver for hearing deficiency had a significantly lower likelihood of retention than did their matched counterparts. The increased likelihood of medical attrition in enlistees with a waiver for hearing loss provides no evidence to make the hearing accession standard more lenient and validates a selective hearing loss waiver policy.

  17. Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches

    PubMed Central

    Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing

    2015-01-01

    Background Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk) is a large and well-known crowdsourcing service. It has developed into the leading platform for crowdsourcing recruitment. Objective To explore the application of online crowdsourcing for health informatics research, specifically the testing of medical pictographs. Methods A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was first tested through an in-person survey in a hospital setting (n=150) and then using online MTurk participants (n=150). We analyzed these survey results to determine their comparability. Results Both the demographics and the pictograph recognition rates of online participants were different from those of the in-person participants. In the multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for potential demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct). Conclusions The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but it cannot replace them. PMID:26678085

  18. Assessing group differences in biodiversity by simultaneously testing a user-defined selection of diversity indices.

    PubMed

    Pallmann, Philip; Schaarschmidt, Frank; Hothorn, Ludwig A; Fischer, Christiane; Nacke, Heiko; Priesnitz, Kai U; Schork, Nicholas J

    2012-11-01

    Comparing diversities between groups is a task biologists are frequently faced with, for example in ecological field trials or when dealing with metagenomics data. However, researchers often waver about which measure of diversity to choose as there is a multitude of approaches available. As Jost (2008, Molecular Ecology, 17, 4015) has pointed out, widely used measures such as the Shannon or Simpson index have undesirable properties which make them hard to compare and interpret. Many of the problems associated with the use of these 'raw' indices can be corrected by transforming them into 'true' diversity measures. We introduce a technique that allows the comparison of two or more groups of observations and simultaneously tests a user-defined selection of a number of 'true' diversity measures. This procedure yields multiplicity-adjusted P-values according to the method of Westfall and Young (1993, Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment, 49, 941), which ensures that the rate of false positives (type I error) does not rise when the number of groups and/or diversity indices is extended. Software is available in the R package 'simboot'. © 2012 Blackwell Publishing Ltd.
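
    The authors' implementation is the R package 'simboot'; purely as an illustrative analogue, the Python sketch below applies a single-step Westfall-Young-style max-T permutation adjustment to three diversity indices computed on simulated count data. The indices, group sizes and number of permutations are arbitrary choices for the example.

      import numpy as np
      from scipy.stats import ttest_ind

      def shannon(c):
          p = c / c.sum(); p = p[p > 0]
          return -(p * np.log(p)).sum()

      def simpson(c):
          p = c / c.sum()
          return 1 - (p ** 2).sum()

      def richness(c):
          return float((c > 0).sum())

      indices = {"Shannon": shannon, "Simpson": simpson, "Richness": richness}

      rng = np.random.default_rng(0)
      counts = rng.poisson(2, size=(20, 30))        # 20 communities x 30 species
      groups = np.array([0] * 10 + [1] * 10)        # two groups of 10 communities

      def t_stats(counts, groups):
          out = []
          for f in indices.values():
              v = np.array([f(c) for c in counts])
              out.append(abs(ttest_ind(v[groups == 0], v[groups == 1]).statistic))
          return np.array(out)

      obs = t_stats(counts, groups)
      max_null = np.array([t_stats(counts, rng.permutation(groups)).max()
                           for _ in range(2000)])        # permutation null of the max statistic
      adj_p = [(max_null >= t).mean() for t in obs]
      for name, p in zip(indices, adj_p):
          print(f"{name}: single-step max-T adjusted p = {p:.3f}")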

  19. One-dimensional Lagrangian implicit hydrodynamic algorithm for Inertial Confinement Fusion applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramis, Rafael, E-mail: rafael.ramis@upm.es

    A new one-dimensional hydrodynamic algorithm, specifically developed for Inertial Confinement Fusion (ICF) applications, is presented. The scheme uses a fully conservative Lagrangian formulation in planar, cylindrical, and spherically symmetric geometries, and supports arbitrary equations of state with separate ion and electron components. Fluid equations are discretized on a staggered grid and stabilized by means of an artificial viscosity formulation. The space-discretized equations are advanced in time using an implicit algorithm. The method includes several numerical parameters that can be adjusted locally. In regions with low Courant–Friedrichs–Lewy (CFL) number, where stability is not an issue, they can be adjusted to optimize the accuracy. In typical problems, the truncation error can be reduced by a factor of 2 to 10 in comparison with conventional explicit algorithms. On the other hand, in regions with high CFL numbers, the parameters can be set to guarantee unconditional stability. The method can be integrated into complex ICF codes. This is demonstrated through several examples covering a wide range of situations: from thermonuclear ignition physics, where alpha particles are managed as an additional species, to low intensity laser–matter interaction, where liquid–vapor phase transitions occur.

  20. Identifying Military and Combat-Specific Risk Factors for Child Adjustment: Comparing High and Low Risk Military Families and Civilian Families

    DTIC Science & Technology

    2016-08-01

    Award Number: W81XWH-12-2-0034. Final report; dates covered: 15 May 2012 - 31 Aug 2016. The study examines military families in which a parent has experienced deployment and has a child between the ages of 3 and 7, together with comparison groups of civilian single-parent families (N = 200) and civilian dual-parent families...

  1. A Comparison of Variable Selection Criteria for Multiple Linear Regression: A Second Simulation Study

    DTIC Science & Technology

    1993-03-01

    statistical mathematics, began in the late 1800’s when Sir Francis Galton first attempted to use practical mathematical techniques to investigate the... randomly collected (sampled) many pairs of parent/child height measurements (data), Galton observed that for a given parent-height average, the...ty only Maximum Adjusted R2 will be discussed. However, Maximum Adjusted R2 and Minimum MSE test exactly the same thing. Adjusted R2 is related to R2

  2. 77 FR 13328 - Federal Acquisition Regulation; Information Collection; Davis Bacon Act-Price Adjustment (Actual...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-06

    ...; Information Collection; Davis Bacon Act--Price Adjustment (Actual Method) AGENCY: Department of Defense (DOD... approved information collection requirement concerning the Davis-Bacon Act price adjustment (actual method... Information Collection 9000- 0154, Davis Bacon Act--Price Adjustment (Actual Method), by any of the following...

  3. Comparison between available serologic tests for detecting antibodies against Anaplasma phagocytophilum and Borrelia burgdorferi in horses in Canada.

    PubMed

    Schvartz, Gili; Epp, Tasha; Burgess, Hilary J; Chilton, Neil B; Lohmann, Katharina L

    2015-07-01

    To investigate the agreement between available serologic tests for the detection of antibodies against Anaplasma phagocytophilum and Borrelia burgdorferi, 50 serum samples from horses of unknown clinical status and at low risk for infection were tested. In addition to a point-of-care enzyme-linked immunosorbent assay (pocELISA), the evaluated tests included 2 indirect fluorescent antibody tests (IFATs) for antibodies against A. phagocytophilum and an IFAT, an ELISA confirmed with Western blot, and the Lyme multiplex assay for antibodies against B. burgdorferi. For each pairwise comparison between serologic tests, the difference in the proportion of seropositive results as well as kappa and the prevalence-adjusted, bias-adjusted kappa were calculated. The proportion of seropositive results differed significantly in each pairwise comparison of tests for detection of antibodies against A. phagocytophilum, and between the pocELISA and IFAT as well as between the pocELISA and Lyme multiplex assay for detection of antibodies against B. burgdorferi. Agreement based on kappa varied from poor to fair while agreement was improved when evaluating prevalence-adjusted, bias-adjusted kappa. Lack of agreement may be explained by differences in methodology between the evaluated tests, cross-reactivity or false-positive and false-negative tests. In addition to the limitations of serologic test interpretation in the absence of clinical disease, these data suggest that screening of horses for exposure to tick-borne diseases in nonendemic areas may not be warranted. © 2015 The Author(s).
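
    For reference, kappa and the prevalence-adjusted, bias-adjusted kappa (PABAK = 2*po - 1, where po is the observed agreement) can be computed from a 2x2 cross-classification as in the sketch below; the table entries are made up and are not the study's results.

      import numpy as np

      #                 test B +   test B -
      table = np.array([[3,         4],      # test A +
                        [6,        37]])     # test A -

      n = table.sum()
      po = np.trace(table) / n                       # observed agreement
      row = table.sum(axis=1) / n
      col = table.sum(axis=0) / n
      pe = (row * col).sum()                         # chance agreement
      kappa = (po - pe) / (1 - pe)
      pabak = 2 * po - 1                             # prevalence- and bias-adjusted kappa
      print(f"observed agreement = {po:.2f}, kappa = {kappa:.2f}, PABAK = {pabak:.2f}")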

  4. TREATMENT SWITCHING: STATISTICAL AND DECISION-MAKING CHALLENGES AND APPROACHES.

    PubMed

    Latimer, Nicholas R; Henshall, Chris; Siebert, Uwe; Bell, Helen

    2016-01-01

    Treatment switching refers to the situation in a randomized controlled trial where patients switch from their randomly assigned treatment onto an alternative. Often, switching is from the control group onto the experimental treatment. In this instance, a standard intention-to-treat analysis does not identify the true comparative effectiveness of the treatments under investigation. We aim to describe statistical methods for adjusting for treatment switching in a comprehensible way for nonstatisticians, and to summarize views on these methods expressed by stakeholders at the 2014 Adelaide International Workshop on Treatment Switching in Clinical Trials. We describe three statistical methods used to adjust for treatment switching: marginal structural models, two-stage adjustment, and rank preserving structural failure time models. We draw upon discussion heard at the Adelaide International Workshop to explore the views of stakeholders on the acceptability of these methods. Stakeholders noted that adjustment methods are based on assumptions, the validity of which may often be questionable. There was disagreement on the acceptability of adjustment methods, but consensus that when these are used, they should be justified rigorously. The utility of adjustment methods depends upon the decision being made and the processes used by the decision-maker. Treatment switching makes estimating the true comparative effect of a new treatment challenging. However, many decision-makers have reservations with adjustment methods. These, and how they affect the utility of adjustment methods, require further exploration. Further technical work is required to develop adjustment methods to meet real world needs, to enhance their acceptability to decision-makers.

  5. Accounting for multimorbidity can affect the estimation of the Burden of Disease: a comparison of approaches.

    PubMed

    Hilderink, Henk B M; Plasmans, Marjanne H D; Snijders, Bianca E P; Boshuizen, Hendriek C; Poos, M J J C René; van Gool, Coen H

    2016-01-01

    Various Burden of Disease (BoD) studies do not account for multimorbidity in their BoD estimates. Ignoring multimorbidity can lead to inaccuracies in BoD estimations, particularly in ageing populations that include large proportions of persons with two or more health conditions. The objective of this study is to improve BoD estimates for the Netherlands by accounting for multimorbidity. For this purpose, we analyzed different methods for 1) estimating the prevalence of multimorbidity and 2) deriving Disability Weights (DWs) for multimorbidity by using existing data on single health conditions. We included 25 health conditions from the Dutch Burden of Disease study that have a high rate of prevalence and that make a large contribution to the total number of Years Lived with a Disability (YLD). First, we analyzed four methods for estimating the prevalence of multimorbid conditions (i.e. independent, independent age- and sex-specific, dependent, and dependent sex- and age-specific). Secondly, we analyzed three methods for calculating the Combined Disability Weights (CDWs) associated with multimorbid conditions (i.e. additive, multiplicative and maximum limit). A combination of these two approaches was used to recalculate the number of YLDs, which is a component of the Disability-Adjusted Life Years (DALY). This study shows that the YLD estimates for the 25 health conditions are 5 % lower when Combined Disability Weights are calculated with the multiplicative method, and 14 % lower with the maximum limit method, than when they are calculated with the additive method. Adjusting for sex- and age-specific dependent co-occurrence of health conditions reduces the number of YLDs by 10 % for the multiplicative method and by 26 % for the maximum limit method. The adjustment is higher for health conditions with a higher prevalence in old age, like heart failure (up to 43 %) and coronary heart diseases (up to 33 %). Health conditions with a high prevalence in middle age, such as anxiety disorders, have a moderate adjustment (up to 13 %). We conclude that BoD calculations that do not account for multimorbidity can result in an overestimation of the actual BoD. This may affect public health policy strategies that focus on single health conditions if the underlying cost-effectiveness analysis overestimates the intended effects. The methodology used in this study could be further refined to provide greater insight into co-occurrence and the possible consequences of multimorbid conditions in terms of disability for particular combinations of health conditions.
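
    The three combination rules can be written down in a few lines; the sketch below uses illustrative disability weights, not values from the Dutch Burden of Disease study.

      # Combining single-condition disability weights (DWs) into a multimorbidity DW.
      import numpy as np

      def combine_dw(dws, method):
          dws = np.asarray(dws, dtype=float)
          if method == "additive":
              return min(1.0, dws.sum())              # sum, capped at full disability
          if method == "multiplicative":
              return 1.0 - np.prod(1.0 - dws)         # independent "health losses"
          if method == "maximum":
              return dws.max()                        # worst condition dominates
          raise ValueError(method)

      dws = [0.30, 0.20, 0.10]   # illustrative DWs for three co-occurring conditions
      for m in ("additive", "multiplicative", "maximum"):
          print(f"{m:>14}: combined DW = {combine_dw(dws, m):.3f}")
      # additive 0.600 > multiplicative 0.496 > maximum 0.300, which is why the
      # multiplicative and maximum-limit YLD totals fall below the additive totals.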

  6. Treatment of premenstrual syndrome: Appraising the effectiveness of cognitive behavioral therapy in addition to calcium supplement plus vitamin D.

    PubMed

    Karimi, Zahra; Dehkordi, Mahnaz Aliakbari; Alipour, Ahmad; Mohtashami, Tayebeh

    2018-03-01

    Premenstrual syndrome (PMS) consists of recurrent physical and psychological symptoms that occur during the luteal phase of the menstrual cycle and cease when menstruation starts. This study used a pre-test/post-test design with a control group and an experimental group. The statistical population comprised 40 females, chosen by multistage cluster sampling. The participants were then divided into four groups to undergo treatment with calcium supplement plus vitamin D together with cognitive behavioral therapy (CBT), and were screened with the Premenstrual Syndrome Screening Test (PSST). Pre-test and post-test scores on the PSST, the General Health Questionnaire (GHQ-28), and Bell's Adjustment Inventory (BAI) were used as assessment tools, with significance set at p < .05. Comparing pre-test and post-test PMS scores, the overall score of each individual in the experimental group improved, and a significant effect of calcium supplement plus vitamin D combined with CBT was observed relative to the control group at post-test. Multivariate analysis of covariance (MANCOVA) of the pre-test and post-test scores revealed that this method of treatment was beneficial for PMS, adjustment, and general health. © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  7. Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients

    PubMed

    Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil

    2018-03-27

    Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with the available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma models provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (Log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The Log normal model provides the best fit and is a good substitute for Cox regression. Creative Commons Attribution License
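
    The AIC comparison itself is simple to reproduce. The sketch below is illustrative only (the study used STATA 13 and R 3.1.3): it assumes the Python lifelines package, with its AFT fitters and their log_likelihood_ and params_ attributes, and uses an invented toy dataset.

      # Illustrative AIC comparison of parametric survival models (lifelines).
      import pandas as pd
      from lifelines import WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter

      df = pd.DataFrame({                       # invented survival data
          "time":  [5, 12, 7, 20, 3, 15, 9, 25, 11, 18, 6, 14],
          "event": [1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0],
          "grade": [2, 1, 3, 1, 3, 2, 2, 1, 3, 2, 3, 1],
      })

      models = {"Weibull": WeibullAFTFitter(),
                "Log normal": LogNormalAFTFitter(),
                "Log logistic": LogLogisticAFTFitter()}

      for name, m in models.items():
          m.fit(df, duration_col="time", event_col="event")
          k = len(m.params_)                    # number of estimated parameters
          aic = 2 * k - 2 * m.log_likelihood_   # lower AIC indicates better fit
          print(name, round(aic, 1))
      # The Cox model is fitted on a partial likelihood, so its "partial AIC"
      # is not directly comparable with the full-likelihood AICs above.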

  8. Comparison of Scores on Two Visual-Motor Tests for Children Referred for Learning or Adjustment Difficulties.

    ERIC Educational Resources Information Center

    DeMers, Stephen T.; And Others

    1981-01-01

    This study compared the performance of school-aged children referred for learning or adjustment difficulties on Beery's Developmental Test of Visual-Motor Integration and Koppitz's version of the Bender-Gestalt test. Results indicated that the tests are related but not equivalent when administered to referred populations. (Author/AL)

  9. Parenting Stress, Alliance, Child Contact, and Adjustment of Imprisoned Mothers and Fathers

    ERIC Educational Resources Information Center

    Loper, Ann Booker; Carlson, L. Wrenn; Levitt, Lacey; Scheffel, Kathryn

    2009-01-01

    The present study contrasted the parenting stress and adjustment patterns of 100 mothers and 111 fathers incarcerated in one of 11 U.S. prisons. In comparison to inmate mothers, fathers had less contact with children, higher levels of parenting stress, and poorer alliance with caregivers. For inmate mothers, higher levels of contact with…

  10. A matching framework to improve causal inference in interrupted time-series analysis.

    PubMed

    Linden, Ariel

    2018-04-01

    Interrupted time-series analysis (ITSA) is a popular evaluation methodology in which a single treatment unit's outcome is studied over time and the intervention is expected to "interrupt" the level and/or trend of the outcome, subsequent to its introduction. When ITSA is implemented without a comparison group, the internal validity may be quite poor. Therefore, adding a comparable control group to serve as the counterfactual is always preferred. This paper introduces a novel matching framework, ITSAMATCH, to create a comparable control group by matching directly on covariates and then use these matches in the outcomes model. We evaluate the effect of California's Proposition 99 (passed in 1988) for reducing cigarette sales, by comparing California to other states not exposed to smoking reduction initiatives. We compare ITSAMATCH results to 2 commonly used matching approaches, synthetic controls (SYNTH), and regression adjustment; SYNTH reweights nontreated units to make them comparable to the treated unit, and regression adjusts covariates directly. Methods are compared by assessing covariate balance and treatment effects. Both ITSAMATCH and SYNTH achieved covariate balance and estimated similar treatment effects. The regression model found no treatment effect and produced inconsistent covariate adjustment. While the matching framework achieved results comparable to SYNTH, it has the advantage of being technically less complicated, while producing statistical estimates that are straightforward to interpret. Conversely, regression adjustment may "adjust away" a treatment effect. Given its advantages, ITSAMATCH should be considered as a primary approach for evaluating treatment effects in multiple-group time-series analysis. © 2017 John Wiley & Sons, Ltd.
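
    For readers unfamiliar with ITSA itself, the single-group segmented regression underlying it is sketched below. This is not the ITSAMATCH routine, just a minimal illustration with invented data using statsmodels: the intervention can shift the level (post) and/or the trend (t_since).

      # Minimal single-group ITSA sketch (statsmodels), invented data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      t = np.arange(1, 21)                      # 20 time points
      post = (t >= 11).astype(int)              # intervention at t = 11
      t_since = np.where(post == 1, t - 10, 0)  # time since intervention
      y = 50 + 0.5 * t - 4 * post - 0.8 * t_since + rng.normal(0, 1, t.size)

      df = pd.DataFrame({"y": y, "t": t, "post": post, "t_since": t_since})
      fit = smf.ols("y ~ t + post + t_since", data=df).fit()
      print(fit.params)   # 'post' = level change, 't_since' = trend change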

  11. Case mix adjustment of health outcomes, resource use and process indicators in childbirth care: a register-based study.

    PubMed

    Mesterton, Johan; Lindgren, Peter; Ekenberg Abreu, Anna; Ladfors, Lars; Lilja, Monica; Saltvedt, Sissel; Amer-Wåhlin, Isis

    2016-05-31

    Unwarranted variation in care practice and outcomes has gained attention and inter-hospital comparisons are increasingly being used to highlight and understand differences between hospitals. Adjustment for case mix is a prerequisite for meaningful comparisons between hospitals with different patient populations. The objective of this study was to identify and quantify maternal characteristics that impact a set of important indicators of health outcomes, resource use and care process and which could be used for case mix adjustment of comparisons between hospitals. In this register-based study, 139 756 deliveries in 2011 and 2012 were identified in regional administrative systems from seven Swedish regions, which together cover 67 % of all deliveries in Sweden. Data were linked to the Medical birth register and Statistics Sweden's population data. A number of important indicators in childbirth care were studied: Caesarean section (CS), induction of labour, length of stay, perineal tears, haemorrhage > 1000 ml and post-partum infections. Sociodemographic and clinical characteristics deemed relevant for case mix adjustment of outcomes and resource use were identified based on previous literature and based on clinical expertise. Adjustment using logistic and ordinary least squares regression analysis was performed to quantify the impact of these characteristics on the studied indicators. Almost all case mix factors analysed had an impact on CS rate, induction rate and length of stay and the effect was highly statistically significant for most factors. Maternal age, parity, fetal presentation and multiple birth were strong predictors of all these indicators but a number of additional factors such as born outside the EU, body mass index (BMI) and several complications during pregnancy were also important risk factors. A number of maternal characteristics had a noticeable impact on risk of perineal tears, while the impact of case mix factors was less pronounced for risk of haemorrhage > 1000 ml and post-partum infections. Maternal characteristics have a large impact on care process, resource use and outcomes in childbirth care. For meaningful comparisons between hospitals and benchmarking, a broad spectrum of sociodemographic and clinical maternal characteristics should be accounted for.
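
    A hedged sketch of logistic-regression case mix adjustment for a binary indicator such as Caesarean section is given below. The variable names and data are invented, and the observed/expected ratio is included only to illustrate how case-mix-adjusted hospital comparisons can be formed.

      # Sketch (not the study's code): case mix model and O/E ratio per hospital.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "hospital": rng.integers(0, 5, n),
          "age":      rng.normal(30, 5, n),
          "parity":   rng.integers(0, 4, n),
          "bmi":      rng.normal(25, 4, n),
      })
      logit_p = -4 + 0.08 * df["age"] - 0.3 * df["parity"] + 0.05 * df["bmi"]
      df["cs"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

      # Case mix model fitted on maternal characteristics only (no hospital term).
      model = smf.logit("cs ~ age + parity + bmi", data=df).fit(disp=0)
      df["expected"] = model.predict(df)

      summary = df.groupby("hospital").agg(observed=("cs", "mean"),
                                           expected=("expected", "mean"))
      summary["o_to_e"] = summary["observed"] / summary["expected"]
      print(summary)   # O/E > 1: more CS than the patient mix predicts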

  12. Applying the disability-adjusted life year to track health impact of social franchise programs in low- and middle-income countries

    PubMed Central

    2013-01-01

    Background: Developing effective methods for measuring the health impact of social franchising programs is vital for demonstrating the value of this innovative service delivery model, particularly given its rapid expansion worldwide. Currently, these programs define success through patient volume and number of outlets, widely acknowledged as poor reflections of true program impact. An existing metric, the disability-adjusted life years averted (DALYs averted), offers promise as a measure of projected impact. Country-specific and service-specific, DALYs averted enables impact comparisons between programs operating in different contexts. This study explores the use of DALYs averted as a social franchise performance metric. Methods: Using data collected by the Social Franchising Compendia in 2010 and 2011, we compared franchise performance, analyzing by region and program area. Coefficients produced by Population Services International converted each franchise's service delivery data into DALYs averted. For the 32 networks with two years of data corresponding to these metrics, a paired t-test compared all metrics. Finally, to test data reporting quality, we compared services provided to patient volume. Results: Social franchising programs grew considerably from 2010 to 2011, measured by services provided (215%), patient volume (31%), and impact (couple-years of protection (CYPs): 86% and DALYs averted: 519%), but not by the total number of outlets. Non-family planning services increased by 857%, with diversification centered in Asia and Africa. However, paired t-test comparisons showed no significant increase within the networks, whether categorized as family planning or non-family planning. The ratio of services provided to patient visits yielded considerable range, with one network reporting a ratio of 16,000:1. Conclusion: In theory, the DALYs averted metric is a more robust and comprehensive metric for social franchising than current program measures. As social franchising spreads beyond family planning, having a metric that captures the impact of a range of diverse services and allows comparisons will be increasingly important. However, standardizing reporting will be essential to make such comparisons useful. While not widespread, errors in self-reported data appear to have included social marketing distribution data in social franchising reporting, requiring clearer data collection and reporting guidelines. Differences noted above must be interpreted cautiously as a result. PMID:23902679
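
    The conversion step and the paired comparison can be sketched as follows; the DALY coefficients and service counts are invented placeholders, not PSI's published values.

      # Illustrative sketch: service counts -> DALYs averted, then a paired t-test.
      from scipy import stats

      # DALYs averted per unit of service delivered (hypothetical values)
      coeff = {"iud": 0.5, "implant": 0.6, "condom_cyp": 0.3}

      def dalys_averted(services):
          return sum(coeff[k] * v for k, v in services.items())

      net_2010 = [dalys_averted({"iud": 120, "implant": 40, "condom_cyp": 300}),
                  dalys_averted({"iud": 90,  "implant": 10, "condom_cyp": 150}),
                  dalys_averted({"iud": 200, "implant": 80, "condom_cyp": 500})]
      net_2011 = [dalys_averted({"iud": 150, "implant": 60, "condom_cyp": 320}),
                  dalys_averted({"iud": 95,  "implant": 15, "condom_cyp": 170}),
                  dalys_averted({"iud": 260, "implant": 90, "condom_cyp": 650})]

      t, p = stats.ttest_rel(net_2011, net_2010)   # paired comparison by network
      print(t, p)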

  13. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  14. Brain cortical structural differences between non-central nervous system cancer patients treated with and without chemotherapy compared to non-cancer controls: a cross-sectional pilot MRI study using clinically indicated scans

    NASA Astrophysics Data System (ADS)

    Shiroishi, Mark S.; Gupta, Vikash; Bigjahan, Bavrina; Cen, Steven Y.; Rashid, Faisal; Hwang, Darryl H.; Lerner, Alexander; Boyko, Orest B.; Liu, Chia-Shang Jason; Law, Meng; Thompson, Paul M.; Jahanshad, Neda

    2017-11-01

    Background: Increases in cancer survival have made understanding the basis of cancer-related cognitive impairment (CRCI) more important. CRCI neuroimaging studies have traditionally used dedicated research brain MRIs in breast cancer survivors with small sample sizes; little is known about other non-CNS cancers. However, there is a wealth of unused data from clinically-indicated MRIs that could be used to study CRCI. Objective: Evaluate brain cortical structural differences in those with non-CNS cancers using clinically-indicated MRIs. Design: Cross-sectional. Patients: Adult non-CNS cancer and non-cancer control (C) patients who underwent clinically-indicated MRIs. Methods: Brain cortical surface area and thickness were measured using 3D T1-weighted images. An age-adjusted linear regression model was used, and the Benjamini-Hochberg false discovery rate (FDR) procedure corrected for multiple comparisons. Group comparisons were: cancer cases with chemotherapy (Ch+), cancer cases without chemotherapy (Ch-) and a subgroup of lung cancer (LCa) cases with and without chemotherapy vs C. Results: Sixty-four subjects were analyzed: 22 Ch+, 23 Ch- and 19 C patients. Subgroup analysis of 16 LCa cases was also performed. Statistically significant decreases in either cortical surface area or thickness were found in multiple ROIs primarily within the frontal and temporal lobes for all comparisons. Limitations: Several limitations were apparent including a small sample size that precluded adjustment for other covariates. Conclusions: Our preliminary results suggest that various types of non-CNS cancers, both with and without chemotherapy, may result in brain structural abnormalities. Also, there is a wealth of untapped clinical MRIs that could be used for future CRCI studies.
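
    The Benjamini-Hochberg FDR step can be reproduced with statsmodels; the p-values below are invented for illustration.

      # Sketch of the Benjamini-Hochberg FDR correction for a set of per-ROI
      # p-values (values are made up).
      from statsmodels.stats.multitest import multipletests

      pvals = [0.001, 0.008, 0.020, 0.041, 0.100, 0.300, 0.700]
      reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
      for p, pa, r in zip(pvals, p_adj, reject):
          print(f"raw={p:.3f}  adjusted={pa:.3f}  significant={r}")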

  15. A nationwide population-based retrospective cohort study of the risk of uterine, ovarian and breast cancer in women with polycystic ovary syndrome.

    PubMed

    Shen, Cheng-Che; Yang, Albert C; Hung, Jeng-Hsiu; Hu, Li-Yu; Tsai, Shih-Jen

    2015-01-01

    Polycystic ovary syndrome (PCOS) is one of the most common endocrine disorders among women of reproductive age. We used a nationwide population-based retrospective cohort study to explore the relationship between PCOS and the subsequent development of gynecological cancers including uterine, breast, or ovarian cancer. We identified subjects who were diagnosed with PCOS between January 1, 2000, and December 31, 2004, in the Taiwan National Health Insurance (NHI) Research Database. A comparison cohort was constructed for patients without known PCOS who were also matched according to age. All PCOS and control patients were observed until diagnosed with breast cancer, ovarian cancer, or uterine cancer or until death, withdrawal from the NHI system, or December 31, 2009. The PCOS cohort consisted of 3,566 patients, and the comparison cohort consisted of 14,264 matched control patients without PCOS. The adjusted hazard ratios (HRs) of uterine cancer and breast cancer in subjects with PCOS were higher (HR: 8.42 [95% confidence interval: 1.62-43.89] and HR: 1.99 [95% confidence interval: 1.05-3.77], respectively) than those of the controls during the follow-up. With the Monte Carlo method, only the mean adjusted HR of 1,000 comparisons for developing uterine cancer during the follow-up period was greater for the PCOS group than for the control groups (HR: 4.71, 95% confidence interval: 1.57-14.11). PCOS might increase the risk of subsequent newly diagnosed uterine cancer. It is critical that further large-scale, well-designed studies be conducted to confirm the association between PCOS and gynecological cancer risk. ©AlphaMed Press.

  16. Using mixed treatment comparisons and meta-regression to perform indirect comparisons to estimate the efficacy of biologic treatments in rheumatoid arthritis.

    PubMed

    Nixon, R M; Bansback, N; Brennan, A

    2007-03-15

    Mixed treatment comparison (MTC) is a generalization of meta-analysis. Instead of the same treatment for a disease being tested in a number of studies, a number of different interventions are considered. Meta-regression is also a generalization of meta-analysis where an attempt is made to explain the heterogeneity between the treatment effects in the studies by regressing on study-level covariables. Our focus is where there are several different treatments considered in a number of randomized controlled trials in a specific disease, the same treatment can be applied in several arms within a study, and where differences in efficacy can be explained by differences in the study settings. We develop methods for simultaneously comparing several treatments and adjusting for study-level covariables by combining ideas from MTC and meta-regression. We use a case study from rheumatoid arthritis. We identified relevant trials of biologic versus standard therapy or placebo and extracted the doses, comparators and patient baseline characteristics. Efficacy is measured using the log odds ratio of achieving six-month ACR50 responder status. A random-effects meta-regression model is fitted which adjusts the log odds ratio for study-level prognostic factors. A different random-effect distribution on the log odds ratios is allowed for each different treatment. The odds ratio is found as a function of the prognostic factors for each treatment. The apparent differences in the randomized trials between tumour necrosis factor alpha (TNF-alpha) antagonists are explained by differences in prognostic factors and the analysis suggests that these drugs as a class are not different from each other. Copyright (c) 2006 John Wiley & Sons, Ltd.

  17. Antihistamine Use in Early Pregnancy and Risk of Birth Defects

    PubMed Central

    Li, Qian; Mitchell, Allen A.; Werler, Martha M.; Yau, Wai-Ping; Hernández-Díaz, Sonia

    2014-01-01

    Background: Several studies have reported an association between use of specific antihistamines in early pregnancy and certain specific birth defects. Objective: To test 16 previously-hypothesized associations between specific antihistamines and specific birth defects, and identify possible new associations. Methods: We used 1998-2010 data from the Slone Epidemiology Center Birth Defects Study, a multicenter case-control surveillance program of birth defects in North America. Mothers were interviewed within six months of delivery about demographic, reproductive, medical, and behavioral factors, and details on use of prescription and non-prescription medications. We compared 1st trimester exposure to specific antihistamines between 13,213 infants with specific malformations and 6,982 non-malformed controls, using conditional logistic regression to estimate odds ratios (ORs) and 95% confidence intervals (CIs), with adjustment for potential confounders, including indication for use. Results: Overall, 13.7% of controls were exposed to antihistamines during the 1st trimester. The most commonly-used medications were diphenhydramine (4.2%), loratadine (3.1%), doxylamine (1.9%), and chlorpheniramine (1.7%). Where estimates were stable, none supported the previously-hypothesized associations. Among over 100 exploratory comparisons of other specific antihistamine/defect pairs, 14 had ORs ≥1.5, of which 6 had 95% CI bounds excluding 1.0 before but not after adjustment for multiple comparisons. Conclusion: Our findings do not provide meaningful support for previously-posited associations between antihistamines and major congenital anomalies; at the same time, we identified associations that had not been previously suggested. We suspect that previous associations may be chance findings in the context of multiple comparisons, a situation which may also apply to our new findings. PMID:24565715

  18. Parental Loss, Trusting Relationship with Current Caregivers and Psychosocial Adjustment among Children Affected by AIDS in China

    PubMed Central

    Zhao, Junfeng; Li, Xiaoming; Barnett, Douglas; Lin, Xiuyun; Fang, Xiaoyi; Zhao, Guoxiang; Naar-King, Sylvie; Stanton, Bonita

    2011-01-01

    Objective: To examine the relationship between parental loss, trusting relationship with current caregivers, and psychosocial adjustment among children affected by AIDS in China. Methods: Cross-sectional data were collected from 755 AIDS orphans (296 double orphans and 459 single orphans), 466 vulnerable children living with HIV-infected parents, and 404 comparison children in China. The trusting relationship with current caregivers was measured with a 15-item scale (Cronbach alpha=.84) modified from the Trusting Relationship Questionnaire (TRQ) developed by Mustillo and colleagues (2005). The psychosocial measures include rule compliance/acting out, anxiety/withdrawal, peer social skills, school interest, depressive symptoms, loneliness, self-esteem, future expectation, hopefulness about future, and perceived control over the future. Results: Group mean comparisons using ANOVA suggested a significant association (p<.0001) between the trusting relationship with current caregivers and all the psychosocial measures except anxiety and depression. These associations remained significant in General Linear Model analysis, controlling for children's gender, age, family SES, orphan status (orphans, vulnerable children, and comparison children), and appropriate interaction terms among factor variables. Discussion: The findings in the current study support the global literature on the importance of attachment relationship with caregivers in promoting children's psychosocial development. Future prevention intervention efforts to improve AIDS orphans' psychosocial well-being will need to take into consideration the quality of the child's attachment relationships with current caregivers and help their current caregivers to improve the quality of care for these children. Future study is needed to explore the possible reasons for the lack of association between a trusting relationship and some internalizing symptoms such as anxiety and depression among children affected by HIV/AIDS. PMID:21749241

  19. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, on average by -5 (± 4.4 SD) for MB and -4.7 (± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
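
    All of the tests named above are available in scipy.stats; the sketch below mirrors the workflow on invented paired dose data for three calculation methods.

      # Minimal sketch of the statistical workflow, using scipy.stats.
      # The three arrays are made-up monitor-unit values for the same fields
      # calculated with PBC (reference), MB, and ETAR.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      pbc = rng.normal(200, 20, 62)
      mb = pbc * rng.normal(0.95, 0.02, 62)     # roughly 5 % lower on average
      etar = pbc * rng.normal(0.953, 0.02, 62)

      # Normality and homogeneity of variance
      print(stats.shapiro(pbc - mb))            # Shapiro-Wilk (here on paired differences)
      print(stats.levene(pbc, mb, etar))        # Levene's test across methods

      # Non-parametric comparisons of the three related samples
      print(stats.friedmanchisquare(pbc, mb, etar))
      print(stats.wilcoxon(pbc, mb))            # post-hoc paired comparison

      # Rank correlations between methods
      print(stats.spearmanr(pbc, mb))
      print(stats.kendalltau(pbc, mb))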

  20. Testing the limits of temporal stability: Willingness to pay values among Grand Canyon whitewater boaters across decades

    USGS Publications Warehouse

    Neher, Chris J.; Duffield, John; Bair, Lucas S.; Patterson, David A.; Neher, Katherine

    2017-01-01

    We directly compare trip willingness to pay (WTP) values between 1985 and 2015 stated preference surveys of private party Grand Canyon boaters using identically designed valuation methods. The temporal gap of 30 years between these two studies is well beyond that of any tests of WTP temporal stability in the literature. Comparisons were made of mean WTP estimates for four hypothetical Colorado River flow level scenarios. WTP values from the 1985 survey were adjusted to 2015 levels using the consumer price index. Mean WTP precision was estimated through simulation. No statistically significant differences were detected between the adjusted Bishop et al. (1987) and the current study mean WTP estimates. Examination of pooled models of the data from the studies suggests that while the estimated WTP values are stable over time, the underlying valuation functions may not be, particularly when the data and models are corrected to account for differing bid structures and possible panel effects.
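
    The consumer price index adjustment is plain arithmetic; the index values and WTP figure below are placeholders rather than the series actually used in the study.

      # Sketch of the CPI adjustment putting a 1985 WTP value in 2015 dollars.
      cpi_1985, cpi_2015 = 107.6, 237.0          # illustrative annual CPI-U values
      wtp_1985 = 92.0                            # hypothetical 1985 mean WTP ($)

      wtp_1985_in_2015_dollars = wtp_1985 * cpi_2015 / cpi_1985
      print(round(wtp_1985_in_2015_dollars, 2))  # ~202.64, comparable to 2015 WTP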

  1. Testing the Limits of Temporal Stability: Willingness to Pay Values among Grand Canyon Whitewater Boaters Across Decades

    NASA Astrophysics Data System (ADS)

    Neher, Chris; Duffield, John; Bair, Lucas; Patterson, David; Neher, Katherine

    2017-12-01

    We directly compare trip willingness to pay (WTP) values between 1985 and 2015 stated preference surveys of private party Grand Canyon boaters using identically designed valuation methods. The temporal gap of 30 years between these two studies is well beyond that of any tests of WTP temporal stability in the literature. Comparisons were made of mean WTP estimates for four hypothetical Colorado River flow level scenarios. WTP values from the 1985 survey were adjusted to 2015 levels using the consumer price index. Mean WTP precision was estimated through simulation. No statistically significant differences were detected between the adjusted Bishop et al. (1987) and the current study mean WTP estimates. Examination of pooled models of the data from the studies suggests that while the estimated WTP values are stable over time, the underlying valuation functions may not be, particularly when the data and models are corrected to account for differing bid structures and possible panel effects.

  2. Design and modeling of flower like microring resonator

    NASA Astrophysics Data System (ADS)

    Razaghi, Mohammad; Laleh, Mohammad Sayfi

    2016-05-01

    This paper presents a novel multi-channel optical filter structure. The proposed design is based on using a set of microring resonators (MRRs) in a new formation, named the flower-like arrangement. It is shown that instead of using 18 MRRs, by using only 5 MRRs in the recommended formation, the same filtering operation can be achieved. It is shown that with this structure, six filters and four integrated demultiplexers (DEMUXs) are obtained. The simplicity, extensibility and compactness of this structure make it usable in wavelength division multiplexing (WDM) networks. Filter characteristics such as the shape factor (SF), free spectral range (FSR) and stopband rejection ratio can be designed by adjusting the microrings' radii and coupling coefficients. To model this structure, the signal flow graph (SFG) method based on Mason's rule is used. The modeling method is discussed in depth. Furthermore, the accuracy and applicability of this method are verified through examples and comparison with other modeling schemes.

  3. Verification Of The Defense Waste Processing Facility's (DWPF) Process Digestion Methods For The Sludge Batch 8 Qualification Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Click, D. R.; Edwards, T. B.; Wiedenman, B. J.

    2013-03-18

    This report contains the results and comparison of data generated from inductively coupled plasma – atomic emission spectroscopy (ICP-AES) analysis of Aqua Regia (AR), Sodium Peroxide/Sodium Hydroxide Fusion Dissolution (PF) and Cold Chem (CC) method digestions and Cold Vapor Atomic Absorption analysis of Hg digestions from the DWPF Hg digestion method of Sludge Batch 8 (SB8) Sludge Receipt and Adjustment Tank (SRAT) Receipt and SB8 SRAT Product samples. The SB8 SRAT Receipt and SB8 SRAT Product samples were prepared in the SRNL Shielded Cells, and the SRAT Receipt material is representative of the sludge that constitutes the SB8 Batch or qualification composition. This is the sludge in Tank 51 that is to be transferred into Tank 40, which will contain the heel of Sludge Batch 7b (SB7b), to form the SB8 Blend composition.

  4. Adaptive torque estimation of robot joint with harmonic drive transmission

    NASA Astrophysics Data System (ADS)

    Shi, Zhiguo; Li, Yuankai; Liu, Guangjun

    2017-11-01

    Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.

  5. Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1985-01-01

    Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on use of ground-based measurements of the surface spectral reflectance in comparison with scanner data to fix in a least-squares sense parameters in a simplified model of the atmosphere on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function, and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.
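
    A simplified sketch of the per-wavelength least-squares step is shown below. Here the atmospheric model is reduced to a linear gain and offset, which is an assumption made only for illustration; the actual method ties the fitted parameters to optical depth and the scattering phase function.

      # Hedged sketch: at one wavelength, fit scanner radiance L as a linear
      # function of measured surface reflectance R, L ≈ a*R + b, by least squares.
      # The gain a and offset b stand in for transmission/path-radiance terms.
      import numpy as np

      # Reflectance of field calibration targets and scanner radiance (made up)
      R = np.array([0.05, 0.12, 0.25, 0.40, 0.60])
      L = np.array([11.0, 17.5, 30.1, 44.8, 64.2])

      A = np.column_stack([R, np.ones_like(R)])
      (a, b), *_ = np.linalg.lstsq(A, L, rcond=None)
      print(a, b)   # per-wavelength gain and path-radiance-like offset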

  6. Evaluation of methods to reduce background using the Python-based ELISA_QC program.

    PubMed

    Webster, Rose P; Cohen, Cinder F; Saeed, Fatima O; Wetzel, Hanna N; Ball, William J; Kirley, Terence L; Norman, Andrew B

    2018-05-01

    Almost all immunological approaches [immunohistochemistry, enzyme-linked immunosorbent assay (ELISA), Western blot] that are used to quantitate specific proteins have had to address high backgrounds due to non-specific reactivity. We report here for the first time a quantitative comparison of methods for reduction of the background of commercial biotinylated antibodies using the Python-based ELISA_QC program. This is demonstrated using a recombinant humanized anti-cocaine monoclonal antibody. Several approaches, such as adjustment of the incubation time and the concentration of blocking agent, as well as the dilution of secondary antibodies, have been explored to address this issue. In this report, systematic comparisons of two different methods, contrasted with other more traditional methods to address this problem, are provided. Addition of heparin (HP) at 1 μg/ml to the wash buffer prior to addition of the secondary biotinylated antibody reduced the elevated background absorbance values (from a mean of 0.313 ± 0.015 to 0.137 ± 0.002). A novel immunodepletion (ID) method also reduced the background (from a mean of 0.331 ± 0.010 to 0.146 ± 0.013). Overall, the ID method generated results at each concentration of the ELISA standard curve that were more similar to those obtained with standard lot 1 than the HP method did, as analyzed by the Python-based ELISA_QC program. We conclude that the ID method, while more laborious, provides the best solution to resolve the high background seen with specific lots of biotinylated secondary antibody. Copyright © 2018. Published by Elsevier B.V.

  7. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    PubMed

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

    This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA). Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval {CI} 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and more unique hits than when searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks such as being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine and retrieved on average fewer references and fewer irrelevant results.

  8. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    PubMed

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.

  9. Bayesian propensity scores for high-dimensional causal inference: A comparison of drug-eluting to bare-metal coronary stents.

    PubMed

    Spertus, Jacob V; Normand, Sharon-Lise T

    2018-04-23

    High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
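
    The following sketch is not the authors' estimator; it only illustrates how beta-binomial conjugacy gives closed-form posteriors for stratified binary outcomes, using invented counts, a uniform Beta(1, 1) prior, and an equal-weight average over hypothetical propensity-score strata.

      # Heavily simplified illustration of a conjugate beta-binomial contrast
      # within propensity-score strata (all numbers invented).
      import numpy as np

      rng = np.random.default_rng(3)
      # (events, n) per stratum for treated and control, e.g. from 3 PS strata
      treated = [(4, 50), (9, 60), (15, 55)]
      control = [(6, 55), (12, 58), (20, 52)]

      draws = []
      for (yt, nt), (yc, nc) in zip(treated, control):
          pt = rng.beta(1 + yt, 1 + nt - yt, 5000)   # Beta(1,1) prior + binomial data
          pc = rng.beta(1 + yc, 1 + nc - yc, 5000)
          draws.append(pt - pc)

      ate_draws = np.mean(draws, axis=0)             # equal-weight average over strata
      print(ate_draws.mean(), np.percentile(ate_draws, [2.5, 97.5]))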

  10. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    PubMed Central

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues (Science, 2004, 306, 1776–1780) to assess affect, activities and time use in everyday life. We sought to validate DRM affect ratings by comparison with contemporaneous EMA ratings in a sample of 94 working women monitored over work and leisure days. Six EMA ratings of happiness, tiredness, stress, and anger/frustration were obtained over each 24 h period, and were compared with DRM ratings for the same hour, recorded retrospectively at the end of the day. Similar profiles of affect intensity were recorded with the two techniques. The between-person correlations adjusted for attenuation ranged from 0.58 (stress, working day) to 0.90 (happiness, leisure day). The strength of associations was not related to age, educational attainment, or depressed mood. We conclude that the DRM provides reasonably reliable estimates both of the intensity of affect and variations in affect over the day, so is a valuable instrument for the measurement of everyday experience in health and social research. PMID:21113328
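
    The "correlations adjusted for attenuation" follow the classical Spearman correction; the observed correlation and reliability values below are hypothetical.

      # Classical correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y).
      import math

      r_obs = 0.52                   # observed EMA-DRM correlation (hypothetical)
      rel_ema, rel_drm = 0.80, 0.75  # hypothetical measure reliabilities

      r_corrected = r_obs / math.sqrt(rel_ema * rel_drm)
      print(round(r_corrected, 2))   # ~0.67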

  11. Case matching and the reduction of selection bias in quasi-experiments: The relative importance of pretest measures of outcome, of unreliable measurement, and of mode of data analysis.

    PubMed

    Cook, Thomas D; Steiner, Peter M

    2010-03-01

    In this article, we note the many ontological, epistemological, and methodological similarities between how Campbell and Rubin conceptualize causation. We then explore 3 differences in their written emphases about individual case matching in observational studies. We contend that (a) Campbell places greater emphasis than Rubin on the special role of pretest measures of outcome among matching variables; (b) Campbell is more explicitly concerned with unreliability in the covariates; and (c) for analyzing the outcome, only Rubin emphasizes the advantages of using propensity score over regression methods. To explore how well these 3 factors reduce bias, we reanalyze and review within-study comparisons that contrast experimental and statistically adjusted nonexperimental causal estimates from studies with the same target population and treatment content. In this context, the choice of covariates counts most for reducing selection bias, and the pretest usually plays a special role relative to all the other covariates considered singly. Unreliability in the covariates also influences bias reduction but by less. Furthermore, propensity score and regression methods produce comparable degrees of bias reduction, though these within-study comparisons may not have met the theoretically specified conditions most likely to produce differences due to analytic method.

  12. Efficacy and tolerability of brivaracetam compared to lacosamide, eslicarbazepine acetate, and perampanel as adjunctive treatments in uncontrolled focal epilepsy: Results of an indirect comparison meta-analysis of RCTs.

    PubMed

    Brigo, Francesco; Bragazzi, Nicola Luigi; Nardone, Raffaele; Trinka, Eugen

    2016-11-01

    Brivaracetam (BRV), eslicarbazepine acetate (ESL), lacosamide (LCM), and perampanel (PER) have been recently marketed as adjunctive treatments for focal onset seizures. To date, no randomized controlled trial (RCT) has directly compared BRV with ESL, LCM, or PER. To compare BRV with the other add-on AEDs in patients with uncontrolled focal epilepsy, estimating their efficacy and tolerability through an adjusted, common-reference based indirect comparison meta-analysis. We systematically searched RCTs in which add-on treatment with ESL or LCM in patients with focal onset seizures have been compared with placebo. Efficacy and tolerability outcomes were considered. Random-effects Mantel-Haenszel meta-analyses were performed to obtain odds ratios (ORs) for the efficacy of BRV, LCM, ESL, or PER versus placebo. Adjusted indirect comparisons were then made between BRV and the other three AEDs using the obtained results, comparing the minimum and the highest effective recommended daily dose of each drug. Seventeen RCTs, with a total of 4971 patients were included. After adjusting for dose-effects, indirect comparisons showed no difference between BRV and LCM, ESL, or PER for responder rate and seizure freedom. Lower adverse events were observed with high dose BRV compared to high dose ESL or PER, but no difference was found in withdrawing because of adverse events. Indirect comparisons do not demonstrate a significant difference in efficacy between add-on BRV and LCM, ESL, or PER in focal epilepsy, and might suggest a better tolerability of BRV than ESL, and possibly also PER, at the highest effective recommended dose. Copyright © 2016 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
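
    A common way to compute such an adjusted indirect comparison is the Bucher method on the log-odds scale; the sketch below uses invented effect estimates, not the meta-analysis results reported above.

      # Bucher-type adjusted indirect comparison with placebo as common comparator:
      #   log OR(BRV vs LCM) = log OR(BRV vs PBO) - log OR(LCM vs PBO),
      # with variances adding. All numbers are invented.
      import math

      log_or_brv, se_brv = math.log(1.9), 0.15   # BRV vs placebo (responder rate)
      log_or_lcm, se_lcm = math.log(1.7), 0.18   # LCM vs placebo

      log_or_ind = log_or_brv - log_or_lcm
      se_ind = math.sqrt(se_brv**2 + se_lcm**2)
      ci = (math.exp(log_or_ind - 1.96 * se_ind), math.exp(log_or_ind + 1.96 * se_ind))
      print(math.exp(log_or_ind), ci)            # OR ≈ 1.12; CI spans 1 → no clear difference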

  13. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    NASA Technical Reports Server (NTRS)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods-the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT-revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by high level of noise and frequent missing observations. The resulting smoothed time series captures well the vegetation dynamics and shows no gaps as compared to the 50-60% of still missing data after AG or SG reconstructions. Results of simulation experiments as well as confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than AG and SG methods for phenology extraction.

  14. The Families of U.S. Navy Prisoners of War from Vietnam Five Years after Reunion.

    ERIC Educational Resources Information Center

    Nice, D. Stephen; And Others

    1981-01-01

    Investigated marital stability and perceptions of marital adjustment and family environment among Navy prisoners of war repatriated from Vietnam (RPWs) and a Navy comparison group. Results indicated that the post-repatriation divorce rate among the RPW group was significantly higher than for the comparison group. (Author)

  15. The Perception of Trauma Patients from Social Support in Adjustment to Lower-Limb Amputation: A Qualitative Study

    PubMed Central

    Valizadeh, Sousan; Dadkhah, Behrouz; Mohammadi, Eissa; Hassankhani, Hadi

    2014-01-01

    Introduction: The effect of amputation on an individual's psychological condition, as well as on family and social relationships, is undeniable, because physical disability affects not only psycho-social adjustment but also mental health. Compared to others, such individuals frequently experience social isolation. On the other hand, social support is known as the most powerful resource for coping with stressful situations, and it allows patients to withstand their problems. The present study aims to describe trauma patients' perceptions of support sources during the process of adaptation to a lower limb amputation. Materials and Methods: The present study was conducted using qualitative content analysis. Participants included 20 patients with lower limb amputation due to trauma. Sampling was purposive initially and continued until data saturation. Unstructured interviews were used as the main method of data collection. Collected data were analyzed using qualitative content analysis and constant comparison methods. Results: The main theme extracted from the data was support sources. The classes include “supportive family”, “gaining friends’ support”, “gaining morale from peers”, and “assurance and satisfaction with the workplace.” Conclusion: Given the high number of physical, mental and social problems in trauma patients, identifying and strengthening support sources can be effective in their adaptation to the condition and in improving their quality of life. PMID:25191013

  16. Overcoming the Problems of Inconsistent International Migration data: A New Method Applied to Flows in Europe

    PubMed Central

    Raymer, James; van der Erf, Rob; van Wissen, Leo

    2010-01-01

    Due to differences in definitions and measurement methods, cross-country comparisons of international migration patterns are difficult and confusing. Emigration numbers reported by sending countries tend to differ from the corresponding immigration numbers reported by receiving countries. In this paper, a methodology is presented to achieve harmonised estimates of migration flows benchmarked to a specific definition of duration. This methodology accounts for both differences in definitions and the effects of measurement error due to, for example, under reporting and sampling fluctuations. More specifically, the differences between the two sets of reported data are overcome by estimating a set of adjustment factors for each country’s immigration and emigration data. The adjusted data take into account any special cases where the origin–destination patterns do not match the overall patterns. The new method for harmonising migration flows that we present is based on earlier efforts by Poulain (European Journal of Population, 9(4): 353–381 1993, Working Paper 12, joint ECE-Eurostat Work Session on Migration Statistics, Geneva, Switzerland 1999) and is illustrated for movements between 19 European countries from 2002 to 2007. The results represent a reliable and consistent set of international migration flows that can be used for understanding recent changes in migration patterns, as inputs into population projections and for developing evidence-based migration policies. PMID:21124647

  17. Overcoming the Problems of Inconsistent International Migration data: A New Method Applied to Flows in Europe.

    PubMed

    de Beer, Joop; Raymer, James; van der Erf, Rob; van Wissen, Leo

    2010-11-01

    Due to differences in definitions and measurement methods, cross-country comparisons of international migration patterns are difficult and confusing. Emigration numbers reported by sending countries tend to differ from the corresponding immigration numbers reported by receiving countries. In this paper, a methodology is presented to achieve harmonised estimates of migration flows benchmarked to a specific definition of duration. This methodology accounts for both differences in definitions and the effects of measurement error due to, for example, under reporting and sampling fluctuations. More specifically, the differences between the two sets of reported data are overcome by estimating a set of adjustment factors for each country's immigration and emigration data. The adjusted data take into account any special cases where the origin-destination patterns do not match the overall patterns. The new method for harmonising migration flows that we present is based on earlier efforts by Poulain (European Journal of Population, 9(4): 353-381 1993, Working Paper 12, joint ECE-Eurostat Work Session on Migration Statistics, Geneva, Switzerland 1999) and is illustrated for movements between 19 European countries from 2002 to 2007. The results represent a reliable and consistent set of international migration flows that can be used for understanding recent changes in migration patterns, as inputs into population projections and for developing evidence-based migration policies.
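
    The adjustment-factor idea can be caricatured as a small optimization problem. The sketch below is a rough illustration with invented flows and country labels and is not the estimation procedure of the paper: it simply finds one multiplicative factor per country's immigration data and per country's emigration data so that, after adjustment, the two reports of each flow agree as closely as possible in log space.

      # Rough illustration of country-specific correction factors (invented data).
      import numpy as np
      from scipy.optimize import least_squares

      countries = ["A", "B", "C"]
      # reported[(sender, receiver)] = (immigration count reported by receiver,
      #                                 emigration count reported by sender)
      reported = {("A", "B"): (120, 80), ("B", "A"): (60, 95),
                  ("A", "C"): (200, 140), ("C", "A"): (90, 130),
                  ("B", "C"): (70, 45),  ("C", "B"): (30, 50)}

      def residuals(x):
          imm_f = dict(zip(countries, x[:3]))   # factor on receiver's count
          emi_f = dict(zip(countries, x[3:]))   # factor on sender's count
          res = [np.log(imm_f[j] * imm) - np.log(emi_f[i] * emi)
                 for (i, j), (imm, emi) in reported.items()]
          res.append(x[0] - 1.0)                # anchor one factor to fix the scale
          return res

      sol = least_squares(residuals, x0=np.ones(6), bounds=(0.1, 10))
      print(dict(zip(["imm_A", "imm_B", "imm_C", "emi_A", "emi_B", "emi_C"],
                     sol.x.round(2))))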

  18. The Sampling Design of the China Family Panel Studies (CFPS)

    PubMed Central

    Xie, Yu; Lu, Ping

    2018-01-01

    The China Family Panel Studies (CFPS) is an on-going, nearly nationwide, comprehensive, longitudinal social survey that is intended to serve research needs on a large variety of social phenomena in contemporary China. In this paper, we describe the sampling design of the CFPS sample for its 2010 baseline survey and methods for constructing weights to adjust for sampling design and survey nonresponses. Specifically, the CFPS used a multi-stage probability strategy to reduce operation costs and implicit stratification to increase efficiency. Respondents were oversampled in five provinces or administrative equivalents for regional comparisons. We provide operation details for both sampling and weights construction. PMID:29854418

  19. Determination of organic compounds in water using ultraviolet LED

    NASA Astrophysics Data System (ADS)

    Kim, Chihoon; Ji, Taeksoo; Eom, Joo Beom

    2018-04-01

    This paper describes a method of detecting organic compounds in water using an ultraviolet LED (280 nm) spectroscopy system and a photodetector. The LED spectroscopy system showed a high correlation between the concentration of the prepared potassium hydrogen phthalate and that calculated by multiple linear regression, with an adjusted coefficient of determination ranging from 0.953 to 0.993. In addition, a comparison between the performance of the spectroscopy system and the total organic carbon analyzer indicated that the difference in concentration was small. Based on the close correlation between the spectroscopy and photodetector absorbance values, organic measurement with a photodetector could be configured for monitoring.
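
    The calibration step amounts to a multiple linear regression whose adjusted coefficient of determination is reported; the sketch below uses statsmodels with invented readings and hypothetical predictor names.

      # Sketch of the calibration regression: KHP concentration vs LED readings.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.DataFrame({
          "conc":  [0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0],        # mg/L KHP (invented)
          "a280":  [0.002, 0.051, 0.098, 0.205, 0.310, 0.402, 0.515],
          "a_ref": [0.001, 0.004, 0.003, 0.006, 0.005, 0.007, 0.008],
      })
      fit = smf.ols("conc ~ a280 + a_ref", data=df).fit()
      print(fit.rsquared_adj)    # adjusted coefficient of determination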

  20. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
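
    The ordinary Voigt special case can be checked against scipy.special.voigt_profile; the general OIL profiles, with speed-dependent shift and width, require the dedicated program described above.

      # Ordinary Voigt profile: Gaussian (std sigma) convolved with a Lorentzian
      # (half-width gamma), evaluated at detunings x. Values are illustrative.
      import numpy as np
      from scipy.special import voigt_profile

      x = np.linspace(-5.0, 5.0, 11)
      sigma, gamma = 1.0, 0.5
      print(voigt_profile(x, sigma, gamma))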

  1. 48 CFR 22.407 - Solicitation provision and contract clauses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Adjustment (None or Separately Specified Pricing Method), in solicitations and contracts if the contract is... determines the most appropriate contract price adjustment method is the method at 22.404-12(c)(1) or (2); or....222-31, Davis-Bacon Act—Price Adjustment (Percentage Method), in solicitations and contracts if the...

  2. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

    PubMed

    Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

    2015-03-01

    A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method that is commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble-of-trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have been shown recently to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICU). We show that these ensemble-of-trees methods outperform logistic regression in predicting mortality among babies treated in NICU, and provide a superior method of risk adjustment compared to logistic regression.
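
    A hedged sketch of the random-forest variant is shown below (BART would follow the same pattern with a different model); the data, column names, and observed/expected summary are invented for illustration and are not the paper's NICU analysis.

      # Sketch of ensemble-of-trees risk adjustment: fit a random forest on
      # patient characteristics, then compare each hospital's observed mortality
      # with its expected (case-mix-based) rate.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(4)
      n = 3000
      df = pd.DataFrame({
          "hospital":    rng.integers(0, 8, n),
          "birthweight": rng.normal(1500, 400, n),
          "gest_age":    rng.normal(31, 3, n),
      })
      risk = 1 / (1 + np.exp(0.004 * (df["birthweight"] - 1500)
                             + 0.3 * (df["gest_age"] - 31) + 2))
      df["death"] = rng.binomial(1, risk)

      X = df[["birthweight", "gest_age"]]
      rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
      rf.fit(X, df["death"])
      df["expected"] = rf.predict_proba(X)[:, 1]   # case-mix-based predicted risk

      summary = df.groupby("hospital").agg(observed=("death", "mean"),
                                           expected=("expected", "mean"))
      summary["o_to_e"] = summary["observed"] / summary["expected"]
      print(summary)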

  3. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
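
    One of the repeated-measures approaches the review points to can be sketched with a linear mixed model; the example below fits a random-intercept model to hypothetical longitudinal HRQoL scores with statsmodels. It is a generic illustration, not an analysis drawn from the reviewed trials.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format HRQoL data: one row per patient per assessment time
    rng = np.random.default_rng(1)
    n_pat, n_vis = 120, 4
    df = pd.DataFrame({
        "patient": np.repeat(np.arange(n_pat), n_vis),
        "visit": np.tile(np.arange(n_vis), n_pat),
        "arm": np.repeat(rng.integers(0, 2, n_pat), n_vis),
    })
    df["hrqol"] = (60 + 2.0 * df["visit"] + 3.0 * df["arm"] * df["visit"]
                   + np.repeat(rng.normal(0, 8, n_pat), n_vis)   # patient random effect
                   + rng.normal(0, 5, len(df)))                  # residual noise

    # Random-intercept model accounting for the longitudinal (clustered) structure
    model = smf.mixedlm("hrqol ~ visit * arm", df, groups=df["patient"])
    result = model.fit()
    print(result.summary())
    ```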

  4. SU-F-J-96: Comparison of Frame-Based and Mutual Information Registration Techniques for CT and MR Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popple, R; Bredel, M; Brezovich, I

    Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Ten patients having the Leksell head frame and scanned with a modality specific localizer box were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
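
    The mutual information metric underlying the automated registration can be illustrated with a joint-histogram estimate; the sketch below computes it for two synthetic, correlated images with NumPy and is not the treatment planning system's implementation.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Mutual information of two equally shaped images from their joint histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Toy check: an image is maximally informative about itself
    rng = np.random.default_rng(0)
    ct = rng.normal(size=(128, 128))
    mr = 0.7 * ct + 0.3 * rng.normal(size=ct.shape)   # "MR" correlated with "CT"
    print(mutual_information(ct, ct), mutual_information(ct, mr))
    ```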

  5. Cardiometabolic Syndrome in People With Spinal Cord Injury/Disease: Guideline-Derived and Nonguideline Risk Components in a Pooled Sample.

    PubMed

    Nash, Mark S; Tractenberg, Rochelle E; Mendez, Armando J; David, Maya; Ljungberg, Inger H; Tinsley, Emily A; Burns-Drecq, Patricia A; Betancourt, Luisa F; Groah, Suzanne L

    2016-10-01

    To assess cardiometabolic syndrome (CMS) risk definitions in spinal cord injury/disease (SCI/D). Cross-sectional analysis of a pooled sample. Two SCI/D academic medical and rehabilitation centers. Baseline data from subjects in 7 clinical studies were pooled; not all variables were collected in all studies; therefore, participant numbers varied from 119 to 389. The pooled sample included men (79%) and women (21%) with SCI/D >1 year at spinal cord levels spanning C3-T2 (American Spinal Injury Association Impairment Scale [AIS] grades A-D). Not applicable. We computed the prevalence of CMS using the American Heart Association/National Heart, Lung, and Blood Institute guideline (CMS diagnosis as sum of risks ≥3 method) for the following risk components: overweight/obesity, insulin resistance, hypertension, and dyslipidemia. We compared this prevalence with the risk calculated from 2 routinely used nonguideline CMS risk assessments: (1) key cut scores identifying insulin resistance derived from the homeostatic model 2 (HOMA2) method or quantitative insulin sensitivity check index (QUICKI), and (2) a cardioendocrine risk ratio based on an inflammation (C-reactive protein [CRP])-adjusted total cholesterol/high-density lipoprotein cholesterol ratio. After adjustment for multiple comparisons, injury level and AIS grade were unrelated to CMS or risk factors. Of the participants, 13% and 32.1% had CMS when using the sum of risks or HOMA2/QUICKI model, respectively. Overweight/obesity and (pre)hypertension were highly prevalent (83% and 62.1%, respectively), with risk for overweight/obesity being significantly associated with CMS diagnosis (sum of risks, χ²=10.105; adjusted P=.008). Insulin resistance was significantly associated with CMS when using the HOMA2/QUICKI model (χ²₂=21.23, adjusted P<.001). Of the subjects, 76.4% were at moderate to high risk from elevated CRP, which was significantly associated with CMS determination (both methods; sum of risks, χ²₂=10.198; adjusted P=.048 and HOMA2/QUICKI, χ²₂=10.532; adjusted P=.04). As expected, guideline-derived CMS risk factors were prevalent in individuals with SCI/D. Overweight/obesity, hypertension, and elevated CRP were common in SCI/D and, because they may compound risks associated with CMS, should be considered population-specific risk determinants. Heightened surveillance for risk, and adoption of healthy living recommendations specifically directed toward weight reduction, hypertension management, and inflammation control, should be incorporated as a priority for disease prevention and management. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  6. Object-Based Mapping of the Circumpolar Taiga-Tundra Ecotone with MODIS Tree Cover

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Montesano, P. M.; Nelson, R.

    2011-01-01

    The circumpolar taiga tundra ecotone was delineated using an image-segmentation-based mapping approach with multi-annual MODIS Vegetation Continuous Fields (VCF) tree cover data. Circumpolar tree canopy cover (TCC) throughout the ecotone was derived by averaging MODIS VCF data from 2000 to 2005 and adjusting the averaged values using linear equations relating MODIS TCC to Quickbird-derived tree cover estimates. The adjustment helped mitigate VCF's overestimation of tree cover in lightly forested regions. An image segmentation procedure was used to group pixels representing similar tree cover into polygonal features (segmentation objects) that form the map of the transition zone. Each polygon represents an area much larger than the 500 m MODIS pixel and characterizes the patterns of sparse forest patches on a regional scale. Those polygons near the boreal/tundra interface with either (1) mean adjusted TCC values from 5 to 20%, or (2) mean adjusted TCC values greater than 5% but with a standard deviation less than 5% were used to identify the ecotone. Comparisons of the adjusted average tree cover data were made with (1) two existing tree line definitions aggregated for each 1 degree longitudinal interval in North America and Eurasia, (2) Landsat-derived Canadian proportion of forest cover for Canada, and (3) canopy cover estimates extracted from airborne profiling lidar data that transected 1238 of the TCC polygons. The adjusted TCC from MODIS VCF shows, on average, less than 12% TCC for all but one regional zone at the intersection with independently delineated tree lines. Adjusted values track closely with Canadian proportion of forest cover data in areas of low tree cover. A comparison of the 1238 TCC polygons with profiling lidar measurements yielded an overall accuracy of 67.7%.

  7. Risk of Retinal Vein Occlusion in Patients With End-Stage Renal Disease: A 12-Year, Retrospective, Nationwide Cohort Study in South Korea.

    PubMed

    Lee, Kyung Sik; Nam, Ki Heon; Kim, Dong Wook; Kang, Eui Chun; Koh, Hyoung Jun

    2018-01-01

    The present study aimed to evaluate the risk of retinal vein occlusion (RVO) in Korean patients with end-stage renal disease (ESRD). In this retrospective, nationwide, propensity score-matched cohort study, subjects were randomly enrolled from the 12-year longitudinal Korean National Health Insurance Service-National Sample Cohort 2002-2013 database comprising 1 million subjects. The ESRD group comprised 988 patients newly diagnosed with ESRD from 2003 onward by washing out data from 2002. The comparison group comprised 4940 (5 for each patient with ESRD) randomly selected propensity score-matched individuals not diagnosed with ESRD. Each sampled patient was tracked until 2013 for RVO development. Multiple conditional Cox regression analysis was performed to compare the risk of RVO between the two groups. The mean follow-up period was 7.37 years. The incidence of RVO was 3.95% in the ESRD group and 2.17% in the comparison group (P = 0.001). ESRD was associated with greater risk of RVO development after adjustment for possible confounders (adjusted hazard ratio [HR], 2.122; 95% confidence interval [CI], 1.396-3.226; P = 0.0004). The 50- to 60-year (adjusted HR, 2.635; 95% CI, 1.100-6.313; P = 0.0297) and 60- to 70-year (adjusted HR, 2.544; 95% CI, 1.059-6.110; P = 0.0368) age groups exhibited higher risk of RVO compared with the <40-year age group. Hyperlipidemia (adjusted HR, 1.670; 95% CI, 1.176-2.371; P = 0.0042) and hypertension (adjusted HR, 1.896; 95% CI, 1.165-3.086; P = 0.01) were also associated with RVO. An association between ESRD and subsequent RVO development was found after adjustment for possible confounding factors.
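
    The adjusted hazard ratios reported here come from Cox regression; a generic sketch of such a model on hypothetical cohort data is shown below using the lifelines package (an assumed tool, not the software used in the study).

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter  # assumed dependency for this illustration

    # Hypothetical cohort: follow-up time (years), RVO event flag, and covariates
    rng = np.random.default_rng(2)
    n = 4000
    df = pd.DataFrame({
        "esrd": rng.binomial(1, 1 / 6, n),
        "age_60s": rng.binomial(1, 0.3, n),
        "hypertension": rng.binomial(1, 0.4, n),
    })
    risk = 0.02 * np.exp(0.75 * df["esrd"] + 0.6 * df["hypertension"])
    event_time = rng.exponential(1 / risk)
    df["time"] = np.minimum(event_time, 11.0)          # administrative censoring at 11 years
    df["rvo"] = (event_time <= 11.0).astype(int)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="rvo")
    cph.print_summary()   # adjusted hazard ratios (exp(coef)) with 95% CIs
    ```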

  8. Perceptions of emotion expression and sibling-parent emotion communication in Latino and non-Latino white siblings of children with intellectual disabilities.

    PubMed

    Long, Kristin A; Lobato, Debra; Kao, Barbara; Plante, Wendy; Grullón, Edicta; Cheas, Lydia; Houck, Christopher; Seifer, Ronald

    2013-06-01

    Examine general emotion expression and sibling-parent emotion communication among Latino and non-Latino white (NLW) siblings of children with intellectual disabilities (ID) and matched comparisons. 200 siblings (ages 8-15 years) completed the newly developed Sibling-Parent Emotion Communication Scale and existing measures of general emotion expression and psychosocial functioning. Preliminary analyses evaluated scale psychometrics across ethnicity. Structure and internal consistency of the emotion expression and communication measures differed by respondent ethnicity. Latino siblings endorsed more general emotion expression problems and marginally lower sibling-parent emotion communication than NLW siblings. Siblings of children with ID reported marginally more general emotion expression problems than comparisons. Emotion expression problems and lower sibling-parent emotion communication predicted more internalizing and somatic symptoms and poorer personal adjustment, regardless of ID status. Siblings of children with ID endorsed poorer personal adjustment. Cultural differences in emotion expression and communication may increase Latino siblings' risk for emotional adjustment difficulties.

  9. Upper ankle joint space detection on low contrast intraoperative fluoroscopic C-arm projections

    NASA Astrophysics Data System (ADS)

    Thomas, Sarina; Schnetzke, Marc; Brehler, Michael; Swartman, Benedict; Vetter, Sven; Franke, Jochen; Grützner, Paul A.; Meinzer, Hans-Peter; Nolden, Marco

    2017-03-01

    Intraoperative mobile C-arm fluoroscopy is widely used for interventional verification in trauma surgery, high flexibility combined with low cost being the main advantages of the method. However, the lack of global device-to-patient orientation is challenging when comparing the acquired data to other intrapatient datasets. In upper ankle joint fracture reduction accompanied with an unstable syndesmosis, a comparison to the unfractured contralateral site is helpful for verification of the reduction result. To reduce dose and operation time, our approach aims at the comparison of single projections of the unfractured ankle with volumetric images of the reduced fracture. For precise assessment, a pre-alignment of both datasets is a crucial step. We propose a contour extraction pipeline to estimate the joint space location for a pre-alignment of fluoroscopic C-arm projections containing the upper ankle joint. A quadtree-based hierarchical variance comparison extracts potential feature points and a Hough transform is applied to identify bone shaft lines together with the tibiotalar joint space. By using this information we can define the coarse orientation of the projections independent from the ankle pose during acquisition in order to align those images to the volume of the fractured ankle. The proposed method was evaluated on thirteen cadaveric datasets consisting of 100 projections each with manually adjusted image planes by three trauma surgeons. The results show that the method can be used to detect the joint space orientation. The correlation between angle deviation and anatomical projection direction gives valuable input on the acquisition direction for future clinical experiments.
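
    The bone-shaft line detection step relies on a standard Hough transform; the sketch below applies OpenCV's HoughLines to a synthetic stand-in image and is not the authors' full contour-extraction pipeline.

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-in for a fluoroscopic projection: a dark image with one bright,
    # roughly bone-shaft-like line; a real pipeline would start from the C-arm image.
    img = np.zeros((512, 512), dtype=np.uint8)
    cv2.line(img, (100, 480), (300, 40), 255, 5)

    edges = cv2.Canny(img, 50, 150)
    # rho resolution 1 px, theta resolution 1 degree, accumulator threshold 150 votes
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)
    for rho, theta in lines[:, 0]:
        print(f"detected line: rho = {rho:.1f} px, theta = {np.degrees(theta):.1f} deg")
    ```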

  10. Meeting Contraceptive Needs: Long-Term Associations of the PRACHAR Project with Married Women's Awareness and Behavior in Bihar.

    PubMed

    Jejeebhoy, Shireen J; Prakash, Ravi; Acharya, Rajib; Singh, Santosh K; Daniel, Elkan

    2015-09-01

    Although interventions such as the PRACHAR project in Bihar, India, have been associated with increased contraceptive knowledge and use in the short term, less is known about whether such gains are sustained years later. Survey data, collected in 2013 from 2,846 married women aged 15-34, were used to compare contraceptive awareness and use between those who lived in areas where the PRACHAR project had been implemented in 2002-2009 and those who lived in matched comparison areas. Multivariate analyses assessed whether, after adjustment for covariates, outcomes differed between women in comparison and intervention areas, as well as between women directly exposed to the program and those who lived in intervention areas but had been only indirectly exposed. Compared with women in comparison areas, those in intervention areas were more likely to have method-specific knowledge of oral contraceptives, IUDs, condoms and the Standard Days Method (odds ratios, 1.4-1.7); to know that oral contraceptives and condoms are appropriate for delaying first pregnancy (2.3 for each) and IUDs and injectables are appropriate for spacing births (1.4 for each); to have ever used contraceptives (2.1) or be using a modern method (1.5); and to have initiated contraception within three months of their first birth (1.8). Levels of awareness and use were elevated not only among women directly exposed to the intervention but also, for many measures, among indirectly exposed women. The association of multipronged reproductive health programs like PRACHAR with contraceptive awareness and practices may last for years beyond the project's conclusion.

  11. Reference Charts for Height and Weight of School Children from West Malaysia in Comparison with the United States Centers for Disease Control and Prevention

    PubMed Central

    Bong, YB; Shariff, AA; Majid, AM; Merican, AF

    2012-01-01

    Background: Reference charts are widely used in healthcare as a screening tool. This study aimed to produce reference growth charts for school children from West Malaysia in comparison with the United States Centers for Disease Control and Prevention (CDC) chart. Methods: Data were collected for a total of 14,360 school children aged 7 to 17 years from six states in West Malaysia. A two-stage stratified random sampling technique was used to recruit the subjects. Curves were adjusted using Cole’s LMS method. The LOWESS method was used to smooth the data. Results: The means and standard deviations for height and weight for both genders are presented. The results showed good agreement with growth patterns in other countries, i.e., males tend to be taller and heavier than females for most age groups. Height and weight of females reached a plateau at 17 years of age; however, males were still growing at this age. The growth charts for West Malaysian school children were compared with the CDC 2000 growth charts for school children in the United States. Conclusion: The height and weight of males and females at the start of the school-going ages were similar. The comparison between the growth charts from this study and the CDC 2000 growth charts indicated that the growth patterns of West Malaysian school children have improved, although the height and weight of American school children were higher than those of West Malaysian school children. PMID:23113132
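
    The LOWESS smoothing step can be illustrated in a few lines; the example below smooths hypothetical median heights by age with statsmodels and is not the published LMS-based chart construction.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Hypothetical median heights by age (cm); the published charts used Cole's LMS
    # method for the percentile curves, with LOWESS-type smoothing of the raw data.
    age = np.arange(7, 18)                             # years, 7..17
    height = np.array([121, 126, 131, 137, 142, 148, 154, 158, 161, 163, 164],
                      dtype=float) + np.random.default_rng(0).normal(0, 0.8, 11)

    smoothed = lowess(height, age, frac=0.5, return_sorted=True)
    for a, h in smoothed:
        print(f"age {a:4.1f}  smoothed height {h:6.1f} cm")
    ```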

  12. Boston children's hospital community asthma initiative: Five-year cost analyses of a home visiting program.

    PubMed

    Bhaumik, Urmi; Sommer, Susan J; Giller-Leinwohl, Judith; Norris, Kerri; Tsopelas, Lindsay; Nethersole, Shari; Woods, Elizabeth R

    2017-03-01

    To evaluate the costs and benefits of the Boston Children's Hospital Community Asthma Initiative (CAI) through reduction of Emergency Department (ED) visits and hospitalizations for the full pilot-phase program participants. A cost-benefit analysis was conducted using hospital administrative data to determine an adjusted Return on Investment (ROI) for all 268 patients enrolled in the CAI program during its 33-month pilot phase (October 1, 2005 to June 30, 2008), using a comparison group of 818 patients from a similar cohort in neighboring ZIP codes without CAI intervention. Cost data through June 30, 2013 were used to examine cost changes and calculate an adjusted ROI over a 5-year post-intervention period. CAI patients had a cost reduction greater than the comparison group of $1,216 in Year 1 (P = 0.001), $1,320 in Year 2 (P < 0.001), $1,132 (P = 0.002) in Year 3, $1,123 (P = 0.004) in Year 4, and $997 (P = 0.022) in Year 5. Adjusting for the cost savings for the comparison group, the cost savings from the intervention resulted in an adjusted ROI of 1.91 over 5 years. Community-based, multidisciplinary, coordinated disease management programs can decrease the incidence of costly hospitalizations and ED visits from asthma. An ROI of greater than one, as found in this cost analysis, supports the business case for the provision of community-based asthma services as part of patient-centered medical homes and Accountable Care Organizations.

  13. Flexible sampling large-scale social networks by self-adjustable random walk

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

    Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to the problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. Especially, most of the empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods, including uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We try to mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method has been able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling by comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
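
    For orientation, a plain random-walk sampler (the baseline that SARW improves on, not SARW itself) can be written as below with networkx; the final print illustrates the well-known degree bias of unadjusted random walks.

    ```python
    import random
    import networkx as nx

    def random_walk_sample(graph, n_nodes, seed=0):
        """Collect `n_nodes` distinct nodes by a plain random walk (restarting at a
        random node if the walk hits a dead end)."""
        rng = random.Random(seed)
        current = rng.choice(list(graph.nodes))
        sampled = {current}
        while len(sampled) < n_nodes:
            neighbours = list(graph.neighbors(current))
            if not neighbours:
                current = rng.choice(list(graph.nodes))
                continue
            current = rng.choice(neighbours)
            sampled.add(current)
        return sampled

    g = nx.barabasi_albert_graph(100_000, 3, seed=1)   # synthetic stand-in for a large OSN
    sample = random_walk_sample(g, 2_000)
    print("mean degree, population vs sample:",
          sum(d for _, d in g.degree()) / g.number_of_nodes(),
          sum(g.degree(n) for n in sample) / len(sample))   # RW over-samples high-degree nodes
    ```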

  14. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be of lower quality in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on a 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  15. Comparison of Three Exit-Area Control Devices on an N.A.C.A. Cowling, Special Report

    NASA Technical Reports Server (NTRS)

    McHugh, James G.

    1940-01-01

    Adjustable cowling flaps, an adjustable-length cowling skirt, and a bottom opening with adjustable flap were tested as means of controlling the rate of cooling-air flow through an air-cooled radial-engine cowling. The devices were tested in the NACA 20-foot tunnel on a model wing-nacelle-propeller combination, through an airspeed range of 20 to 80 miles per hour, and with the propeller blade angle set at 23 degrees at 0.75 of the tip radius. The resistance of the engine to air flow through the cowling was simulated by a perforated plate. The results indicated that the adjustable cowling flap and the bottom opening with adjustable flap were about equally effective on the basis of pressure drop obtainable and that both were more effective means of increasing the pressure drop through the cowling than the adjustable-length skirt. At conditions of equal cooling-air flow, the net efficiency obtained with the adjustable cowling flaps and the adjustable-length cowling skirt was about 1% greater than the net efficiency obtained with the bottom opening with adjustable flap.

  16. Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data

    NASA Astrophysics Data System (ADS)

    White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.

    2017-12-01

    As part of its mission to research and measure the effects of the changing climate, the U. S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to vary from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences to support decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework for water resources decision makers to plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match the locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between the locally observed patterns and the global climate model structures for decision makers.
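
    One common form of the statistical adjustment discussed here is empirical quantile mapping; the sketch below applies it to synthetic flows with NumPy and is meant only as a generic illustration of the idea, not USACE's procedure.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        """Empirical quantile mapping: adjust model values so the historical model
        distribution matches observations, then apply the same transfer function
        to future model output."""
        quantiles = np.linspace(0.01, 0.99, 99)
        model_q = np.quantile(model_hist, quantiles)
        obs_q = np.quantile(obs_hist, quantiles)
        # Map each future value through the model-to-observation quantile transfer
        return np.interp(model_future, model_q, obs_q)

    rng = np.random.default_rng(3)
    obs = rng.gamma(2.0, 10.0, 5000)      # observed daily flows (hypothetical units)
    model = rng.gamma(2.0, 13.0, 5000)    # raw GCM-driven flows, biased high
    future = rng.gamma(2.2, 13.0, 5000)
    print("raw future mean:", future.mean(),
          "adjusted future mean:", quantile_map(model, obs, future).mean())
    ```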

  17. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
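
    The Levenberg-Marquardt fitting step can be sketched outside VisKin as well; the example below fits the two rate constants of a simple A→B→C scheme to synthetic data with SciPy's curve_fit (a generic illustration, not the VisKin code).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit   # Levenberg-Marquardt when the fit is unbounded

    # [B](t) for the two-step scheme A -> B -> C with rate constants k1, k2 and [A](0)=a0
    def b_of_t(t, k1, k2, a0=1.0):
        return a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

    t = np.linspace(0.0, 10.0, 60)
    rng = np.random.default_rng(4)
    data = b_of_t(t, 0.8, 0.25) + rng.normal(0, 0.01, t.size)   # synthetic "experiment"

    (k1_fit, k2_fit), cov = curve_fit(b_of_t, t, data, p0=[0.5, 0.5])
    print("fitted k1, k2:", k1_fit, k2_fit)
    ```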

  18. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require the parameters of projector calibration or the reference planes placed in many known positions. Obviously, introducing the projector calibration can reduce the accuracy of the reconstruction result, and setting the reference planes to many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method without projector's parameters is proposed and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes combined with their reflected light beams determined by the camera model are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure the accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of our proposed approach, and the measurement accuracy can reach about 0.0454 mm.
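
    The phase-shifting ingredient of such systems is compact enough to show directly; the sketch below recovers the wrapped phase from four synthetic fringe images shifted by π/2 each (a generic four-step formula, not the authors' complementary gray-code variant).

    ```python
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images I_k = A + B*cos(phi + k*pi/2), k=0..3."""
        return np.arctan2(i4 - i2, i1 - i3)

    # Synthetic check on a 1-D fringe pattern
    x = np.linspace(0, 4 * np.pi, 400)
    phi_true = x % (2 * np.pi) - np.pi
    frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
    phi = four_step_phase(*frames)
    print("max wrapped-phase error:",
          np.max(np.abs(np.angle(np.exp(1j * (phi - phi_true))))))
    ```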

  19. The Adjustment Problems Faced by International and Overseas Chinese Students Studying in Taiwan Universities: A Comparison of Student and Faculty/Staff Perceptions

    ERIC Educational Resources Information Center

    Jenkins, John R.; Galloway, Fred

    2009-01-01

    Over the last 15 years the number of international students studying at universities in Taiwan has increased dramatically; however, to date, there have been few studies that measured the cultural adjustment problems that this diverse group of students experience. To remedy this problem, this study gathered data from 1,174 international students…

  20. The Adolescent Adjustment Profile (AAP) in comparisons of patients with obesity, phenylketonuria or neurobehavioural disorders.

    PubMed

    Olsson, Gunilla Maria; Mårild, Staffan; Alm, Jan; Brodin, Ulf; Rydelius, Per-Anders; Marcus, Claude

    2008-01-01

    Psychosocial development in children with chronic disease is a key issue in paediatrics. This study investigated whether psychosocial adjustment could be reliably assessed with the 42-item Adolescent Adjustment Profile (AAP) instrument. The study mainly focused on adjustment-to-obesity measurement, although it compared three patient groups with chronic conditions. All phenylketonuria (PKU) patients in Sweden between ages 9 and 18 and their parents and teachers were invited to participate. Patients with neurobehavioural syndromes and obesity were age- and gender-matched with PKU patients. Healthy children constituted a reference group. Psychosocial adjustment was measured using the AAP, which is a multi-informant questionnaire that contains four domains. Information concerning parents' socio-economic and civil status was requested separately. Respondents to the three questionnaires judged the PKU patients to be normal in all four domains. Patients with neurobehavioural syndromes demonstrated less competence and the most problems compared with the other three groups. According to the self-rating, the parent rating and the teacher rating questionnaires, obese patients had internalizing problems. The parent rating and the teacher rating questionnaire scored obese patients as having a lower work capacity than the reference group. Compared with the reference group, not only families with obese children but also families with children with neurobehavioural syndromes had significantly higher divorce rates. Obese patients were also investigated with the Strength and Difficulties Questionnaire (SDQ), another instrument that enables comparison between two measures of adjustment. The AAP had good psychometric properties; it was judged a useful instrument in research on adolescents with chronic diseases.

  1. Evaluating the validity of multiple imputation for missing physiological data in the national trauma data bank.

    PubMed

    Moore, Lynne; Hanley, James A; Lavoie, André; Turgeon, Alexis

    2009-05-01

    The National Trauma Data Bank (NTDB) is plagued by the problem of missing physiological data. The Glasgow Coma Scale score, Respiratory Rate and Systolic Blood Pressure are an essential part of risk adjustment strategies for trauma system evaluation and clinical research. Missing data on these variables may compromise the feasibility and the validity of trauma group comparisons. To evaluate the validity of Multiple Imputation (MI) for completing missing physiological data in the NTDB by assessing the impact of MI on (1) frequency distributions, (2) associations with mortality, and (3) risk adjustment. Analyses were based on 170,956 NTDB observations with complete physiological data (observed data set). Missing physiological data were artificially imposed on this data set and then imputed using MI (MI data set). To assess the impact of MI on risk adjustment, 100 pairs of hospitals were randomly selected with replacement and compared using adjusted Odds Ratios (OR) of mortality. OR generated by the observed data set were then compared to those generated by the MI data set. Frequency distributions and associations with mortality were preserved following MI. The median absolute difference between adjusted OR of mortality generated by the observed data set and by the MI data set was 3.6% (inter-quartile range: 2.4%-6.1%). This study suggests that, provided it is implemented with care, MI of missing physiological data in the NTDB leads to valid frequency distributions, preserves associations with mortality, and does not compromise risk adjustment in inter-hospital comparisons of mortality.
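
    A minimal multiple-imputation workflow in the spirit of this evaluation can be sketched with scikit-learn's IterativeImputer on hypothetical trauma records; it only illustrates the idea of generating several completed data sets and is not the software or procedure used in the study.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the estimator)
    from sklearn.impute import IterativeImputer

    # Hypothetical trauma records with missing physiology (GCS, RR, SBP)
    rng = np.random.default_rng(5)
    n = 2000
    df = pd.DataFrame({
        "age": rng.integers(16, 90, n).astype(float),
        "gcs": rng.integers(3, 16, n).astype(float),
        "rr": rng.normal(18, 4, n),
        "sbp": rng.normal(120, 20, n),
    })
    for col in ["gcs", "rr", "sbp"]:
        df.loc[rng.random(n) < 0.25, col] = np.nan     # 25% missing at random

    # One completed data set per random state; analyses would then be pooled over the m sets
    m = 5
    completed = [pd.DataFrame(
        IterativeImputer(random_state=k, sample_posterior=True).fit_transform(df),
        columns=df.columns) for k in range(m)]
    print("mean imputed SBP across data sets:", [round(c["sbp"].mean(), 1) for c in completed])
    ```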

  2. Science Teachers' Perceptions of and Approaches towards Students' Misconceptions on Photosynthesis: A Comparison Study between US and Korea

    ERIC Educational Resources Information Center

    Seo, Kyungwoon; Park, Soonhye; Choi, Aeran

    2017-01-01

    A critical component of teacher effectiveness is how teachers notice students' misconceptions and adjust the instructional approach accordingly. Taking a stance that the teachers' instructional quality is crucial to students' learning, a qualitative international comparison study was performed to examine science teachers' perceptions of and their…

  3. Sibling Comparison of Differential Parental Treatment in Adolescence: Gender, Self-Esteem, and Emotionality as Mediators of the Parenting-Adjustment Association.

    ERIC Educational Resources Information Center

    Feinberg, Mark E.; Neiderhiser, Jenae M.; Simmens, Sam; Reiss, David; Hetherington, E. Mavis

    2000-01-01

    Compared adolescent siblings' evaluations of parental treatment. Found support for a moderating effect for self-esteem and emotionality but not gender. Evidence of the "sibling barricade" effect was limited and interpreted as reflecting a sibling comparison process. For older siblings, emotionality and self-esteem moderated the sibling…

  4. An Illustration to Assist in Comparing and Remembering Several Multiplicity Adjustment Methods

    ERIC Educational Resources Information Center

    Hasler, Mario

    2017-01-01

    There are many well-known or new methods to adjust statistical tests for multiplicity. This article provides an illustration helping lecturers or consultants to remember the differences of three important multiplicity adjustment methods and to explain them to non-statisticians.
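
    The differences between such adjustment methods are easy to see numerically; the sketch below runs Bonferroni, Holm, and Benjamini-Hochberg adjustments on a hypothetical set of p-values with statsmodels (this record does not name the specific three methods illustrated in the article).

    ```python
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    p_values = np.array([0.001, 0.008, 0.012, 0.041, 0.049, 0.20, 0.65])

    for method in ["bonferroni", "holm", "fdr_bh"]:
        reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
        print(f"{method:10s} adjusted p: {np.round(p_adj, 3)}  rejected: {reject.sum()}")
    ```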

  5. A Numerical Study of Three Moving-Grid Methods for One-Dimensional Partial Differential Equations Which Are Based on the Method of Lines

    NASA Astrophysics Data System (ADS)

    Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.

    1990-08-01

    In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
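
    As background for readers unfamiliar with the MOL idea, the sketch below semi-discretizes the 1-D heat equation on a fixed grid and hands the resulting ODE system to a stiff solver; the moving-grid methods compared in the paper additionally evolve the grid nodes, which this minimal example does not do.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # u_t = u_xx on [0, 1] with u(0, t) = u(1, t) = 0, central differences in space
    n = 101
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    u0 = np.sin(np.pi * x)

    def rhs(t, u):
        dudt = np.zeros_like(u)
        dudt[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # interior nodes
        return dudt                                             # boundary nodes stay at 0

    sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-6, atol=1e-8)
    exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
    print("max error at t = 0.1:", np.max(np.abs(sol.y[:, -1] - exact)))
    ```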

  6. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic.

    PubMed

    Llullaku, Sadik S; Hyseni, Nexhmi Sh; Bytyçi, Cen I; Rexhepi, Sylejman K

    2009-01-15

    Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma and Injury Severity Score (TRISS) method is focused on trauma outcome (deaths and survivors). The TRISS misclassification rate is used for testing the method. Calculating the w-statistic, as the difference between observed and TRISS-expected survivors, allows trauma care results to be compared with the TRISS standard. The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Analysis of the components of the TRISS misclassification rate and w-statistic and of actual trauma outcome. The false negative (FN) component (deaths unexpected by the TRISS method) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd reflects inappropriate trauma care at an institution, whereas non-preventable trauma deaths reflect errors in the TRISS method. Removing patients with preventable trauma deaths gives an adjusted misclassification rate: (FP + FN - Pd)/N or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula gives an adjusted w-statistic: [FP - (FN - nonPd)]/N, that is, (FP - Pd)/N or (b - Pd)/N. Because the adjusted formulas remove the effect of inappropriate trauma care from the method and the method's error from the assessment of trauma care, the TRISS adjusted misclassification rate and adjusted w-statistic give more realistic results and may be used in research on trauma outcome.
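
    The adjusted quantities defined in this abstract reduce to simple arithmetic once the preventable-death count is known; a small sketch with hypothetical numbers:

    ```python
    def adjusted_triss_stats(fp, fn, preventable_deaths, n):
        """Adjusted misclassification rate and adjusted w-statistic as defined above:
        FP = unexpected survivors, FN = unexpected deaths, Pd = preventable deaths
        (a subset of FN), N = number of patients."""
        pd_ = preventable_deaths
        adj_misclassification = (fp + fn - pd_) / n      # (b + c - Pd) / N
        adj_w = (fp - pd_) / n                           # [FP - (FN - nonPd)] / N = (b - Pd) / N
        return adj_misclassification, adj_w

    # Hypothetical numbers: 1000 patients, 30 unexpected survivors, 25 unexpected deaths,
    # of which 10 were judged preventable
    print(adjusted_triss_stats(fp=30, fn=25, preventable_deaths=10, n=1000))
    ```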

  7. The network adjustment aimed for the campaigned gravity survey using a Bayesian approach: methodology and model test

    NASA Astrophysics Data System (ADS)

    Chen, Shi; Liao, Xu; Ma, Hongsheng; Zhou, Longquan; Wang, Xingzhou; Zhuang, Jiancang

    2017-04-01

    The relative gravimeter, which generally uses a zero-length spring as the gravity sensor, is still the first choice for terrestrial gravity measurement because of its efficiency and low cost. Because the drift rate of the instrument changes with time and from meter to meter, estimating the drift rate normally requires returning to base stations or stations with known gravity values for repeated measurements at regular intervals of a few hours during a survey. For campaigned gravity surveys over large regions, however, where stations are several to tens of kilometers apart, such frequent returns greatly reduce survey efficiency and are extremely time-consuming. In this paper, we propose a new gravity data adjustment method that estimates the meter drift by means of Bayesian statistical inference. In our approach, we assume the drift rate changes as a smooth function of elapsed time. Trade-off parameters are used to control the fitting residuals, and Akaike's Bayesian Information Criterion (ABIC) is employed to estimate these trade-off parameters. A comparison and analysis of simulated data between the classical and the Bayesian adjustment show that our method is robust and adapts automatically to irregular, non-linear meter drift. Finally, we used this novel approach to process real campaigned gravity data from North China. Our adjustment method is able to recover the time-varying drift-rate function of each meter and to detect abnormal meter drift during the survey. We also derive an alternative error estimate for the inverted gravity value at each station on the basis of marginal distribution theory. Acknowledgment: This research is supported by the Science Foundation of the Institute of Geophysics, CEA, from the Ministry of Science and Technology of China (Nos. DQJB16A05; DQJB16B07), and the China National Special Fund for Earthquake Scientific Research in Public Interest (Nos. 201508006; 201508009).
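
    A greatly simplified, assumption-laden sketch of the underlying idea (joint least-squares estimation of station values and a smooth meter drift, with a fixed trade-off parameter in place of the ABIC selection described above) is given below on synthetic relative-gravity readings.

    ```python
    import numpy as np

    # Synthetic relative survey: each reading = station value + meter drift(t) + noise
    rng = np.random.default_rng(6)
    n_obs, n_sta, n_knot = 200, 20, 25
    t = np.sort(rng.uniform(0, 10, n_obs))                 # observation times (days)
    sta = rng.integers(0, n_sta, n_obs)                    # which station was read
    g_true = rng.normal(0, 500, n_sta)                     # relative station gravity
    drift_true = 15 * np.sin(t / 3) + 4 * t                # slowly varying meter drift
    y = g_true[sta] + drift_true + rng.normal(0, 5, n_obs)

    # Piecewise-linear drift on a knot grid, with a second-difference smoothness penalty
    knots = np.linspace(t.min(), t.max(), n_knot)
    B = np.maximum(0, 1 - np.abs(t[:, None] - knots[None, :]) / (knots[1] - knots[0]))
    A = np.zeros((n_obs, n_sta + n_knot))
    A[np.arange(n_obs), sta] = 1.0
    A[:, n_sta:] = B

    D = np.diff(np.eye(n_knot), 2, axis=0)                 # second-difference operator
    lam = 10.0                                             # trade-off parameter (ABIC would pick this)
    P = np.zeros((D.shape[0], n_sta + n_knot)); P[:, n_sta:] = D
    x = np.linalg.lstsq(np.vstack([A, lam * P]),
                        np.concatenate([y, np.zeros(D.shape[0])]), rcond=None)[0]

    # Relative survey: drift and station values share a common constant, so compare
    # the recovered drift after removing the mean offset
    err = B @ x[n_sta:] - drift_true
    err -= err.mean()
    print("drift RMS error after datum alignment:", np.sqrt(np.mean(err**2)))
    ```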

  8. Hormone-mediated adjustment of sex ratio in vertebrates.

    PubMed

    Navara, Kristen J

    2013-12-01

    The ability to adjust sex ratios at the individual level exists among all vertebrate groups studied to date. In many cases, there is evidence for facultative adjustment of sex ratios in response to environmental and/or social cues. Because environmental and social information must be first transduced into a physiological signal to influence sex ratios, hormones likely play a role in the adjustment of sex ratio in vertebrates, because the endocrine system acts as a prime communicator that directs physiological activities in response to changing external conditions. This symposium was developed to bring together investigators whose work on adjustment of sex ratio represents a variety of vertebrate groups in an effort to draw comparisons between species in which the sex-determination process is well-established and those in which more work is needed to understand how adjustments in sex ratio are occurring. This review summarizes potential hormone targets that may underlie the mechanisms of adjustment of sex ratio in humans, non-human mammals, birds, reptiles, and fishes.

  9. 77 FR 29982 - Federal Acquisition Regulation; Submission for OMB Review; Davis Bacon Act-Price Adjustment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-21

    ...; Submission for OMB Review; Davis Bacon Act-Price Adjustment (Actual Method) AGENCY: Department of Defense... (actual method). A notice was published in the Federal Register at 77 FR 13328, on March 6, 2012. No... Bacon Act-Price Adjustment (Actual Method), by any of the following methods: Regulations.gov : http...

  10. State-of-the-Art: DTM Generation Using Airborne LIDAR Data

    PubMed Central

    Chen, Ziyue; Gao, Bingbo; Devereux, Bernard

    2017-01-01

    Digital terrain model (DTM) generation is the fundamental application of airborne Lidar data. In past decades, a large body of studies has been conducted to propose and test a variety of DTM generation methods. Although great progress has been made, DTM generation, especially DTM generation in specific terrain situations, remains challenging. This research introduces the general principles of DTM generation and reviews diverse mainstream DTM generation methods. In accordance with the filtering strategy, these methods are classified into six categories: surface-based adjustment, morphology-based filtering, triangulated irregular network (TIN)-based refinement, segmentation and classification, statistical analysis, and multi-scale comparison. Typical methods for each category are briefly introduced and the merits and limitations of each category are discussed accordingly. Despite different categories of filtering strategies, these DTM generation methods present similar difficulties when implemented in sharply changing terrain, areas with dense non-ground features and complicated landscapes. This paper suggests that the fusion of multiple data sources and the integration of different methods can be effective ways for improving the performance of DTM generation. PMID:28098810
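
    As a concrete taste of one category, the sketch below implements a minimal morphology-based ground filter with SciPy on a synthetic surface; the filters reviewed in the paper are considerably more elaborate.

    ```python
    import numpy as np
    from scipy.ndimage import grey_opening

    # A grey-scale opening of the surface approximates the bare ground, and cells rising
    # more than a height threshold above that estimate are labelled non-ground.
    rng = np.random.default_rng(9)
    xx, yy = np.meshgrid(np.arange(200), np.arange(200))
    dsm = 0.05 * xx + 2.0 * np.sin(yy / 30) + rng.normal(0, 0.05, (200, 200))  # gentle terrain
    dsm[60:80, 60:90] += 8.0                                   # a building-sized off-terrain object

    opened = grey_opening(dsm, size=(35, 35))                  # window wider than the building
    non_ground = (dsm - opened) > 0.5                          # height threshold in metres
    print("cells labelled non-ground:", int(non_ground.sum()))  # roughly the 20 x 30 building
    ```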

  11. Development and validation of a discriminating in vitro dissolution method for a poorly soluble drug, olmesartan medoxomil: comparison between commercial tablets.

    PubMed

    Bajerski, Lisiane; Rossi, Rochele Cassanta; Dias, Carolina Lupi; Bergold, Ana Maria; Fröehlich, Pedro Eduardo

    2010-06-01

    A dissolution test for tablets containing 40 mg of olmesartan medoxomil (OLM) was developed and validated using both LC-UV and UV methods. After evaluation of the sink condition, dissolution medium, and stability of the drug, the method was validated using USP apparatus 2, 50 rpm rotation speed, and 900 ml of deaerated H2O + 0.5% sodium lauryl sulfate (w/v) at pH 6.8 (adjusted with 18% phosphoric acid) as the dissolution medium. The model-independent method using the difference factor (f1) and similarity factor (f2), the model-dependent method, and dissolution efficiency were employed to compare dissolution profiles. The kinetic parameters of drug release were also investigated. The obtained results provided adequate dissolution profiles. The developed dissolution test was validated according to international guidelines. Since there is no monograph for this drug in tablets, the dissolution method presented here can be used as a quality control test for OLM in this dosage form, especially in a batch to batch evaluation.
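
    The model-independent comparison uses the standard difference and similarity factors, which are straightforward to compute; the sketch below evaluates f1 and f2 for two hypothetical dissolution profiles (the numbers are illustrative, not the study's data).

    ```python
    import numpy as np

    def f1_f2(reference, test):
        """Difference factor f1 and similarity factor f2 for two dissolution profiles
        (percent dissolved at the same time points)."""
        r, t = np.asarray(reference, float), np.asarray(test, float)
        f1 = 100.0 * np.abs(r - t).sum() / r.sum()
        f2 = 50.0 * np.log10(100.0 / np.sqrt(1.0 + np.mean((r - t) ** 2)))
        return f1, f2

    # Hypothetical OLM profiles (% dissolved at 5, 10, 15, 20, 30 min) for two brands
    reference = [32, 55, 71, 82, 93]
    test      = [28, 51, 68, 80, 91]
    f1, f2 = f1_f2(reference, test)
    print(f"f1 = {f1:.1f} (similar if < 15), f2 = {f2:.1f} (similar if > 50)")
    ```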

  12. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
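
    For contrast with the noncentrality-based measure proposed here, the sketch below computes a commonly cited design-effect approximation that inflates the design effect by the coefficient of variation of cluster sizes; it is a generic reference point under that assumption, not the article's method.

    ```python
    import numpy as np

    def design_effect(mean_cluster_size, cv_cluster_size, icc):
        """Common design-effect approximation for variable cluster sizes:
        1 + ((CV^2 + 1) * mean_size - 1) * ICC."""
        m, cv = mean_cluster_size, cv_cluster_size
        return 1.0 + ((cv**2 + 1.0) * m - 1.0) * icc

    n_individual = 128   # hypothetical per-group n for an individually randomized t-test
    for cv in [0.0, 0.4, 0.8]:
        deff = design_effect(mean_cluster_size=20, cv_cluster_size=cv, icc=0.05)
        print(f"CV={cv:.1f}: design effect {deff:.2f}, "
              f"clusters per group ~ {int(np.ceil(n_individual * deff / 20))}")
    ```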

  13. [Electromagnetic field of the mobile phone base station: case study].

    PubMed

    Bieńkowski, Paweł; Zubrzak, Bartłomiej; Surma, Robert

    2011-01-01

    The paper presents changes in the electromagnetic field intensity in a school building and its surroundings after a mobile phone base station was installed on the roof of the school. The EMF intensity measured before the base station was launched (electromagnetic background measurement) is compared with that measured after it started operating (two independent control measurements). Analyses of the measurements are presented, and the authors also propose a method of adjusting the electromagnetic field distribution in the area of the antennas' side lobes to reduce the EMF level in the proximity of the base station. The presented method involves adjusting the antenna inclination. On the basis of the measurements, it was found that the EMF intensity increased in the building and its surroundings, but the measured values still met the requirements of the Polish law on environmental protection by a wide margin.

  14. Research on hybrid transmission mode for HVDC with optimal thermal power and renewable energy combination

    NASA Astrophysics Data System (ADS)

    Zhang, Jinfang; Yan, Xiaoqing; Wang, Hongfu

    2018-02-01

    With the rapid development of renewable energy in Northwest China, curtailment is becoming more and more severe owing to a lack of adjustment capability and of sufficient transmission capacity. Building on existing HVDC projects, exploring a hybrid transmission mode that combines thermal power and renewable power is therefore necessary and important. This paper proposes a method for finding the optimal combination of thermal power and renewable energy for HVDC lines based on multi-scheme comparison. Having established a mathematical model for electric power balance in time-series mode, ten different schemes were simulated to identify the most suitable one. Using the proposed assessment criteria, including generation equipment utilization hours, the proportion of renewable energy electricity, and the curtailment level, a recommended scheme was identified. The results also validate the efficiency of the method.

  15. Determination of power and moment on shaft of special asynchronous electric drives

    NASA Astrophysics Data System (ADS)

    Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.

    2018-03-01

    In this article, the determination of the power and the moment on the shaft of special asynchronous electric drives is considered. The use of special asynchronous electric drives in mechanical engineering and other industries is relevant. The considered types of electric drives possess improved mass-dimensional characteristics in comparison with single-engine systems. These types of electric drives also have constructive advantages, and their improved characteristics make it possible to realize the technological process. However, the creation and design of new electric drives demands the adjustment of existing methods, or the development of new methods and approaches, for calculating their parameters. Determining the power and the moment on the shaft of special asynchronous electric drives is the main objective during the design of such drives. This task has been solved based on a method of electromechanical transformation of energy.

  16. Time Domain Stability Margin Assessment Method

    NASA Technical Reports Server (NTRS)

    Clements, Keith

    2017-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
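
    The time-domain idea can be illustrated on a toy plant: simulate the closed loop, raise the gain in small steps, and record where the step response starts to grow. The sketch below does this for a delayed second-order system chosen for brevity; it is a generic illustration, not the SLS models or tools.

    ```python
    import numpy as np

    def step_response_growing(gain, delay=0.25, dt=0.001, t_end=30.0):
        """Simulate a unity-feedback loop around the toy plant 1/(s(s+1)) with a pure
        time delay and report whether the step-response oscillation is still growing
        near the end of the run."""
        n = int(t_end / dt)
        n_delay = int(round(delay / dt))
        buf = np.zeros(n_delay + 1)        # history of delayed actuator commands
        x1 = x2 = 0.0                      # plant states: output and its rate
        y = np.empty(n)
        for k in range(n):
            buf = np.roll(buf, 1)
            buf[0] = gain * (1.0 - x1)     # unit-step command, proportional feedback
            x2 += dt * (-x2 + buf[-1])     # x2' = -x2 + delayed input
            x1 += dt * x2                  # x1' = x2
            y[k] = x1
        dev = np.abs(y - 1.0)
        return dev[3 * n // 4:].max() > 1.02 * dev[n // 2: 3 * n // 4].max()

    gain = 1.0
    while not step_response_growing(gain):   # raise the gain until the response diverges
        gain += 0.25
    # For this plant and delay the frequency-domain gain margin is roughly 4, so the
    # time-domain estimate should land nearby.
    print("time-domain instability onset near gain", gain)
    ```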

  17. Apparatus and method for the spectrochemical analysis of liquids using the laser spark

    DOEpatents

    Cremers, David A.; Radziemski, Leon J.; Loree, Thomas R.

    1990-01-01

    A method and apparatus for the qualitative and quantitative spectroscopic investigation of elements present in a liquid sample using the laser spark. A series of temporally closely spaced spark pairs is induced in the liquid sample utilizing pulsed electromagnetic radiation from a pair of lasers. The light pulses are not significantly absorbed by the sample so that the sparks occur inside of the liquid. The emitted light from the breakdown events is spectrally and temporally resolved, and the time period between the two laser pulses in each spark pair is adjusted to maximize the signal-to-noise ratio of the emitted signals. In comparison with the single pulse technique, a substantial reduction in the limits of detectability for many elements has been demonstrated. Narrowing of spectral features results in improved discrimination against interfering species.

  18. Apparatus and method for the spectrochemical analysis of liquids using the laser spark

    DOEpatents

    Cremers, D.A.; Radziemski, L.J.; Loree, T.R.

    1984-05-01

    A method and apparatus are disclosed for the qualitative and quantitative spectroscopic investigation of elements present in a liquid sample using the laser spark. A series of temporally closely spaced spark pairs is induced in the liquid sample utilizing pulsed electromagnetic radiation from a pair of lasers. The light pulses are not significantly absorbed by the sample so that the sparks occur inside of the liquid. The emitted light from the breakdown events is spectrally and temporally resolved, and the time period between the two laser pulses in each spark pair is adjusted to maximize the signal-to-noise ratio of the emitted signals. In comparison with the single pulse technique, a substantial reduction in the limits of detectability for many elements has been demonstrated. Narrowing of spectral features results in improved discrimination against interfering species.

  19. Comparison of a web-based food record tool and a food-frequency questionnaire and objective validation using the doubly labelled water technique in a Swedish middle-aged population.

    PubMed

    Nybacka, Sanna; Bertéus Forslund, Heléne; Wirfält, Elisabet; Larsson, Ingrid; Ericson, Ulrika; Warensjö Lemming, Eva; Bergström, Göran; Hedblad, Bo; Winkvist, Anna; Lindroos, Anna Karin

    2016-01-01

    Two web-based dietary assessment tools have been developed for use in large-scale studies: the Riksmaten method (4-d food record) and MiniMeal-Q (food-frequency method). The aim of the present study was to examine the ability of these methods to capture energy intake against objectively measured total energy expenditure (TEE) with the doubly labelled water technique (TEE-DLW), and to compare reported energy and macronutrient intake. This study was conducted within the pilot study of the Swedish CArdioPulmonary bioImage Study (SCAPIS), which included 1111 randomly selected men and women aged 50-64 years from the Gothenburg general population. Of these, 200 were enrolled in the SCAPIS diet substudy. TEE-DLW was measured in a subsample (n 40). Compared with TEE-DLW, both methods underestimated energy intake: -2·5 (sd 2·9) MJ with the Riksmaten method; -2·3 (sd 3·6) MJ with MiniMeal-Q. Mean reporting accuracy was 80 and 82 %, respectively. The correlation between reported energy intake and TEE-DLW was r 0·4 for the Riksmaten method (P < 0·05) and r 0·28 (non-significant) for MiniMeal-Q. Women reported similar average intakes of energy and macronutrients with both methods, whereas men reported higher intakes with the Riksmaten method. Energy-adjusted correlations ranged from 0·14 (polyunsaturated fat) to 0·77 (alcohol). Bland-Altman plots showed acceptable agreement for energy and energy-adjusted protein and carbohydrate intake, whereas the agreement for fat intake was poorer. According to energy intake data, both methods displayed similar precision in energy intake reporting. However, MiniMeal-Q was less successful in ranking individuals than the Riksmaten method. The development of methods to achieve limited under-reporting is a major challenge for future research.
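
    As a rough illustration of the reporting-accuracy and correlation calculations used above, the sketch below compares hypothetical reported energy intakes with TEE values; the numbers are invented and only show the arithmetic, not the study data.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical reported energy intake (EI) and doubly-labelled-water TEE, in MJ/day.
    ei = np.array([8.1, 9.4, 7.2, 10.5, 6.8, 11.2, 9.0, 8.7])
    tee = np.array([10.9, 11.5, 9.8, 12.1, 9.2, 13.0, 10.4, 11.1])

    bias = np.mean(ei - tee)                       # mean under-reporting in MJ/day
    accuracy = 100.0 * np.mean(ei / tee)           # reporting accuracy, percent
    r, p = stats.pearsonr(ei, tee)                 # association between EI and TEE
    print(f"bias {bias:.1f} MJ, accuracy {accuracy:.0f} %, r = {r:.2f} (P = {p:.2f})")
    ```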

  20. Conceptualizing and Assessing Self-Enhancement Bias: A Componential Approach

    PubMed Central

    Kwan, Virginia S. Y.; Kuang, Lu Lu; John, Oliver P.; Robins, Richard W.

    2014-01-01

    Four studies implemented a componential approach to assessing self-enhancement and contrasted this approach with 2 earlier ones: social comparison (comparing self-ratings with ratings of others) and self-insight (comparing self-ratings with ratings by others). In Study 1, the authors varied the traits being rated to identify conditions that lead to more or less similarity between approaches. In Study 2, the authors examined the effects of acquaintance on the conditions identified in Study 1. In Study 3, the authors showed that using rankings renders the self-insight approach equivalent to the component-based approach but also has limitations in assessing self-enhancement. In Study 4, the authors compared the social-comparison and the component-based approaches in terms of their psychological implications; the relation between self-enhancement and adjustment depended on the self-enhancement approach used, and the positive-adjustment correlates of the social-comparison approach disappeared when the confounding influence of the target effect was controlled. PMID:18505318

  1. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
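
    A minimal sketch of a SARIMA-type fit with climatic regressors is shown below, using the statsmodels SARIMAX class on a simulated monthly series; the (p,d,q)(P,D,Q,s) orders, the covariate names (including the lag label), and the data are placeholders, not those used in the study.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical monthly series: notified cases with temperature and rainfall covariates.
    rng = np.random.default_rng(1)
    idx = pd.date_range("1990-01", periods=168, freq="MS")
    temp = 20 + 8 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 1, len(idx))
    rain = np.clip(40 - 15 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 5, len(idx)), 0, None)
    cases = np.round(30 + 1.5 * temp - 0.2 * rain + rng.normal(0, 5, len(idx)))

    exog = pd.DataFrame({"temp_lagged": temp, "rainfall": rain}, index=idx)
    y = pd.Series(cases, index=idx)

    # Seasonal ARIMA with climatic regressors; the order choices are placeholders.
    model = SARIMAX(y, exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
    res = model.fit(disp=False)
    print(res.summary().tables[1])

    # Crude out-of-sample check, reusing the last year's covariates as future exog.
    forecast = res.forecast(steps=12, exog=exog.iloc[-12:].to_numpy())
    ```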

  2. Comparison of cumulative dissipated energy between the Infiniti and Centurion phacoemulsification systems

    PubMed Central

    Chen, Ming; Anderson, Erik; Hill, Geoffrey; Chen, John J; Patrianakos, Thomas

    2015-01-01

    Purpose To compare cumulative dissipated energy between two phacoemulsification machines. Setting An ambulatory surgical center, Honolulu, Hawaii, USA. Design Retrospective chart review. Methods A total of 2,077 consecutive cases of cataract extraction by phacoemulsification performed by five surgeons from November 2012 to November 2014 were included in the study; 1,021 consecutive cases were performed using the Infiniti Vision System, followed by 1,056 consecutive cases performed using the Centurion Vision System. Results The Centurion phacoemulsification system required less energy to remove a cataractous lens with an adjusted average energy reduction of 38% (5.09 percent-seconds) (P<0.001) across all surgeons in comparison to the Infiniti phacoemulsification system. The reduction in cumulative dissipated energy was statistically significant for each surgeon, with a range of 29%–45% (2.25–12.54 percent-seconds) (P=0.005–<0.001). Cumulative dissipated energy for both the Infiniti and Centurion systems varied directly with patient age, increasing an average of 2.38 percent-seconds/10 years. Conclusion The Centurion phacoemulsification system required less energy to remove a cataractous lens in comparison to the Infiniti phacoemulsification system. PMID:26229430

  3. Comparison of spatial interpolation of rainfall with emphasis on extreme events

    NASA Astrophysics Data System (ADS)

    Amin, Kanwal; Duan, Zheng; Disse, Markus

    2017-04-01

    The sparseness of rain-gauge networks has long motivated scientists to find more robust ways to capture the spatial variability of precipitation. Turning Bands Simulation, External Drift Kriging, copula-based methods and Random Mixing are among these approaches. Remote sensing technologies, i.e., radar and satellite estimates, are widely used to provide a spatial profile of precipitation; however, during extreme events the accuracy of the resulting areal precipitation is still under discussion. The aim is to compare the areal hourly precipitation of a flood event from RADOLAN (radar online adjustment) with the gridded rainfall obtained via Turning Bands Simulation (TBM) and the Inverse Distance Weighting (IDW) method. The comparison mainly focuses on the uncertainty analysis of the areal precipitation obtained through these simulation and remote sensing techniques for the Upper Main Catchment. The results obtained from TBM, IDW and RADOLAN are considerably similar near the rain-gauge stations, but the degree of ambiguity grows with increasing distance from the gauges. Future research will compare forecasted gridded precipitation simulations with the real-time rainfall forecast system (RADVOR) to make the flood evacuation process more robust and efficient.
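
    Inverse Distance Weighting is straightforward to implement directly; the sketch below interpolates hypothetical hourly gauge values onto a grid. The gauge coordinates, rainfall values, and power exponent are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def idw(xy_gauges, values, xy_grid, power=2.0, eps=1e-12):
        """Inverse Distance Weighting: estimate rainfall on a grid from gauge values."""
        d = np.linalg.norm(xy_grid[:, None, :] - xy_gauges[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, eps) ** power          # nearby gauges dominate
        return (w * values).sum(axis=1) / w.sum(axis=1)

    # Hypothetical hourly gauge observations (x, y in km; rainfall in mm/h).
    gauges = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0], [12.0, 12.0]])
    rain = np.array([3.2, 1.1, 5.6, 0.4])
    gx, gy = np.meshgrid(np.linspace(0, 12, 25), np.linspace(0, 12, 25))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    field = idw(gauges, rain, grid).reshape(gx.shape)
    print(field.round(1))
    ```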

  4. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
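
    The c-statistic comparison can be illustrated on simulated data. The sketch below fits logistic regressions with different covariate sets and scores them with the area under the ROC curve (the c-statistic); the simulated "risk category" is only a stand-in for a DCG- or ACG-type adjuster, and the coefficients are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Simulated cohort: age, sex, and a severity category (stand-in for a DCG/ACG score).
    rng = np.random.default_rng(2)
    n = 50_000
    age = rng.uniform(25, 90, n)
    male = rng.integers(0, 2, n)
    risk_cat = rng.integers(1, 22, n)
    logit = -9.0 + 0.05 * age + 0.2 * male + 0.25 * risk_cat
    death = rng.random(n) < 1 / (1 + np.exp(-logit))

    Xtr, Xte, ytr, yte = train_test_split(
        np.column_stack([age, male, risk_cat]), death, test_size=0.3, random_state=0)

    def c_statistic(cols):
        m = LogisticRegression(max_iter=1000).fit(Xtr[:, cols], ytr)
        return roc_auc_score(yte, m.predict_proba(Xte[:, cols])[:, 1])

    print("age/sex model     c =", round(c_statistic([0, 1]), 3))
    print("risk-cat model    c =", round(c_statistic([2]), 3))
    print("risk-cat + age    c =", round(c_statistic([0, 2]), 3))
    ```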

  5. Systems and methods for mirror mounting with minimized distortion

    NASA Technical Reports Server (NTRS)

    Antonille, Scott R. (Inventor); Wallace, Thomas E. (Inventor); Content, David A. (Inventor); Wake, Shane W. (Inventor)

    2012-01-01

    A method for mounting a mirror for use in a telescope includes attaching the mirror to a plurality of adjustable mounts; determining the distortion in the mirror caused by the plurality of adjustable mounts; and, if the distortion is determined to be above a predetermined level, adjusting one or more of the adjustable mounts and re-determining the distortion in the mirror caused by the adjustable mounts; and, in the event the determined distortion is at or below the predetermined level, rigidizing the adjustable mounts.
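
    The measure-adjust-rigidize loop described above can be written out schematically. In the sketch below the distortion metric and the adjustment rule are placeholders; a real system would use interferometric surface measurements and actuator commands rather than these toy functions.

    ```python
    import numpy as np

    def measure_distortion(mount_forces):
        """Placeholder metric: RMS surface error driven by imbalance between mounts."""
        return float(np.std(mount_forces))

    def adjust_mounts(mount_forces, step=0.5):
        """Nudge each mount toward the mean force; a real system would command actuators."""
        return mount_forces + step * (mount_forces.mean() - mount_forces)

    forces = np.array([10.0, 7.5, 12.0])       # initial forces on three adjustable mounts
    limit = 0.1                                 # predetermined distortion level
    while measure_distortion(forces) > limit:   # adjust and re-measure until acceptable
        forces = adjust_mounts(forces)
    print("distortion within limit; mounts can be rigidized:", forces.round(2))
    ```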

  6. A cost analysis of single-row versus double-row and suture bridge rotator cuff repair methods.

    PubMed

    Bisson, Leslie; Zivaljevic, Nikola; Sanders, Samuel; Pula, David

    2015-02-01

    To calculate the costs to the US healthcare system of transition from single-row (SR) to double-row (DR) rotator cuff repair (RCR) and to calculate the decrease in re-operations for re-tear that DR RCR would need to accomplish in order to render the transition cost-neutral. Standard accounting methods were used to determine the cost of a single RCR, the annual cost to the US healthcare system of rotator cuff surgery, the cost of a single-revision RCR, and the decrease in revision for re-tear rate necessary to make DR or suture bridge (SB) methods cost-neutral in comparison with SR methods. We varied tear size, operating room cost, time required for implant placement, annual tear size distribution, and repair method. The cost of RCR ranged from $7,572 (SR, <1 cm tear) to $12,979 (DR, >5 cm tear). Complete conversion from SR RCR to a DR technique without an associated decrease in revision surgeries would increase the annual US healthcare cost between $80 million and $262 million per year. To obtain cost neutrality, use of DR or SB methods would need to result in one fewer revision in every 17 primary repairs (for tears <1 cm) to one fewer in every four primary repairs (for tears >5 cm). Conversion from SR to DR or SB RCR techniques would result in considerable increases in healthcare expenditures. Since the large decreases in revision surgery rates necessary to justify DR or SB repairs purely on a cost basis may not be realistic or even possible, the use of these methods should be supported by evidence of improved structural healing rates and quality-adjusted life years in comparison with SR methods. IV.
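
    The cost-neutrality condition reduces to a simple ratio: at break-even, the number of primary repairs per avoided revision equals the cost of one revision divided by the extra cost of a DR/SB repair. In the sketch below the DR and revision costs are hypothetical figures chosen only to show the arithmetic, not values from the paper.

    ```python
    # Break-even: extra cost of DR/SB per primary case, times the number of primaries per
    # avoided revision, must equal the cost of one revision RCR.
    sr_cost = 7_572          # SR repair, <1 cm tear (from the abstract)
    dr_cost = 8_500          # hypothetical DR repair cost for the same tear size
    revision_cost = 16_000   # hypothetical cost of one revision RCR (not reported above)

    extra_per_primary = dr_cost - sr_cost
    primaries_per_avoided_revision = revision_cost / extra_per_primary
    print(f"DR is cost-neutral if it prevents one revision per "
          f"{primaries_per_avoided_revision:.0f} primary repairs")
    ```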

  7. Comparison of sEMG processing methods during whole-body vibration exercise.

    PubMed

    Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S

    2015-12-01

    The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
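
    A minimal sketch of the band-stop approach is shown below: narrow notch filters are applied at an assumed 30 Hz vibration frequency and its harmonics using SciPy. The sampling rate, vibration frequency, and synthetic signal are illustrative assumptions, and the study's spectral linear interpolation method is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 2000.0                  # sEMG sampling rate, Hz (assumed)
    f_vib = 30.0                 # platform vibration frequency, Hz (assumed)
    rng = np.random.default_rng(3)
    t = np.arange(0, 5, 1 / fs)

    # Synthetic sEMG: broadband muscle activity plus artifacts at 30 Hz and harmonics.
    emg = rng.normal(0, 0.1, t.size)
    for h in (1, 2, 3, 4):
        emg += 0.5 / h * np.sin(2 * np.pi * h * f_vib * t)

    clean = emg.copy()
    for h in (1, 2, 3, 4):       # one narrow band-stop (notch) filter per harmonic
        b, a = iirnotch(w0=h * f_vib, Q=30.0, fs=fs)
        clean = filtfilt(b, a, clean)

    rms = lambda x: np.sqrt(np.mean(x ** 2))
    print(f"RMS before {rms(emg):.3f}, after notch filtering {rms(clean):.3f}")
    ```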

  8. Comparison of two approaches for calculation of the geometric and inertial characteristics of the human body of the Bulgarian population.

    PubMed

    Nikolova, Gergana; Toshev, Yuli

    2008-01-01

    On the basis of a representative anthropological investigation of 5290 individuals (2435 males and 2855 females) of the Bulgarian population aged 30-40 years (Yordanov et al. [1]), we propose a 3D biomechanical model of the human body of the average Bulgarian male and female and compare two possible approaches for calculating analytically and evaluating numerically the geometric and inertial characteristics of all body segments. In the first approach, we calculated the positions of the centres of mass of the body segments as well as their inertial characteristics directly from the original anthropometric data, while in the second approach we adjusted the data using a method based on regression equations. Wherever possible, we compared our data with those available in the literature on other Caucasians and determined in which cases each approach is more reliable.

  9. Labyrinth seal forces on a whirling rotor

    NASA Technical Reports Server (NTRS)

    Wright, D. V.

    1983-01-01

    An experimental investigation of air labyrinth seal forces on a subsynchronously whirling model rotor is described, and test results are given for diverging, converging, and straight two-strip seals. The measured effects of pressure drop provide basic experimental data needed in the development of design methods for predicting and preventing self-excited whirl of turbine rotors and other machines having labyrinth seals. The total dynamic seal forces on the whirling model rotor are measured accurately by means of an active damping and stiffness system that is adjusted to obtain neutral whirl stability of the model rotor system. In addition, the whirling pressure pattern in the seal annulus is measured for a few test conditions, and the corresponding pressure forces on the rotor are compared with the total measured forces. This comparison shows that either radial and axial pressure gradients in the seal annulus or drag forces on the rotor are significant. Comparisons between the measured seal forces and theoretical results show that present theory is inadequate.

  10. Measurement of astrometric positions with CCD in the region of Rup 21

    NASA Astrophysics Data System (ADS)

    Bustos Fierro, I. H.; Calderón, J. H.

    We demonstrate the use of the block adjustment method to measure astrometric positions from a mosaic of sixteen partially overlapping CCD images taken with the Jorge Sahade Telescope at CASLEO. The observations cover an area of 25' x 25' around the open cluster Rup 21. The ACT Reference Catalog provided the reference positions. The internal error of the measured positions is analyzed, and the external error is estimated by comparison with the USNO-A catalog. This comparison shows that direct CCD images taken with a focal reducer can be distorted by severe field curvature. The distortion presumably introduced by the optics is eliminated by suitably correcting the stellar positions measured on each frame, but a new systematic effect on the scale of the entire field is observed, which could be due to the distribution of the reference stars.

  11. Environmental Health Practice: Statistically Based Performance Measurement

    PubMed Central

    Enander, Richard T.; Gagnon, Ronald N.; Hanumara, R. Choudary; Park, Eugene; Armstrong, Thomas; Gute, David M.

    2007-01-01

    Objectives. State environmental and health protection agencies have traditionally relied on a facility-by-facility inspection-enforcement paradigm to achieve compliance with government regulations. We evaluated the effectiveness of a new approach that uses a self-certification random sampling design. Methods. Comprehensive environmental and occupational health data from a 3-year statewide industry self-certification initiative were collected from representative automotive refinishing facilities located in Rhode Island. Statistical comparisons between baseline and postintervention data facilitated a quantitative evaluation of statewide performance. Results. The analysis of field data collected from 82 randomly selected automotive refinishing facilities showed statistically significant improvements (P<.05, Fisher exact test) in 4 major performance categories: occupational health and safety, air pollution control, hazardous waste management, and wastewater discharge. Statistical significance was also shown when a modified Bonferroni adjustment for multiple comparisons was performed. Conclusions. Our findings suggest that the new self-certification approach to environmental and worker protection is effective and can be used as an adjunct to further enhance state and federal enforcement programs. PMID:17267709
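
    For the multiple-comparison step, a Bonferroni or Holm adjustment (the latter being one common "modified Bonferroni" procedure) can be applied with statsmodels; the p-values below are hypothetical stand-ins for the four category-level Fisher exact tests.

    ```python
    from statsmodels.stats.multitest import multipletests

    # Hypothetical Fisher-exact p-values for the four performance categories.
    pvals = [0.012, 0.034, 0.003, 0.041]
    for method in ("bonferroni", "holm"):       # holm is a common "modified Bonferroni"
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
        print(method, [round(p, 3) for p in p_adj], list(reject))
    ```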

  12. Historical Channel Changes in Cache Creek, Capay Valley, California

    NASA Astrophysics Data System (ADS)

    Higgins, S. A.; Kamman, G. R.

    2009-12-01

    Historical channel changes were assessed for the 21-mile segment of Cache Creek through Capay Valley in order to evaluate temporal changes in stream channel morphology. The Capay Valley segment of Cache Creek is primarily a low-gradient channel with a gravel/cobble substrate. Hydrologic conditions have been affected by dam operations that store runoff during the wet season and deliver water during the dry season for downstream irrigation uses. Widespread distribution of invasive plant species has altered the condition of the riparian corridor. The assessment evaluated the hypothesis that historical changes in hydrology and vegetation cover have triggered changes in geomorphic conditions. Historic channel alignments were digitized to assess planform channel adjustments. Results illustrate a dynamic system with frequent channel movements throughout the historic period. Evaluation of longitudinal channel adjustments revealed a relatively stable bed surface elevation since the 1930s. Cross-sectional channel geometry from topographic profiles surveyed in 1984 was compared to equivalent features in a LiDAR survey from 2008. The comparisons show a relatively consistent channel geometry that has maintained a similar form despite rather large planform adjustments, with areas of bank retreat in excess of 500 feet. Results suggest that the study reach has maintained a relatively stable morphology through a series of dynamic planform adjustments during the historic period.

  13. Attachment orientations and psychological adjustment of parents of children with cancer: A matched-group comparison.

    PubMed

    Cusinato, Maria; Calvo, Vincenzo; Bisogno, Gianni; Viscardi, Elisabetta; Pillon, Marta; Opocher, Enrico; Basso, Giuseppe; Montanaro, Maria

    2017-01-01

    To investigate the impact of childhood cancer on parents' adult attachment, social support, marital adjustment, anxiety, and depression. 30 parents of children with childhood cancer and 30 matched controls completed the following questionnaires: Experiences in Close Relationships-Revised, Dyadic Adjustment Scale-4, Multidimensional Scale of Perceived Social Support, State-Trait Anxiety Inventory - form Y, and Beck Depression Inventory. Parents of children with childhood cancer had a significantly lower dyadic adjustment than controls, and higher levels of insecure-avoidant attachment, state anxiety, and depression. It is important for health-care personnel to take into account these parents' propensity to show increased levels of avoidant attachment during children's treatment to foster effective communication and supportive relationships between clinicians, pediatric patients, and parents.

  14. Do Differential Response Rates to Patient Surveys Between Organizations Lead to Unfair Performance Comparisons?: Evidence From the English Cancer Patient Experience Survey.

    PubMed

    Saunders, Catherine L; Elliott, Marc N; Lyratzopoulos, Georgios; Abel, Gary A

    2016-01-01

    Patient surveys typically have variable response rates between organizations, leading to concerns that such differences may affect the validity of performance comparisons. To explore the size and likely sources of associations between hospital-level survey response rates and patient experience. Cross-sectional mail survey including 60 patient experience items sent to 101,771 cancer survivors recently treated by 158 English NHS hospitals. Age, sex, race/ethnicity, socioeconomic status, clinical diagnosis, hospital type, and region were available for respondents and nonrespondents. The overall response rate was 67% (range, 39% to 77% between hospitals). Hospitals with higher response rates had higher scores for all items (Spearman correlation range, 0.03-0.44), particularly questions regarding hospital-level administrative processes, for example, procedure cancellations or medical note availability. From multivariable analysis, associations between individual patient experience and hospital-level response rates were statistically significant (P<0.05) for 53/59 analyzed questions, decreasing to 37/59 after adjusting for case-mix, and 25/59 after further adjusting for hospital-level characteristics. Predicting responses of nonrespondents, and re-estimating hypothetical hospital scores assuming a 100% response rate, we found that currently low performing hospitals would have attained even lower scores. Overall nationwide attainment would have decreased slightly to that currently observed. Higher response rate hospitals have more positive experience scores, and this is only partly explained by patient case-mix. High response rates may be a marker of efficient hospital administration, and higher quality that should not, therefore, be adjusted away in public reporting. Although nonresponse may result in slightly overestimating overall national levels of performance, it does not appear to meaningfully bias comparisons of case-mix-adjusted hospital results.

  15. Effectiveness of integrated body-mind-spirit group intervention on the well-being of Indian patients with depression: a pilot study.

    PubMed

    Sreevani, Rentala; Reddemma, Konduru; Chan, Cecilia L W; Leung, Pamela Pui Yu; Wong, Venus; Chan, Celia Hoi Yan

    2013-09-01

    Depression is a leading cause of disability worldwide. There is a need to develop effective strategies to treat depression and prevent recurrence. Treatments that combine pharmacological and psychotherapeutic approaches are preferred for treating severe forms of depression. The study assesses the effect of an integrated body-mind-spirit group intervention in patients with depression. This pilot study used a pretest-posttest design. Thirty adult patients diagnosed with depression attending the psychiatric outpatient department at a district hospital were randomly assigned to either the intervention group or the comparison group. Each group had 15 patients. The intervention group received routine hospital treatment plus the intervention and underwent four integrated body-mind-spirit group therapy sessions. These sessions were held once per week on either Saturday or Sunday, with each session lasting more than 3 hours. Comparison group participants received routine hospital treatment only. Outcome measures, including level of depression, well-being, and work and social adjustment, were measured using the Beck Depression Inventory-II, the body-mind-spirit well-being scale, and the work and social adjustment scale. Both groups were evaluated at baseline, 1 month, 2 months, and 3 months. Results showed that both groups had significant reductions in the level of depression and improvements in well-being and work and social adjustment at 3-month follow-up compared with baseline. In addition, the intervention group showed significant mean differences in levels of depression, well-being, and work and social adjustment compared with the comparison group. The integrated body-mind-spirit group intervention model appears to reduce depressive symptoms and improve well-being in patients with depression.

  16. Effect of long term high altitude exposure on cardiovascular autonomic adjustment during rest and post-exercise recovery.

    PubMed

    Bhattarai, Prem; Paudel, Bishnu H; Thakur, Dilip; Bhattarai, Balkrishna; Subedi, Bijay; Khadka, Rita

    2018-01-01

    Despite successful adaptation to high altitude, some differences do occur due to long-term exposure to the hypoxic environment. The effect of long-term high-altitude exposure on cardiac autonomic adjustment at rest and during post-exercise recovery is less well known. We therefore aimed to study basal cardiac autonomic adjustment and its response to exercise in highlanders and to compare it with that of lowlanders. The study was conducted on 29 healthy highlander males born and brought up at altitudes of 3000 m and above; their cardiac autonomic adjustment was compared with that of 29 healthy lowlanders matched for age, sex, physical activity, and ethnicity, using heart rate variability (HRV) during rest and during recovery from sub-maximal exercise (3-min step test). Intergroup comparisons between highlanders and lowlanders and intragroup comparisons between rest and post-exercise recovery were performed. Resting heart rate and resting HRV were comparable between the groups. However, heart rate recovery after the 3-min step test was faster in highlanders (p < 0.05), along with significantly higher LF power and total power during the recovery phase. Intragroup comparison of highlanders showed higher SDNN (p < 0.05) and a lower LF/HF ratio (p < 0.05) during recovery compared with rest, whereas these measures did not differ significantly between the two phases in lowlanders. Furthermore, highlanders showed complete recovery of RMSSD, NN50, pNN50, and HF power back to resting levels within five minutes, whereas these parameters failed to return to resting levels in lowlanders within the same time frame. Highlanders recovered completely to their resting state within five minutes of the end of the step test, with parasympathetic reactivation; recovery in lowlanders was delayed.

  17. Comparison of Fit of Dentures Fabricated by Traditional Techniques Versus CAD/CAM Technology.

    PubMed

    McLaughlin, J Bryan; Ramos, Van; Dickinson, Douglas P

    2017-11-14

    To compare the shrinkage of denture bases fabricated by three methods: CAD/CAM, compression molding, and injection molding. The effect of arch form and palate depth was also tested. Nine titanium casts, representing combinations of tapered, ovoid, and square arch forms and shallow, medium, and deep palate depths, were fabricated using electron beam melting (EBM) technology. For each base fabrication method, three poly(vinyl siloxane) impressions were made from each cast, yielding 27 dentures for each method. Compression-molded dentures were fabricated using Lucitone 199 poly(methyl methacrylate) (PMMA), and injection-molded dentures with Ivobase's Hybrid Pink PMMA. For CAD/CAM, denture bases were designed and milled by Avadent using their Light PMMA. To quantify the space between the denture and the master cast, silicone duplicating material was placed in the intaglio of the dentures, the titanium master cast was seated under pressure, and the silicone was then trimmed and recovered. Three silicone measurements per denture were recorded, for a total of 243 measurements. Each silicone measurement was weighed and adjusted to the surface area of the respective arch, giving an average and standard deviation for each denture. Comparison of manufacturing methods showed a statistically significant difference (p = 0.0001). Using a ratio of the means, compression molding had on average 41% to 47% more space than injection molding and CAD/CAM. Comparison of arch/palate forms showed a statistically significant difference (p = 0.023), with shallow palate forms having more space with compression molding. The ovoid shallow form showed CAD/CAM and compression molding had more space than injection molding. Overall, injection molding and CAD/CAM fabrication methods produced equally well-fitting dentures, with both having a better fit than compression molding. Shallow palates appear to be more affected by shrinkage than medium or deep palates. Shallow ovoid arch forms appear to benefit from the use of injection molding compared to CAD/CAM and compression molding. © 2017 by the American College of Prosthodontists.

  18. Polychlorinated biphenyl exposure and effects in transformer repair workers.

    PubMed Central

    Emmett, E A

    1985-01-01

    Fifty-five present and past transformer repair workers exposed to polychlorinated biphenyls (PCBs) and 56 unexposed comparison workers were evaluated in a clinical-epidemiologic study. The groups were similar in most demographic variables. Adipose tissue lipid and serum PCBs concentrations were higher in current exposed workers (geometric means adipose 2.1 ppm, serum 12.2 ppb). Concentrations in comparison (0.6 ppm and 4.6 ppb) and previously exposed (0.83 ppm and 5.9 ppb) workers were lower. Statistically significant differences in serum albumin and lactic dehydrogenase, but not in other liver function tests, were seen between the exposed and comparison groups; however, after adjustment for confounding variables, no correlations were observed between liver function tests and either adipose or serum PCBs concentrations. Statistically significant correlations, both before and after adjustment for confounding variables, were seen between adipose PCBs and 24-hr urinary 17-hydroxycorticosteroid excretion and between serum PCBs and serum gamma-glutamyl transpeptidase. Both associations could reflect microsomal enzyme induction among other possibilities. No differences were seen in fasting serum triglycerides, total cholesterol, LDL, HDL or VLDL cholesterol between the two exposure groups. A statistically significant correlation between serum PCBs and serum triglycerides, total cholesterol, and VLDL cholesterol was removed by adjusting for confounding variables. No correlation was seen between adipose PCBs concentrations and any serum lipid component. Partition phenomena could account for these findings. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:2863134

  19. Secondary traumatization in parents following the disclosure of extrafamilial child sexual abuse: initial effects.

    PubMed

    Manion, I G; McIntyre, J; Firestone, P; Ligezinska, M; Ensom, R; Wells, G

    1996-11-01

    Disclosure or discovery of extrafamilial sexual abuse (ESA) has the potential to traumatize the entire family system. Little controlled research has examined the initial reactions of parents to this type of trauma. The present study evaluated the adjustment of 93 parents (63 mothers and 30 fathers) within 3 months of the disclosure of ESA. Parents' functioning was compared to that of a nonclinical comparison group of 136 parents (74 mothers, 62 fathers). Parent adjustment was assessed using self-report measures of psychological distress, parent competence, family functioning, marital functioning, life stressors, and environmental support. Results revealed that mothers of sexually abused children, in comparison to mothers of nonabused children, experienced greater overall emotional distress, poorer family functioning, and lower satisfaction in their parenting role. Fathers of sexually abused children also experienced greater overall emotional distress relative to comparison fathers, but their level of distress remained below that of mothers. Standard and hierarchical multiple regressions on maternal self-reports revealed that mothers' satisfaction with their parenting role and their perceived level of environmental support predicted their emotional functioning. Abuse-related variables did not contribute to the prediction of emotional functioning. These results emphasize the need to expand our focus beyond the child victims to the traumatized families and to normalize the potential for all close family members to experience adjustment difficulties following ESA.

  20. Drugs Used in the Treatment of Rheumatoid Arthritis: Relationship between Current Use and Cardiovascular Risk Factors

    PubMed Central

    Rho, Young Hee; Oeser, Annette; Chung, Cecilia P; Milne, Ginger L; Stein, C Michael

    2009-01-01

    Objectives Drugs used for the treatment of rheumatoid arthritis (RA) have the potential to affect cardiovascular risk factors. There is concern that corticosteroids, non-steroidal anti-inflammatory drugs (NSAIDs) and COX-2 inhibitors could affect cardiovascular risk adversely, while drugs such as the antimalarial, hydroxychloroquine, may have beneficial effects. However, there is limited information about cardiovascular risk factors in patients with RA receiving different drugs. Methods We measured cardiovascular risk factors including systolic and diastolic blood pressure, serum HDL and LDL cholesterol, glucose and homocysteine concentrations and urinary F2-isoprostane excretion in 169 patients with RA. Risk factors were compared according to current use of corticosteroids, methotrexate, antimalarials, NSAIDs, COX-2 inhibitors, leflunomide and TNF-α blockers. Comparisons were adjusted for age, sex, race, disease activity (DAS28 score), current hypertension, diabetes, smoking status and statin use. Results No cardiovascular risk factor differed significantly among current users and non-users of NSAIDs, COX-2 inhibitors, methotrexate and TNF-α blockers. Serum HDL cholesterol concentrations were significantly higher in patients currently receiving corticosteroids (42.2 ± 10.5 vs. 50.2 ± 15.3 mg/dL, adjusted P < 0.001). Diastolic blood pressure (75.9 ± 11.2 vs. 72.0 ± 9.1 mm Hg, adjusted P = 0.02), serum LDL cholesterol (115.6 ± 34.7 vs. 103.7 ± 27.8 mg/dL, adjusted P = 0.03) and triglyceride concentrations (157.7 ± 202.6 vs. 105.5 ± 50.5 mg/dL, adjusted P = 0.03) were significantly lower in patients taking antimalarial drugs. Plasma glucose was significantly lower in current leflunomide users (93.0 ± 19.2 vs. 83.6 ± 13.4 mg/dL, adjusted P = 0.006). Conclusions In a cross-sectional setting drugs used to treat RA did not have major adverse effects on cardiovascular risk factors and use of antimalarials was associated with beneficial lipid profiles. PMID:19684849

  1. Trends in suicidal behaviour and use of mental health services in Canadian military and civilian populations

    PubMed Central

    Sareen, Jitender; Afifi, Tracie O.; Taillieu, Tamara; Cheung, Kristene; Turner, Sarah; Bolton, Shay-Lee; Erickson, Julie; Stein, Murray B.; Fikretoglu, Deniz; Zamorski, Mark A.

    2016-01-01

    Background: In the context of the Canadian mission in Afghanistan, substantial media attention has been placed on mental health and lack of access to treatment among Canadian Forces personnel. We compared trends in the prevalence of suicidal behaviour and the use of mental health services between Canadian military personnel and the general population from 2002 to 2012/13. Methods: We obtained data for respondents aged 18–60 years who participated in 4 nationally representative surveys by Statistics Canada designed to permit comparisons between populations and trends over time. Surveys of the general population were conducted in 2002 (n = 25 643) and 2012 (n = 15 981); those of military personnel were conducted in 2002 (n = 5153) and 2013 (n = 6700). We assessed the lifetime and past-year prevalence of suicidal ideation, plans and attempts, as well as use of mental health services. Results: In 2012/13, but not in 2002, military personnel had significantly higher odds of both lifetime and past-year suicidal ideation than the civilian population (lifetime: adjusted odds ratio [OR] 1.32, 95% confidence interval [CI] 1.17–1.50; past year: adjusted OR 1.34, 95% CI 1.09–1.66). The same was true for suicidal plans (lifetime: adjusted OR 1.64, 95% CI 1.35–1.99; past year: adjusted OR 1.66, 95% CI 1.18–2.33). Among respondents who reported past-year suicidal ideation, those in the military had a significantly higher past-year utilization rate of mental health services than those in the civilian population in both 2002 (adjusted OR 2.02, 95% CI 1.31–3.13) and 2012/13 (adjusted OR 3.14, 95% CI 1.86–5.28). Interpretation: Canadian Forces personnel had a higher prevalence of suicidal ideation and plans in 2012/13 and a higher use of mental health services in 2002 and 2012/13 than the civilian population. PMID:27221270

  2. An investigation of the apparent breast cancer epidemic in France: screening and incidence trends in birth cohorts

    PubMed Central

    2011-01-01

    Background Official descriptive data from France showed a strong increase in breast-cancer incidence between 1980 and 2005 without a corresponding change in breast-cancer mortality. This study quantifies the part of the incidence increase due to secular changes in risk factor exposure and the part due to overdiagnosis arising from organised or opportunistic screening. Overdiagnosis was defined as non-progressive tumours diagnosed as cancer at histology or progressive cancer that would remain asymptomatic until time of death from another cause. Methods Comparison between age-matched cohorts from 1980 to 2005. All women residing in France and born 1911-1915, 1926-1930 and 1941-1945 are included. Sources are official data sets and published French reports on screening by mammography, age- and time-specific breast-cancer incidence and mortality, hormone replacement therapy, alcohol and obesity. Outcome measures include breast-cancer incidence differences adjusted for changes in risk factor distributions between pairs of age-matched cohorts who had experienced different levels of screening intensity. Results There was an 8-fold increase in the number of mammography machines operating in France between 1980 and 2000. Opportunistic and organised screening increased over time. In comparison to age-matched cohorts born 15 years earlier, recent cohorts had adjusted incidence proportions over 11 years that were 76% higher [95% confidence limits (CL) 67%, 85%] for women aged 50 to 64 years and 23% higher [95% CL 15%, 31%] for women aged 65 to 79 years. Given that mortality did not change correspondingly, this increase in adjusted 11-year incidence proportion was considered an estimate of overdiagnosis. Conclusions Breast cancer may be overdiagnosed because screening increases diagnosis of slowly progressing non-life-threatening cancer and increases misdiagnosis among women without progressive cancer. We suggest that these effects could largely explain the reported "epidemic" of breast cancer in France. Better predictive classification of tumours is needed in order to avoid unnecessary cancer diagnoses and subsequent procedures. PMID:21936933

  3. The Noisiness of Low Frequency Bands of Noise

    NASA Technical Reports Server (NTRS)

    Lawton, B. W.

    1975-01-01

    The relative noisiness of low frequency 1/3-octave bands of noise was examined. The frequency range investigated was bounded by the bands centered at 25 and 200 Hz, with intensities ranging from 50 to 95 dB (SPL). Thirty-two subjects used a method-of-adjustment technique, producing comparison-band intensities judged as noisy as the 100 and 200 Hz standard bands at 60 and 72 dB. The work resulted in contours of equal noisiness for 1/3-octave bands, ranging in intensity from approximately 58 to 86 dB (SPL). These contours were compared with the standard equal noisiness contours; in the region of overlap, between 50 and 200 Hz, the agreement was good.

  4. Analytical and numerical modeling of an axisymmetrical electrostatic transducer with interior geometrical discontinuity.

    PubMed

    Honzík, Petr; Podkovskiy, Alexey; Durand, Stéphane; Joly, Nicolas; Bruneau, Michel

    2013-11-01

    The main purpose of this paper is to present analytical and numerical models relevant to interpreting the couplings between a circular membrane, a peripheral cavity having the same external radius as the membrane, and a thin air gap (with a geometrical discontinuity between them), and then to characterize small-scale electrostatic receivers and to propose procedures suitable for fitting adjustable parameters so as to achieve the expected optimal behaviour in terms of sensitivity and bandwidth. A comparison between these theoretical methods and characterizations of several shapes is therefore presented, showing that the models are appropriate for addressing the design of such transducers.

  5. Statistical methods for change-point detection in surface temperature records

    NASA Astrophysics Data System (ADS)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
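
    One of the simplest ideas in this family, a CUSUM-type statistic in the spirit of statistical process control, is sketched below on a synthetic series with a single step change. The permutation check ignores autocorrelation, which the methods described above explicitly account for, so this is only a schematic illustration with invented data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # Synthetic monthly temperature anomalies with a 0.6 K step change at index 240.
    x = np.concatenate([rng.normal(0.0, 0.4, 240), rng.normal(0.6, 0.4, 240)])

    # CUSUM of deviations from the overall mean; the change point is where |CUSUM| peaks.
    cusum = np.cumsum(x - x.mean())
    k_hat = int(np.argmax(np.abs(cusum)))

    # Crude significance check: compare the peak against peaks from shuffled series,
    # which destroys any change-point structure while keeping the marginal distribution.
    peaks = [np.max(np.abs(np.cumsum(rng.permutation(x) - x.mean()))) for _ in range(999)]
    p_value = (1 + sum(p >= np.max(np.abs(cusum)) for p in peaks)) / 1000
    print(f"estimated change point at index {k_hat}, permutation p = {p_value:.3f}")
    ```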

  6. A Comparison of Evaluation Metrics for Biomedical Journals, Articles, and Websites in Terms of Sensitivity to Topic

    PubMed Central

    Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.

    2011-01-01

    Evaluating the biomedical literature and health-related websites for quality are challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed’s clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. PMID:21419864

  7. Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas

    2013-08-01

    Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes, which include various types of kinematic behaviour, constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate to analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour as well as the fusion among laser trackers.
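
    For reference, a basic one-dimensional constant-velocity Kalman filter is sketched below; it is not the hybrid system estimator proposed in the paper, and the sampling interval, noise levels, and trajectory are illustrative assumptions.

    ```python
    import numpy as np

    dt = 0.01                                 # tracker sampling interval, s (assumed)
    F = np.array([[1, dt], [0, 1]])           # constant-velocity state transition
    H = np.array([[1.0, 0.0]])                # only position is observed
    Q = 1e-4 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
    R = np.array([[1e-4]])                    # measurement noise variance (assumed)

    rng = np.random.default_rng(5)
    t = np.arange(0, 2, dt)
    true_pos = 0.5 * np.sin(2 * np.pi * 0.8 * t)          # a smooth robot-like path
    z = true_pos + rng.normal(0, 0.01, t.size)            # noisy tracker measurements

    x = np.zeros(2)                                       # state: [position, velocity]
    P = np.eye(2)
    est, var = [], []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q                     # predict
        S = H @ P @ H.T + R                               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + K @ (np.atleast_1d(zk) - H @ x)           # update state
        P = (np.eye(2) - K @ H) @ P                       # update covariance
        est.append(x[0]); var.append(P[0, 0])

    print(f"final position estimate {est[-1]:.4f}, estimated variance {var[-1]:.2e}")
    ```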

  8. A decision-theoretic approach to identifying future high-cost patients.

    PubMed

    Pietz, Kenneth; Byrne, Margaret M; Petersen, Laura A

    2006-09-01

    The objective of this study was to develop and evaluate a method of allocating funding for very-high-cost (VHC) patients among hospitals. Diagnostic cost groups (DCGs) were used for risk adjustment. The patient population consisted of 253,013 veterans who used Department of Veterans Affairs (VA) medical care services in fiscal year (FY) 2003 (October 1, 2002-September 30, 2003) in a network of 8 VA hospitals. We defined VHC as costs greater than $75,000 (0.81% of patients). The upper fifth percentile was also used for comparison. A Bayesian decision rule for classifying patients as VHC/not VHC using DCGs was developed and evaluated. The method uses FY 2003 DCGs to allocate VHC funds for FY 2004. We also used FY 2002 DCGs to allocate VHC funds for FY 2003 for comparison. The resulting allocation was compared with using the previous year's distribution of VHC patients among the hospitals. The decision rule identified DCG 17 as the optimal cutoff for identifying VHC patients for the next year. The previous year's allocation came closest to the actual distribution of VHC patients. The decision-theoretic approach may provide insight into the economic consequences of classifying a patient as VHC or not VHC. More research is needed into methods of identifying future VHC patients so that capitation plans can fairly reimburse healthcare systems for appropriately treating these patients.
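
    The cutoff-selection idea can be sketched as minimizing an expected misclassification cost over candidate DCG cutoffs; the category probabilities and the false-negative/false-positive costs below are hypothetical, not estimates from the VA data, and the sketch is not the paper's Bayesian decision rule.

    ```python
    import numpy as np

    # Hypothetical: per DCG category, the patient share and P(very high cost next year).
    categories = np.arange(1, 22)
    share = np.full(21, 1 / 21)
    p_vhc = 0.001 * np.exp(0.3 * categories)       # rises steeply with severity (assumed)

    cost_fn = 50_000    # cost of missing a future VHC patient (false negative), assumed
    cost_fp = 5_000     # cost of allocating VHC funds unnecessarily (false positive), assumed

    def expected_cost(cutoff):
        flagged = categories >= cutoff
        fp = np.sum(share[flagged] * (1 - p_vhc[flagged])) * cost_fp
        fn = np.sum(share[~flagged] * p_vhc[~flagged]) * cost_fn
        return fp + fn

    best = min(categories, key=expected_cost)
    print(f"cutoff minimizing expected cost: DCG >= {best}")
    ```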

  9. Computer-aided system of evaluation for population-based all-in-one service screening (CASE-PASS): from study design to outcome analysis with bias adjustment.

    PubMed

    Chen, Li-Sheng; Yen, Amy Ming-Fang; Duffy, Stephen W; Tabar, Laszlo; Lin, Wen-Chou; Chen, Hsiu-Hsi

    2010-10-01

    Population-based routine service screening has gained popularity following an era of randomized controlled trials. The evaluation of these service screening programs is subject to study design, data availability, and the precise data analysis for adjusting bias. We developed a computer-aided system that allows the evaluation of population-based service screening to unify these aspects and to facilitate and guide the program assessor in efficiently performing an evaluation. This system supports two experimental designs, the posttest-only non-equivalent design and the one-group pretest-posttest design, and demonstrates the type of data required at both the population and individual levels. Three major analyses were developed: a cumulative mortality analysis, survival analysis with lead-time adjustment, and self-selection bias adjustment. We used SAS AF software to develop a graphic interface system with a pull-down menu style. We demonstrate the application of this system with data obtained from a Swedish population-based service screen and a population-based randomized controlled trial for the screening of breast, colorectal, and prostate cancer, and one service screening program for cervical cancer with Pap smears. The system provided automated descriptive results based on the various sources of available data and cumulative mortality curves corresponding to the study designs. The comparison of cumulative survival between clinically and screen-detected cases without a lead-time adjustment is also demonstrated. The intention-to-treat and noncompliance analyses with self-selection bias adjustments are also shown to assess the effectiveness of the population-based service screening program. Model validation consisted of a comparison between our adjusted self-selection bias estimates and the empirical results on effectiveness reported in the literature. We demonstrate a computer-aided system allowing the evaluation of population-based service screening programs with an adjustment for self-selection and lead-time bias. This is achieved by providing a tutorial guide from the study design to the data analysis, with bias adjustment. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
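
    The basic quantities can be illustrated as follows: per-study limits of agreement from the reported bias and standard deviation, followed by a simple DerSimonian-Laird random-effects pool of the biases. This is a simplification of the proposed framework, which also pools the LoA themselves and adjusts for repeated-measures designs; the study summaries below are invented.

    ```python
    import numpy as np

    # Hypothetical per-study summaries: mean difference (bias), SD of differences, sample size.
    bias = np.array([0.8, 1.2, 0.5, 1.0])
    sd = np.array([2.0, 2.5, 1.8, 2.2])
    n = np.array([40, 55, 30, 60])

    loa = np.column_stack([bias - 1.96 * sd, bias + 1.96 * sd])   # per-study limits of agreement
    print("per-study LoA:\n", loa.round(2))

    # DerSimonian-Laird random-effects pooling of the biases.
    v = sd**2 / n                                   # within-study variance of each bias
    w = 1 / v
    q = np.sum(w * (bias - np.sum(w * bias) / w.sum())**2)
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - (len(bias) - 1)) / c)      # between-study variance estimate
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * bias) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    print(f"pooled bias {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
    ```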

  11. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  12. Outcomes of planned home births versus planned hospital births after regulation of midwifery in British Columbia

    PubMed Central

    Janssen, Patricia A.; Lee, Shoo K.; Ryan, Elizabeth M.; Etches, Duncan J.; Farquharson, Duncan F.; Peacock, Donlim; Klein, Michael C.

    2002-01-01

    Background The choice to give birth at home with a regulated midwife in attendance became available to expectant women in British Columbia in 1998. The purpose of this study was to evaluate the safety of home birth by comparing perinatal outcomes for planned home births attended by regulated midwives with those for planned hospital births. Methods We compared the outcomes of 862 planned home births attended by midwives with those of planned hospital births attended by either midwives (n = 571) or physicians (n = 743). Comparison subjects who were similar in their obstetric risk status were selected from hospitals in which the midwives who were conducting the home births had hospital privileges. Our study population included all home births that occurred between Jan. 1, 1998, and Dec. 31, 1999. Results Women who gave birth at home attended by a midwife had fewer procedures during labour compared with women who gave birth in hospital attended by a physician. After adjustment for maternal age, lone parent status, income quintile, use of any versus no substances and parity, women in the home birth group were less likely to have epidural analgesia (odds ratio 0.20, 95% confidence interval [CI] 0.14–0.27), be induced, have their labours augmented with oxytocin or prostaglandins, or have an episiotomy. Comparison of home births with hospital births attended by a midwife showed very similar and equally significant differences. The adjusted odds ratio for cesarean section in the home birth group compared with physician-attended hospital births was 0.3 (95% CI 0.22–0.43). Rates of perinatal mortality, 5-minute Apgar scores, meconium aspiration syndrome or need for transfer to a different hospital for specialized newborn care were very similar for the home birth group and for births in hospital attended by a physician. The adjusted odds ratio for Apgar scores lower than 7 at 5 minutes in the home birth group compared with physician-attended hospital births was 0.84 (95% CI 0.32–2.19). Interpretation There was no increased maternal or neonatal risk associated with planned home birth under the care of a regulated midwife. The rates of some adverse outcomes were too low for us to draw statistical comparisons, and ongoing evaluation of home birth is warranted. PMID:11868639

  13. A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems

    PubMed Central

    Juhnke, Christin; Bethge, Susanne

    2016-01-01

    Introduction: Effective risk adjustment is given increasing weight against the background of competitive health insurance systems and the need to keep healthcare systems viable. The objective of this review was to obtain an overview of existing models of risk adjustment as well as of the weighting factors that are crucial to it. Moreover, the predictive performance of selected methods in international healthcare systems was to be analysed. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted as an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: in terms of risk horizons, in terms of risk factors, or in terms of the combination of indicators included. Within these, a further differentiation into three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks, and those based on information on (self-reported) health status. Conclusions and discussion: The final examination of the different methods showed that the methodology used to adjust risks varies; the models differ greatly in terms of the morbidity indicators they include. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be incorporated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544

  14. Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.

    2004-07-01

    This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
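
    To give the flavour of such a Fourier (von Neumann) analysis, the textbook one-dimensional example below (not taken from the paper itself) shows how a wavenumber-dependent numerical phase speed arises for a second-order central-difference semi-discretization of pure advection on a uniform grid of spacing h:

    ```latex
    \frac{du_j}{dt} + c\,\frac{u_{j+1}-u_{j-1}}{2h} = 0,
    \qquad u_j(t) = \hat{u}(t)\,e^{ikx_j}
    \;\;\Longrightarrow\;\;
    \frac{d\hat{u}}{dt} = -\,i\,c\,\frac{\sin(kh)}{h}\,\hat{u},
    \qquad
    \frac{c^{*}}{c} = \frac{\sin(kh)}{kh}.
    ```

    The ratio tends to 1 as kh approaches 0 and degrades for poorly resolved wavenumbers, which is the kind of dispersive error the paper quantifies across discretizations.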

  15. [Comparisons and analysis of the spectral response functions' difference between FY-2E's and FY-2C's split window channels].

    PubMed

    Zhang, Yong; Li, Yuan; Rong, Zhi-Guo

    2010-06-01

    A remote sensor's channel spectral response function (SRF) is one of the key factors influencing the inversion algorithms and accuracy of quantitative products and the geophysical characteristics they describe. Focusing on the adjustments made to FY-2E's split window channels' SRFs, detailed comparisons between the FY-2E and FY-2C corresponding channels' SRFs were carried out based on three data collections: the NOAA AVHRR corresponding channels' calibration look-up tables, field-measured water surface radiance and atmospheric profiles at Lake Qinghai, and radiance calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments to FY-2E's split window channels' SRFs shift the channels' spectral ranges and influence the inversion algorithms of some ground quantitative products. On the other hand, relative to FY-2C, these adjustments increase the brightness temperature difference between FY-2E's two split window channels over the full dynamic range, which improves the inversion ability of FY-2E's split window channels.

  16. Perceptions of Emotion Expression and Sibling–Parent Emotion Communication in Latino and Non-Latino White Siblings of Children With Intellectual Disabilities

    PubMed Central

    Lobato, Debra; Kao, Barbara; Plante, Wendy; Grullón, Edicta; Cheas, Lydia; Houck, Christopher; Seifer, Ronald

    2013-01-01

    Objective Examine general emotion expression and sibling–parent emotion communication among Latino and non-Latino white (NLW) siblings of children with intellectual disabilities (ID) and matched comparisons. Methods 200 siblings (ages 8–15 years) completed the newly developed Sibling–Parent Emotion Communication Scale and existing measures of general emotion expression and psychosocial functioning. Preliminary analyses evaluated scale psychometrics across ethnicity. Results Structure and internal consistency of the emotion expression and communication measures differed by respondent ethnicity. Latino siblings endorsed more general emotion expression problems and marginally lower sibling–parent emotion communication than NLW siblings. Siblings of children with ID reported marginally more general emotion expression problems than comparisons. Emotion expression problems and lower sibling–parent emotion communication predicted more internalizing and somatic symptoms and poorer personal adjustment, regardless of ID status. Siblings of children with ID endorsed poorer personal adjustment. Conclusion Cultural differences in emotion expression and communication may increase Latino siblings’ risk for emotional adjustment difficulties. PMID:23459309

  17. Comparison of calculation methods for estimating annual carbon stock change in German forests under forest management in the German greenhouse gas inventory.

    PubMed

    Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes

    2016-12-01

    The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically in 1987, 2002, 2008 and 2012, the time series on emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions seem to be constant due to the application of periodic average emission factors. This does not reflect inter-annual variability in the time series, which would be expected because the drivers of carbon stock change fluctuate between years. Therefore additional data, which are available on an annual basis, should be introduced into the calculation of the emission inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than periodic approach to calculating emission factors with the given data and thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values; it is less complex to implement than the growth factor method, which additionally adds growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered, so a discontinuous trajectory of the emissions over time remains even after the adjustments. It is intended to adopt this approach in German greenhouse gas reporting in order to meet the request for annually adjusted values.

  18. [Comparison of three stand-level biomass estimation methods].

    PubMed

    Dong, Li Hu; Li, Feng Ri

    2016-12-01

    At present, regional-scale forest biomass estimation attracts much research attention, and developing stand-level biomass models is popular. Based on forestry inventory data for larch (Larix olgensis) plantations in Jilin Province, we used non-linear seemingly unrelated regression (NSUR) to estimate the parameters of two additive systems of stand-level biomass equations, i.e., stand-level biomass equations including stand variables and stand biomass equations including the biomass expansion factor (Model system 1 and Model system 2), listed the constant biomass expansion factor for larch plantations, and compared the prediction accuracy of the three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra^2) of the total and stem equations was more than 0.95, and the root mean squared error (RMSE), mean prediction error (MPE) and mean absolute error (MAE) were small. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with Ra^2 less than 0.95. The prediction accuracy of a constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although a stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation methods and differs in essence from stand biomass equations including stand variables, the prediction accuracy obtained by the two methods was similar. The constant biomass expansion factor had the lowest prediction accuracy and is not appropriate. In addition, to make the model parameter estimation more efficient, the established stand-level biomass equations should consider additivity across all tree component biomass and total biomass equations.

  19. Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data

    NASA Astrophysics Data System (ADS)

    Veerakachen, Watcharee; Raksapatcharawong, Mongkol

    2015-09-01

    Rainfall estimation from geostationary meteorological satellite data provides good spatial and temporal resolution. This is advantageous for real time flood monitoring and warning systems. However, a rainfall estimation algorithm developed in one region needs to be adjusted for another climatic region. This work proposes computationally efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near real time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real time monitoring purposes shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offer higher POD and approximately the same RMSE and CC with less data latency.
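
    A small sketch of how the three verification scores named in the abstract are commonly computed against gauge observations; the rain/no-rain threshold below is an illustrative assumption, not a value from the study.

    ```python
    import numpy as np

    def verification_scores(estimates, gauges, rain_threshold=0.1):
        """POD, RMSE and linear correlation coefficient for hourly rain
        estimates evaluated against rain gauge observations.

        rain_threshold (mm/h), used to define a 'rain' event for POD,
        is an illustrative choice.
        """
        est = np.asarray(estimates, float)
        obs = np.asarray(gauges, float)
        hits = np.sum((est >= rain_threshold) & (obs >= rain_threshold))
        misses = np.sum((est < rain_threshold) & (obs >= rain_threshold))
        pod = hits / (hits + misses) if (hits + misses) > 0 else np.nan
        rmse = np.sqrt(np.mean((est - obs) ** 2))
        cc = np.corrcoef(est, obs)[0, 1]
        return pod, rmse, cc
    ```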

  20. Cancer-related risk indicators and preventive screening behaviors among lesbians and bisexual women.

    PubMed Central

    Cochran, S D; Mays, V M; Bowen, D; Gage, S; Bybee, D; Roberts, S J; Goldstein, R S; Robison, A; Rankow, E J; White, J

    2001-01-01

    OBJECTIVES: This study examined whether lesbians are at increased risk for certain cancers as a result of an accumulation of behavioral risk factors and difficulties in accessing health care. METHODS: Prevalence estimates of behavioral risk factors (nulliparity, obesity, smoking, and alcohol use), cancer screening behaviors, and self-reported breast cancer histories derived from 7 independently conducted surveys of lesbians/bisexual women (n = 11,876) were compared with national estimates for women. RESULTS: In comparison with adjusted estimates for the US female population, lesbians/bisexual women exhibited greater prevalence rates of obesity, alcohol use, and tobacco use and lower rates of parity and birth control pill use. These women were also less likely to have health insurance coverage or to have had a recent pelvic examination or mammogram. Self-reported histories of breast cancer, however, did not differ from adjusted US female population estimates. CONCLUSIONS: Lesbians and bisexual women differ from heterosexual women in patterns of health risk. These women would be expected to be at especially greater risk for chronic diseases linked to smoking and obesity. PMID:11291371

  1. Efficiency of the Self Adjusting File, WaveOne, Reciproc, ProTaper and hand files in root canal debridement

    PubMed Central

    Topcu, K. Meltem; Karatas, Ertugrul; Ozsu, Damla; Ersoy, Ibrahim

    2014-01-01

    Objectives: The aim of this study was to compare the canal debridement capabilities of three single file systems, ProTaper, and K-files in oval-shaped canals. Materials and Methods: Seventy-five extracted human mandibular central incisors with oval-shaped root canals were selected. A radiopaque contrast medium (Metapex; Meta Biomed Co. Ltd., Chungcheongbuk-do, Korea) was introduced into the canal systems and the self-adjusting file (SAF), WaveOne, Reciproc, ProTaper, and K-files were used for the instrumentation of the canals. The percentage of removed contrast medium was calculated using pre- and post-operative radiographs. Results: An overall comparison between the groups revealed that the hand file (HF) and SAF groups presented the lowest percentage of removed contrast medium, whereas the WaveOne group showed the highest percentage (P < 0.001). The ProTaper group removed more contrast medium than the SAF and HF groups (P < 0.05). Conclusions: None of the instruments was able to remove the contrast medium completely. WaveOne performed significantly better than the other groups. PMID:25202211

  2. Prostate Cancer Radiation Therapy and Risk of Thromboembolic Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Cecilia, E-mail: Cecilia.t.bosco@kcl.ac.uk; Garmo, Hans; Regional Cancer Centre, Uppsala, Akademiska Sjukhuset, Uppsala

    Purpose: To investigate the risk of thromboembolic disease (TED) after radiation therapy (RT) with curative intent for prostate cancer (PCa). Patients and Methods: We identified all men who received RT as curative treatment (n=9410) and grouped according to external beam RT (EBRT) or brachytherapy (BT). By comparing with an age- and county-matched comparison cohort of PCa-free men (n=46,826), we investigated risk of TED after RT using Cox proportional hazard regression models. The model was adjusted for tumor characteristics, demographics, comorbidities, PCa treatments, and known risk factors of TED, such as recent surgery and disease progression. Results: Between 2006 and 2013, 6232 men with PCa received EBRT, and 3178 underwent BT. A statistically significant association was found between EBRT and BT and risk of pulmonary embolism in the crude analysis. However, upon adjusting for known TED risk factors these associations disappeared. No significant associations were found between BT or EBRT and deep venous thrombosis. Conclusion: Curative RT for prostate cancer using contemporary methodologies was not associated with an increased risk of TED.

  3. Family interactions in adoptive compared to nonadoptive families.

    PubMed

    Rueter, Martha A; Keyes, Margaret A; Iacono, William G; McGue, Matt

    2009-02-01

    Despite the large and growing numbers of adoptive families, little research describes interactions in families with adopted adolescents. Yet, adopted adolescents' increased risk for adjustment problems, combined with the association between family interactions and adolescent adjustment in nonadoptive families, raises questions about differences in adoptive and nonadoptive family interactions. We compared observed and self-reported family interactions between 284 adoptive and 208 nonadoptive families and within 123 families with 1 adopted and 1 nonadopted adolescent. Adolescents averaged 14.9 years of age. Comparisons were made using analysis of variance incorporating hierarchical linear methods in SAS PROC MIXED to control family-related correlations in the data. Parents and children reported more conflict in adoptive families when compared with nonadoptive families. Families with 1 adopted and 1 nonadopted adolescent reported more conflict between parents and adopted adolescents. Observed parental behavior was similar across adoptive and nonadoptive children although adopted adolescents were less warm and, in families with 2 adopted children, more conflictual than nonadopted adolescents. These findings suggest a need for further investigation of the association between family interactions and adopted adolescent problem behavior. Copyright 2009 APA, all rights reserved.

  4. Family Interactions in Adoptive Compared to Nonadoptive Families

    PubMed Central

    Rueter, Martha A.; Keyes, Margaret A.; Iacono, William G.; McGue, Matt

    2009-01-01

    Despite the large and growing numbers of adoptive families, little research describes interactions in families with adopted adolescents. Yet, adopted adolescents’ increased risk for adjustment problems, combined with the association between family interactions and adolescent adjustment in nonadoptive families, raises questions about differences in adoptive and nonadoptive family interactions. We compared observed and self-reported family interactions between 284 adoptive and 208 nonadoptive families and within 123 families with 1 adopted and 1 nonadopted adolescent. Adolescents averaged 14.9 years of age. Comparisons were made using analysis of variance incorporating hierarchical linear methods in SAS PROC MIXED to control family-related correlations in the data. Parents and children reported more conflict in adoptive families when compared with nonadoptive families. Families with 1 adopted and 1 nonadopted adolescent reported more conflict between parents and adopted adolescents. Observed parental behavior was similar across adoptive and nonadoptive children although adopted adolescents were less warm and, in families with 2 adopted children, more conflictual than nonadopted adolescents. These findings suggest a need for further investigation of the association between family interactions and adopted adolescent problem behavior. PMID:19203160

  5. 77 FR 13663 - Order Making Fiscal Year 2012 Mid-Year Adjustments to Transaction Fee Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-07

    ... the Exchange Act establish a new method for annually adjusting the fee rates applicable under Sections... 31(j)(2) specifies the method for determining the mid-year adjustment for fiscal 2012. Specifically... the month ($4,797,592,302,406). Repeat the method to generate forecasts for subsequent months. \\14...

  6. Comparison of two ways of altering carpal tunnel pressure with ultrasound surface wave elastography.

    PubMed

    Cheng, Yu-Shiuan; Zhou, Boran; Kubo, Kazutoshi; An, Kai-Nan; Moran, Steven L; Amadio, Peter C; Zhang, Xiaoming; Zhao, Chunfeng

    2018-06-06

    Higher carpal tunnel pressure is related to the development of carpal tunnel syndrome. Currently, the measurement of carpal tunnel pressure is invasive; therefore, a noninvasive technique is needed. We previously demonstrated that the speed of wave propagation through a tendon in the carpal tunnel measured by ultrasound elastography could be used as an indicator of carpal tunnel pressure in a cadaveric model, in which a balloon had to be inserted into the carpal tunnel to adjust the carpal tunnel pressure. However, that method for adjusting the carpal tunnel pressure in the cadaveric model is not applicable to an in vivo model. The objective of this study was to utilize a different technique to adjust carpal tunnel pressure, by pressing the palm, and to validate it with ultrasound surface wave elastography in a human cadaveric model. The outcome was also compared with the previous balloon insertion technique. Results showed that the wave speed of the intra-carpal tunnel tendon and the ratio of the wave speeds of the intra- and outer-carpal tunnel tendons increased linearly with carpal tunnel pressure. Moreover, the wave speed of the intra-carpal tunnel tendon under both ways of altering carpal tunnel pressure showed similar results with high correlation. Therefore, it was concluded that the technique of pressing the palm can be used to adjust carpal tunnel pressure, and pressure changes can be detected via ultrasound surface wave elastography in an ex vivo model. Future studies will utilize this technique in vivo to validate the usefulness of ultrasound surface wave elastography for measuring carpal tunnel pressure. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. [Risk-adjusted assessment of the quality of perinatal centers - results of perinatal/neonatal quality surveillance in Saxony].

    PubMed

    Koch, R; Gmyrek, D; Vogtmann, Ch

    2005-12-01

    The weak point of country-wide perinatal/neonatal quality surveillance as a tool for evaluating the performance of an individual clinic is that it ignores interhospital differences in patient case mix; such an approach therefore cannot provide reliable benchmarking. The aim was to adjust the quality-assessment results of different hospitals according to the risk profiles of their patients by multivariate analysis. The perinatal/neonatal database of 12,783 newborns from the Saxon quality surveillance from 1998 to 2000 was analyzed. Four relevant quality indicators of newborn outcome - a) severe intraventricular hemorrhage in preterm infants < 1500 g, b) death in hospital of preterm infants < 1500 g, c) death in newborns with birth weight > 2500 g and d) hypoxic-ischemic encephalopathy - were targeted to identify specific risk predictors from 26 candidate risk factors. A logistic regression model was used to develop the risk predictors. The risk predictors for the 4 quality indicators could be described by 3 to 9 of the 26 analyzed risk factors. The AUC (ROC) values for these quality indicators were 82%, 89%, 89% and 89%, which indicates their reliability. Using the new specific predictors to calculate risk-adjusted incidence rates of the quality indicators yielded some remarkable changes: the apparent differences in the outcome criteria of the analyzed hospitals were found to be much less pronounced. The application of the proposed method for risk adjustment of quality indicators makes it possible to perform a more objective comparison of neonatal outcome criteria between different hospitals or regions.
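
    A minimal sketch of the general logistic-regression approach to risk adjustment described here: fit a risk predictor on patient-level risk factors, then compare each hospital's observed indicator rate with the rate expected from its case mix. Variable names and the O/E summary are illustrative, not taken from the Saxon data set.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def risk_adjusted_rates(X, y, hospital_id):
        """Observed vs. case-mix-expected event rates per hospital.

        X: patient-level risk factors (n_patients x n_factors)
        y: 0/1 quality indicator per patient (e.g. in-hospital death)
        hospital_id: hospital label per patient
        """
        y = np.asarray(y)
        hospital_id = np.asarray(hospital_id)
        model = LogisticRegression(max_iter=1000).fit(X, y)   # risk predictor
        p_expected = model.predict_proba(X)[:, 1]             # predicted risk per patient
        results = {}
        for h in np.unique(hospital_id):
            mask = hospital_id == h
            observed = y[mask].mean()
            expected = p_expected[mask].mean()
            results[h] = {
                "observed": observed,
                "expected": expected,
                "O/E": observed / expected if expected > 0 else np.nan,
            }
        return results
    ```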

  8. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    PubMed Central

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest in and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, the kinematic 3D laser scanning system proposed herein uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated a constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The advantage of the proposed constrained adjustment is that it reduces the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment was comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292

  9. Behavioral Adjustment Moderates the Link Between Neuroticism and Biological Health Risk: A U.S.-Japan Comparison Study.

    PubMed

    Kitayama, Shinobu; Park, Jiyoung; Miyamoto, Yuri; Date, Heiwa; Boylan, Jennifer Morozink; Markus, Hazel R; Karasawa, Mayumi; Kawakami, Norito; Coe, Christopher L; Love, Gayle D; Ryff, Carol D

    2018-06-01

    Neuroticism, a broad personality trait linked to negative emotions, is consistently linked to ill health when self-report is used to assess health. However, when health risk is assessed with biomarkers, the evidence is inconsistent. Here, we tested the hypothesis that the association between neuroticism and biological health risk is moderated by behavioral adjustment, a propensity to flexibly adjust behaviors to environmental contingencies. Using a U.S.-Japan cross-cultural survey, we found that neuroticism was linked to lower biological health risk for those who are high, but not low, in behavioral adjustment. Importantly, Japanese were higher in behavioral adjustment than European Americans, and as predicted by this cultural difference, neuroticism was linked to lower biological health risk for Japanese but not for European Americans. Finally, consistent with prior evidence, neuroticism was associated with worse self-reported health regardless of behavioral adjustment or culture. Discussion focused on the significance of identifying sociocultural correlates of biological health.

  10. Observed Macro- and Micro-Level Parenting Behaviors During Preadolescent Family Interactions as Predictors of Adjustment in Emerging Adults With and Without Spina Bifida

    PubMed Central

    Amaro, Christina M.; Devine, Katie A.; Psihogios, Alexandra M.; Murphy, Lexa K.; Holmbeck, Grayson N.

    2015-01-01

    Objective To examine observed autonomy-promoting and -inhibiting parenting behaviors during preadolescence as predictors of adjustment outcomes in emerging adults with and without spina bifida (SB). Methods Demographic and videotaped interaction data were collected from families with 8/9-year-old children with SB (n = 68) and a matched group of typically developing youth (n = 68). Observed interaction data were coded with macro- and micro-coding schemes. Measures of emerging adulthood adjustment were collected 10 years later (ages 18/19 years; n = 50 and n = 60 for SB and comparison groups, respectively). Results Autonomy-promoting (behavioral control, autonomy-relatedness) and -inhibiting (psychological control) observed preadolescent parenting behaviors prospectively predicted emerging adulthood adjustment, particularly within educational, social, and emotional domains. Interestingly, high parent undermining of relatedness predicted better educational and social adjustment in the SB sample. Conclusions Parenting behaviors related to autonomy have long-term consequences for adjustment in emerging adults with and without SB. PMID:24864277

  11. Solving the Value Equation: Assessing Surgeon Performance Using Risk-Adjusted Quality-Cost Diagrams and Surgical Outcomes.

    PubMed

    Knechtle, William S; Perez, Sebastian D; Raval, Mehul V; Sullivan, Patrick S; Duwayri, Yazan M; Fernandez, Felix; Sharma, Joe; Sweeney, John F

    Quality-cost diagrams have been used previously to assess interventions and their cost-effectiveness. This study explores the use of risk-adjusted quality-cost diagrams to compare the value provided by surgeons by presenting cost and outcomes simultaneously. Colectomy cases from a single institution captured in the National Surgical Quality Improvement Program database were linked to hospital cost-accounting data to determine costs per encounter. Risk adjustment models were developed and observed average cost and complication rates per surgeon were compared to expected cost and complication rates using the diagrams. Surgeons were surveyed to determine if the diagrams could provide information that would result in practice adjustment. Of 55 surgeons surveyed on the utility of the diagrams, 92% of respondents believed the diagrams were useful. The diagrams seemed intuitive to interpret, and making risk-adjusted comparisons accounted for patient differences in the evaluation.

  12. The 2012 Long-Term Budget Outlook

    DTIC Science & Technology

    2012-06-01

    cuts or tax increases would give families, businesses, and state and local governments little time to plan and adjust, and would require more sacrifices sooner from current older workers... path. By comparison, if productivity growth was 0.3 percentage points lower every year than CBO had assumed, GDP in the 10th year would be 3...

  13. Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2013-04-01

    We present preliminary results of a population statistics study of normal pulsars (NP) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from the comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
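
    A generic random-walk Metropolis-Hastings sketch of the second strategy (sampling parameters according to a log-likelihood built from comparisons of simulated and detected pulsar distributions) is given below; the log_like callable is a stand-in assumption for the population-synthesis comparison itself and is not part of the published code.

    ```python
    import numpy as np

    def metropolis_hastings(log_like, theta0, step, n_steps, rng=None):
        """Random-walk Metropolis-Hastings over population-model parameters.

        log_like(theta) should return the log-likelihood of the simulated
        population given the detected one (a stand-in for the two-dimensional
        distribution comparisons described in the abstract).
        """
        if rng is None:
            rng = np.random.default_rng()
        theta = np.asarray(theta0, float)
        ll = log_like(theta)
        chain = [theta.copy()]
        for _ in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.shape)
            ll_new = log_like(proposal)
            if np.log(rng.uniform()) < ll_new - ll:   # accept with prob min(1, ratio)
                theta, ll = proposal, ll_new
            chain.append(theta.copy())
        return np.array(chain)
    ```

    The retained chain then yields the kind of confidence region in model parameter space that the abstract refers to.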

  14. [Effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patient with implant restoration].

    PubMed

    Wang, Rong; Xu, Xin

    2015-12-01

    To compare the effect of 2 methods of occlusion adjustment on occlusal balance and the muscles of mastication in patients with dental implant restorations. Twenty patients, each with a single posterior edentulous space with no distal dentition, were selected and divided into 2 groups. Patients in group A underwent the original occlusion adjustment method and patients in group B underwent the occlusal plane reduction technique. Ankylos implants were implanted in the edentulous space in each patient and restored with a fixed single-unit crown, and the occlusion of each restoration was adjusted accordingly. Electromyograms were recorded to determine the effect of the adjustment methods on occlusion and the muscles of mastication 3 months and 6 months after initial restoration and adjustment. Balanced-occlusion measures were obtained, including central occlusion force (COF) and the asymmetry index of molar occlusal force (AMOF). Balanced muscles-of-mastication measures were also obtained, including electromyogram measurements of the muscles of mastication and the anterior bundle of the temporalis muscle at the mandibular rest position, average electromyogram measurements of the anterior bundle of the temporalis muscle at the intercuspal position (ICP), Astot, the masseter muscle asymmetry index, and the anterior temporalis asymmetry index (ASTA). Statistical analysis was performed using Student's t test with the SPSS 18.0 software package. Three months after occlusion adjustment, groups A and B differed significantly in both the balanced-occlusion measures and the muscles-of-mastication measures. Six months after occlusion adjustment, the groups still differed significantly in the muscles-of-mastication measures, but there was no significant difference in the balanced-occlusion measures. Using the occlusal plane reduction adjustment technique, it is possible to obtain occlusion indices and muscles-of-mastication electromyogram indices similar to those of the contralateral natural dentition in patients with a single-unit fixed crown in a posterior edentulous space without distal dentition.

  15. New Model for Estimating Glomerular Filtration Rate in Patients With Cancer

    PubMed Central

    Janowitz, Tobias; Williams, Edward H.; Marshall, Andrea; Ainsworth, Nicola; Thomas, Peter B.; Sammut, Stephen J.; Shepherd, Scott; White, Jeff; Mark, Patrick B.; Lynch, Andy G.; Jodrell, Duncan I.; Tavaré, Simon; Earl, Helena

    2017-01-01

    Purpose The glomerular filtration rate (GFR) is essential for carboplatin chemotherapy dosing; however, the best method to estimate GFR in patients with cancer is unknown. We identify the most accurate and least biased method. Methods We obtained data on age, sex, height, weight, serum creatinine concentrations, and results for GFR from chromium-51 (51Cr) EDTA excretion measurements (51Cr-EDTA GFR) from white patients ≥ 18 years of age with histologically confirmed cancer diagnoses at the Cambridge University Hospital NHS Trust, United Kingdom. We developed a new multivariable linear model for GFR using statistical regression analysis. 51Cr-EDTA GFR was compared with the estimated GFR (eGFR) from seven published models and our new model, using the statistics root-mean-squared-error (RMSE) and median residual and on an internal and external validation data set. We performed a comparison of carboplatin dosing accuracy on the basis of an absolute percentage error > 20%. Results Between August 2006 and January 2013, data from 2,471 patients were obtained. The new model improved the eGFR accuracy (RMSE, 15.00 mL/min; 95% CI, 14.12 to 16.00 mL/min) compared with all published models. Body surface area (BSA)–adjusted chronic kidney disease epidemiology (CKD-EPI) was the most accurate published model for eGFR (RMSE, 16.30 mL/min; 95% CI, 15.34 to 17.38 mL/min) for the internal validation set. Importantly, the new model reduced the fraction of patients with a carboplatin dose absolute percentage error > 20% to 14.17% in contrast to 18.62% for the BSA-adjusted CKD-EPI and 25.51% for the Cockcroft-Gault formula. The results were externally validated. Conclusion In a large data set from patients with cancer, BSA-adjusted CKD-EPI is the most accurate published model to predict GFR. The new model improves this estimation and may present a new standard of care. PMID:28686534
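
    A sketch of the error summaries used for the model comparison, together with the standard Calvert formula (carboplatin dose = target AUC x (GFR + 25)) used to express dosing error; the 20% threshold follows the abstract, while the target AUC value here is purely illustrative.

    ```python
    import numpy as np

    def compare_gfr_model(egfr, measured_gfr, target_auc=5.0):
        """RMSE, median residual and the fraction of carboplatin doses off by
        more than 20% when dosing on an estimated rather than the measured
        (51Cr-EDTA) GFR."""
        est = np.asarray(egfr, float)
        ref = np.asarray(measured_gfr, float)
        resid = est - ref
        rmse = np.sqrt(np.mean(resid ** 2))
        median_residual = np.median(resid)
        dose_est = target_auc * (est + 25.0)     # Calvert formula
        dose_ref = target_auc * (ref + 25.0)
        abs_pct_error = np.abs(dose_est - dose_ref) / dose_ref * 100.0
        frac_error_over_20 = np.mean(abs_pct_error > 20.0)
        return rmse, median_residual, frac_error_over_20
    ```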

  16. New Model for Estimating Glomerular Filtration Rate in Patients With Cancer.

    PubMed

    Janowitz, Tobias; Williams, Edward H; Marshall, Andrea; Ainsworth, Nicola; Thomas, Peter B; Sammut, Stephen J; Shepherd, Scott; White, Jeff; Mark, Patrick B; Lynch, Andy G; Jodrell, Duncan I; Tavaré, Simon; Earl, Helena

    2017-08-20

    Purpose The glomerular filtration rate (GFR) is essential for carboplatin chemotherapy dosing; however, the best method to estimate GFR in patients with cancer is unknown. We identify the most accurate and least biased method. Methods We obtained data on age, sex, height, weight, serum creatinine concentrations, and results for GFR from chromium-51 (51Cr) EDTA excretion measurements (51Cr-EDTA GFR) from white patients ≥ 18 years of age with histologically confirmed cancer diagnoses at the Cambridge University Hospital NHS Trust, United Kingdom. We developed a new multivariable linear model for GFR using statistical regression analysis. 51Cr-EDTA GFR was compared with the estimated GFR (eGFR) from seven published models and our new model, using the statistics root-mean-squared-error (RMSE) and median residual and on an internal and external validation data set. We performed a comparison of carboplatin dosing accuracy on the basis of an absolute percentage error > 20%. Results Between August 2006 and January 2013, data from 2,471 patients were obtained. The new model improved the eGFR accuracy (RMSE, 15.00 mL/min; 95% CI, 14.12 to 16.00 mL/min) compared with all published models. Body surface area (BSA)-adjusted chronic kidney disease epidemiology (CKD-EPI) was the most accurate published model for eGFR (RMSE, 16.30 mL/min; 95% CI, 15.34 to 17.38 mL/min) for the internal validation set. Importantly, the new model reduced the fraction of patients with a carboplatin dose absolute percentage error > 20% to 14.17% in contrast to 18.62% for the BSA-adjusted CKD-EPI and 25.51% for the Cockcroft-Gault formula. The results were externally validated. Conclusion In a large data set from patients with cancer, BSA-adjusted CKD-EPI is the most accurate published model to predict GFR. The new model improves this estimation and may present a new standard of care.

  17. A comparison of automated dispensing cabinet optimization methods.

    PubMed

    O'Neil, Daniel P; Miller, Adam; Cronin, Daniel; Hatfield, Chad J

    2016-07-01

    Results of a study comparing two methods of optimizing automated dispensing cabinets (ADCs) are reported. Eight nonprofiled ADCs were optimized over six months. Optimization of each cabinet involved three steps: (1) removal of medications that had not been dispensed for at least 180 days, (2) movement of ADC stock to better suit end-user needs and available space, and (3) adjustment of par levels (desired on-hand inventory levels). The par levels of four ADCs (the Day Supply group) were adjusted according to average daily usage; the par levels of the other four ADCs (the Formula group) were adjusted using a standard inventory formula. The primary outcome was the vend:fill ratio, while secondary outcomes included total inventory, inventory cost, quantity of expired medications, and ADC stockout percentage. The total number of medications stocked in the eight machines was reduced from 1,273 in a designated two-month preoptimization period to 1,182 in a designated two-month postoptimization period, yielding a carrying cost savings of $44,981. The mean vend:fill ratios before and after optimization were 4.43 and 4.46, respectively. The vend:fill ratio for ADCs in the Formula group increased from 4.33 before optimization to 5.2 after optimization; in the Day Supply group, the ratio declined (from 4.52 to 3.90). The postoptimization interaction difference between the Formula and Day Supply groups was found to be significant (p = 0.0477). ADC optimization via a standard inventory formula had a positive impact on inventory costs, refills, vend:fill ratios, and stockout percentages. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  18. Pembrolizumab versus the standard of care for relapsed and refractory classical Hodgkin's lymphoma progressing after brentuximab vedotin: an indirect treatment comparison.

    PubMed

    Keeping, Sam; Wu, Elise; Chan, Keith; Mojebi, Ali; Ferrante, Shannon Allen; Balakumaran, Arun

    2018-05-15

    There is significant unmet need among patients with relapsed and refractory classical Hodgkin's lymphoma (RRcHL) who have failed multiple lines of therapy, including brentuximab vedotin (BV). Pembrolizumab, an immune checkpoint inhibitor, is one possible treatment solution for this population. The objective of this study was to compare progression-free survival (PFS) with standard of care (SOC) versus pembrolizumab in previously BV treated RRcHL patients. A systematic literature review identified one observational study (Cheah et al., 2016) of SOC that was suitable for comparison with KEYNOTE-087, the principal trial of pembrolizumab in this population. Both naïve and population-adjusted (using outcomes regression) pairwise indirect comparisons were conducted. The primary analysis included all patients who had failed BV, with a secondary analysis conducted including only those known to have failed BV that was part of definitive treatment. In the primary analysis, SOC was inferior to pembrolizumab in both the unadjusted comparison (HR 5.00 [95% confidence interval (CI) 3.56-7.01]) and the adjusted comparison (HR 6.35 [95% CI 4.04-9.98]). These HRs increased to 5.16 (95% CI 3.61-7.38) and 6.56 (95% CI 4.01-10.72), respectively, in the secondary analysis. Pembrolizumab offers a significant improvement in PFS compared to SOC in this population.

  19. A Type of Low-Latency Data Gathering Method with Multi-Sink for Sensor Networks

    PubMed Central

    Sha, Chao; Qiu, Jian-mei; Li, Shu-yan; Qiang, Meng-ye; Wang, Ru-chuan

    2016-01-01

    To balance energy consumption and reduce latency in data transmission in Wireless Sensor Networks (WSNs), a type of low-latency data gathering method with multiple Sinks (LDGM for short) is proposed in this paper. The network is divided into several virtual regions consisting of three or fewer data gathering units, and the leader of each region is selected according to its residual energy as well as its distance to all of the other nodes. Only the leaders in each region need to communicate with the mobile Sinks, which effectively reduces energy consumption and the end-to-end delay. Moreover, with the help of the sleep scheduling and sensing radius adjustment strategies, redundancy in network coverage can also be effectively reduced. Simulation results show that LDGM is energy efficient in comparison with MST as well as MWST, and its time efficiency in data collection is higher than that of single-Sink-based data gathering methods. PMID:27338401

  20. Development of energy saving automatic air conditioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, T.; Iijima, T.; Kakinuma, A.

    1986-01-01

    This paper discusses an automatic air conditioner which adopts a new energy saving control method for controlling heat exchange at the heater and the cooler instead of the conventional reheat air-mix one. In this new air conditioner, the cooler does not work when the passenger room is heated and similarly the heater does not work when the passenger room is cooled, minimizing the use rate of the cooler, which accounts for most of the air conditioner's power consumption. Nonetheless, the heat released from the air conditioner to the room can be adjusted smoothly from maximum cooling to maximum heating, just the same as in the conventional type. The results of on-vehicle comparison tests of the above two methods have shown that the energy saving control method saves nearly half of the energy which is consumed in a year with the conventional one, with the room being kept around 25 °C (77 °F).

  1. Numerical simulation study on rolling-chemical milling process of aluminum-lithium alloy skin panel

    NASA Astrophysics Data System (ADS)

    Huang, Z. B.; Sun, Z. G.; Sun, X. F.; Li, X. Q.

    2017-09-01

    Single-curvature parts such as aircraft fuselage skin panels are usually manufactured by a rolling-chemical milling process, which commonly faces the problem of geometric inaccuracy caused by springback. In most cases, the methods of manual adjustment and multiple roll bending are used to control or eliminate the springback. However, these methods increase product cost and cycle time, and lead to material performance degradation. Therefore, it is of significance to precisely control the springback of the rolling-chemical milling process. In this paper, using experiments and numerical simulation of the rolling-chemical milling process, a simulation model for the rolling-chemical milling of 2060-T8 aluminum-lithium alloy skin was established and its validity verified by comparison between numerical simulation and experimental results. Then, based on the numerical simulation model, the technological parameters which influence the curvature of the skin panel were analyzed. Finally, prediction of the springback and its compensation can be realized by controlling the process parameters.

  2. A method to estimate the contribution of regional genetic associations to complex traits from summary association statistics.

    PubMed

    Pare, Guillaume; Mao, Shihong; Deng, Wei Q

    2016-06-08

    Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance.
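
    The abstract does not give the estimator itself, but a common way to approximate the phenotypic variance explained jointly by a region's variants from marginal summary statistics and an LD correlation matrix R is beta' R^{-1} beta for standardized effects. The sketch below implements that generic idea with a small ridge term for numerical stability; it should not be read as the LARGV estimator.

    ```python
    import numpy as np

    def regional_r2(beta_marginal, ld_matrix, ridge=1e-3):
        """Approximate regional variance explained from summary statistics.

        beta_marginal: standardized marginal effect sizes of the region's variants
        ld_matrix: pairwise LD (correlation) matrix R for the same variants
        ridge: small diagonal inflation to stabilise the solve (an assumption,
               not part of the published method)
        """
        b = np.asarray(beta_marginal, float)
        R = np.asarray(ld_matrix, float) + ridge * np.eye(len(b))
        return float(b @ np.linalg.solve(R, b))
    ```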

  3. Energetics of jellyfish locomotion determined from field measurements using a Self- Contained Underwater Velocimetry Apparatus (SCUVA)

    NASA Astrophysics Data System (ADS)

    Katija, K.; Dabiri, J. O.

    2007-12-01

    We conduct laboratory measurements of the flow fields induced by Aurelia labiata over a range of sizes using the method of digital particle image velocimetry (DPIV). The flow field measurements are used to directly quantify the kinetic energy induced by the swimming motions of individual medusae. This method provides details regarding the temporal evolution of the energetics during a swimming cycle and its scaling with bell diameter. These types of measurements also allow for the determination of propulsive efficiency, which can be used to compare various methods of propulsion, both biological and artificial. We then describe the development and application of a Self-Contained Underwater Velocimetry Apparatus (SCUVA), a device that enables a single SCUBA diver to make DPIV measurements of animal-fluid interactions in the field. Improvements and adjustments made to the original system will be presented, and a comparison between the animal-induced flow fields in the laboratory and in the field will be made.

  4. A method to estimate the contribution of regional genetic associations to complex traits from summary association statistics

    PubMed Central

    Pare, Guillaume; Mao, Shihong; Deng, Wei Q.

    2016-01-01

    Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance. PMID:27273519

  5. Association between hyperglycaemic crisis and long-term major adverse cardiovascular events: a nationwide population-based, propensity score-matched, cohort study.

    PubMed

    Chang, Li-Hsin; Lin, Liang-Yu; Tsai, Ming-Tsun; How, Chorng-Kuang; Chiang, Jen-Huai; Hsieh, Vivian Chia-Rong; Hu, Sung-Yuan; Hsieh, Ming-Shun

    2016-08-23

    Hyperglycaemic crisis was associated with significant intrahospital morbidity and mortality. However, the association between hyperglycaemic crisis and long-term cardiovascular outcomes remained unknown. This study aimed to investigate the association between hyperglycaemic crisis and subsequent long-term major adverse cardiovascular events (MACEs). This population-based cohort study was conducted using data from Taiwan's National Health Insurance Research Database for the period of 1996-2012. A total of 2171 diabetic patients with hyperglycaemic crisis fit the inclusion criteria. Propensity score matching was used to match the baseline characteristics of the study cohort to construct a comparison cohort which comprised 8684 diabetic patients without hyperglycaemic crisis. The risk of long-term MACEs was compared between the two cohorts. Six hundred and seventy-six MACEs occurred in the study cohort and the event rate was higher than that in the comparison cohort (31.1% vs 24.1%, p<0.001). Patients with hyperglycaemic crisis were associated with a higher risk of long-term MACEs even after adjusting for all baseline characteristics and medications (adjusted HR=1.76, 95% CI 1.62 to 1.92, p<0.001). Acute myocardial infarction had the highest adjusted HR (adjusted HR=2.19, 95% CI 1.75 to 2.75, p<0.001) in the four types of MACEs, followed by congestive heart failure (adjusted HR=1.97, 95% CI 1.70 to 2.28, p<0.001). Younger patients with hyperglycaemic crisis had a higher risk of MACEs than older patients (adjusted HR=2.69 for patients aged 20-39 years vs adjusted HR=1.58 for patients aged >65 years). Hyperglycaemic crisis was significantly associated with long-term MACEs, especially in the young population. Further prospective longitudinal study should be conducted for validation. Published by the BMJ Publishing Group Limited.

  6. Phase-Locked Loop for Precisely Timed Acoustic Stimulation during Sleep

    PubMed Central

    Santostasi, Giovanni; Malkani, Roneil; Riedner, Brady; Bellesi, Michele; Tononi, Giulio; Paller, Ken A.; Zee, Phyllis C.

    2016-01-01

    Background A Brain-Computer Interface could potentially enhance the various benefits of sleep. New Method We describe a strategy for enhancing slow-wave sleep (SWS) by stimulating the sleeping brain with periodic acoustic stimuli that produce resonance in the form of enhanced slow-wave activity in the electroencephalogram (EEG). The system delivers each acoustic stimulus at a particular phase of an electrophysiological rhythm using a Phase-Locked Loop (PLL). Results The PLL is computationally economical and well suited to follow and predict the temporal behavior of the EEG during slow-wave sleep. Comparison with Existing Methods Acoustic stimulation methods may be able to enhance SWS without the risks inherent in electrical stimulation or pharmacological methods. The PLL method differs from other acoustic stimulation methods that are based on detecting a single slow wave rather than modeling slow-wave activity over an extended period of time. Conclusions By providing real-time estimates of the phase of ongoing EEG oscillations, the PLL can rapidly adjust to physiological changes, thus opening up new possibilities to study brain dynamics during sleep. Future application of these methods holds promise for enhancing sleep quality and associated daytime behavior and improving physiologic function. PMID:26617321
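
    As a rough illustration of the core idea, the toy sketch below tracks the instantaneous phase of a slow oscillation sample by sample with a simple software PLL, so that a stimulus could be triggered at a chosen phase. The loop gains, target frequency and the assumption of a roughly unit-amplitude, band-pass-filtered input are illustrative choices, not the published parameters.

    ```python
    import numpy as np

    def track_phase(signal, fs, f0=0.85, kp=0.2, ki=0.5):
        """Toy software phase-locked loop.

        signal: roughly unit-amplitude, slow-wave band-pass-filtered EEG samples
        fs: sampling rate in Hz; f0: expected slow-wave frequency in Hz
        kp, ki: proportional and integral loop gains (illustrative values)
        Returns the estimated instantaneous phase (radians) for every sample.
        """
        dt = 1.0 / fs
        phase, freq_corr = 0.0, 0.0
        phases = np.empty(len(signal))
        for n, x in enumerate(signal):
            err = x * np.cos(phase)                  # product phase detector
            freq_corr += ki * err * dt               # integral term (Hz)
            phase += 2 * np.pi * (f0 + freq_corr + kp * err) * dt
            phases[n] = phase % (2 * np.pi)
        return phases

    # A stimulus would then be delivered whenever the estimated phase crosses
    # the desired target phase (e.g. the up-state of the slow oscillation).
    ```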

  7. Performance evaluation of inpatient service in Beijing: a horizontal comparison with risk adjustment based on Diagnosis Related Groups

    PubMed Central

    Jian, Weiyan; Huang, Yinmin; Hu, Mu; Zhang, Xiumei

    2009-01-01

    Background The medical performance evaluation, which provides a basis for rational decision-making, is an important part of medical service research. Current progress with health services reform in China is far from satisfactory, without sufficient regulation. To achieve better progress, an effective tool for evaluating medical performance needs to be established. In view of this, this study attempted to develop such a tool appropriate for the Chinese context. Methods Data was collected from the front pages of medical records (FPMR) of all large general public hospitals (21 hospitals) in the third and fourth quarter of 2007. Locally developed Diagnosis Related Groups (DRGs) were introduced as a tool for risk adjustment, and performance evaluation indicators were established: Charge Efficiency Index (CEI), Time Efficiency Index (TEI) and inpatient mortality of low-risk group cases (IMLRG), to reflect respectively work efficiency and medical service quality. Using these indicators, the inpatient services' performance was horizontally compared among hospitals. Case-mix Index (CMI) was used to adjust efficiency indices and then produce adjusted CEI (aCEI) and adjusted TEI (aTEI). Poisson distribution analysis was used to test the statistical significance of the IMLRG differences between different hospitals. Results Using the aCEI, aTEI and IMLRG scores for the 21 hospitals, Hospital A and C had relatively good overall performance because their medical charges were lower, length of stay (LOS) shorter and IMLRG smaller. The performance of Hospital P and Q was the worst due to their relatively high charge level, long LOS and high IMLRG. Various performance problems also existed in the other hospitals. Conclusion It is possible to develop an accurate and easy-to-run performance evaluation system using Case-Mix as the tool for risk adjustment, choosing indicators close to consumers and managers, and utilizing routine report forms as the basic information source. To keep such a system running effectively, it is necessary to improve the reliability of clinical information and the risk-adjustment ability of Case-Mix. PMID:19402913
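
    The abstract does not spell out the index formulas, so the sketch below uses one conventional construction as an assumption: each hospital's average charge and length of stay are expressed relative to the all-hospital averages and then divided by the hospital's Case-mix Index (here the mean DRG weight), so that hospitals treating more complex cases are not penalised. Column names are illustrative.

    ```python
    import pandas as pd

    def efficiency_indices(df):
        """Crude case-mix-adjusted efficiency indices per hospital.

        df needs columns: hospital, charge, los (length of stay), drg_weight.
        The exact CEI/TEI definitions in the study may differ; this is a
        conventional construction for illustration only.
        """
        overall_charge = df["charge"].mean()
        overall_los = df["los"].mean()
        g = df.groupby("hospital").agg(
            mean_charge=("charge", "mean"),
            mean_los=("los", "mean"),
            cmi=("drg_weight", "mean"),          # Case-mix Index
        )
        g["CEI"] = g["mean_charge"] / overall_charge
        g["TEI"] = g["mean_los"] / overall_los
        g["aCEI"] = g["CEI"] / g["cmi"]          # adjusted for case complexity
        g["aTEI"] = g["TEI"] / g["cmi"]
        return g
    ```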

  8. Highly selective detection of individual nuclear spins with rotary echo on an electron spin probe

    PubMed Central

    Mkhitaryan, V. V.; Jelezko, F.; Dobrovitski, V. V.

    2015-01-01

    We consider an electronic spin, such as a nitrogen-vacancy center in diamond, weakly coupled to a large number of nuclear spins and subjected to Rabi driving with a periodically alternating phase. We show that by switching the driving phase synchronously with the precession of a given nuclear spin, the interaction with this spin is selectively enhanced, while the rest of the bath remains decoupled. The enhancement is of resonant character. The key feature of the suggested scheme is that the width of the resonance is adjustable and can be greatly decreased by increasing the driving strength. Thus, the resonance can be significantly narrowed, by a factor of 10–100 in comparison with existing detection methods. The significant improvement in selectivity is explained analytically and confirmed by direct numerical many-spin simulations. The method can be applied to a wide range of solid-state systems. PMID:26497777

  9. An algorithm for synchronizing a clock when the data are received over a network with an unstable delay

    PubMed Central

    Levine, Judah

    2016-01-01

    A method is presented for synchronizing the time of a clock to a remote time standard when the channel connecting the two has significant delay variation that can be described only statistically. The method compares the Allan deviation of the channel fluctuations to the free-running stability of the local clock, and computes the optimum interval between requests based on one of three selectable requirements: (1) achieving the highest possible accuracy, (2) achieving the best tradeoff of cost vs. accuracy, or (3) minimizing the number of requests needed to realize a specific accuracy. Once the interval between requests is chosen, the final step is to steer the local clock based on the received data. A typical adjustment algorithm, which supports both the statistical considerations based on the Allan deviation comparison and the timely detection of errors, is included as an example. PMID:26529759
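
    A minimal sketch of the central comparison, assuming fractional-frequency series are available for both the channel and the free-running clock; the decision rule and function names are illustrative, not the paper's algorithm.

        import numpy as np

        def allan_deviation(y, tau0, m):
            """Non-overlapping Allan deviation of fractional frequency data y
            at averaging time tau = m * tau0."""
            n = len(y) // m
            yb = y[: n * m].reshape(n, m).mean(axis=1)
            return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2)), m * tau0

        def pick_poll_interval(channel_y, clock_y, tau0, m_values=(1, 2, 4, 8, 16, 32)):
            """Lengthen the interval until the clock's own instability, rather
            than the channel noise, dominates at that averaging time."""
            for m in m_values:
                adev_chan, tau = allan_deviation(channel_y, tau0, m)
                adev_clock, _ = allan_deviation(clock_y, tau0, m)
                if adev_clock > adev_chan:
                    return tau
            return m_values[-1] * tau0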

  10. Method and apparatus for lead-unity-lag electric power generation system

    NASA Technical Reports Server (NTRS)

    Ganev, Evgeni (Inventor); Warr, William (Inventor); Salam, Mohamed (Arif) (Inventor)

    2013-01-01

    A method employing a lead-unity-lag adjustment on a power generation system is disclosed. The method may include calculating a unity power factor point and adjusting system parameters to shift a power factor angle to substantially match an operating power angle creating a new unity power factor point. The method may then define operation parameters for a high reactance permanent magnet machine based on the adjusted power level.
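
    As background for the unity-power-factor idea (generic power-factor arithmetic, not the patented method), a small sketch:

        import math

        def unity_power_factor_point(p_kw, q_kvar):
            """Return the present power factor and the reactive power that must be
            cancelled so that the operating point moves to unity power factor."""
            angle = math.atan2(q_kvar, p_kw)   # power factor angle
            return math.cos(angle), q_kvar     # at unity, net reactive power is zero

        # Example: a 100 kW load drawing 48 kVAR has PF = cos(atan(48/100)) ≈ 0.90;
        # compensating the 48 kVAR shifts the operating point to unity.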

  11. Lung cancer survival in England: trends in non-small-cell lung cancer survival over the duration of the National Lung Cancer Audit

    PubMed Central

    Khakwani, A; Rich, A L; Powell, H A; Tata, L J; Stanley, R A; Baldwin, D R; Duffy, J P; Hubbard, R B

    2013-01-01

    Background: In comparison with other European and North American countries, England has poor survival figures for lung cancer. Our aim was to evaluate the changes in survival since the introduction of the National Lung Cancer Audit (NLCA). Methods: We used data from the NLCA to identify people with non-small-cell lung cancer (NSCLC) and stratified people according to their performance status (PS) and clinical stage. Using Cox regression, we calculated hazard ratios (HRs) for death according to the year of diagnosis from 2004/2005 to 2010, adjusted for patient features including age, sex and co-morbidity. We also assessed whether any changes in survival were explained by the changes in surgical resection rates or histological subtype. Results: In this cohort of 120 745 patients, the overall median survival did not change, but there was a 1% annual improvement in survival over the study period (adjusted HR 0.99, 95% confidence interval (CI) 0.98–0.99). Survival improvement was only seen in patients with good PS and early stage (adjusted HR 0.97, 95% CI 0.95–0.99), and this was partly accounted for by changes in resection rates. Conclusion: Survival has only improved for a limited group of people with NSCLC, and increasing surgical resection rates appeared to explain some of this improvement. PMID:24052044
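
    A sketch of this kind of adjusted survival analysis (not the NLCA code) using the lifelines package; the data file and column names (time, event, diagnosis_year, age, sex, comorbidity_score) are hypothetical and assumed to be numeric.

        import pandas as pd
        from lifelines import CoxPHFitter

        # One row per patient; sex assumed coded 0/1, event = 1 if the patient died.
        df = pd.read_csv("nsclc_cohort.csv")

        cph = CoxPHFitter()
        cph.fit(df[["time", "event", "diagnosis_year", "age", "sex", "comorbidity_score"]],
                duration_col="time", event_col="event")
        cph.print_summary()   # exp(coef) for diagnosis_year is the per-year hazard ratio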

  12. Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches.

    PubMed

    Kuang, Jinqiu; Argo, Lauren; Stoddard, Greg; Bray, Bruce E; Zeng-Treitler, Qing

    2015-12-17

    Compared with traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk) is a large and well-known crowdsourcing service that has developed into the leading platform for crowdsourcing recruitment. Our objective was to explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs. A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was first tested through an in-person survey in a hospital setting (n=150) and then using online MTurk participants (n=150). We analyzed these survey results to determine their comparability. Both the demographics and the pictograph recognition rates of online participants were different from those of the in-person participants. In the multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct). The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but it cannot replace them.
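
    A sketch of the adjusted group comparison described above (hypothetical file and column names; the authors' exact model specification is not given in this record):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Columns assumed: rating (1-4 recognition score), group ("hospital" or
        # "mturk"), age, sex, education.
        df = pd.read_csv("pictograph_ratings.csv")

        fit = smf.ols("rating ~ C(group) + age + C(sex) + C(education)", data=df).fit()
        print(fit.params["C(group)[T.mturk]"])        # adjusted mean difference
        print(fit.conf_int().loc["C(group)[T.mturk]"])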

  13. Community-Based Management of Acute Malnutrition to Reduce Wasting in Urban Informal Settlements of Mumbai, India: A Mixed-Methods Evaluation

    PubMed Central

    Shah More, Neena; Waingankar, Anagha; Ramani, Sudha; Chanani, Sheila; D'Souza, Vanessa; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja

    2018-01-01

    Background: We evaluated an adaptation of a large-scale community-based management of acute malnutrition program run by an NGO with government partnerships, in informal settlements of Mumbai, India. The program aimed to reduce the prevalence of wasting among children under age 3 and covered a population of approximately 300,000. Methods: This study used a mixed-methods approach including a quasi-experimental design to compare prevalence estimates of wasting in intervention areas with those in neighboring informal settlements. Cross-sectional data were collected from March through November 2014 for the baseline and October through December 2015 for the endline. Endline data were analyzed using mixed-effects logistic regression models, adjusting for child, maternal, and household characteristics. In addition, we conducted in-depth interviews with 37 stakeholders (13 staff and 24 mothers) who reported on salient features that contributed to successful implementation of the program. Results: We interviewed 2,578 caregivers at baseline and 3,455 at endline in intervention areas. In comparison areas, we interviewed 2,082 caregivers at baseline and 2,122 at endline. At endline, the prevalence of wasting had decreased by 28% (from 18% to 13%) in intervention areas and by 5% (from 16.9% to 16%) in comparison areas. Analysis of the endline data indicated that children in intervention areas were significantly less likely to be malnourished (adjusted odds ratio, 0.81; confidence interval, 0.67 to 0.99). Stakeholders identified 4 main features as contributing to the success of the program: (1) tailoring and reinforcement of information provided to caregivers in informal settings, (2) constant field presence of staff, (3) holistic case management of issues beyond immediate malnourishment, and (4) persistence of field staff in persuading reluctant families. Staff capabilities were enhanced through training, stringent monitoring mechanisms, and support from senior staff in tackling difficult cases. Conclusion: NGO–government partnerships can revitalize existing community-based programs in urban India. Critical to success are processes that include reinforced knowledge-building of caregivers, a high level of field support and encouragement to the community, and constant monitoring and follow-up of cases by all staff levels. PMID:29602868

  14. Predicting work-related disability and medical cost outcomes: a comparison of injury severity scoring methods.

    PubMed

    Sears, Jeanne M; Blanar, Laura; Bowman, Stephen M

    2014-01-01

    Acute work-related trauma is a leading cause of death and disability among U.S. workers. Occupational health services researchers have described the pressing need to identify valid injury severity measures for purposes such as case-mix adjustment and the construction of appropriate comparison groups in programme evaluation, intervention, quality improvement, and outcome studies. The objective of this study was to compare the performance of several injury severity scores and scoring methods in the context of predicting work-related disability and medical cost outcomes. Washington State Trauma Registry (WTR) records for injuries treated from 1998 to 2008 were linked with workers' compensation claims. Several Abbreviated Injury Scale (AIS)-based injury severity measures (ISS, New ISS, maximum AIS [maxAIS]) were estimated directly from ICD-9-CM codes using two software packages: (1) ICDMAP-90, and (2) Stata's user-written ICDPIC programme. ICDMAP-90 and ICDPIC scores were compared with existing WTR scores using the Akaike Information Criterion, the amount of variance explained, and estimated effects on outcomes. Competing risks survival analysis was used to evaluate work disability outcomes. Adjusted total medical costs were modelled using linear regression. The linked sample contained 6052 work-related injury events. There was substantial agreement between WTR scores and those estimated by ICDMAP-90 (kappa=0.73), and between WTR scores and those estimated by ICDPIC (kappa=0.68). Work disability and medical costs increased monotonically with injury severity, and injury severity was a significant predictor of work disability and medical cost outcomes in all models. WTR and ICDMAP-90 scores performed better with regard to predicting outcomes than did ICDPIC scores, but effect estimates were similar. Of the three severity measures, maxAIS was usually the weakest, except when predicting total permanent disability. Injury severity was significantly associated with work disability and medical cost outcomes for work-related injuries. Injury severity can be estimated using either ICDMAP-90 or ICDPIC when ICD-9-CM codes are available. We observed little practical difference between severity measures or scoring methods. This study demonstrated that using existing software to estimate injury severity may be useful to enhance occupational injury surveillance and research. Copyright © 2013 Elsevier Ltd. All rights reserved.
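
    The agreement statistic quoted above is Cohen's kappa; a minimal example with made-up severity bands (the real analysis compares registry scores with ICD-9-CM-derived scores):

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical ISS bands (e.g. 1-8, 9-15, 16-24, 25+) coded 0-3 for the
        # same injury events scored two different ways.
        wtr_band    = [0, 1, 1, 2, 3, 2, 1, 0, 2, 3]
        icdmap_band = [0, 1, 2, 2, 3, 2, 1, 0, 2, 3]

        print(cohen_kappa_score(wtr_band, icdmap_band))   # chance-corrected agreement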

  15. The influence of pelvic adjustment on vertical jump height in female university students with functional leg length inequality.

    PubMed

    Gong, Wontae

    2015-01-01

    [Purpose] This study aimed to investigate the effect of pelvic adjustment on vertical jump height (VJH) in female university students with functional leg length inequality (FLLI). [Subjects] Thirty female university students with FLLI were divided into a pelvic adjustment group (n = 15) and a stretching (control) group (n = 15). [Methods] VJH was measured using an OptoGait. [Results] After the intervention, jump height improved significantly compared with the pre-intervention height only in the pelvic adjustment group, while FLLI showed statistically significant improvement in both groups. [Conclusion] Pelvic adjustment using the Gonstead method can be applied to reduce FLLI and increase VJH.

  16. Nonlinear predictive control for adaptive adjustments of deep brain stimulation parameters in basal ganglia-thalamic network.

    PubMed

    Su, Fei; Wang, Jiang; Niu, Shuangxia; Li, Huiyan; Deng, Bin; Liu, Chen; Wei, Xile

    2018-02-01

    The efficacy of deep brain stimulation (DBS) for Parkinson's disease (PD) depends in part on the post-operative programming of stimulation parameters. Closed-loop stimulation is one method to realize the frequent adjustment of stimulation parameters. This paper introduces the nonlinear predictive control method for the online adjustment of DBS amplitude and frequency. The approach was tested in a computational model of the basal ganglia-thalamic network. An autoregressive Volterra model was used to identify the process model based on physiological data. Simulation results illustrated the efficiency of the closed-loop stimulation methods (amplitude adjustment and frequency adjustment) in improving the relay reliability of thalamic neurons compared with the PD state. In addition, compared with constant 130 Hz DBS, the closed-loop stimulation methods significantly reduced energy consumption. Analysis of the inter-spike-interval (ISI) distributions of basal ganglia neurons showed that the network activity evoked by closed-loop frequency-adjustment stimulation was closer to the normal state. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Psychological adjustment to IDDM: 10-year follow-up of an onset cohort of child and adolescent patients.

    PubMed

    Jacobson, A M; Hauser, S T; Willett, J B; Wolfsdorf, J I; Dvorak, R; Herman, L; de Groot, M

    1997-05-01

    To evaluate the psychological adjustment of young adults with IDDM in comparison with similarly aged individuals without chronic illness. An onset cohort of young adults (n = 57), ages 19-26 years, who have been followed over a 10-year period since diagnosis, was compared with a similarly aged group of young adults identified at the time of a moderately severe, acute illness (n = 54) and followed over the same 10-year period. The groups were assessed at 10-year follow-up in terms of 1) sociodemographic indices (e.g., schooling, employment, delinquent activities, drug use), 2) psychiatric symptoms, and 3) perceived competence. In addition, IDDM patients were examined for longitudinal change in adjustment to diabetes. The groups differed only minimally in terms of sociodemographic indices, with similar rates of high school graduation, post-high school education, employment, and drug use. The IDDM group reported fewer criminal convictions and fewer non-diabetes-related illness episodes than the comparison group. There were no differences in psychiatric symptoms. However, IDDM patients reported lower perceived competence, with specific differences found on the global self-worth, sociability, physical appearance, being an adequate provider, and humor subscales. The IDDM patients reported improving adjustment to their diabetes over the course of the 10-year follow-up. Overall, the young adults with IDDM appeared to be as psychologically well adjusted as the young adults without a chronic illness. There were, however, indications of lower self-esteem in the IDDM patients that could either portend or predispose them to risk for future depression or other difficulties in adaptation.

  18. Comparing maximum intercuspal contacts of virtual dental patients and mounted dental casts.

    PubMed

    Delong, Ralph; Ko, Ching-Chang; Anderson, Gary C; Hodges, James S; Douglas, W H

    2002-12-01

    Quantitative measures of occlusal contacts are of paramount importance in the study of chewing dysfunction. A tool is needed to identify and quantify occlusal parameters without occlusal interference caused by the technique of analysis. This laboratory simulation study compared occlusal contacts constructed from 3-dimensional images of dental casts and interocclusal records with contacts found by use of conventional methods. Dental casts of 10 completely dentate adults were mounted in a semi-adjustable Denar articulator. Maximum intercuspal contacts were marked on the casts using red film. Intercuspal records made with an experimental vinyl polysiloxane impression material recorded maximum intercuspation. Three-dimensional virtual models of the casts and interocclusal records were made using custom software and an optical scanner. Contacts were calculated between virtual casts aligned manually (CM), aligned with interocclusal records scanned seated on the mandibular casts (C1) or scanned independently (C2), and directly from virtual interocclusal records (IR). Sensitivity and specificity calculations used the marked contacts as the standard. Contact parameters were compared between method pairs. Statistical comparisons used analysis of variance and the Tukey-Kramer post hoc test (P<.05). Sensitivities (range 0.76-0.89) did not differ significantly among the 4 methods (P=.14); however, specificities (range 0.89-0.98) were significantly lower for IR (P=.0001). Contact parameters of methods CM, C1, and C2 differed significantly from those of method IR (P<.02). The ranking based on method pair comparisons was C2/C1 > CM/C1 = CM/C2 > C2/IR > CM/IR > C1/IR, where ">" means "closer than." Within the limits of this study, occlusal contacts calculated from aligned virtual casts accurately reproduce articulator contacts.
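
    The sensitivity and specificity calculation against the red-film standard can be summarized as follows (a generic sketch; array names are placeholders):

        import numpy as np

        def sensitivity_specificity(predicted, reference):
            """predicted, reference: boolean arrays over candidate contact sites,
            with the marked (red-film) contacts as the reference standard."""
            predicted, reference = np.asarray(predicted), np.asarray(reference)
            tp = np.sum(predicted & reference)
            tn = np.sum(~predicted & ~reference)
            fp = np.sum(predicted & ~reference)
            fn = np.sum(~predicted & reference)
            return tp / (tp + fn), tn / (tn + fp)

        # e.g. sens, spec = sensitivity_specificity(cm_contacts, film_contacts)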

  19. HIV quality report cards: impact of case-mix adjustment and statistical methods.

    PubMed

    Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N

    2014-10-15

    There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix adjusted viral control for 91 local systems caring for 12 368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10 913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to the selection of case-mix adjustment methods, and the potential for unadjusted risk when using variables limited to current administrative databases, the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
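
    The two estimators being compared can be written compactly; this sketch assumes a patient-level model has already produced predicted probabilities (with and without the system's estimated effect) and is meant only to show the contrast, not the authors' code.

        import numpy as np

        def observed_to_expected(y, p_expected):
            """O/E: observed successes divided by the sum of case-mix-predicted
            probabilities from a model with no system term."""
            return np.sum(y) / np.sum(p_expected)

        def risk_standardized_ratio(p_with_system, p_average_system):
            """RSR: predictions that include the system's estimated effect, divided
            by predictions for the same patients at an 'average' system."""
            return np.sum(p_with_system) / np.sum(p_average_system)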

  20. Apertured averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat toped and dark hollow beams

    NASA Astrophysics Data System (ADS)

    Eyyuboğlu, Halil T.

    2015-03-01

    Aperture-averaged scintillation requires the evaluation of a rather complicated irradiance covariance function. Here we develop a much simpler numerical method based on our earlier introduced semi-analytic approach. Using this method, we calculate the aperture-averaged scintillation of fully and partially coherent Gaussian, annular Gaussian, flat-topped and dark hollow beams. For comparison, the principles of equal source beam power and of normalizing the aperture-averaged scintillation with respect to received power are applied. Our results indicate that for fully coherent beams, upon adjusting the aperture sizes to capture 10 and 20% of the equal source power, the Gaussian beam needs the largest aperture opening, yielding the lowest aperture-averaged scintillation, whilst the opposite occurs for annular Gaussian and dark hollow beams. When assessed on the basis of received-power-normalized aperture-averaged scintillation at a fixed propagation distance and aperture size, annular Gaussian and dark hollow beams appear to have the lowest scintillation. Just as in the case of point-like scintillation, partially coherent beams offer less aperture-averaged scintillation than fully coherent beams, but this performance improvement relies on larger aperture openings. Upon normalizing the aperture-averaged scintillation with respect to received power, fully coherent beams become more advantageous than partially coherent ones.

  1. Comparison of four methods for deriving hospital standardised mortality ratios from a single hierarchical logistic regression model.

    PubMed

    Mohammed, Mohammed A; Manktelow, Bradley N; Hofer, Timothy P

    2016-04-01

    There is interest in deriving case-mix adjusted standardised mortality ratios so that comparisons between healthcare providers, such as hospitals, can be undertaken, in the controversial belief that variability in standardised mortality ratios reflects quality of care. Typically, standardised mortality ratios are derived using a fixed effects logistic regression model without a hospital term in the model. This fails to account for the hierarchical structure of the data (patients nested within hospitals), and so a hierarchical logistic regression model is more appropriate. However, four methods have been advocated for deriving standardised mortality ratios from a hierarchical logistic regression model; their agreement is not known, nor is it clear which should be preferred. We found significant differences between the four types of standardised mortality ratios because they reflect a range of underlying conceptual issues. The most subtle issue is the distinction between asking how an average patient fares in different hospitals versus how patients at a given hospital fare at an average hospital. Since the answers to these questions are not the same, and since the choice between these two approaches is not obvious, the extent to which profiling hospitals on mortality can be undertaken safely and reliably, without resolving these methodological issues, remains questionable. © The Author(s) 2012.
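
    The conceptual distinction above can be made concrete with a hierarchical logistic model logit(p) = Xb + u_h, where u_h is the hospital effect; the two functions below answer the two different questions and are only a sketch of two possible constructions, not the four methods compared in the paper.

        import numpy as np

        def expit(z):
            return 1.0 / (1.0 + np.exp(-z))

        def smr_patients_here_vs_average_hospital(y, X, beta):
            """Observed deaths among this hospital's patients divided by the deaths
            expected if the same patients attended an average hospital (u_h = 0)."""
            return np.sum(y) / np.sum(expit(X @ beta))

        def smr_average_patient_at_this_hospital(X_all, beta, u_h):
            """Expected mortality of the pooled case mix at this hospital (with its
            estimated effect u_h) relative to an average hospital."""
            return expit(X_all @ beta + u_h).mean() / expit(X_all @ beta).mean()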

  2. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment

    PubMed Central

    O’Brien, Katie M.; Upson, Kristen; Cook, Nancy R.; Weinberg, Clarice R.

    2015-01-01

    Background Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. Objectives We compared adjustment methods, including novel approaches, using simulated case–control data. Methods Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. Results For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias, and its 95% confidence intervals achieved coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. Conclusions To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals. Citation O’Brien KM, Upson K, Cook NR, Weinberg CR. 2016. Environmental chemicals in urine and blood: improving methods for creatinine and lipid adjustment. Environ Health Perspect 124:220–227; http://dx.doi.org/10.1289/ehp.1509693 PMID:26219104
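
    A rough sketch of the kind of approach described for urinary biomarkers (covariate-adjusted standardization plus creatinine as a covariate); the column names and model specification are assumptions, not the authors' exact method.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("biomarkers.csv")   # hypothetical columns used below

        # 1) Predict creatinine from its own determinants (e.g. age, BMI, sex).
        cr_fit = smf.ols("np.log(creatinine) ~ age + bmi + sex", data=df).fit()
        df["creatinine_pred"] = np.exp(cr_fit.fittedvalues)

        # 2) Covariate-adjusted standardization of the urinary biomarker.
        df["biomarker_std"] = df["biomarker"] * df["creatinine_pred"] / df["creatinine"]

        # 3) Outcome model with the standardized exposure and creatinine as a covariate.
        out = smf.logit("case ~ np.log(biomarker_std) + np.log(creatinine) + age + bmi + sex",
                        data=df).fit()
        print(out.params["np.log(biomarker_std)"])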

  3. Comparison of the Predictive Validity of Three Questionnaires Measuring Psychological Defenses.

    DTIC Science & Technology

    1980-05-01

    and a mean of 11.2 years (± 1.20 S.D.) of education. Fifty-four participants had less than 12 years of schooling, 41 had 12 years of schooling, and... major behavioral adjustment problems. "Years of education" was also used as a measure of behavioral adjustment, as completion of high school may... of defensive differences between the TF, US, and MC groups using analysis of variance procedures. "Years of education" was related to defenses

  4. Socioeconomic and gender inequalities in job dissatisfaction among Japanese civil servants: the roles of work, family and personality characteristics.

    PubMed

    Sekine, Michikazu; Tatsuse, Takashi; Cable, Noriko; Chandola, Tarani; Marmot, Michael

    2014-01-01

    This study examines (1) whether there are employment grade and gender differences in job dissatisfaction and (2) whether work, family, and personality characteristics explain grade and gender differences in job dissatisfaction. The participants were 3,812 civil servants, aged 20-65, working at a local government in Japan. In both males and females, low control, low social support, work-to-family conflict, type A behaviour pattern and negative affectivity were significantly associated with job dissatisfaction. In females, high demands, long work hours and being unmarried were also associated with job dissatisfaction. Among males, in comparison with the highest grade employees, the age-adjusted odds ratio (OR) for job dissatisfaction in the lowest grade employees was 1.90 (95% CI: 1.40-2.59). The grade differences were reduced to 1.08 (0.76-1.54) after adjustment for work, family and personality characteristics. Among females, similar grade differences were observed, although the differences were not statistically significant. In comparison with males, the age-adjusted OR in females for job dissatisfaction was 1.32 (1.14-1.52). This gender difference was reduced to 0.95 (0.79-1.14) following adjustment for the other factors. The majority of employees belong to low and middle grades, and the number of female employees has increased. Reducing grade and gender differences in work and family characteristics is needed.

  5. Comparative Analysis of Treatment Costs in EUROHOPE.

    PubMed

    Iversen, Tor; Aas, Eline; Rosenqvist, Gunnar; Häkkinen, Unto

    2015-12-01

    This study examines the challenges of estimating risk-adjusted treatment costs in international comparative research, specifically in the European Health Care Outcomes, Performance, and Efficiency (EuroHOPE) project. We describe the diverse format of resource data and challenges of converting these data into resource use indicators that allow meaningful cross-country comparisons. The three cost indicators developed in EuroHOPE are then described, discussed, and applied. We compare the risk-adjusted mean treatment costs of acute myocardial infarction for four of the seven countries in the EuroHOPE project, namely, Finland, Hungary, Norway, and Sweden. The outcome of the comparison depends on the time perspective as well as on the particular resource use indicator. We argue that these complementary indicators add to our understanding of the variation in resource use across countries. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Number of 24-Hour Diet Recalls Needed to Estimate Energy Intake

    PubMed Central

    MA, Yunsheng; Olendzki, Barbara C.; Pagoto, Sherry L.; Hurley, Thomas G.; Magner, Robert P.; Ockene, Ira S.; Schneider, Kristin L.; Merriam, Philip A.; Hébert, James R.

    2009-01-01

    Purpose Twenty-four-hour diet recall interviews (24HRs) are used to assess diet and to validate other diet assessment instruments. Therefore it is important to know how many 24HRs are required to describe an individual's intake. Method Seventy-nine middle-aged white women completed seven 24HRs over a 14-day period, during which energy expenditure (EE) was determined by the doubly labeled water method (DLW). Mean daily intakes were compared to DLW-derived EE using paired t tests. Linear mixed models were used to evaluate the effect of call sequence and day of the week on 24HR-derived energy intake while adjusting for education, relative body weight, social desirability, and an interaction between call sequence and social desirability. Results Mean EE from DLW was 2115 kcal/day. Adjusted 24HR-derived energy intake was lowest at call 1 (1501 kcal/day); significantly higher energy intake was observed at calls 2 and 3 (2246 and 2315 kcal/day, respectively). Energy intake on Friday was significantly lower than on Sunday. Averaging energy intake from the first two calls better approximated true energy expenditure than did the first call, and averaging the first three calls further improved the estimate (p = 0.02 for both comparisons). Additional calls did not improve estimation. Conclusions Energy intake is underreported on the first 24HR. Three 24HRs appear optimal for estimating energy intake. PMID:19576535
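
    A sketch of the kind of mixed model described (call sequence and weekday as fixed effects, participant as a random intercept); the file and column names are hypothetical.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Long format: one row per participant per recall, with columns
        # id, kcal, call_seq (1-7), weekday, education, rel_weight, soc_des.
        df = pd.read_csv("recalls_long.csv")

        fit = smf.mixedlm("kcal ~ C(call_seq) + C(weekday) + education + rel_weight"
                          " + soc_des + C(call_seq):soc_des",
                          data=df, groups=df["id"]).fit()
        print(fit.summary())   # adjusted intakes by call sequence follow from the fit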

  7. Influence of rotational energy barriers to the conformational search of protein loops in molecular dynamics and ranking the conformations.

    PubMed

    Tappura, K

    2001-08-15

    An adjustable-barrier dihedral angle potential was added as an extension to a novel, previously presented soft-core potential to study its contribution to the efficacy of the search of the conformational space in molecular dynamics. As opposed to conventional soft-core potential functions, the leading principle in the design of the new soft-core potential and of its extension, the soft-core and adjustable-barrier dihedral angle (SCADA) potential, was to maintain the main equilibrium properties of the original force field. This qualifies the methods for a variety of a priori modeling problems without the need for the additional restraints typically required with conventional soft-core potentials. In the present study, the different potential energy functions are applied to the problem of predicting loop conformations in proteins. Comparison of the performance of the soft-core and SCADA potentials showed that the main hurdles for efficient sampling of the conformational space of (loops in) proteins are related to the high energy barriers caused by the Lennard-Jones and Coulombic energy terms, and not to the rotational barriers, although the conformational search can be further enhanced by lowering the rotational barriers of the dihedral angles. Finally, different evaluation methods were studied and a few promising criteria were found to distinguish near-native loop conformations from incorrect ones.

  8. Comparison of referral and non-referral hypertensive disorders during pregnancy: an analysis of 271 consecutive cases at a tertiary hospital.

    PubMed

    Liu, Ching-Ming; Chang, Shuenn-Dyh; Cheng, Po-Jen

    2005-05-01

    This retrospective cohort study analyzed the clinical manifestations in patients with preeclampsia and eclampsia, assessed risk factors in relation to the severity of hypertensive disorders, and compared maternal and perinatal morbidity and mortality between referral and non-referral patients. A total of 271 pregnant women with preeclampsia and eclampsia were assessed (1993 to 1997). Chi-square analysis was used to compare categorical variables and to compare proportions between the two independent groups, with confidence intervals and odds ratios calculated for the referral and non-referral groups. Multivariate logistic regression was used to adjust for potential confounding risk factors. Of the 271 patients included in this study, 71 (26.2%) were referrals from other hospitals. Most of the referral patients (62, 87.3%) were transferred between 21 and 37 weeks of gestation. Univariate analysis revealed that referral patients with hypertensive disorder were significantly associated with SBP ≥180, DBP ≥105, severe preeclampsia, haemolysis, elevated liver enzymes, low platelets (HELLP), emergency C/S, maternal complications, and low birth weight babies, as well as poor Apgar scores. Multivariate logistic regression analyses revealed that the risk factors significantly associated with referral status included: diastolic blood pressure above 105 mmHg (adjusted odds ratio, 2.09; 95 percent confidence interval, 1.06 to 4.13; P = 0.034), severe preeclampsia (adjusted odds ratio, 3.46; 95 percent confidence interval, 1.76 to 6.81; P < 0.001), eclampsia (adjusted odds ratio, 2.77; 95 percent confidence interval, 0.92 to 8.35; P = 0.071), and HELLP syndrome (adjusted odds ratio, 18.81; 95 percent confidence interval, 2.14 to 164.99; P = 0.008). The factors significantly associated with referral patients with hypertensive disorders were severe preeclampsia, HELLP, and eclampsia. Lack of prenatal care was the major avoidable factor found in referral and high-risk patients. Time constraints relating to referral patients and the appropriateness of patient-centered care for patient safety and better quality of health care need further investigation in national and multi-center clinical trials.

  9. A comparison of patient characteristics and survival in two trauma centres located in different countries.

    PubMed

    Templeton, J; Oakley, P A; MacKenzie, G; Cook, A L; Brand, D; Mullins, R J; Trunkey, D D

    2000-09-01

    The aim of the study was to compare patient characteristics and mortality in severely injured patients in two trauma centres located in different countries, allowing for differences in case-mix. It represents a direct benchmarking exercise between the trauma centres at the North Staffordshire Hospital (NSH), Stoke-on-Trent, UK and the Oregon Health Sciences University (OHSU) Hospital, Portland, Oregon, USA. Patients of all ages admitted to the two hospitals during 1995 and 1996 with an Injury Severity Score >15 were included, except for those who died in the emergency departments. Twenty-three factors were studied, including the Injury Severity Score, Glasgow Coma Score, mechanism of injury and anatomical site of injury. Outcome analysis was based on mortality at discharge. The pattern of trauma differed significantly between Stoke and Portland. Patients from Stoke tended to be older, presented with a lower conscious level and a lower systolic blood pressure and were intubated less frequently before arriving at hospital. Mortality depended on similar factors in both centres, especially age, highest AIS score, systolic blood pressure and Glasgow Coma Score. The crude analysis of mortality showed a highly significant odds ratio of 1.64 in Stoke compared with Portland. Single-factor adjustments were made for the above four factors, which had a similar influence on mortality in both centres. Adjusting for the first three factors individually did not alter the odds ratio, which stayed in the range 1.53-1.59 and remained highly significant. Adjusting for the Glasgow Coma Score reduced the odds ratio to 0.82 and rendered it non-significant. In a multi-factor logistic regression model incorporating all of the factors shown to influence mortality in either centre, the odds ratio was 1.7 but was not significant. The analysis illustrates the limitations and pitfalls of making crude outcome comparisons between centres. Highly significant differences in crude mortality were rendered non-significant by case-mix adjustments, supporting the null hypothesis that the two centres were equally effective in terms of this short-term indicator of outcome. To achieve a meaningful comparison between centres, adjustments must be made for the factors which affect mortality.

  10. A prevalence study on outdoor air pollution and respiratory diseases in children in Zasavje, Slovenia, as a lever to trigger evidence-based environmental health activities.

    PubMed

    Kukec, Andreja; Farkas, Jerneja; Erzen, Ivan; Zaletel-Kragelj, Lijana

    2013-01-01

    The aim of this study was to estimate the population burden of respiratory diseases in the Zasavje region of Slovenia that can be attributed to outdoor air pollution, in order to gain relevant grounds for evidence-based public health activities. In 2008, 981 schoolchildren (age 6 to 12 years) were observed in a prevalence study. The prevalence of chronic respiratory diseases (CRD) and frequent acute respiratory symptoms (FARS) was related to the level of outdoor air pollution in the local environment (low, moderate and high pollution areas). Logistic regression was used as the method of statistical analysis. The prevalence of CRD was 3.0% in low pollution areas, 7.5% in moderate pollution areas, and 9.7% in high pollution areas (p=0.005). After adjustment for the effects of confounders, 2.91-times higher odds for CRD were registered in high pollution areas in comparison to low pollution areas (p=0.017). The prevalence of FARS was 7.8% in low pollution areas, 13.3% in moderate pollution areas and 15.9% in high pollution areas (p=0.010). After adjustment for the effects of confounders, 2.02-times higher odds for FARS were registered in high pollution areas in comparison to low pollution areas (p=0.023). The study confirmed a significantly higher prevalence of CRD and FARS in children living in high pollution areas of Zasavje. These results at least partially prompted mutual understanding and cross-sectoral cooperation, which are prerequisites for solving complex problems involving the impact of air pollution on health.

  11. Effectiveness of the Lunch is in the Bag program on communication between the parent, child and child-care provider around fruits, vegetables and whole grain foods: a group-randomized controlled trial

    PubMed Central

    Rashid, Tasnuva; Ranjit, Nalini; Byrd-Williams, Courtney; Chuang, Ru-Jye; Roberts-Gray, Cindy; Briley, Margaret; Sweitzer, Sara; Hoelscher, Deanna M.

    2015-01-01

    Objective To evaluate the effectiveness of the parent- and early care education (ECE) center-based Lunch is in the Bag program on communication between parent, child, and their ECE center providers around fruits, vegetables and whole grain foods (FVWG). Method A total of 30 ECE centers and 577 parent-child dyads participated in this group-randomized controlled trial conducted from 2011–2013 in Texas (intervention group: 15 ECE centers, 327 dyads; comparison group: 15 ECE centers, 250 dyads). Parent-child and parent-ECE center provider communication was measured using a parent-reported survey administered at baseline and at the end of the five-week intervention period. Multilevel linear regression analysis was used to compare the pre-to-post intervention changes in the parent-child and parent-ECE center provider communication scales. Significance was set at p<0.05. Results At baseline, parent-child and parent-ECE center provider communication scores were low. There was a significant increase post-intervention in parent-ECE center provider communication around vegetables (adjusted β = 0.78, 95%CI: 0.13, 1.43, p=0.002) and around fruit (adjusted β = 0.62, 95%CI: 0.04, 0.20, p=0.04) among the parents in the intervention group as compared to those in the comparison group. There were no significant intervention effects on parent-child communication. Conclusion Lunch is in the Bag had significant positive effects on improving communication between the parents and ECE center providers around FVWG. PMID:26190371

  12. Associations of interleukin-1 gene cluster polymorphisms with C-reactive protein concentration and lung function decline in smoking-induced chronic obstructive pulmonary disease

    PubMed Central

    Wang, Yu; Shumansky, Karey; Sin, Don D; Man, SF Paul; Akhabir, Loubna; Connett, John E; Anthonisen, Nicholas R; Paré, Peter D; Sandford, Andrew J; He, Jian-Qing

    2015-01-01

    Objective: We previously reported an association of haplotypes formed by IL-1b (IL1B) -511C/T (rs16944) and a variable number of tandem repeats (rs2234663) in intron 3 of the IL-1 receptor antagonist gene (IL1RN) with the rate of lung function decline in smoking-induced COPD. The aim of the current study was to further investigate this association. Methods: We genotyped an additional 19 polymorphisms in the IL1 cluster (including IL1A, IL1B and IL1RN) in non-Hispanic whites who had the fastest (n = 268) and the slowest (n = 292) decline of FEV1% predicted in the same study. We also analyzed the association of all 21 polymorphisms with serum CRP levels. Results: None of the 21 polymorphisms showed a significant association with the rate of decline of lung function or with CRP levels after adjusting for multiple comparisons. Before adjusting for multiple comparisons, only IL1RN_19327 (rs315949) showed a significant association with lung function decline (P = 0.03, additive model). The frequencies of genotypes containing the IL1RN_19327A allele were 71.9% and 62.2%, respectively, in the fast and slow decline groups (P = 0.02, odds ratio = 1.6, 95% confidence interval = 1.1-2.3); IL1B_5200 (rs1143633) and rs2234663 in IL1RN were associated with serum CRP levels (P = 0.04 and 0.03, respectively). Conclusions: No single marker was significantly associated with either the rate of lung function decline or serum CRP levels. PMID:26722511
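
    The multiple-comparison step can be reproduced generically with statsmodels (illustrative p-values only, not the study's data):

        from statsmodels.stats.multitest import multipletests

        pvals = [0.03, 0.04, 0.02, 0.30, 0.77, 0.51]   # e.g. one p-value per polymorphism

        reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
        reject_fdr,  p_fdr,  _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
        print(p_bonf)   # nominally significant hits can disappear after adjustment
        print(p_fdr)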

  13. The Medicare Drug Benefit (Part D) and Treatment of Heart Failure in Older Adults

    PubMed Central

    Donohue, Julie M.; Zhang, Yuting; Lave, Judith R.; Gellad, Walid F.; Men, Aiju; Perera, Subashan; Hanlon, Joseph T.

    2010-01-01

    Background Adherence to pharmacotherapy for heart failure is poor among older adults due, in part, to high prescription drug costs. We examined the impact of improvements in drug coverage under Medicare Part D on utilization of, and adherence to, medications for heart failure in older adults. Methods We used a quasi-experimental approach to analyze pharmacy claims for 6,950 individuals aged ≥65 years with heart failure enrolled in a Medicare managed care organization two years before and after Part D’s implementation. We compared prescription fill patterns among individuals who moved from limited (quarterly benefits caps of $150 or $350) or no drug coverage to Part D in 2006 to those who had generous employer-sponsored coverage throughout the study period. Results Individuals who previously lacked drug coverage filled approximately 6 more heart failure prescriptions annually after Part D (adjusted ratio of prescription counts = 1.36, 95% confidence interval [CI] 1.29-1.44; p<0.0001 relative to the comparison group). Those previously lacking drug coverage were more likely to fill prescriptions for an angiotensin converting enzyme inhibitor/angiotensin II receptor blocker plus a beta blocker after Part D (adjusted ratio of odds ratios [AROR] = 1.73; 95% CI 1.42-2.10; p<0.0001), and more likely to be adherent to such pharmacotherapy (AROR = 2.95; 95% CI 1.85-4.69; p<0.0001) relative to the comparison group. Conclusions Medicare Part D was associated with improved access to medications and adherence to pharmacotherapy in older adults with heart failure. PMID:20598987

  14. Comparison of DSMC Reaction Models with QCT Reaction Rates for Nitrogen

    DTIC Science & Technology

    2016-07-17

    Introduction • Comparison with measurements is the final goal • Validation...model verification and parameter adjustment • Four chemistry models: total collision energy (TCE), quantum kinetic (QK), vibration-dissociation favoring

  15. Increased risk of concurrent hepatitis C among male patients with schizophrenia.

    PubMed

    Chiu, Yu-Lung; Lin, Herng-Ching; Kao, Nai-Wen; Kao, Senyong; Lee, Hsin-Chien

    2017-12-01

    Prior studies have attempted to explore the association between schizophrenia and hepatitis C virus (HCV), but their conclusions were inconsistent. This study aimed to examine the association of schizophrenia with HCV using a population-based dataset in Taiwan. A total of 6097 patients with schizophrenia and 6097 sex- and age-matched comparison patients without schizophrenia were included in this study. We defined the dependent variable of interest as whether or not a patient had received a diagnosis of HCV. We found that, of the sampled patients, 2.1% of patients with schizophrenia and 1.4% of comparison patients had concurrent HCV. We further found that schizophrenia was not significantly associated with concurrent HCV after adjusting for sex, age, urbanization level, geographic region, monthly income, and drug abuse. However, among the sampled male patients, the adjusted odds of concurrent hepatitis C for patients with schizophrenia were 1.72 times higher than the odds among comparison patients. We failed to observe this association among the sampled female patients. We concluded that schizophrenia was not significantly associated with concurrent HCV overall; however, among male patients, the risk of concurrent HCV was higher in patients with schizophrenia than in comparison patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Considerations in comparing the U.S. Geological Survey one‐year induced‐seismicity hazard models with “Did You Feel It?” and instrumental data

    USGS Publications Warehouse

    White, Isabel; Liu, Taojun; Luco, Nicolas; Liel, Abbie

    2017-01-01

    The recent steep increase in seismicity rates in Oklahoma, southern Kansas, and other parts of the central United States led the U.S. Geological Survey (USGS) to develop, for the first time, a probabilistic seismic hazard forecast for one year (2016) that incorporates induced seismicity. In this study, we explore a process to ground‐truth the hazard model by comparing it with two databases of observations: modified Mercalli intensity (MMI) data from the “Did You Feel It?” (DYFI) system and peak ground acceleration (PGA) values from instrumental data. Because the 2016 hazard model was heavily based on earthquake catalogs from 2014 to 2015, this initial comparison utilized observations from these years. Annualized exceedance rates were calculated with the DYFI and instrumental data for direct comparison with the model. These comparisons required assessment of the options for converting hazard model results and instrumental data from PGA to MMI for comparison with the DYFI data. In addition, to account for known differences that affect the comparisons, the instrumental PGA and DYFI data were declustered, and the hazard model was adjusted for local site conditions. With these adjustments, examples at sites with the most data show reasonable agreement in the exceedance rates. However, the comparisons were complicated by the spatial and temporal completeness of the instrumental and DYFI observations. Furthermore, most of the DYFI responses are in the MMI II–IV range, whereas the hazard model is oriented toward forecasts at higher ground‐motion intensities, usually above about MMI IV. Nevertheless, the study demonstrates some of the issues that arise in making these comparisons, thereby informing future efforts to ground‐truth and improve hazard modeling for induced‐seismicity applications.
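
    The basic empirical quantity being compared with the hazard curves is an annualized exceedance rate; the sketch below shows that calculation in its simplest form, ignoring the declustering and site-condition adjustments discussed above (values and names are hypothetical).

        import numpy as np

        def annual_exceedance_rate(pga_values_g, thresholds_g, years):
            """Empirical rate per year at which observed PGA exceeds each threshold."""
            pga = np.asarray(pga_values_g)
            return {t: np.sum(pga > t) / years for t in thresholds_g}

        # Two years of hypothetical observations at one site:
        rates = annual_exceedance_rate([0.01, 0.03, 0.12, 0.05, 0.21],
                                       thresholds_g=[0.02, 0.05, 0.10, 0.20], years=2.0)
        print(rates)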

  17. Improved automatic adjustment of density and contrast in FCR system using neural network

    NASA Astrophysics Data System (ADS)

    Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo

    1994-05-01

    The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data in the radiation field. The advanced image recognition methods proposed in this paper, which use neural network technology, can improve this automatic adjustment performance. There are two methods, both based on a 3-layer neural network trained with back propagation. In one method the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest on the histogram changes with differences in positioning; the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with differences in positioning. We experimentally confirm the validity of these methods, in terms of automatic adjustment performance, compared with conventional histogram analysis methods.
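
    As a stand-in for the histogram-input variant (not the FCR system's actual network or training data), a small multi-output network can be sketched with scikit-learn; the histogram size and the two output parameters are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((200, 64))                     # normalized grey-level histograms
        X /= X.sum(axis=1, keepdims=True)
        y = rng.random((200, 2))                      # e.g. density and contrast targets

        net = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                           solver="adam", max_iter=2000)
        net.fit(X, y)                                 # 3-layer net trained by backpropagation
        print(net.predict(X[:1]))                     # predicted adjustment for a new exam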

  18. Payment methods for outpatient care facilities.

    PubMed

    Yuan, Beibei; He, Li; Meng, Qingyue; Jia, Liying

    2017-03-03

    Outpatient care facilities provide a variety of basic healthcare services to individuals who do not require hospitalisation or institutionalisation, and are usually the patient's first contact. The provision of outpatient care contributes to immediate and large gains in health status, and a large portion of total health expenditure goes to outpatient healthcare services. Payment method is one of the most important incentive methods applied by purchasers to guide the performance of outpatient care providers. To assess the impact of different payment methods on the performance of outpatient care facilities and to analyse the differences in impact of payment methods in different settings. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), 2016, Issue 3, part of the Cochrane Library (searched 8 March 2016); MEDLINE, OvidSP (searched 8 March 2016); Embase, OvidSP (searched 24 April 2014); PubMed (NCBI) (searched 8 March 2016); Dissertations and Theses Database, ProQuest (searched 8 March 2016); Conference Proceedings Citation Index (ISI Web of Science) (searched 8 March 2016); IDEAS (searched 8 March 2016); EconLit, ProQuest (searched 8 March 2016); POPLINE, K4Health (searched 8 March 2016); China National Knowledge Infrastructure (searched 8 March 2016); Chinese Medicine Premier (searched 8 March 2016); OpenGrey (searched 8 March 2016); ClinicalTrials.gov, US National Institutes of Health (NIH) (searched 8 March 2016); World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (searched 8 March 2016); and the website of the World Bank (searched 8 March 2016).In addition, we searched the reference lists of included studies and carried out a citation search for the included studies via ISI Web of Science to find other potentially relevant studies. We also contacted authors of the main included studies regarding any further published or unpublished work. Randomised trials, non-randomised trials, controlled before-after studies, interrupted time series, and repeated measures studies that compared different payment methods for outpatient health facilities. We defined outpatient care facilities in this review as facilities that provide health services to individuals who do not require hospitalisation or institutionalisation. We only included methods used to transfer funds from the purchaser of healthcare services to health facilities (including groups of individual professionals). These include global budgets, line-item budgets, capitation, fee-for-service (fixed and unconstrained), pay for performance, and mixed payment. The primary outcomes were service provision outcomes, patient outcomes, healthcare provider outcomes, costs for providers, and any adverse effects. At least two review authors independently extracted data and assessed the risk of bias. We conducted a structured synthesis. We first categorised the comparisons and outcomes and then described the effects of different types of payment methods on different categories of outcomes. We used a fixed-effect model for meta-analysis within a study if a study included more than one indicator in the same category of outcomes. We used a random-effects model for meta-analysis across studies. If the data for meta-analysis were not available in some studies, we calculated the median and interquartile range. We reported the risk ratio (RR) for dichotomous outcomes and the relative change for continuous outcomes. 
We included 21 studies from Afghanistan, Burundi, China, Democratic Republic of Congo, Rwanda, Tanzania, the United Kingdom, and the United States, covering health facilities that provide primary health care and mental health care. There were three kinds of payment comparisons. 1) Pay for performance (P4P) combined with some existing payment method (capitation or different kinds of input-based payment) compared to the existing payment method. We included 18 studies in this comparison; however, we did not include five of them in the effects analysis due to high risk of bias. From the 13 studies, we found that the extra P4P incentives probably slightly improved the health professionals' use of some tests and treatments (adjusted RR median = 1.095, range 1.01 to 1.17; moderate-certainty evidence), and probably led to little or no difference in adherence to quality assurance criteria (adjusted percentage change median = -1.345%, range -8.49% to 5.8%; moderate-certainty evidence). We also found that P4P incentives may have led to little or no difference in patients' utilisation of health services (adjusted RR median = 1.01, range 0.96 to 1.15; low-certainty evidence) and may have led to little or no difference in the control of blood pressure or cholesterol (adjusted RR = 1.01, range 0.98 to 1.04; low-certainty evidence). 2) Capitation combined with P4P compared to fee-for-service (FFS). One study found that compared with FFS, a capitated budget combined with payment based on providers' performance on antibiotic prescriptions and patient satisfaction probably slightly reduced antibiotic prescriptions in primary health facilities (adjusted RR 0.84, 95% confidence interval 0.74 to 0.96; moderate-certainty evidence). 3) Capitation compared to FFS. Two studies compared capitation to FFS in mental health centres in the United States. Based on these studies, the effects of capitation compared to FFS on the utilisation and costs of services were uncertain (very low-certainty evidence). Our review found that if policymakers intend to apply P4P incentives to pay health facilities providing outpatient services, this intervention will probably lead to a slight improvement in health professionals' use of tests or treatments, particularly for chronic diseases. However, it may lead to little or no improvement in patients' utilisation of health services or health outcomes. When considering using P4P to improve the performance of health facilities, policymakers should carefully consider each component of their P4P design, including the choice of performance measures, the performance target, payment frequency, whether there will be additional funding, whether the payment level is sufficient to change the behaviours of health providers, and whether the payment to facilities will be allocated to individual professionals. Unfortunately, the studies included in this review did not help to inform those considerations. Well-designed comparisons of different payment methods for outpatient health facilities in low- and middle-income countries, and studies directly comparing different designs (e.g. different payment levels) of the same payment method (e.g. P4P or FFS), are needed.
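
    For readers unfamiliar with the pooling step, a random-effects (DerSimonian-Laird) combination of study-level risk ratios looks like the sketch below; the input values are placeholders, not the review's data.

        import numpy as np

        def pooled_rr_random_effects(log_rr, se):
            """Pool study-level log risk ratios with DerSimonian-Laird weights."""
            log_rr, se = np.asarray(log_rr), np.asarray(se)
            w = 1.0 / se**2
            mu_fe = np.sum(w * log_rr) / np.sum(w)
            q = np.sum(w * (log_rr - mu_fe) ** 2)            # Cochran's Q
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)     # between-study variance
            w_re = 1.0 / (se**2 + tau2)
            mu = np.sum(w_re * log_rr) / np.sum(w_re)
            se_mu = np.sqrt(1.0 / np.sum(w_re))
            return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

        print(pooled_rr_random_effects([0.09, 0.01, 0.16], [0.04, 0.05, 0.06]))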

  19. Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.

    PubMed

    Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde

    2017-03-01

    Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet shrinkage, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates a significant improvement in image quality.
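
    The phase-domain speckle delineation is specific to the authors' complex OCT data, but the wavelet-shrinkage stage is a standard operation. Below is a minimal, generic sketch of soft-threshold wavelet shrinkage on a 2-D image using PyWavelets; the universal-threshold rule, the db4 wavelet, and the synthetic speckled test image are assumptions, not details taken from the paper.

```python
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="db4", level=3):
    """Soft-threshold wavelet shrinkage of a 2-D image (generic sketch)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate noise sigma from the finest diagonal detail band (robust MAD rule)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold (assumption)
    new_coeffs = [coeffs[0]]                         # keep the approximation band untouched
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thresh, mode="soft")
                                for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)

# Example: denoise a synthetic speckled image (multiplicative speckle, log-transformed first)
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean * rng.rayleigh(scale=1.0, size=clean.shape)
denoised = wavelet_shrink(np.log1p(noisy))
```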

  20. Study of micro piezoelectric vibration generator with added mass and capacitance suitable for broadband vibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Qing, E-mail: hqng@163.com; Mao, Xinhua, E-mail: 30400414@qq.com; Chu, Dongliang, E-mail: 569256386@qq.com

    This study proposes an optimized frequency adjustment method for a micro-cantilever-beam-based piezoelectric vibration generator that combines added mass and added capacitance. The key concept of the proposed method is that the frequency adjustment process is divided into two steps: the first is a rough adjustment step that changes the size of the mass added at the end of the cantilever to adjust the frequency in a large-scale and discontinuous manner; the second is a continuous but short-range frequency adjustment via the adjustable added capacitance. Experimental results show that when the initial natural frequency of a micro piezoelectric vibration generator is 69.8 Hz, the natural frequency can be adjusted to any value in the range from 54.2 Hz down to 42.1 Hz using the combination of the added mass and the capacitance. This method simply and effectively matches a piezoelectric vibration generator's natural frequency to the vibration source frequency.

  1. Comparison of propeller cruise noise data taken in the NASA Lewis 8- by 6-foot wind tunnel with other tunnel and flight data

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.

    1989-01-01

    The noise of advanced high speed propeller models measured in the NASA 8- by 6-foot wind tunnel has been compared with model propeller noise measured in another tunnel and with full-scale propeller noise measured in flight. Good agreement was obtained for the noise of a model counterrotation propeller tested in the 8- by 6-foot wind tunnel and in the acoustically treated test section of the Boeing Transonic Wind Tunnel. This good agreement indicates the relative validity of taking cruise noise data on a plate in the 8- by 6-foot wind tunnel compared with the free-field method in the Boeing tunnel. Good agreement was also obtained for both single rotation and counter-rotation model noise comparisons with full-scale propeller noise in flight. The good scale model to full-scale comparisons indicate both the validity of the 8- by 6-foot wind tunnel data and the ability to scale to full size. Boundary layer refraction on the plate provides a limitation to the measurement of forward arc noise in the 8- by 6-foot wind tunnel at the higher harmonics of the blade passing tone. The use of a validated boundary layer refraction model to adjust the data could remove this limitation.

  2. Comparison of propeller cruise noise data taken in the NASA Lewis 8- by 6-foot wind tunnel with other tunnel and flight data

    NASA Technical Reports Server (NTRS)

    Dittmar, James

    1989-01-01

    The noise of advanced high speed propeller models measured in the NASA 8- by 6-foot wind tunnel has been compared with model propeller noise measured in another tunnel and with full-scale propeller noise measured in flight. Good agreement was obtained for the noise of a model counterrotation propeller tested in the 8- by 6-foot wind tunnel and in the acoustically treated test section of the Boeing Transonic Wind Tunnel. This good agreement indicates the relative validity of taking cruise noise data on a plate in the 8- by 6-foot wind tunnel compared with the free-field method in the Boeing tunnel. Good agreement was also obtained for both single rotation and counter-rotation model noise comparisons with full-scale propeller noise in flight. The good scale model to full-scale comparisons indicate both the validity of the 8- by 6-foot wind tunnel data and the ability to scale to full size. Boundary layer refraction on the plate provides a limitation to the measurement of forward arc noise in the 8- by 6-foot wind tunnel at the higher harmonics of the blade passing tone. The use of a validated boundary layer refraction model to adjust the data could remove this limitation.

  3. Mortality Among Adults With Intellectual Disability in England: Comparisons With the General Population

    PubMed Central

    Hosking, Fay J.; Shah, Sunil M.; Harris, Tess; DeWilde, Stephen; Beighton, Carole; Cook, Derek G.

    2016-01-01

    Objectives. To describe mortality among adults with intellectual disability in England in comparison with the general population. Methods. We conducted a cohort study from 2009 to 2013 using data from 343 general practices. Adults with intellectual disability (n = 16 666; 656 deaths) were compared with age-, gender-, and practice-matched controls (n = 113 562; 1358 deaths). Results. Adults with intellectual disability had higher mortality rates than controls (hazard ratio [HR] = 3.6; 95% confidence interval [CI] = 3.3, 3.9). This risk remained high after adjustment for comorbidity, smoking, and deprivation (HR = 3.1; 95% CI = 2.7, 3.4); it was even higher among adults with intellectual disability and Down syndrome or epilepsy. A total of 37.0% of all deaths among adults with intellectual disability were classified as being amenable to health care intervention, compared with 22.5% in the general population (HR = 5.9; 95% CI = 5.1, 6.8). Conclusions. Mortality among adults with intellectual disability is markedly elevated in comparison with the general population, with more than a third of deaths potentially amenable to health care interventions. This mortality disparity suggests the need to improve access to, and quality of, health care among people with intellectual disability. PMID:27310347
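
    The adjusted hazard ratios quoted above come from Cox proportional hazards regression. The sketch below shows how such an adjusted HR is estimated with the lifelines library on simulated data; the variable names, effect sizes, and censoring scheme are hypothetical and are not drawn from the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic illustration only: column names and effect sizes are hypothetical.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "intellectual_disability": rng.integers(0, 2, n),
    "age": rng.uniform(18, 80, n),
    "smoker": rng.integers(0, 2, n),
    "deprivation_quintile": rng.integers(1, 6, n),
})
# Simulate survival times with a higher hazard for the exposed group
baseline = rng.exponential(scale=20, size=n)
time = baseline / np.exp(1.2 * df["intellectual_disability"] + 0.02 * (df["age"] - 50))
df["duration"] = np.minimum(time, 5.0)        # administrative censoring at 5 years
df["event"] = (time <= 5.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary)    # exp(coef) column gives the adjusted hazard ratios
```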

  4. Device for sectioning prostatectomy specimens to facilitate comparison between histology and in vivo MRI

    PubMed Central

    Drew, Bryn; Jones, Edward C.; Reinsberg, Stefan; Yung, Andrew C.; Goldenberg, S. Larry; Kozlowski, Piotr

    2012-01-01

    Purpose To develop a device for sectioning prostatectomy specimens that would facilitate comparison between histology and in vivo MRI. Materials and methods A multi-bladed cutting device was developed, which consists of an adjustable box capable of accommodating a prostatectomy specimen up to 85 mm in size in the lateral direction, a “plunger” tool to press on the excised gland from the top to prevent it from rolling or sliding during sectioning, and a multi-bladed knife assembly capable of holding up to 21 blades at 4 mm intervals. The device was tested on a formalin-fixed piece of meat and subsequently used to section a prostatectomy specimen. Histology sections were compared with T2-weighted MR images acquired in vivo prior to the prostatectomy procedure. Results The prostatectomy specimen slices were very uniform in thickness, with each face parallel to the other and no visible sawing marks left on the sections by the blades after the cut. MRI and histology comparison showed good correspondence between the two images. Conclusion The developed device allows sectioning of prostatectomy specimens into parallel cuts at a specific orientation and fixed intervals. Such a device is useful in facilitating accurate correlation between histology and MRI data. PMID:20882632

  5. Comparison of Diagnostic Accuracy between Octopus 900 and Goldmann Kinetic Visual Fields

    PubMed Central

    Rowe, Fiona J.; Rowlands, Alison

    2014-01-01

    Purpose. To determine the diagnostic accuracy of kinetic visual field assessment by Octopus 900 perimetry compared with Goldmann perimetry. Methods. Prospective cross-sectional evaluation of 40 control subjects with full visual fields and 50 patients with known visual field loss. Comparison of test duration and area measurement of isopters for Octopus 3, 5, and 10°/sec stimulus speeds. Comparison of test duration and type of visual field classification for Octopus versus Goldmann perimetry. Results were independently graded for presence/absence of field defect and for type and location of defect. Statistical evaluation comprised ANOVA and paired t tests for parametric data, with Bonferroni adjustment. Bland-Altman and kappa tests were used to measure agreement between data. Results. Octopus 5°/sec perimetry had comparable test duration to Goldmann perimetry. Octopus perimetry reliably detected the type and location of visual field loss, with visual fields matching Goldmann results in 88.8% of cases (K = 0.775). Conclusions. Kinetic perimetry requires individual tailoring to ensure accuracy. Octopus perimetry was reproducible for presence/absence of visual field defect. Our screening protocol when using Octopus perimetry is 5°/sec for determining boundaries of peripheral isopters and 3°/sec for blind spot mapping, with further evaluation of the area of field loss for defect depth and size. PMID:24587983
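
    As a small illustration of the statistics named above (paired t tests with Bonferroni adjustment and kappa agreement), the following sketch runs them on hypothetical isopter areas and defect classifications; the numbers are invented and the Bonferroni threshold is simply alpha divided by the number of comparisons.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Hypothetical isopter areas (deg^2) for 40 subjects, Goldmann vs three Octopus speeds
rng = np.random.default_rng(2)
goldmann = rng.normal(11000, 1500, 40)
octopus = {"3 deg/s": goldmann + rng.normal(-150, 400, 40),
           "5 deg/s": goldmann + rng.normal(50, 400, 40),
           "10 deg/s": goldmann + rng.normal(300, 400, 40)}

alpha = 0.05 / len(octopus)                 # Bonferroni-adjusted significance threshold
for name, vals in octopus.items():
    t, p = stats.ttest_rel(goldmann, vals)  # paired t test per stimulus speed
    print(f"{name}: t={t:.2f}, p={p:.4f}, significant={p < alpha}")

# Agreement on defect classification (0 = normal, 1 = defect), hypothetical labels
gold_class = rng.integers(0, 2, 90)
octo_class = np.where(rng.random(90) < 0.9, gold_class, 1 - gold_class)
print("kappa =", cohen_kappa_score(gold_class, octo_class))
```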

  6. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  7. Contact angle adjustment in equation-of-state-based pseudopotential model

    NASA Astrophysics Data System (ADS)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  8. Cost analysis of adjustments of the epidemiological surveillance system to mass gatherings.

    PubMed

    Zieliński, Andrzej

    2011-01-01

    The article deals with the problem of economic analysis of public health activities at mass gatherings. After presenting an elementary review of basic economic approaches to cost analysis, the author analyzes the applicability of those methods to the planning of mass gatherings. Difficulties in the comparability of different events and the lack of outcome data at the planning stage make most economic approaches unsuitable for application at that stage. Even the applicability of cost-minimization analysis may be limited to comparing the predicted costs of preconceived standards of epidemiological surveillance. Cost-effectiveness analysis performed ex post, after the event, when both costs and obtained effects are known, may bring more information for the future selection of the most effective procedures.

  9. Geometry of illumination, luminance contrast, and gloss perception.

    PubMed

    Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter

    2010-09-01

    The influence of both the geometry of illumination and luminance contrast on gloss perception has been examined using the method of paired comparison. Six achromatic glass samples having different lightness were illuminated by two light sources. Only one of these light sources was visible in reflection by the observer. By separate adjustment of the intensity of both light sources, the luminance of both the reflected image and the adjacent off-specular surroundings could be individually varied. It was found that visual gloss appraisal did not correlate with instrumentally measured specular gloss; however, psychometric contrast seemed to be a much better correlate. It has become clear that not only the sample surface characteristics determine gloss perception: the illumination geometry could be an even more important factor.

  10. Analysis of free turbulent shear flows by numerical methods

    NASA Technical Reports Server (NTRS)

    Korst, H. H.; Chow, W. L.; Hurt, R. F.; White, R. A.; Addy, A. L.

    1973-01-01

    Studies are described in which the effort was essentially directed to classes of problems where the phenomenologically interpreted effective transport coefficients could be absorbed by, and subsequently extracted from (by comparison with experimental data), appropriate coordinate transformations. The transformed system of differential equations could then be solved without further specifications or assumptions by numerical integration procedures. An attempt was made to delineate different regimes for which specific eddy viscosity models could be formulated. In particular, this would account for the carryover of turbulence from attached boundary layers, the transitory adjustment, and the asymptotic behavior of initially disturbed mixing regions. Such models were subsequently used in seeking solutions for the prescribed two-dimensional test cases, yielding a better insight into overall aspects of the exchange mechanisms.

  11. Single-step fabrication of thin-film linear variable bandpass filters based on metal-insulator-metal geometry.

    PubMed

    Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D

    2016-11-10

    A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.

  12. Active control of continuous air jet with bifurcated synthetic jets

    NASA Astrophysics Data System (ADS)

    Dančová, Petra; Vít, Tomáš; Jašíková, Darina; Novosád, Jan

    Synthetic jets (SJs) have many significant applications, and the number of applications is increasing all the time. In this research the main focus is on primary flow control, which can be used effectively to increase heat transfer. This paper deals with experimental research on the effect of two SJs operated in bifurcated mode and used to control an axisymmetric air jet. First, the control synthetic jets were measured alone. After an adjustment, the primary axisymmetric jet was added into the system. For comparison, the primary flow without synthetic jet control was also measured. All experiments were performed using the PIV method, which required synchronization between the synthetic jets and the PIV system.

  13. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radars, satellite communication, radio astronomy, and so on. The rapid developments in these fields have created demand for better performance and higher surface accuracy. However, low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field value is derived. A linear system is then constructed between the panel adjustment vector and the far-field pattern. Using Singular Value Decomposition (SVD), the adjustment values for all panel adjustors are obtained by solving the linear equations. An experiment was conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion about validation shows that the method can be used for many reflector shapes. The proposed research provides guidance for adjusting surface panels efficiently and accurately.
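
    The core numerical step described above is a least-squares solve via SVD linking the panel adjustment vector to far-field samples. A minimal sketch of that solve follows; the sensitivity matrix here is random, standing in for the matrix the paper derives from Physical Optics, and the truncation tolerance is an assumption.

```python
import numpy as np

# Illustration of the solve step only: the sensitivity matrix A (field change per
# unit adjustor motion) would come from the Physical Optics derivation; here it is random.
rng = np.random.default_rng(3)
n_field, n_adjustors = 200, 36              # sampled far-field points, panel adjustors
A = rng.normal(size=(n_field, n_adjustors))
delta_pattern = rng.normal(size=n_field)    # measured-minus-ideal far-field samples

# Truncated-SVD pseudoinverse solve for the panel adjustment vector
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-3 * s[0]                      # drop ill-conditioned singular values
adjustment = Vt[keep].T @ ((U[:, keep].T @ delta_pattern) / s[keep])
print(adjustment.shape)                     # one adjustment value per adjustor
```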

  14. Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Frederick, Sara; Gonthier, P. L.; Harding, A. K.

    2014-01-01

    In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.

  15. Predicting urban stormwater runoff with quantitative precipitation estimates from commercial microwave links

    NASA Astrophysics Data System (ADS)

    Pastorek, Jaroslav; Fencl, Martin; Stránský, David; Rieckermann, Jörg; Bareš, Vojtěch

    2017-04-01

    Reliable and representative rainfall data are crucial for urban runoff modelling. However, traditional precipitation measurement devices often fail to provide sufficient information about the spatial variability of rainfall, especially when heavy storm events (determining design of urban stormwater systems) are considered. Commercial microwave links (CMLs), typically very dense in urban areas, allow for indirect precipitation detection with the desired spatial and temporal resolution. Fencl et al. (2016) recognised the high bias in quantitative precipitation estimates (QPEs) from CMLs, which significantly limits their usability and, in order to reduce the bias, suggested a novel method for adjusting the QPEs to existing rain gauge networks. Studies evaluating the potential of CMLs for rainfall detection have so far focused primarily on direct comparison of the QPEs from CMLs to ground observations. In contrast, this investigation evaluates the suitability of these innovative rainfall data for stormwater runoff modelling on a case study of a small, ungauged (from a long-term perspective) urban catchment in Prague-Letňany, Czech Republic (Fencl et al., 2016). We compare the runoff measured at the outlet from the catchment with the outputs of a rainfall-runoff model operated using (i) CML data adjusted by distant rain gauges, (ii) rainfall data from the distant gauges alone and (iii) data from a single temporary rain gauge located directly in the catchment, as is common practice in drainage engineering. Uncertainties of the simulated runoff are analysed using the Bayesian method for uncertainty evaluation incorporating a statistical bias description as formulated by Del Giudice et al. (2013). Our results show that adjusted CML data are able to yield reliable runoff modelling results, primarily for rainfall events with convective character. Performance statistics, most significantly the timing of maximal discharge, reach better (less uncertain) values with the adjusted CML data than with the distant rain gauges. Where the relative error of the volume discharged during the maximum flow period is concerned, the adjusted CMLs perform even better than the rain gauge in the catchment. This seems very promising, especially for urban catchments with sparse rain gauge networks. References: Del Giudice, D., Honti, M., Scheidegger, A., Albert, C., Reichert, P., and Rieckermann, J. 2013. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias. Hydrology and Earth System Sciences 17, 4209-4225. Fencl, M., Dohnal, M., Rieckermann, J., and Bareš, V. 2016. Gauge-Adjusted Rainfall Estimates from Commercial Microwave Links, Hydrology and Earth System Sciences Discussions, doi:10.5194/hess-2016-397, in review. Acknowledgements to the Czech Science Foundation projects No. 14-22978S and No. 17-16389S.
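
    The gauge adjustment of CML QPEs referred to above follows Fencl et al. (2016); the snippet below is only a crude stand-in showing the general idea of rescaling a CML rainfall series to match nearby gauges over a moving window. The window length, the multiplicative form, and the synthetic series are assumptions and do not reproduce the published method.

```python
import numpy as np

def gauge_adjust_cml(cml_rain, gauge_rain, window=24):
    """Crude multiplicative bias adjustment of CML rainfall to a rain gauge.
    Both inputs are hourly rainfall series (mm); the real method of
    Fencl et al. (2016) is more elaborate than this rolling-ratio sketch."""
    cml_rain = np.asarray(cml_rain, dtype=float)
    gauge_rain = np.asarray(gauge_rain, dtype=float)
    adjusted = np.empty_like(cml_rain)
    for t in range(len(cml_rain)):
        lo = max(0, t - window)
        cml_sum = cml_rain[lo:t + 1].sum()
        gauge_sum = gauge_rain[lo:t + 1].sum()
        factor = gauge_sum / cml_sum if cml_sum > 0 else 1.0
        adjusted[t] = cml_rain[t] * factor
    return adjusted

# Hypothetical series in which the raw CML estimate overestimates rainfall by ~60%
rng = np.random.default_rng(4)
true_rain = rng.gamma(0.3, 2.0, 240)
print(gauge_adjust_cml(1.6 * true_rain, true_rain)[:5])
```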

  16. Modelling a Compensation Standard for a Regional Forest Ecosystem: A Case Study in Yanqing District, Beijing, China

    PubMed Central

    Li, Tan; Zhang, Qingguo; Zhang, Ying

    2018-01-01

    The assessment of forest ecosystem services can quantify the impact of these services on human life and is the main basis for formulating a standard of compensation for these services. Moreover, the calculation of the indirect value of forest ecosystem services should not be ignored, as has been the case in some previous publications. A low compensation standard and the lack of a dynamic coordination mechanism are the main problems existing in compensation implementation. Using comparison and analysis, this paper accounts for both the costs and benefits of various alternatives. The analytic hierarchy process (AHP) method and the Pearl growth-curve method were used to adjust the results. This research analyzed the contribution of each service value from the aspects of forest produce services, ecology services, and society services. We also conducted separate accounting for cost and benefit, made a comparison of accounting and evaluation methods, and estimated the implementation period of the compensation standard. The main conclusions of this research include the fact that any compensation standard should be determined from the points of view of both benefit and cost in a region. The results presented here allow the range between benefit-based and cost-based compensation to be laid out more reasonably. The practical implications of this research include the proposal that regional decision-makers should consider a dynamic compensation method matched to the local economic level, using diversified ways to raise the compensation standard, and that compensation channels should offer a mixed mode involving both the market and government. PMID:29561789
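
    The AHP step mentioned above reduces to computing priority weights from a pairwise comparison matrix. A minimal sketch follows; the comparison matrix for the three service categories is hypothetical, and the consistency check uses Saaty's random index for a 3x3 matrix.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    (principal right eigenvector, normalised to sum to 1)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = principal / principal.sum()
    # Consistency ratio (Saaty): CI / RI, with RI = 0.58 for a 3x3 matrix
    ci = (np.max(np.real(vals)) - len(w)) / (len(w) - 1)
    return w, ci / 0.58

# Hypothetical pairwise comparison of produce, ecology, and society service values
A = [[1,   1/3, 1/5],
     [3,   1,   1/2],
     [5,   2,   1  ]]
weights, consistency_ratio = ahp_weights(A)
print(weights, consistency_ratio)
```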

  17. Modelling a Compensation Standard for a Regional Forest Ecosystem: A Case Study in Yanqing District, Beijing, China.

    PubMed

    Li, Tan; Zhang, Qingguo; Zhang, Ying

    2018-03-21

    The assessment of forest ecosystem services can quantify the impact of these services on human life and is the main basis for formulating a standard of compensation for these services. Moreover, the calculation of the indirect value of forest ecosystem services should not be ignored, as has been the case in some previous publications. A low compensation standard and the lack of a dynamic coordination mechanism are the main problems existing in compensation implementation. Using comparison and analysis, this paper accounts for both the costs and benefits of various alternatives. The analytic hierarchy process (AHP) method and the Pearl growth-curve method were used to adjust the results. This research analyzed the contribution of each service value from the aspects of forest produce services, ecology services, and society services. We also conducted separate accounting for cost and benefit, made a comparison of accounting and evaluation methods, and estimated the implementation period of the compensation standard. The main conclusions of this research include the fact that any compensation standard should be determined from the points of view of both benefit and cost in a region. The results presented here allow the range between benefit-based and cost-based compensation to be laid out more reasonably. The practical implications of this research include the proposal that regional decision-makers should consider a dynamic compensation method matched to the local economic level, using diversified ways to raise the compensation standard, and that compensation channels should offer a mixed mode involving both the market and government.

  18. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
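
    For context, the sketch below contrasts two of the conventional urinary-dilution adjustments that simulation studies of this kind compare: dividing the biomarker by creatinine versus entering creatinine as a separate covariate. It is not the authors' proposed covariate-adjusted standardization method; the simulated data-generating model and variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated case-control data; variable names and effect sizes are hypothetical.
rng = np.random.default_rng(5)
n = 2000
age = rng.uniform(20, 70, n)
hydration = rng.normal(0, 1, n)
creatinine = np.exp(0.2 + 0.01 * age - 0.3 * hydration + rng.normal(0, 0.2, n))
true_exposure = rng.lognormal(0, 0.5, n)
measured = true_exposure * creatinine          # urinary concentration scales with diluteness
case = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * np.log(true_exposure)))))

df = pd.DataFrame({"case": case, "age": age,
                   "log_ratio": np.log(measured / creatinine),   # creatinine-ratio approach
                   "log_measured": np.log(measured),
                   "log_creat": np.log(creatinine)})

# (a) classic creatinine-ratio adjustment
m_ratio = sm.Logit(df["case"], sm.add_constant(df[["log_ratio", "age"]])).fit(disp=0)
# (b) creatinine entered as a separate covariate alongside the measured concentration
m_covar = sm.Logit(df["case"], sm.add_constant(df[["log_measured", "log_creat", "age"]])).fit(disp=0)
print(m_ratio.params["log_ratio"], m_covar.params["log_measured"])
```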

  19. Authentication via wavefront-shaped optical responses

    NASA Astrophysics Data System (ADS)

    Eilers, Hergen; Anderson, Benjamin R.; Gunawidjaja, Ray

    2018-02-01

    Authentication/tamper-indication is required in a wide range of applications, including nuclear materials management and product counterfeit detection. State-of-the-art techniques include reflective particle tags, laser speckle authentication, and birefringent seals. Each of these passive techniques has its own advantages and disadvantages, including the need for complex image comparisons, limited flexibility, sensitivity to environmental conditions, limited functionality, etc. We have developed a new active approach to address some of these shortcomings. The use of an active characterization technique adds more flexibility and additional layers of security over current techniques. Our approach uses randomly distributed nanoparticles embedded in a polymer matrix (tag/seal) which is attached to the item to be secured. A spatial light modulator is used to adjust the wavefront of a laser which interacts with the tag/seal, and a detector is used to monitor this interaction. The interaction can occur in various ways, including transmittance, reflectance, fluorescence, random lasing, etc. For example, at the time of origination, the wavefront-shaped reflectance from a tag/seal can be adjusted to result in a specific pattern (symbols, words, etc.). Any tampering with the tag/seal would result in a disturbance of the random orientation of the nanoparticles and thus distort the reflectance pattern. A holographic waveplate could be inserted into the laser beam for verification. The absence/distortion of the original pattern would then indicate that tampering has occurred. We have tested the tag/seal's and authentication method's tamper-indicating ability using various attack methods, including mechanical, thermal, and chemical attacks, and have verified our material/method's robust tamper-indicating ability.

  20. Prediction of the optimum surface orientation angles to achieve maximum solar radiation using Particle Swarm Optimization in Sabha City Libya

    NASA Astrophysics Data System (ADS)

    Mansour, F. A.; Nizam, M.; Anwar, M.

    2017-02-01

    This research aims to predict the optimum surface orientation angles for solar panel installation to achieve maximum solar radiation. Incident solar radiation is calculated using the Koronakis mathematical model. Particle Swarm Optimization (PSO) is used as the computational method to find the optimum angle orientation for solar panel installation in order to obtain maximum solar radiation. A series of simulations was carried out to calculate solar radiation based on monthly, seasonal, semi-yearly, and yearly adjustment periods. A fixed south-facing orientation (azimuth of 0°) was also calculated for comparison with the proposed method. The proposed method attains higher incident radiation predictions than south-facing, recording 2511.03 kWh/m2 for monthly adjustment, and about 2486.49 kWh/m2, 2482.13 kWh/m2, and 2367.68 kWh/m2 for seasonal, semi-yearly, and yearly adjustment, respectively. South-facing predicted approximately 2496.89 kWh/m2, 2472.40 kWh/m2, 2468.96 kWh/m2, and 2356.09 kWh/m2 for the monthly, seasonal, semi-yearly, and yearly periods, respectively. Semi-yearly adjustment is the best choice because it needs only two adjustments of the solar panel in a year; adjusting the solar panel position every season or every month is inefficient, since it gives no significant increase in solar radiation over semi-yearly adjustment, and solar tracking devices are still considered costly in solar energy systems. PSO was able to predict accurately with a simple concept and was easy and computationally efficient, as shown by its finding the best fitness faster.
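
    A minimal PSO sketch in the spirit of the study is shown below. The objective function is a smooth stand-in for annual tilted-surface irradiation, not the Koronakis model used by the authors, and the swarm parameters (inertia and acceleration coefficients) are common textbook defaults.

```python
import numpy as np

def total_radiation(tilt_deg, azimuth_deg):
    """Stand-in objective: a smooth proxy for annual irradiation (kWh/m2).
    The real study evaluates the Koronakis diffuse-sky model instead."""
    t, a = np.radians(tilt_deg), np.radians(azimuth_deg)
    return 2300 + 200 * np.cos(t - np.radians(27)) * np.cos(a)   # peak near latitude tilt, due south

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(6)
    lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(*p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(*p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, objective(*gbest)

best, value = pso(total_radiation, bounds=[(0, 90), (-180, 180)])
print("tilt, azimuth =", best, "kWh/m2 =", value)
```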

  1. Are food and beverage purchases in households with preschoolers changing? A longitudinal analysis from 2000–2011

    PubMed Central

    Ford, Christopher N.; Ng, Shu Wen; Popkin, Barry M.

    2014-01-01

    Background US dietary studies from 2003–2010 show decreases in children’s caloric intake. We examine purchases of consumer-packaged foods/beverages in the US between 2000 and 2011 among households with children ages 2–5y. Objectives Describe changes in consumer-packaged goods purchases between 2000 and 2011 after adjusting for economic indicators, and explore differences by race, education, and household income level. Methods Consumer-packaged goods purchases data were obtained for 42,753 US households with ≥1 child aged 2–5y using the Nielsen Homescan Panel. Top sources of calories purchased were grouped, and random effects regression was used to model the relationship between calories purchased from each food/beverage group and race, female head of household education, and household income. Models adjusted for household composition, market-level unemployment rate, prices, and quarter. Bonferroni correction was used to adjust for multiple comparisons (α=0.05). Results Between 2000 and 2011, adjusted total calories purchased from foods (−182 kcal/d) and beverages (−100 kcal/d) declined significantly. Decreases in purchases of milk (−40 kcal/d), soft drinks (−27 kcal/d), juice and juice drinks (−24 kcal/d), grain-based desserts (−24 kcal/d), savory snacks (−17 kcal/d), and sweet snacks and candy (−13 kcal/d) were among the major changes observed. There were significant differences by race, female head of household education, and household income for changes in consumer-packaged food and beverage purchases between 2000 and 2011. Conclusions Trends in consumer-packaged goods purchases suggest that solid fats and added sugars are decreasing in the food supply of US preschool children. Yet, pronounced differences by race, education, and household income persist. PMID:25049217

  2. Cost Effectiveness of Revascularization Strategies: Results from The American College of Cardiology Foundation and The Society of Thoracic Surgeons Collaboration on the Comparative Effectiveness of Revascularization Strategies (ASCERT)

    PubMed Central

    Zhang, Zugui; Kolm, Paul; Grau-Sepulveda, Maria V.; Ponirakis, Angelo; O’Brien, Sean M.; Klein, Lloyd W.; Shaw, Richard E.; McKay, Charles; Shahian, David M.; Grover, Frederick L.; Mayer, John E.; Garratt, Kirk N.; Hlatky, Mark; Edwards, Fred H.; Weintraub, William S.

    2017-01-01

    BACKGROUND The American College of Cardiology Foundation (ACCF) and the Society of Thoracic Surgeons (STS) Collaboration on the Comparative Effectiveness of Revascularization Strategies (ASCERT) was a large observational study designed to compare the long-term effectiveness of coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI) to treat coronary artery disease (CAD) over 4 to 5 years. OBJECTIVES We examined the cost effectiveness of CABG compared to PCI for stable ischemic heart disease. METHODS The STS and ACCF databases were linked to the Centers for Medicare and Medicaid Services claims data. Costs for the index and observation period (2004 to 2008) hospitalizations were assessed by diagnosis-related group Medicare reimbursement rates; costs beyond the observation period were estimated from average Medicare participant per capita expenditure. Effectiveness was measured via mortality and life expectancy data. Cost and effectiveness comparisons were adjusted using propensity score matching with the incremental cost-effectiveness ratio (ICER) expressed as cost per quality-adjusted life year (QALY) gained. RESULTS CABG patients (n = 86,244) and PCI patients (n = 103,549) were at least 65 years old with 2- or 3-vessel CAD. Adjusted costs were higher for CABG for the index hospitalization, study period, and lifetime by $10,670, $8,145, and $11,575, respectively. Patients undergoing CABG gained an adjusted average of 0.2525 and 0.3801 life-years relative to PCI over the observation period and lifetime, respectively. The lifetime ICER of CABG compared to PCI was $30,454/QALY gained. CONCLUSIONS Over a period of 4 years or longer, patients undergoing CABG had better outcomes but at higher costs than those undergoing PCI. PMID:25572503
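
    As an arithmetic check on the figures quoted above, dividing the reported lifetime incremental cost by the reported lifetime gain reproduces the published ICER to within rounding (this assumes the 0.3801 life-year gain is treated as quality-adjusted in the published ratio):

```python
# Reproduce the reported lifetime ICER from the figures quoted in the abstract.
incremental_cost = 11_575          # lifetime cost difference, CABG minus PCI (USD)
incremental_effect = 0.3801        # lifetime (quality-adjusted) life-years gained
icer = incremental_cost / incremental_effect
print(f"ICER = ${icer:,.0f} per QALY gained")   # ~$30,452, vs $30,454 reported
```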

  3. Cost comparison between uterine-sparing fibroid treatments one year following treatment

    PubMed Central

    2014-01-01

    Background To compare one-year all-cause and uterine fibroid (UF)-related direct costs in patients treated with one of the following three uterine-sparing procedures: magnetic resonance-guided focused ultrasound (MRgFUS), uterine artery embolization (UAE) and myomectomy. Methods This retrospective observational cohort study used healthcare claims for several million individuals with healthcare coverage from employers in the MarketScan Database for the period 2003–2010. UF patients aged 25–54 on their first UF procedure (index) date with 366-day baseline experience, 366-day follow-up period, continuous health plan enrollment during baseline and follow-up, and absence of any baseline UF procedures were included in the final sample. Cost outcomes were measured by allowed charges (sum of insurer-paid and patient-paid amounts). UF-related cost was defined as difference in mean cost between study cohorts and propensity-score-matched control cohorts without UF. Multivariate adjustment of cost outcomes was conducted using generalized linear models. Results The study sample comprised 14,426 patients (MRgFUS = 14; UAE = 4,092; myomectomy = 10,320) with a higher percent of older patients in MRgFUS cohort (71% vs. 50% vs. 12% in age-group 45–54, P < 0.001). Adjusted all-cause mean cost was lowest for MRgFUS ($19,763; 95% CI: $10,425-$38,694) followed by myomectomy ($20,407; 95% CI: $19,483-$21,381) and UAE ($25,019; 95% CI: $23,738-$26,376) but without statistical significance. Adjusted UF-related costs were also not significantly different between the three procedures. Conclusions Adjusted all-cause and UF-related costs at one year were not significantly different between patients undergoing MRgFUS, myomectomy and UAE. PMID:25512868

  4. Cytosolic 5′-nucleotidase 1A autoantibody profile and clinical characteristics in inclusion body myositis

    PubMed Central

    Rietveld, A; Pye, S R; Mariampillai, K; Benveniste, O; Peeters, M T J; Miller, J A L; Hanna, M G; Machado, P M; Parton, M J; Gheorghe, K R; Badrising, U A; Lundberg, I E; Sacconi, S; Herbert, M K; McHugh, N J; Lecky, B R F; Brierley, C; Hilton-Jones, D; Lamb, J A; Roberts, M E; Cooper, R G; Saris, C G J; Pruijn, G J M; Chinoy, H; van Engelen, B G M

    2017-01-01

    Objectives Autoantibodies directed against cytosolic 5′-nucleotidase 1A have been identified in many patients with inclusion body myositis. This retrospective study investigated the association between anticytosolic 5′-nucleotidase 1A antibody status and clinical, serological and histopathological features to explore the utility of this antibody to identify inclusion body myositis subgroups and to predict prognosis. Materials and methods Data from various European inclusion body myositis registries were pooled. Anticytosolic 5′-nucleotidase 1A status was determined by an established ELISA technique. Cases were stratified according to antibody status and comparisons made. Survival and mobility aid requirement analyses were performed using Kaplan-Meier curves and Cox proportional hazards regression. Results Data from 311 patients were available for analysis; 102 (33%) had anticytosolic 5′-nucleotidase 1A antibodies. Antibody-positive patients had a higher adjusted mortality risk (HR 1.89, 95% CI 1.11 to 3.21, p=0.019), lower frequency of proximal upper limb weakness at disease onset (8% vs 23%, adjusted OR 0.29, 95% CI 0.12 to 0.68, p=0.005) and an increased prevalence of excess of cytochrome oxidase deficient fibres on muscle biopsy analysis (87% vs 72%, adjusted OR 2.80, 95% CI 1.17 to 6.66, p=0.020), compared with antibody-negative patients. Interpretation Differences were observed in clinical and histopathological features between anticytosolic 5′-nucleotidase 1A antibody positive and negative patients with inclusion body myositis, and antibody-positive patients had a higher adjusted mortality risk. Stratification of inclusion body myositis by anticytosolic 5′-nucleotidase 1A antibody status may be useful, potentially highlighting a distinct inclusion body myositis subtype with a more severe phenotype. PMID:28122761

  5. Using a conformal water bolus to adjust heating patterns of microwave waveguide applicators

    NASA Astrophysics Data System (ADS)

    Stauffer, Paul R.; Rodrigues, Dario B.; Sinahon, Randolf; Sbarro, Lyndsey; Beckhoff, Valeria; Hurwitz, Mark D.

    2017-02-01

    Background: Hyperthermia, i.e., raising tissue temperature to 40-45°C for 60 min, has been demonstrated to increase the effectiveness of radiation and chemotherapy for cancer. Although multi-element conformal heat applicators are under development to provide more adjustable heating of contoured anatomy, to date the most often used applicator to heat superficial disease is the simple microwave waveguide. With only a single power input, the operator must be resourceful to adjust heat treatment to accommodate variable size and shape tumors spreading across contoured anatomy. Methods: We used multiphysics simulation software that couples electromagnetic, thermal and fluid dynamics physics to simulate heating patterns in superficial tumors from commercially available microwave waveguide applicators. Temperature distributions were calculated inside homogenous muscle and layered skin-fat-muscle-tumor-bone tissue loads for a typical range of applicator coupling configurations and size of waterbolus. Variable thickness waterbolus was simulated as necessary to accommodate contoured anatomy. Physical models of several treatment configurations were constructed for comparison of simulation results with experimental specific absorption rate (SAR) measurements in homogenous muscle phantom. Results: Accuracy of the simulation model was confirmed with experimental SAR measurements of three unique applicator setups. Simulations demonstrated the ability to generate a wide range of power deposition patterns with commercially available waveguide antennas by controllably varying size and thickness of the waterbolus layer. Conclusion: Heating characteristics of 915 MHz waveguide antennas can be varied over a wide range by controlled adjustment of microwave power, coupling configuration, and waterbolus lateral size and thickness. The uniformity of thermal dose delivered to superficial tumors can be improved by cyclic switching of waterbolus thickness during treatment to proactively shift heat peaks and nulls around under the aperture, thereby reducing patient pain while increasing minimum thermal dose by end of treatment.

  6. Comparison of Ischemic Stroke Outcomes and, Patient and Hospital Characteristics by Race/Ethnicity and Socioeconomic Status

    PubMed Central

    Hanchate, Amresh D.; Schwamm, Lee H.; Huang, Wei-Jie; Hylek, Elaine

    2013-01-01

    Background and Purpose Current literature provides mixed evidence on disparities by race/ethnicity and socioeconomic status (SES) in discharge outcomes following hospitalization for acute ischemic stroke. Using comprehensive data from eight states, we sought to compare inpatient mortality and length of stay (LOS) by race/ethnicity and SES. Methods We examined all 2007 hospitalizations for acute ischemic stroke in all non-Federal acute care hospitals in AZ, CA, FL, MA, NJ, NY, PA and TX. Population was stratified by race/ethnicity (non-Hispanic Whites, non-Hispanic Blacks and Hispanics) and SES, measured by median income of patient zip code. For each stratum we estimated risk-adjusted rates of inpatient mortality and longer LOS (> median LOS). We also compared the hospitals where these subpopulations received care. Results Hispanic and Black patients accounted for 14 and 12 percent of all ischemic stroke admissions (N=147,780) respectively and had lower crude inpatient mortality rates (Hispanic=4.5%, Blacks=4.4%; all p-values < 0.001) compared to White patients (5.8%). Hispanic and Black patients were younger and fewer had any form of atrial fibrillation. Adjusted for patient risk, inpatient mortality was similar by race/ethnicity, but was significantly higher for low area-income patients than that for high area-income patients (Odds Ratio=1.08, 95% confidence interval=[1.02, 1.15]). Risk-adjusted rates of longer LOS were higher among minority and low area-income populations. Conclusions Risk adjusted inpatient mortality was similar among patients by race/ethnicity but higher among patients from lower income areas. However, this pattern was not evident in sensitivity analyses including the use of mechanical ventilation as a partial surrogate for stroke severity. PMID:23306327

  7. Cancer Disparities in the Context of Medicaid Insurance: A Comparison of Survival for Acute Myeloid Leukemia and Hodgkin's Lymphoma by Medicaid Enrollment

    PubMed Central

    Yung, Rachel L.; Chen, Kun; Abel, Gregory A.; Gesten, Foster C.; Roohan, Patrick J.; Boscoe, Francis P.; Sinclair, Amber H.; Schymura, Maria J.

    2011-01-01

    Background. Because poverty is difficult to measure, its association with outcomes for serious illnesses such as hematologic cancers remains largely uncharacterized. Using Medicaid enrollment as a proxy for poverty, we aimed to assess potential disparities in survival after a diagnosis of acute myeloid leukemia (AML) or Hodgkin's lymphoma (HL) in a nonelderly population. Methods. We used records from the New York (NY) and California (CA) state cancer registries linked to Medicaid enrollment records for these states to identify Medicaid enrolled and nonenrolled patients aged 21–64 years with incident diagnoses of AML or HL in 2002–2006. We compared overall survival for the two groups using Kaplan–Meier curves and Cox proportional hazards analyses adjusted for sociodemographic and clinical factors. Results. For HL, the adjusted risk for death for Medicaid enrolled compared with nonenrolled patients was 1.98 (95% confidence interval [CI], 1.47–2.68) in NY and 1.89 (95% CI, 1.43–2.49) in CA. In contrast, for AML, Medicaid enrollment had no effect on survival (adjusted hazard ratio, 1.00; 95% CI, 0.84–1.19 in NY and hazard ratio, 1.02; 95% CI, 0.89–1.16 in CA). These results persisted despite adjusting for race/ethnicity and other factors. Conclusions. Poverty does not affect survival for AML patients but does appear to be associated with survival for HL patients, who, in contrast to AML patients, require complex outpatient treatment. Challenges for the poor in adhering to treatment regimens for HL could explain this disparity and merit further study. PMID:21873583

  8. The Association Between Immigration Status and Office-based Medical Provider Visits for Cancer Patients in the United States.

    PubMed

    Wang, Yang; Wilson, Fernando A; Chen, Li-Wu

    2017-06-01

    We examined differences in cancer-related office-based provider visits associated with immigration status in the United States. Data from the 2007-2012 Medical Expenditure Panel Survey and National Health Interview Survey included adult patients diagnosed with cancer. Univariate analyses described distributions of cancer-related office-based provider visits received, expenditures, visit characteristics, as well as demographic, socioeconomic, and health covariates, across immigration groups. We measured the relationships of immigrant status to the number of visits and associated expenditure within the past 12 months, adjusting for age, sex, educational attainment, race/ethnicity, self-reported health status, time since cancer diagnosis, cancer remission status, marital status, poverty status, insurance status, and usual source of care. We finally performed sensitivity analyses for the regression results by using the propensity score matching method to adjust for potential selection bias. Noncitizens had about 2 fewer visits in a 12-month period in comparison to US-born citizens (4.0 vs. 5.9). Total expenditure per patient was higher for US-born citizens than immigrants (not statistically significant). Noncitizens (88.3%) were more likely than US-born citizens (76.6%) to be seen by a medical doctor during a visit. Multivariate regression results showed that noncitizens had a 42% lower number of office-based medical provider visits for cancer care than US-born citizens, after adjusting for all the other covariates. There were no significant differences in expenditures across immigration groups. The propensity score matching results were largely consistent with those from the multivariate-adjusted regressions. Results suggest targeted interventions are needed to reduce disparities in utilization between immigrant and US-born citizen cancer patients.
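
    The sensitivity analysis mentioned above relies on propensity score matching. The sketch below shows a simplified version (logistic-regression propensity score, 1:1 nearest-neighbour matching with replacement, no caliper) on simulated data; the variable names and effect sizes are hypothetical and the matching details likely differ from the study's.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: 'noncitizen' is the exposure, 'visits' the outcome.
rng = np.random.default_rng(7)
n = 3000
df = pd.DataFrame({"age": rng.uniform(18, 85, n),
                   "poor_health": rng.integers(0, 2, n),
                   "insured": rng.integers(0, 2, n)})
logit = -2 + 0.01 * (60 - df["age"]) - 0.8 * df["insured"]
df["noncitizen"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df["visits"] = rng.poisson(np.exp(1.6 - 0.3 * df["noncitizen"] + 0.01 * (df["age"] - 50)))

# 1. Propensity score: probability of being a noncitizen given covariates
X = df[["age", "poor_health", "insured"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["noncitizen"]).predict_proba(X)[:, 1]

# 2. 1:1 nearest-neighbour matching of noncitizens to citizens on the propensity score
treated, control = df[df["noncitizen"] == 1], df[df["noncitizen"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare mean visits in the matched sample
print(treated["visits"].mean() - matched_control["visits"].mean())
```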

  9. Comparison of Minocycline Susceptibility Testing Methods for Carbapenem-Resistant Acinetobacter baumannii.

    PubMed

    Wang, Peng; Bowler, Sarah L; Kantz, Serena F; Mettus, Roberta T; Guo, Yan; McElheny, Christi L; Doi, Yohei

    2016-12-01

    Treatment options for infections due to carbapenem-resistant Acinetobacter baumannii are extremely limited. Minocycline is a semisynthetic tetracycline derivative with activity against this pathogen. This study compared susceptibility testing methods that are used in clinical microbiology laboratories (Etest, disk diffusion, and Sensititre broth microdilution methods) for testing of minocycline, tigecycline, and doxycycline against 107 carbapenem-resistant A. baumannii clinical isolates. Susceptibility rates determined with the standard broth microdilution method using cation-adjusted Mueller-Hinton (MH) broth were 77.6% for minocycline and 29% for doxycycline, and 92.5% of isolates had tigecycline MICs of ≤2 μg/ml. Using MH agar from BD and Oxoid, susceptibility rates determined with the Etest method were 67.3% and 52.3% for minocycline, 21.5% and 18.7% for doxycycline, and 71% and 29.9% for tigecycline, respectively. With the disk diffusion method using MH agar from BD and Oxoid, susceptibility rates were 82.2% and 72.9% for minocycline and 34.6% and 34.6% for doxycycline, respectively, and rates of MICs of ≤2 μg/ml were 46.7% and 23.4% for tigecycline. In comparison with the standard broth microdilution results, very major rates were low (∼2.8%) for all three drugs across the methods, but major error rates were higher (∼5.6%), especially with the Etest method. For minocycline, minor error rates ranged from 14% to 37.4%. For tigecycline, minor error rates ranged from 6.5% to 69.2%. The majority of minor errors were due to susceptible results being reported as intermediate. For minocycline susceptibility testing of carbapenem-resistant A. baumannii strains, very major errors are rare, but major and minor errors overcalling strains as intermediate or resistant occur frequently with susceptibility testing methods that are feasible in clinical laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
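
    The error categories reported above follow the usual convention for susceptibility-method comparisons: a very major error calls a resistant isolate susceptible, a major error calls a susceptible isolate resistant, and a minor error involves an intermediate call by one method only. The sketch below computes these rates for hypothetical category calls; the isolate data are invented.

```python
import numpy as np

def error_rates(reference, test):
    """Categorical agreement errors between a reference and a test method.
    Categories: 'S', 'I', 'R'. Very major = reference R, test S;
    major = reference S, test R; minor = exactly one method calls 'I'."""
    reference, test = np.asarray(reference), np.asarray(test)
    n = len(reference)
    very_major = np.sum((reference == "R") & (test == "S")) / n
    major = np.sum((reference == "S") & (test == "R")) / n
    minor = np.sum((reference == "I") ^ (test == "I")) / n
    agreement = np.sum(reference == test) / n
    return {"very_major": very_major, "major": major,
            "minor": minor, "categorical_agreement": agreement}

# Hypothetical minocycline categories for 20 isolates (broth microdilution vs Etest)
bmd   = list("SSSSSSSSSSIIRRRSSSSS")
etest = list("SSSISSSRSSIIRRSSSISS")
print(error_rates(bmd, etest))
```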

  10. Culture, Cross-Role Consistency, and Adjustment: Testing Trait and Cultural Psychology Perspectives

    PubMed Central

    Church, A. Timothy; Anderson-Harumi, Cheryl A.; del Prado, Alicia M.; Curtis, Guy J.; Tanaka-Matsumi, Junko; Valdez Medina, José L.; Mastor, Khairul A.; White, Fiona A.; Miramontes, Lilia A.; Katigbak, Marcia S.

    2008-01-01

    Trait and cultural psychology perspectives on cross-role consistency and its relation to adjustment were examined in two individualistic cultures, the United States (N = 231) and Australia (N = 195), and four collectivistic cultures, Mexico (N = 199), Philippines (N = 195), Malaysia (N = 217), and Japan (N = 180). Cross-role consistency in trait ratings was evident in all cultures, supporting trait perspectives. Cultural comparisons of mean consistency provided support for cultural psychology perspectives as applied to East Asian cultures (i.e., Japan), but not collectivistic cultures more generally. Some but not all of the hypothesized predictors of consistency were supported across cultures. Cross-role consistency predicted aspects of adjustment in all cultures, but prediction was most reliable in the American sample and weakest in the Japanese sample. Alternative constructs proposed by cultural psychologists—personality coherence, social appraisal, and relationship harmony—predicted adjustment in all cultures, but were not, as hypothesized, better predictors of adjustment in collectivistic cultures than in individualistic cultures. PMID:18729706

  11. Culture, cross-role consistency, and adjustment: testing trait and cultural psychology perspectives.

    PubMed

    Church, A Timothy; Anderson-Harumi, Cheryl A; del Prado, Alicia M; Curtis, Guy J; Tanaka-Matsumi, Junko; Valdez Medina, José L; Mastor, Khairul A; White, Fiona A; Miramontes, Lilia A; Katigbak, Marcia S

    2008-09-01

    Trait and cultural psychology perspectives on cross-role consistency and its relation to adjustment were examined in 2 individualistic cultures, the United States (N=231) and Australia (N=195), and 4 collectivistic cultures, Mexico (N=199), the Philippines (N=195), Malaysia (N=217), and Japan (N=180). Cross-role consistency in trait ratings was evident in all cultures, supporting trait perspectives. Cultural comparisons of mean consistency provided support for cultural psychology perspectives as applied to East Asian cultures (i.e., Japan) but not collectivistic cultures more generally. Some but not all of the hypothesized predictors of consistency were supported across cultures. Cross-role consistency predicted aspects of adjustment in all cultures, but prediction was most reliable in the U.S. sample and weakest in the Japanese sample. Alternative constructs proposed by cultural psychologists--personality coherence, social appraisal, and relationship harmony--predicted adjustment in all cultures but were not, as hypothesized, better predictors of adjustment in collectivistic cultures than in individualistic cultures.

  12. Kruskal-Wallis test: BASIC computer program to perform nonparametric one-way analysis of variance and multiple comparisons on ranks of several independent samples.

    PubMed

    Theodorsson-Norheim, E

    1986-08-01

    Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or the adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
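
    A present-day equivalent of what the BASIC program does can be written in a few lines of Python. The post-hoc step below uses pairwise Mann-Whitney tests with Bonferroni adjustment, which may differ from the exact multiple-comparison procedure implemented in the original program.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Three hypothetical independent samples
rng = np.random.default_rng(8)
groups = {"A": rng.normal(10, 2, 15),
          "B": rng.normal(12, 2, 15),
          "C": rng.normal(10.5, 2, 15)}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

# Post-hoc pairwise comparisons with Bonferroni adjustment
pairs = list(combinations(groups, 2))
for a, b in pairs:
    u, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(1.0, p_pair * len(pairs))
    print(f"{a} vs {b}: U={u:.1f}, adjusted p={p_adj:.4f}")
```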

  13. Factor weighting in DRASTIC modeling.

    PubMed

    Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F

    2015-02-01

    Evaluation of aquifer vulnerability comprehends the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports the comparison of 5 weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include: the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, adjustment techniques have privileged the factors representing soil characteristics, hydrologic settings, aquifer properties and environmental parameters, by leveling their weights to ≈4.4, and have subordinated the factors describing the aquifer media by downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum value of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because they are 20 points wide. Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional expertise to be set up satisfactorily. Following a general criterion that weights must be proportional to the range of the ratings, Correspondence Analysis may be recommended as the best adjustment technique. Copyright © 2014 Elsevier B.V. All rights reserved.
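
    The DRASTIC index itself is a weighted sum of seven factor ratings. The sketch below evaluates it for one hypothetical cell using the standard Delphi-consensus weights; swapping in the weight sets produced by the adjustment techniques compared above would change only the weights dictionary. The ratings shown are invented.

```python
# Standard (Delphi-consensus) DRASTIC factor weights
weights = {"Depth_to_water": 5, "Recharge": 4, "Aquifer_media": 3, "Soil_media": 2,
           "Topography": 1, "Impact_vadose_zone": 5, "Hydraulic_conductivity": 3}

# Hypothetical ratings (1-10) for one aquifer cell
ratings = {"Depth_to_water": 7, "Recharge": 6, "Aquifer_media": 4, "Soil_media": 5,
           "Topography": 9, "Impact_vadose_zone": 6, "Hydraulic_conductivity": 2}

def drastic_index(ratings, weights):
    """Intrinsic vulnerability index: weighted sum of the seven factor ratings."""
    return sum(weights[f] * ratings[f] for f in weights)

print(drastic_index(ratings, weights))   # index range is roughly 23-230 with standard weights
```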

  14. Genetic variants of SULT1A1 and XRCC1 genes and risk of lung cancer in Bangladeshi population.

    PubMed

    Tasnim, Tasnova; Al-Mamun, Mir Md Abdullah; Nahid, Noor Ahmed; Islam, Md Reazul; Apu, Mohd Nazmul Hasan; Bushra, Most Umme; Rabbi, Sikder Nahidul Islam; Nahar, Zabun; Chowdhury, Jakir Ahmed; Ahmed, Maizbha Uddin; Islam, Mohammad Safiqul; Hasnat, Abul

    2017-11-01

    Lung cancer is one of the most frequently occurring cancers throughout the world as well as in Bangladesh. This study aimed to evaluate the prognostic and/or predictive value of functional polymorphisms in the SULT1A1 (rs9282861) and XRCC1 (rs25487) genes for lung cancer risk in the Bangladeshi population. A case-control study was conducted comprising 202 lung cancer patients and 242 healthy volunteers, taking into account age, sex, and smoking status. After isolation of genomic DNA, genotyping was done by the polymerase chain reaction-restriction fragment length polymorphism method, and lung cancer risk was evaluated as an odds ratio adjusted for age, sex, and smoking status. A significant association was found between the SULT1A1 rs9282861 and XRCC1 rs25487 polymorphisms and lung cancer risk. For the rs9282861 polymorphism, the Arg/His (adjusted odds ratio = 5.06, 95% confidence interval = 3.05-8.41, p < 0.05) and His/His (adjusted odds ratio = 3.88, 95% confidence interval = 2.20-6.82, p < 0.05) genotypes were strongly associated with increased risk of lung cancer in comparison to the Arg/Arg genotype. For the rs25487 polymorphism, the Arg/Gln heterozygote (adjusted odds ratio = 4.57, 95% confidence interval = 2.79-7.46, p < 0.05) and the Gln/Gln mutant homozygote (adjusted odds ratio = 4.99, 95% confidence interval = 2.66-9.36, p < 0.05) were also significantly associated with increased risk of lung cancer. This study demonstrates that the His allele of SULT1A1 rs9282861 and the Gln allele of XRCC1 rs25487 are associated with lung cancer risk in the Bangladeshi population.
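    Adjusted odds ratios of this kind are commonly obtained from logistic regression with genotype, age, sex, and smoking status as predictors. The sketch below, using statsmodels on synthetic data (not the study's data), shows the general recipe; the variable names and data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
# Synthetic case-control data (illustrative only)
df = pd.DataFrame({
    "case":     rng.integers(0, 2, n),
    "genotype": rng.choice(["ArgArg", "ArgHis", "HisHis"], n),
    "age":      rng.normal(55, 10, n).round(),
    "sex":      rng.choice(["M", "F"], n),
    "smoker":   rng.integers(0, 2, n),
})

# Logistic regression: genotype effect adjusted for age, sex and smoking status,
# with Arg/Arg as the reference genotype
model = smf.logit(
    "case ~ C(genotype, Treatment('ArgArg')) + age + C(sex) + smoker", data=df
).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals (exponentiated coefficients)
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["adjusted OR", "2.5%", "97.5%"]
print(or_table)
```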

  15. Comparison of HOMA-IR, HOMA-β% and disposition index between US white men and Japanese men in Japan: the ERA JUMP study.

    PubMed

    Ahuja, Vasudha; Kadowaki, Takashi; Evans, Rhobert W; Kadota, Aya; Okamura, Tomonori; El Khoudary, Samar R; Fujiyoshi, Akira; Barinas-Mitchell, Emma J M; Hisamatsu, Takashi; Vishnu, Abhishek; Miura, Katsuyuki; Maegawa, Hiroshi; El-Saed, Aiman; Kashiwagi, Atsunori; Kuller, Lewis H; Ueshima, Hirotsugu; Sekikawa, Akira

    2015-02-01

    At the same level of BMI, white people have less visceral adipose tissue (VAT) and are less susceptible to developing type 2 diabetes than Japanese people. No previous population-based studies have compared insulin resistance and insulin secretion between these two races in a standardised manner that accounts for VAT. We compared HOMA-IR, HOMA of beta cell function (HOMA-β%) and disposition index (DI) in US white men and Japanese men in Japan. We conducted a population-based, cross-sectional study comprising 298 white men and 294 Japanese men aged 40-49 years without diabetes. Insulin, glucose, VAT and other measurements were performed at the University of Pittsburgh. We used ANCOVA to compare geometric means of HOMA-IR, HOMA-β% and DI, adjusting for VAT and other covariates. White men had higher HOMA-IR, HOMA-β% and DI than Japanese men, and the differences remained significant (p < 0.01) after adjusting for VAT (geometric mean [95% CI]): 3.1 (2.9, 3.2) vs 2.5 (2.4, 2.6), 130.8 (124.6, 137.3) vs 86.7 (82.5, 91.0), and 42.4 (41.0, 44.0) vs 34.8 (33.6, 36.0), respectively. Moreover, HOMA-IR, HOMA-β% and DI were significantly higher in white men even after further adjustment for BMI, impaired fasting glucose and other risk factors. The higher VAT-adjusted DI in white men compared with Japanese men may partly explain why white people are less susceptible than Japanese people to developing type 2 diabetes. The results, however, should be interpreted with caution because the insulin indices were assessed from fasting samples and no adjustment was made for baseline glucose tolerance. Further studies using formal methods to evaluate insulin indices are warranted.
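    An ANCOVA comparison of geometric means can be sketched as an ordinary least-squares model on log-transformed values, with adjusted means recovered by back-transforming predictions at the overall covariate means. The example below uses synthetic data and assumed variable names; it illustrates the general approach, not the study's exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
# Synthetic data standing in for the two cohorts (illustrative only)
df = pd.DataFrame({
    "group":   rng.choice(["US_white", "Japanese"], n),
    "vat_cm2": rng.normal(80, 30, n).clip(10),
    "bmi":     rng.normal(25, 3, n),
})
df["homa_ir"] = np.exp(0.8 + 0.004 * df.vat_cm2
                       + 0.1 * (df.group == "US_white")
                       + rng.normal(0, 0.3, n))
df["log_homa_ir"] = np.log(df["homa_ir"])

# ANCOVA on the log scale: group comparison adjusted for VAT (and BMI)
fit = smf.ols("log_homa_ir ~ C(group) + vat_cm2 + bmi", data=df).fit()

# Back-transformed (geometric) adjusted means at the mean covariate values
grid = pd.DataFrame({"group": ["US_white", "Japanese"],
                     "vat_cm2": df.vat_cm2.mean(),
                     "bmi": df.bmi.mean()})
print(np.exp(fit.predict(grid)))
```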

  16. Variations in mortality rates among Canadian neonatal intensive care units

    PubMed Central

    Sankaran, Koravangattu; Chien, Li-Yin; Walker, Robin; Seshia, Mary; Ohlsson, Arne; Lee, Shoo K.

    2002-01-01

    Background Most previous reports of variations in mortality rates for infants admitted to neonatal intensive care units (NICUs) have involved small groups of subpopulations, such as infants with very low birth weight. Our aim was to examine the incidence and causes of death and the risk-adjusted variation in mortality rates for a large group of infants of all birth weights admitted to Canadian NICUs. Methods We examined the deaths that occurred among all 19 265 infants admitted to 17 tertiary-level Canadian NICUs from January 1996 to October 1997. We used multivariate analysis to examine the risk factors associated with death and the variations in mortality rates, adjusting for risks in the baseline population, severity of illness on admission and whether the infant was outborn (born at a different hospital from the one where the NICU was located). Results The overall mortality rate was 4% (795 infants died). Forty percent of the deaths (n = 318) occurred within 2 days of NICU admission, 50% (n = 397) within 3 days and 75% (n = 596) within 12 days. The major conditions associated with death were gestational age less than 24 weeks (59 deaths [7%]), gestational age 24–28 weeks (325 deaths [41%]), outborn status (340 deaths [42%]), congenital anomalies (270 deaths [34%]), surgery (141 deaths [18%]), infection (108 deaths [14%]), hypoxic–ischemic encephalopathy (128 deaths [16%]) and small for gestational age (i.e., less than the third percentile) (77 deaths [10%]). There was significant variation in the risk-adjusted mortality rates (range 1.6% to 5.5%) among the 17 NICUs. Interpretation Most NICU deaths occurred within the first few days after admission. Preterm birth, outborn status and congenital anomalies were the conditions most frequently associated with death in the NICU. The significant variation in risk-adjusted mortality rates emphasizes the importance of risk adjustment for valid comparison of NICU outcomes. PMID:11826939
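    One common way to produce risk-adjusted mortality comparisons of this kind (not necessarily the authors' exact model) is to fit a pooled logistic regression of death on baseline risk and severity of illness, then compare each unit's observed deaths with the deaths expected under that model. A minimal sketch on synthetic data, with assumed variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
# Synthetic admissions spread across 17 hypothetical units (illustrative only)
df = pd.DataFrame({
    "unit":       rng.integers(1, 18, n),
    "gest_age":   rng.normal(34, 4, n).clip(22, 42),
    "outborn":    rng.integers(0, 2, n),
    "snap_score": rng.normal(10, 5, n).clip(0),  # severity-of-illness proxy
})
logit_p = -4.0 - 0.35 * (df.gest_age - 34) + 0.5 * df.outborn + 0.05 * df.snap_score
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Pooled risk model: death as a function of baseline risk and severity
risk_model = smf.logit("died ~ gest_age + outborn + snap_score", data=df).fit(disp=False)
df["expected"] = risk_model.predict(df)

# Observed vs expected deaths per unit -> risk-adjusted (standardized) mortality ratio
observed = df.groupby("unit")["died"].sum()
expected = df.groupby("unit")["expected"].sum()
print((observed / expected).round(2))
```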

  17. Timing of Radiotherapy and Outcome in Patients Receiving Adjuvant Endocrine Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karlsson, Per, E-mail: per.karlsson@oncology.gu.s; Cole, Bernard F.; International Breast Cancer Study Group Statistical Center, Department of Biostatistics and Computational Biology, Dana-Farber Cancer Institute, Boston, MA

    2011-06-01

    Purpose: To evaluate the association between the interval from breast-conserving surgery (BCS) to radiotherapy (RT) and the clinical outcome among patients treated with adjuvant endocrine therapy. Patients and Methods: Patient information was obtained from three International Breast Cancer Study Group trials. The analysis was restricted to 964 patients treated with BCS and adjuvant endocrine therapy. The patients were divided into two groups according to the median number of days between BCS and RT and into four groups according to the quartile of time between BCS and RT. The endpoints were the interval to local recurrence, disease-free survival, and overall survival. Proportional hazards regression analysis was used to perform comparisons after adjustment for baseline factors. Results: The median interval between BCS and RT was 77 days. RT timing was significantly associated with age, menopausal status, and estrogen receptor status. After adjustment for these factors, no significant effect of a RT delay ≤20 weeks was found. The adjusted hazard ratio for RT within 77 days vs. after 77 days was 0.94 (95% confidence interval [CI], 0.47-1.87) for the interval to local recurrence, 1.05 (95% CI, 0.82-1.34) for disease-free survival, and 1.07 (95% CI, 0.77-1.49) for overall survival. For the interval to local recurrence, the adjusted hazard ratio for ≤48, 49-77, and 78-112 days was 0.90 (95% CI, 0.34-2.37), 0.86 (95% CI, 0.33-2.25), and 0.89 (95% CI, 0.33-2.41), respectively, relative to ≥113 days. Conclusion: A RT delay of ≤20 weeks was significantly associated with baseline factors such as age, menopausal status, and estrogen-receptor status. After adjustment for these factors, the timing of RT was not significantly associated with the interval to local recurrence, disease-free survival, or overall survival.
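    A proportional hazards comparison of this form can be sketched with the lifelines package: fit a Cox model with the timing indicator plus the baseline factors and read off adjusted hazard ratios. The data, covariate names, and effect sizes below are synthetic assumptions, not the trial data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 900
# Synthetic trial-like data (illustrative only)
df = pd.DataFrame({
    "rt_within_77d": rng.integers(0, 2, n),
    "age":           rng.normal(58, 9, n).round(),
    "er_positive":   rng.integers(0, 2, n),
})
# Exponential event times with administrative censoring at 10 years of follow-up
hazard = 0.03 * np.exp(-0.05 * df.rt_within_77d + 0.01 * (df.age - 58))
time_to_event = rng.exponential(1 / hazard)
df["time"] = np.minimum(time_to_event, 10.0)
df["event"] = (time_to_event <= 10.0).astype(int)

# Cox proportional hazards regression adjusted for baseline factors;
# exp(coef) in the summary gives adjusted hazard ratios with 95% CIs
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```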

  18. Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images

    PubMed Central

    Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde

    2017-01-01

    Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is first delineated pixelwise by a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet thresholding is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality. PMID:28663860
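    The wavelet-shrinkage component of such a pipeline (without the phase-based speckle delineation, which requires complex OCT data) can be sketched with PyWavelets: decompose the image, soft-threshold the detail coefficients, and reconstruct. The threshold rule and the synthetic test image below are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_shrink(image: np.ndarray, wavelet: str = "db4",
                   level: int = 3, k: float = 3.0) -> np.ndarray:
    """Soft-threshold the wavelet detail coefficients of a 2-D image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Noise scale estimated from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = k * sigma
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

# Usage on a synthetic speckled "B-scan": log-compress, then shrink
rng = np.random.default_rng(4)
img = rng.gamma(shape=2.0, scale=1.0, size=(256, 256))  # multiplicative-noise-like
denoised = wavelet_shrink(np.log1p(img))
print("std before:", np.log1p(img).std().round(3), "after:", denoised.std().round(3))
```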

  19. Measuring the patient experience in primary care

    PubMed Central

    Slater, Morgan; Kiran, Tara

    2016-01-01

    Objective To compare the characteristics and responses of patients completing a patient experience survey accessed online after e-mail notification or delivered in the waiting room using tablet computers. Design Cross-sectional comparison of 2 methods of delivering a patient experience survey. Setting A large family health team in Toronto, Ont. Participants Family practice patients aged 18 or older who completed an e-mail survey between January and June 2014 (N = 587) or who completed the survey in the waiting room in July and August 2014 (N = 592). Main outcome measures Comparison of respondent demographic characteristics and responses to questions related to access and patient-centredness. Results Patients responding to the e-mail survey were more likely to live in higher-income neighbourhoods (P = .0002), be between the ages of 35 and 64 (P = .0147), and be female (P = .0434) compared with those responding to the waiting room survey; there were no significant differences related to self-rated health. The differences in neighbourhood income were noted despite minimal differences between patients with and without e-mail addresses included in their medical records. There were few differences in responses to the survey questions between the 2 survey methods and any differences were explained by the underlying differences in patient demographic characteristics. Conclusion Our findings suggest that respondent demographic characteristics might differ depending on the method of survey delivery, and these differences might affect survey responses. Methods of delivering patient experience surveys that require electronic literacy might underrepresent patients living in low-income neighbourhoods. Practices should consider evaluating for nonresponse bias and adjusting for patient demographic characteristics when interpreting survey results. Further research is needed to understand how primary care practices can optimize electronic survey delivery methods to survey a representative sample of patients. PMID:27965350
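    Comparisons of respondent characteristics between the two delivery modes, such as the age and sex differences reported above, are typically tested with a chi-square test on the cross-tabulated counts. A minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical respondent counts by age band for the two delivery modes
# (rows: e-mail, waiting room; columns: 18-34, 35-64, 65+)
table = np.array([[110, 360, 117],
                  [165, 300, 127]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")
```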

  20. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    PubMed

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment.
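    As a concrete example of an assumption-laden between-sample method, median-of-ratios (DESeq-style) size factors assume that most genes are not differentially expressed between samples. The sketch below implements that calculation on a toy count matrix; it is illustrative, not a replacement for an established implementation.

```python
import numpy as np

def median_of_ratios_size_factors(counts: np.ndarray) -> np.ndarray:
    """DESeq-style size factors: median ratio of each sample to a per-gene
    geometric-mean reference. Assumes most genes are not differentially
    expressed across samples."""
    log_counts = np.log(counts.astype(float))
    # Per-gene reference: geometric mean over samples, using only genes with no zeros
    finite = np.all(np.isfinite(log_counts), axis=1)
    log_ref = log_counts[finite].mean(axis=1)
    # Size factor per sample: median of sample/reference ratios on the log scale
    return np.exp(np.median(log_counts[finite] - log_ref[:, None], axis=0))

# Usage: genes x samples count matrix (toy numbers)
counts = np.array([[100, 200, 150],
                   [ 50, 120,  60],
                   [ 30,  55,  45],
                   [500, 950, 700]])
size_factors = median_of_ratios_size_factors(counts)
normalized = counts / size_factors  # counts adjusted for depth/composition
print(size_factors.round(3))
```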
