Sample records for analysis included estimation

  1. Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.

    DOT National Transportation Integrated Search

    1979-09-01

    This last volume includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...

  2. Performance in population models for count data, part II: a new SAEM algorithm

    PubMed Central

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009). PMID:19680795

  3. Analysis models for the estimation of oceanic fields

    NASA Technical Reports Server (NTRS)

    Carter, E. F.; Robinson, A. R.

    1987-01-01

    A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include the analysis, or the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimate of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.

  4. Network meta-analysis of multiple outcome measures accounting for borrowing of information across outcomes.

    PubMed

    Achana, Felix A; Cooper, Nicola J; Bujkiewicz, Sylwia; Hubbard, Stephanie J; Kendrick, Denise; Jones, David R; Sutton, Alex J

    2014-07-21

    Network meta-analysis (NMA) enables simultaneous comparison of multiple treatments while preserving randomisation. When summarising evidence to inform an economic evaluation, it is important that the analysis accurately reflects the dependency structure within the data, as correlations between outcomes may have implications for estimating the net benefit associated with treatment. A multivariate NMA offers a framework for evaluating multiple treatments across multiple outcome measures while accounting for the correlation structure between outcomes. The standard NMA model is extended to multiple outcome settings in two stages. In the first stage, information is borrowed across outcomes as well as across studies through modelling the within-study and between-study correlation structure. In the second stage, we make use of the additional assumption that intervention effects are exchangeable between outcomes to predict effect estimates for all outcomes, including effect estimates on outcomes where evidence is either sparse or the treatment had not been considered by any one of the studies included in the analysis. We apply the methods to binary outcome data from a systematic review evaluating the effectiveness of nine home safety interventions on uptake of three poisoning prevention practices (safe storage of medicines, safe storage of other household products, and possession of a poison control centre telephone number) in households with children. Analyses are conducted in WinBUGS using Markov Chain Monte Carlo (MCMC) simulations. Univariate and the first stage multivariate models produced broadly similar point estimates of intervention effects but the uncertainty around the multivariate estimates varied depending on the prior distribution specified for the between-study covariance structure. The second stage multivariate analyses produced more precise effect estimates while enabling intervention effects to be predicted for all outcomes, including intervention effects on outcomes not directly considered by the studies included in the analysis. Accounting for the dependency between outcomes in a multivariate meta-analysis may or may not improve the precision of effect estimates from a network meta-analysis compared to analysing each outcome separately.

  5. Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects

    DTIC Science & Technology

    2017-02-22

    AFRL-AFOSR-UK-TR-2017-0023: Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects. Marco Martorella. Grant FA9550-14-1-0183.

  6. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of the differences between paired measurements is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
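
The core Bland-Altman quantities are easy to compute. Below is a minimal Python sketch, assuming independent paired measurements from a single study: it returns the bias, the SD of the differences, the limits of agreement, and approximate confidence intervals for the limits. The repeated-measures adjustment and the robust-variance pooling step described in the abstract are not reproduced, and the example data are simulated.

```python
import numpy as np
from scipy import stats

def bland_altman_loa(new, gold, z=1.96):
    """Bias, SD of differences, and limits of agreement for one study.

    A minimal sketch of the classical Bland-Altman summary; it does not
    include the repeated-measures adjustments discussed in the abstract.
    """
    d = np.asarray(new, float) - np.asarray(gold, float)
    n = d.size
    bias, sd = d.mean(), d.std(ddof=1)
    loa = (bias - z * sd, bias + z * sd)
    # Approximate variance of each LoA endpoint (Bland & Altman, 1986).
    var_loa = sd ** 2 * (1.0 / n + z ** 2 / (2.0 * (n - 1)))
    t = stats.t.ppf(0.975, n - 1)
    ci_low = (loa[0] - t * np.sqrt(var_loa), loa[0] + t * np.sqrt(var_loa))
    ci_high = (loa[1] - t * np.sqrt(var_loa), loa[1] + t * np.sqrt(var_loa))
    return bias, sd, loa, ci_low, ci_high

rng = np.random.default_rng(0)
gold = rng.normal(100, 15, size=40)
new = gold + rng.normal(2.0, 5.0, size=40)   # hypothetical new device
print(bland_altman_loa(new, gold))
```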

  7. Large Area Crop Inventory Experiment (LACIE). Phase 2 evaluation report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Documentation of the activities of the Large Area Crop Inventory Experiment during the 1976 Northern Hemisphere crop year is presented. A brief overview of the experiment is included, as well as phase two area, yield, and production estimates for the United States Great Plains, Canada, and the Union of Soviet Socialist Republics spring and winter wheat regions. The accuracies of these estimates are compared with independent government estimates. An accuracy assessment of the United States Great Plains yardstick region based on a thorough blind site analysis is given, and reasons for variations in estimating performance are discussed. Other phase two technical activities, including operations, exploratory analysis, reporting, methods of assessment, phase three and advanced system design, technical issues, and developmental activities, are also covered.

  8. AN EMPIRICAL BAYES APPROACH TO COMBINING ESTIMATES OF THE VALUE OF A STATISTICAL LIFE FOR ENVIRONMENTAL POLICY ANALYSIS

    EPA Science Inventory

    This analysis updates EPA's standard VSL estimate by using a more comprehensive collection of VSL studies that include studies published between 1992 and 2000, as well as applying a more appropriate statistical method. We provide a pooled effect VSL estimate by applying the empi...

  9. An empirical comparative study on biological age estimation algorithms with an application of Work Ability Index (WAI).

    PubMed

    Cho, Il Haeng; Park, Kyung S; Lim, Chang Joo

    2010-02-01

    In this study, we describe the characteristics of five different biological age (BA) estimation algorithms: (i) multiple linear regression, (ii) principal component analysis, and the more specialised methods developed by (iii) Hochschild, (iv) Klemera and Doubal, and (v) a variant of Klemera and Doubal's method. The objective of this study is to find the most appropriate method of BA estimation by examining the association between the Work Ability Index (WAI) and the differences of each algorithm's estimates from chronological age (CA). The WAI was found to be a measure that reflects an individual's current health status rather than deterioration that depends strongly on age. Experiments were conducted on 200 Korean male participants using a BA estimation system designed to be non-invasive, simple to operate, and based on human function. Using the empirical data, BA estimation as well as various analyses, including correlation analysis and discriminant function analysis, was performed. The empirical data confirmed that Klemera and Doubal's method with uncorrelated variables from principal component analysis produces relatively reliable and acceptable BA estimates. 2009 Elsevier Ireland Ltd. All rights reserved.
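
As a rough illustration of the simplest of the five algorithms, method (i), the sketch below regresses chronological age on a biomarker battery and takes the fitted value as BA, then checks the association of BA minus CA with a WAI score. All data are simulated, and the Hochschild and Klemera-Doubal methods the study actually favours are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

def ba_by_regression(biomarkers, chrono_age):
    """Biological age by multiple linear regression (method (i) above):
    regress chronological age on the biomarker battery and take the fitted
    value as BA."""
    model = LinearRegression().fit(biomarkers, chrono_age)
    return model.predict(biomarkers)

# Hypothetical data: 200 subjects, 6 biomarkers, plus a WAI score.
rng = np.random.default_rng(5)
ca = rng.uniform(30, 60, 200)
X = np.column_stack([ca + rng.normal(0, s, 200) for s in (3, 4, 5, 6, 7, 8)])
wai = 45 - 0.2 * ca + rng.normal(0, 3, 200)
ba = ba_by_regression(X, ca)
# Association between the BA-CA difference and WAI, as examined in the study.
print(pearsonr(ba - ca, wai))
```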

  10. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-with-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method; the development process is presented along with the weight estimation method itself.

  11. Latest NASA Instrument Cost Model (NICM): Version VI

    NASA Technical Reports Server (NTRS)

    Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary

    2014-01-01

    The NASA Instrument Cost Model, NICM, is a suite of tools that allows for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost estimation by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable to instruments flying on Explorer-like class missions; 2) a new cluster analysis capability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.

  12. Propensity score analysis with partially observed covariates: How should multiple imputation be used?

    PubMed

    Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J

    2017-01-01

    Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
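
A minimal sketch of the MIte strategy described above, under several simplifying assumptions: scikit-learn's IterativeImputer stands in for a full multiple-imputation procedure, the propensity score is a plain logistic regression, and the within-imputation variance treats the weights as fixed (so it understates the uncertainty the paper discusses under question (c)). Function and variable names are illustrative.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

def mite_iptw(X, treat, y, n_imputations=20):
    """MIte sketch: impute covariates (with treatment and outcome in the
    imputation model), fit a propensity score and an IPTW risk-difference
    estimate within each imputed dataset, then pool with Rubin's rules.
    X may contain NaNs; treat and y are binary arrays."""
    X, treat, y = np.asarray(X, float), np.asarray(treat, int), np.asarray(y, float)
    estimates, variances = [], []
    for m in range(n_imputations):
        imp = IterativeImputer(sample_posterior=True, random_state=m)
        Z_imp = imp.fit_transform(np.column_stack([X, treat, y]))
        X_imp = Z_imp[:, :X.shape[1]]
        ps = LogisticRegression(max_iter=1000).fit(X_imp, treat).predict_proba(X_imp)[:, 1]
        w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))
        w1, y1 = w[treat == 1], y[treat == 1]
        w0, y0 = w[treat == 0], y[treat == 0]
        mu1, mu0 = np.average(y1, weights=w1), np.average(y0, weights=w0)
        estimates.append(mu1 - mu0)
        # Crude within-imputation variance: weights treated as fixed, so the
        # uncertainty from estimating the propensity score is ignored.
        v1 = np.sum(w1 ** 2 * (y1 - mu1) ** 2) / np.sum(w1) ** 2
        v0 = np.sum(w0 ** 2 * (y0 - mu0) ** 2) / np.sum(w0) ** 2
        variances.append(v1 + v0)
    q_bar = np.mean(estimates)                          # pooled point estimate
    u_bar = np.mean(variances)                          # within-imputation variance
    b = np.var(estimates, ddof=1)                       # between-imputation variance
    total_var = u_bar + (1.0 + 1.0 / n_imputations) * b  # Rubin's rules
    return q_bar, np.sqrt(total_var)
```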

  13. Network meta-analysis of multiple outcome measures accounting for borrowing of information across outcomes

    PubMed Central

    2014-01-01

    Background Network meta-analysis (NMA) enables simultaneous comparison of multiple treatments while preserving randomisation. When summarising evidence to inform an economic evaluation, it is important that the analysis accurately reflects the dependency structure within the data, as correlations between outcomes may have implications for estimating the net benefit associated with treatment. A multivariate NMA offers a framework for evaluating multiple treatments across multiple outcome measures while accounting for the correlation structure between outcomes. Methods The standard NMA model is extended to multiple outcome settings in two stages. In the first stage, information is borrowed across outcomes as well as across studies through modelling the within-study and between-study correlation structure. In the second stage, we make use of the additional assumption that intervention effects are exchangeable between outcomes to predict effect estimates for all outcomes, including effect estimates on outcomes where evidence is either sparse or the treatment had not been considered by any one of the studies included in the analysis. We apply the methods to binary outcome data from a systematic review evaluating the effectiveness of nine home safety interventions on uptake of three poisoning prevention practices (safe storage of medicines, safe storage of other household products, and possession of a poison control centre telephone number) in households with children. Analyses are conducted in WinBUGS using Markov Chain Monte Carlo (MCMC) simulations. Results Univariate and the first stage multivariate models produced broadly similar point estimates of intervention effects but the uncertainty around the multivariate estimates varied depending on the prior distribution specified for the between-study covariance structure. The second stage multivariate analyses produced more precise effect estimates while enabling intervention effects to be predicted for all outcomes, including intervention effects on outcomes not directly considered by the studies included in the analysis. Conclusions Accounting for the dependency between outcomes in a multivariate meta-analysis may or may not improve the precision of effect estimates from a network meta-analysis compared to analysing each outcome separately. PMID:25047164

  14. Spectrum Modal Analysis for the Detection of Low-Altitude Windshear with Airborne Doppler Radar

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew W.

    1992-01-01

    A major obstacle in the estimation of windspeed patterns associated with low-altitude windshear with an airborne pulsed Doppler radar system is the presence of strong levels of ground clutter which can strongly bias a windspeed estimate. Typical solutions attempt to remove the clutter energy from the return through clutter rejection filtering. Proposed is a method whereby both the weather and clutter modes present in a return spectrum can be identified to yield an unbiased estimate of the weather mode without the need for clutter rejection filtering. An attempt will be made to show that modeling through a second order extended Prony approach is sufficient for the identification of the weather mode. A pattern recognition approach to windspeed estimation from the identified modes is derived and applied to both simulated and actual flight data. Comparisons between windspeed estimates derived from modal analysis and the pulse-pair estimator are included as well as associated hazard factors. Also included is a computationally attractive method for estimating windspeeds directly from the coefficients of a second-order autoregressive model. Extensions and recommendations for further study are included.
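
The last idea, reading a velocity estimate off the coefficients of a second-order autoregressive model, can be sketched as follows. This is a generic AR(2) pole decomposition, not the report's extended-Prony or pattern-recognition procedure; the PRF, wavelength, and the rule for picking the weather mode (the pole farther from zero Doppler, where clutter concentrates) are illustrative assumptions.

```python
import numpy as np

def ar2_doppler_modes(iq, prf_hz, wavelength_m):
    """Fit an AR(2) model to a complex I/Q radar return and convert its two
    pole frequencies to radial velocities (v = f_d * wavelength / 2)."""
    x = np.asarray(iq, complex)
    x = x - x.mean()
    acf = lambda lag: np.mean(x[lag:] * np.conj(x[:len(x) - lag]))
    r0, r1, r2 = acf(0), acf(1), acf(2)
    # Yule-Walker equations for AR(2): R(m) = a1*R(m-1) + a2*R(m-2), m = 1, 2.
    A = np.array([[r0, np.conj(r1)], [r1, r0]])
    a1, a2 = np.linalg.solve(A, np.array([r1, r2]))
    poles = np.roots([1.0, -a1, -a2])
    doppler_hz = np.angle(poles) / (2.0 * np.pi) * prf_hz
    velocities = doppler_hz * wavelength_m / 2.0
    # Ground clutter sits near zero Doppler, so take the pole farther from DC
    # as the weather mode (a simple stand-in for the report's pattern step).
    weather_velocity = velocities[np.argmax(np.abs(doppler_hz))]
    return velocities, weather_velocity

# Hypothetical X-band parameters and a synthetic return: clutter near zero
# Doppler plus a weather echo at roughly 12 m/s radial velocity.
prf, lam = 3000.0, 0.032
n = np.arange(256)
rng = np.random.default_rng(6)
clutter = 1.0 * np.exp(2j * np.pi * 5.0 / prf * n)
weather = 0.7 * np.exp(2j * np.pi * (2 * 12.0 / lam) / prf * n)
signal = clutter + weather + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(ar2_doppler_modes(signal, prf, lam))
```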

  15. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimation of system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.

  16. A program to form a multidisciplinary data base and analysis for dynamic systems

    NASA Technical Reports Server (NTRS)

    Taylor, L. W.; Suit, W. T.; Mayo, M. H.

    1984-01-01

    Diverse sets of experimental data and analysis programs have been assembled for the purpose of facilitating research in systems identification, parameter estimation and state estimation techniques. The data base analysis programs are organized to make it easy to compare alternative approaches. Additional data and alternative forms of analysis will be included as they become available.

  17. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    ERIC Educational Resources Information Center

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  18. Comparison of Estimates between Cohort and Case-Control Studies in Meta-Analyses of Therapeutic Interventions: A Meta-Epidemiological Study.

    PubMed

    Lanza, Amy; Ravaud, Philippe; Riveros, Carolina; Dechartres, Agnes

    2016-01-01

    Observational studies are increasingly being used for assessing therapeutic interventions. Case-control studies are generally considered to have greater risk of bias than cohort studies, but we lack evidence of differences in effect estimates between the 2 study types. We aimed to compare estimates between cohort and case-control studies in meta-analyses of observational studies of therapeutic interventions by using a meta-epidemiological study. We used a random sample of meta-analyses of therapeutic interventions published in 2013 that included both cohort and case-control studies assessing a binary outcome. For each meta-analysis, the ratio of estimates (RE) was calculated by comparing the estimate in case-control studies to that in cohort studies. Then, we used random-effects meta-analysis to estimate a combined RE across meta-analyses. An RE < 1 indicated that case-control studies yielded larger estimates than cohort studies. The final analysis included 23 meta-analyses: 138 cohort and 133 case-control studies. Treatment effect estimates did not significantly differ between case-control and cohort studies (combined RE 0.97 [95% CI 0.86-1.09]). Heterogeneity was low, with between-meta-analysis variance τ2 = 0.0049. Estimates did not differ between case-control and prospective or retrospective cohort studies (RE = 1.05 [95% CI 0.96-1.15] and RE = 0.99 [95% CI, 0.83-1.19], respectively). Sensitivity analysis of studies reporting adjusted estimates also revealed no significant difference (RE = 1.03 [95% CI 0.91-1.16]). Heterogeneity was also low for these analyses. We found no significant difference in treatment effect estimates between case-control and cohort studies assessing therapeutic interventions.

  19. Estimating unconsolidated sediment cover thickness by using the horizontal distance to a bedrock outcrop as secondary information

    NASA Astrophysics Data System (ADS)

    Kitterød, Nils-Otto

    2017-08-01

    Unconsolidated sediment cover thickness (D) above bedrock was estimated by using a publicly available well database from Norway, GRANADA. General challenges associated with such databases typically involve clustering and bias. However, if information about the horizontal distance to the nearest bedrock outcrop (L) is included, does the spatial estimation of D improve? This idea was tested by comparing two cross-validation results: ordinary kriging (OK) where L was disregarded; and co-kriging (CK) where cross-covariance between D and L was included. The analysis showed only minor differences between OK and CK with respect to differences between estimation and true values. However, the CK results gave in general less estimation variance compared to the OK results. All observations were declustered and transformed to standard normal probability density functions before estimation and back-transformed for the cross-validation analysis. The semivariogram analysis gave correlation lengths for D and L of approx. 10 and 6 km. These correlations reduce the estimation variance in the cross-validation analysis because more than 50 % of the data material had two or more observations within a radius of 5 km. The small-scale variance of D, however, was about 50 % of the total variance, which gave an accuracy of less than 60 % for most of the cross-validation cases. Despite the noisy character of the observations, the analysis demonstrated that L can be used as secondary information to reduce the estimation variance of D.
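
For reference, the ordinary-kriging baseline that the co-kriging results were compared against looks roughly like the sketch below: a single target location, an exponential covariance model, and the usual OK system with a Lagrange multiplier. The normal-score transform, declustering, and the co-kriging with L from the paper are not reproduced, and the sill, range, and well data are hypothetical.

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, range_=10_000.0):
    """Ordinary kriging of sediment thickness D at one target location with
    an exponential covariance model. A minimal sketch of the OK baseline;
    the co-kriging with distance-to-outcrop L is not reproduced."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    cov = lambda h: sill * np.exp(-h / range_)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    n = len(values)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[n, n] = 0.0                        # Lagrange-multiplier row/column
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.linalg.norm(coords - np.asarray(target, float), axis=1))
    sol = np.linalg.solve(K, rhs)
    weights, mu = sol[:n], sol[n]
    estimate = weights @ values
    variance = sill - weights @ rhs[:n] - mu   # ordinary-kriging variance
    return estimate, variance

# Hypothetical wells: (easting, northing) in metres and thickness D in metres.
wells = [(0, 0), (5_000, 1_000), (2_000, 6_000), (8_000, 8_000)]
thickness = [12.0, 8.0, 15.0, 5.0]
print(ordinary_kriging(wells, thickness, target=(3_000, 3_000)))
```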

  20. Illicit and pharmaceutical drug consumption estimated via wastewater analysis. Part A: chemical analysis and drug use estimates.

    PubMed

    Baker, David R; Barron, Leon; Kasprzyk-Hordern, Barbara

    2014-07-15

    This paper presents, for the first time, community-wide estimation of drug and pharmaceutical consumption in England using wastewater analysis and a large number of compounds. Among the groups of compounds studied were: stimulants, hallucinogens and their metabolites, opioids, morphine derivatives, benzodiazepines, antidepressants and others. The results showed the usefulness of wastewater analysis for providing estimates of local community drug consumption. Where target compounds could be compared to NHS prescription statistics, good agreement was apparent between the two sets of data. These compounds include oxycodone, dihydrocodeine, methadone, tramadol, temazepam and diazepam. Discrepancies, however, were observed for propoxyphene, codeine, dosulepin and venlafaxine (over-estimation in each case except codeine). Potential reasons for discrepancies include: sales of drugs sold without prescription and not included within NHS data, abuse of a drug trafficked through illegal sources, different consumption patterns in different areas, direct disposal leading to over-estimation when using the parent compound as the drug target residue, and excretion factors not being representative of the local community. Notably, using a metabolite (rather than the parent drug) as a biomarker leads to higher certainty in the obtained estimates. With regard to illicit drugs, consistent and logical results were reported. Monitoring of these compounds over a one-week period highlighted the expected recreational use of many of these drugs (e.g. cocaine and MDMA) and the more consistent use of others (e.g. methadone). Copyright © 2014 Elsevier B.V. All rights reserved.
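
The back-calculation from a measured wastewater concentration to a community consumption estimate follows a standard form, sketched below with entirely hypothetical numbers; the compound-specific correction and excretion factors the paper actually uses are not reproduced.

```python
def consumption_mg_per_day_per_1000(conc_ng_per_l, flow_l_per_day, excretion_fraction,
                                    mw_parent_over_metabolite, population):
    """Back-calculate community drug consumption from a measured wastewater
    concentration: load = concentration x flow, scaled by the excreted
    fraction and the parent/metabolite molecular-weight ratio, then
    normalised per 1000 inhabitants. A minimal sketch only."""
    load_mg_per_day = conc_ng_per_l * flow_l_per_day * 1e-6        # ng -> mg
    parent_used = load_mg_per_day / excretion_fraction * mw_parent_over_metabolite
    return parent_used / population * 1000.0

# Hypothetical example: a metabolite measured at 500 ng/L in 50 ML/day of
# influent serving 200,000 people, with 30% of the dose excreted as that
# metabolite and a parent/metabolite molecular-weight ratio of 1.05.
print(consumption_mg_per_day_per_1000(500.0, 50e6, 0.30, 1.05, 200_000))
```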

  1. Gravitational waves: search results, data analysis and parameter estimation: Amaldi 10 Parallel session C2.

    PubMed

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  2. Gravitational Waves: Search Results, Data Analysis and Parameter Estimation. Amaldi 10 Parallel Session C2

    NASA Technical Reports Server (NTRS)

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michal; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi

    2015-01-01

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  3. What’s Driving Uncertainty? The Model or the Model Parameters (What’s Driving Uncertainty? The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
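
The practice described here amounts to a small calculation: report the sample mean together with a t-based confidence interval rather than the point estimate alone. The data in the example are simulated.

```python
import numpy as np
from scipy import stats

def mean_with_ci(x, level=0.95):
    """Point estimate of the mean plus a t-based confidence interval, as the
    passage recommends when summarising uncertainty."""
    x = np.asarray(x, float)
    n, mean, se = x.size, x.mean(), x.std(ddof=1) / np.sqrt(x.size)
    t = stats.t.ppf(0.5 + level / 2.0, n - 1)
    return mean, (mean - t * se, mean + t * se)

rng = np.random.default_rng(1)
print(mean_with_ci(rng.normal(10.0, 2.0, size=30)))   # hypothetical sample
```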

  4. Acoustic Source Bearing Estimation (ASBE) computer program development

    NASA Technical Reports Server (NTRS)

    Wiese, Michael R.

    1987-01-01

    A new bearing estimation algorithm (Acoustic Source Analysis Technique - ASAT) and an acoustic analysis computer program (Acoustic Source Bearing Estimation - ASBE) are described, which were developed by Computer Sciences Corporation for NASA Langley Research Center. The ASBE program is used by the Acoustics Division/Applied Acoustics Branch and the Instrument Research Division/Electro-Mechanical Instrumentation Branch to analyze acoustic data and estimate the azimuths from which the source signals radiated. Included are the input and output from a benchmark test case.

  5. Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates

    NASA Technical Reports Server (NTRS)

    Peffley, Al F.

    1991-01-01

    The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.

  6. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  7. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and spss.

    PubMed

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and spss (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and spss macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
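
The estimator behind those macros is a cluster-robust sandwich variance for a weighted meta-regression. The sketch below implements only that basic form; the inverse-variance weighting schemes and small-sample corrections built into the Stata and SPSS macros are not reproduced, and the argument names are illustrative.

```python
import numpy as np

def rve_meta_regression(y, X, w, study):
    """Meta-regression point estimates with cluster-robust (RVE) standard
    errors, clustering effect sizes within study. Basic sandwich form only;
    no small-sample corrections."""
    y, w, study = np.asarray(y, float), np.asarray(w, float), np.asarray(study)
    X = np.asarray(X, float)
    W = np.diag(w)
    bread = np.linalg.inv(X.T @ W @ X)
    beta = bread @ X.T @ W @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for s in np.unique(study):
        idx = study == s
        Xs, ws, es = X[idx], w[idx], resid[idx]
        u = Xs.T @ (ws * es)          # study-level score contribution
        meat += np.outer(u, u)
    vcov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(vcov))
```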

  8. An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses

    ERIC Educational Resources Information Center

    Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark

    2015-01-01

    Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
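
For concreteness, the DerSimonian-Laird moment estimator mentioned above can be written in a few lines; the alternative estimators (REML, Paule-Mandel, Sidik-Jonkman) are not shown. The example effect sizes are hypothetical.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """Moment-based (DerSimonian-Laird) estimate of the between-study
    variance tau^2, the default method the abstract notes can be biased."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

# Hypothetical log-odds-ratio estimates and their within-study variances.
print(dersimonian_laird_tau2([0.2, 0.5, -0.1, 0.4], [0.04, 0.09, 0.05, 0.12]))
```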

  9. Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittman, Richard S.

    2013-09-20

    This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.

  10. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation and to demonstrate the variability between estimated stature and actual stature using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side in each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from the regression analysis method is less than that of the multiplication factor method, thus confirming that regression analysis is better than multiplication factor analysis for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
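
The two approaches being compared are simple to state: a multiplication factor (here taken as the mean stature-to-dimension ratio, one common definition) versus a fitted linear regression, with the error of each assessed against actual stature. The sketch below uses simulated data, not the study's North Indian sample or its sex-specific factors.

```python
import numpy as np

def compare_mf_and_regression(stature_cm, hand_length_cm):
    """Compare multiplication-factor and linear-regression stature estimates
    from a single dimension (hand length here), reporting mean absolute
    error for each method."""
    s = np.asarray(stature_cm, float)
    h = np.asarray(hand_length_cm, float)
    mf = np.mean(s / h)                       # multiplication factor
    b, a = np.polyfit(h, s, 1)                # regression: s = a + b*h
    est_mf, est_reg = mf * h, a + b * h
    mae = lambda est: np.mean(np.abs(est - s))
    return {"MF": mf, "slope": b, "intercept": a,
            "MAE_mf": mae(est_mf), "MAE_regression": mae(est_reg)}

# Hypothetical sample of 100 subjects.
rng = np.random.default_rng(2)
hand = rng.normal(18.5, 1.0, size=100)
stature = 80.0 + 4.8 * hand + rng.normal(0, 4.0, size=100)
print(compare_mf_and_regression(stature, hand))
```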

  11. Estimating costs in the economic evaluation of medical technologies.

    PubMed

    Luce, B R; Elixhauser, A

    1990-01-01

    The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.
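
Of the steps listed, discounting is the most mechanical; a minimal sketch with an illustrative 3% rate and a hypothetical cost stream:

```python
def present_value(costs_by_year, rate=0.03):
    """Discount a stream of future costs to present value, the step the
    article describes under discounting. The 3% rate is illustrative."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs_by_year))

# Hypothetical: 10,000 per year in follow-up costs for 5 years.
print(present_value([10_000] * 5, rate=0.03))
```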

  12. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of equations comprising n equations (where n is the total number of observed points) and m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation by Bayes' rule.
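
A one-dimensional sketch of the density-substitution idea: kernel density estimates replace the true densities of the observed sample and of each simulated group, and the resulting n-equation, m-unknown system is solved for the group weights (here with nonnegative least squares rather than plain least squares). Group names and data are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import nnls

def estimate_group_weights(observed, simulated_groups):
    """Estimate relative weights of m model components: KDEs stand in for the
    true densities, evaluated at the n observed points, and the n x m system
    is solved for the m weights."""
    obs = np.asarray(observed, float)
    f_obs = gaussian_kde(obs)(obs)                       # density of the real sample
    A = np.column_stack([gaussian_kde(np.asarray(g, float))(obs)
                         for g in simulated_groups])     # n x m design matrix
    weights, _ = nnls(A, f_obs)
    return weights / weights.sum()

rng = np.random.default_rng(3)
thin = rng.normal(0.0, 1.0, 2000)     # hypothetical "thin disc" simulated group
thick = rng.normal(3.0, 1.5, 2000)    # hypothetical "thick disc" simulated group
obs = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 1.5, 300)])
print(estimate_group_weights(obs, [thin, thick]))
```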

  13. 40 CFR 93.122 - Procedures for determining regional transportation-related emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... estimated in accordance with reasonable professional practice. (2) The emissions analysis may not include... regional emissions analysis required by §§ 93.118 and 93.119 for the transportation plan, TIP, or project... nonattainment or maintenance area. The analysis shall include FHWA/FTA projects proposed in the transportation...

  14. 40 CFR 93.122 - Procedures for determining regional transportation-related emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... estimated in accordance with reasonable professional practice. (2) The emissions analysis may not include... regional emissions analysis required by §§ 93.118 and 93.119 for the transportation plan, TIP, or project... nonattainment or maintenance area. The analysis shall include FHWA/FTA projects proposed in the transportation...

  15. 40 CFR 93.122 - Procedures for determining regional transportation-related emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... estimated in accordance with reasonable professional practice. (2) The emissions analysis may not include... regional emissions analysis required by §§ 93.118 and 93.119 for the transportation plan, TIP, or project... nonattainment or maintenance area. The analysis shall include FHWA/FTA projects proposed in the transportation...

  16. 40 CFR 93.122 - Procedures for determining regional transportation-related emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... estimated in accordance with reasonable professional practice. (2) The emissions analysis may not include... regional emissions analysis required by §§ 93.118 and 93.119 for the transportation plan, TIP, or project... nonattainment or maintenance area. The analysis shall include FHWA/FTA projects proposed in the transportation...

  17. 40 CFR 93.122 - Procedures for determining regional transportation-related emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... estimated in accordance with reasonable professional practice. (2) The emissions analysis may not include... regional emissions analysis required by §§ 93.118 and 93.119 for the transportation plan, TIP, or project... nonattainment or maintenance area. The analysis shall include FHWA/FTA projects proposed in the transportation...

  18. NASA Instrument Cost/Schedule Model

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model. NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.

  19. Breast and ovarian cancer risks to carriers of the BRCA1 5382insC and 185delAG and BRCA2 6174delT mutations: a combined analysis of 22 population based studies

    PubMed Central

    Antoniou, A; Pharoah, P; Narod, S; Risch, H; Eyfjord, J; Hopper, J; Olsson, H; Johannsson, O; Borg, A; Pasini, B; Radice, P; Manoukian, S; Eccles, D; Tang, N; Olah, E; Anton-Culver, H; Warner, E; Lubinski, J; Gronwald, J; Gorski, B; Tulinius, H; Thorlacius, S; Eerola, H; Nevanlinna, H; Syrjakoski, K; Kallioniemi, O; Thompson, D; Evans, C; Peto, J; Lalloo, F; Evans, D; Easton, D

    2005-01-01

    A recent report estimated the breast cancer risks in carriers of the three Ashkenazi founder mutations to be higher than previously published estimates derived from population based studies. In an attempt to confirm this, the breast and ovarian cancer risks associated with the three Ashkenazi founder mutations were estimated using families included in a previous meta-analysis of population based studies. The estimated breast cancer risks for each of the founder BRCA1 and BRCA2 mutations were similar to the corresponding estimates based on all BRCA1 or BRCA2 mutations in the meta-analysis. These estimates appear to be consistent with the observed prevalence of the mutations in the Ashkenazi Jewish population. PMID:15994883

  20. Low-Temperature Hydrothermal Resource Potential Estimate

    DOE Data Explorer

    Katherine Young

    2016-06-30

    Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.

  1. Analysis of pumping tests: Significance of well diameter, partial penetration, and noise

    USGS Publications Warehouse

    Heidari, M.; Ghiassi, K.; Mehnert, E.

    1999-01-01

    The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.
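
As a simplified illustration of nonlinear least squares applied to pumping-test data, the sketch below fits transmissivity and storativity to synthetic drawdowns using the classical Theis line-source solution; unlike the paper's analysis it ignores well diameter and partial penetration, and the pumping rate and observation distance are hypothetical.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

def theis_drawdown(t, T, S, Q=1.0e-2, r=10.0):
    """Theis drawdown s(t) = Q/(4*pi*T) * W(u), with u = r^2 S / (4 T t) and
    W(u) the exponential integral E1. Q (m^3/s) and r (m) are hypothetical
    test conditions; well diameter and partial penetration are ignored."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Hypothetical NLS estimation of T and S from noisy observation-well data.
t_obs = np.logspace(1, 5, 40)                       # seconds since pumping began
rng = np.random.default_rng(4)
s_obs = theis_drawdown(t_obs, T=5e-4, S=2e-4) + rng.normal(0, 0.01, t_obs.size)
(T_hat, S_hat), _ = curve_fit(theis_drawdown, t_obs, s_obs,
                              p0=(1e-3, 1e-3), bounds=(1e-8, 1.0))
print(T_hat, S_hat)
```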

  2. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…

  3. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses?

    PubMed

    McAuley, L; Pham, B; Tugwell, P; Moher, D

    2000-10-07

    The inclusion of only a subset of all available evidence in a meta-analysis may introduce biases and threaten its validity; this is particularly likely if the subset of included studies differ from those not included, which may be the case for published and grey literature (unpublished studies, with limited distribution). We set out to examine whether exclusion of grey literature, compared with its inclusion in meta-analysis, provides different estimates of the effectiveness of interventions assessed in randomised trials. From a random sample of 135 meta-analyses, we identified and retrieved 33 publications that included both grey and published primary studies. The 33 publications contributed 41 separate meta-analyses from several disease areas. General characteristics of the meta-analyses and associated studies and outcome data at the trial level were collected. We explored the effects of the inclusion of grey literature on the quantitative results using logistic-regression analyses. 33% of the meta-analyses were found to include some form of grey literature. The grey literature, when included, accounts for between 4.5% and 75% of the studies in a meta-analysis. On average, published work, compared with grey literature, yielded significantly larger estimates of the intervention effect by 15% (ratio of odds ratios=1.15 [95% CI 1.04-1.28]). Excluding abstracts from the analysis further compounded the exaggeration (1.33 [1.10-1.60]). The exclusion of grey literature from meta-analyses can lead to exaggerated estimates of intervention effectiveness. In general, meta-analysts should attempt to identify, retrieve, and include all reports, grey and published, that meet predefined inclusion criteria.

  4. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images.

    PubMed

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung

    2018-01-01

    We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement was tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ±standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey's formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas.
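
The one-component formulas compared in the study reduce to simple products of the three orthogonal diameters, and the error metric is a percentage difference from the planimetry volume. The sketch below shows the cuboidal and ellipsoidal cases with hypothetical diameters; the two-component ice cream cone and Linskey formulas are not reproduced.

```python
import numpy as np

def cuboidal_volume(a, b, c):
    """One-component cuboidal approximation: product of the three diameters."""
    return a * b * c

def ellipsoidal_volume(a, b, c):
    """One-component ellipsoidal approximation: (pi/6) * a * b * c."""
    return np.pi / 6.0 * a * b * c

def absolute_percentage_error(estimated, planimetry):
    """APE relative to the planimetry (reference) volume, as in the study."""
    return 100.0 * np.abs(estimated - planimetry) / planimetry

# Hypothetical tumour with orthogonal diameters 2.1 x 1.6 x 1.4 cm and a
# planimetry volume of 2.5 mL.
a, b, c, v_ref = 2.1, 1.6, 1.4, 2.5
for name, v in [("cuboidal", cuboidal_volume(a, b, c)),
                ("ellipsoidal", ellipsoidal_volume(a, b, c))]:
    print(name, round(v, 2), "mL, APE =",
          round(absolute_percentage_error(v, v_ref), 1), "%")
```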

  5. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billman, L.; Keyser, D.

    The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to JEDI models, an analysis of the value and impact of the JEDI models, and an analysis of the validity of job estimates generated by the JEDI model through comparison to other modeled estimates and comparison to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.

  6. Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.; Kachare, A. H.

    1981-01-01

    The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.

  7. WAATS: A computer program for Weights Analysis of Advanced Transportation Systems

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.

    1974-01-01

    A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed, and sufficient data are presented to estimate weights for a large spectrum of flight vehicles including horizontal and vertical takeoff aircraft, boosters, and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems), embracing the techniques discussed has been written, and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration System) system.

  8. Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.

    PubMed

    Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M

    2017-07-01

    In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.

  9. Random vectors and spatial analysis by geostatistics for geotechnical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.S.

    1987-08-01

    Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are in vectorial terms rather than scalars, and their structural analysis requires those sample variable interpolations to construct and characterize structural models. A better local estimator will result in greater quality of input models; geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to spatial analysis of random vectors in geoscience as well as various geotechnical fields including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block cavings, petroleum engineering, and hydrologic and hydraulic modelings.

  10. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil, and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  11. Sizing and Lifecycle Cost Analysis of an Ares V Composite Interstage

    NASA Technical Reports Server (NTRS)

    Mann, Troy; Smeltzer, Stan; Grenoble, Ray; Mason, Brian; Rosario, Sev; Fairbairn, Bob

    2012-01-01

    The Interstage Element of the Ares V launch vehicle was sized using a commercially available structural sizing software tool. Two different concepts were considered, a metallic design and a composite design. Both concepts were sized using similar levels of analysis fidelity and included the influence of design details on each concept. Additionally, the impact of the different manufacturing techniques and failure mechanisms for composite and metallic construction was considered. Significant details were included in the analysis models of each concept, including penetrations for human access, joint connections, and secondary loading effects. The designs and results of the analysis were used to determine lifecycle cost estimates for the two Interstage designs. Lifecycle cost estimates were based on industry-provided cost data for similar launch vehicle components. The results indicated that significant mass and cost savings are attainable for the chosen composite concept as compared with a metallic option.

  12. Progress in Turbulence Detection via GNSS Occultation Data

    NASA Technical Reports Server (NTRS)

    Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.

    2012-01-01

    The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.

  13. 75 FR 3434 - Fisheries of the Northeastern United States; Northeast Skate Complex Fishery; Amendment 3

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-21

    ... analysis of dependency on the skate fishery indicates that almost 75 percent of the vessels included in the analysis have less than a 5-percent dependency on the skate fishery. The estimated impact on gross sales increases markedly in relation to dependency on the skate fishery among the 127 vessels estimated to be...

  14. Sampling protocol, estimation, and analysis procedures for the down woody materials indicator of the FIA program

    Treesearch

    Christopher W. Woodall; Vicente J. Monleon

    2008-01-01

    The USDA Forest Service's Forest Inventory and Analysis program conducts an inventory of forests of the United States including down woody materials (DWM). In this report we provide the rationale and context for a national inventory of DWM, describe the components sampled, discuss the sampling protocol used and corresponding estimation procedures, and provide...

  15. PFA toolbox: a MATLAB tool for Metabolic Flux Analysis.

    PubMed

    Morales, Yeimy; Bosque, Gabriel; Vehí, Josep; Picó, Jesús; Llaneras, Francisco

    2016-07-11

    Metabolic Flux Analysis (MFA) is a methodology that has been successfully applied to estimate metabolic fluxes in living cells. However, traditional frameworks based on this approach have some limitations, particularly when measurements are scarce and imprecise, which is very common in industrial environments. The PFA Toolbox can be used to address those scenarios. Here we present the PFA (Possibilistic Flux Analysis) Toolbox for MATLAB, which simplifies the use of Interval and Possibilistic Metabolic Flux Analysis. The main features of the PFA Toolbox are the following: (a) It provides reliable MFA estimations in scenarios where only a few fluxes can be measured or those available are imprecise. (b) It provides tools to easily plot the results as interval estimates or flux distributions. (c) It is composed of simple functions that MATLAB users can apply in flexible ways. (d) It includes a Graphical User Interface (GUI), which provides a visual representation of the measurements and their uncertainty. (e) It can use stoichiometric models in COBRA format. In addition, the PFA Toolbox includes a User's Guide with a thorough description of its functions and several examples. The PFA Toolbox for MATLAB is a freely available toolbox that is able to perform Interval and Possibilistic MFA estimations.
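
    The PFA Toolbox itself is MATLAB software; purely to illustrate the interval-estimation idea behind it, and not its actual API, the following Python sketch bounds an unmeasured flux by linear programming, given a steady-state mass balance and interval measurements on the remaining fluxes. The toy network and all numbers are hypothetical.

    ```python
    # Sketch of interval flux estimation (not the PFA Toolbox API): bound an
    # unmeasured flux by minimizing and maximizing it subject to the steady-state
    # mass balance S v = 0 and interval bounds on the measured fluxes.
    import numpy as np
    from scipy.optimize import linprog

    # Toy network (hypothetical): v1: A -> B, v2: B -> C, v3: B -> D.
    # Steady-state balance on the internal metabolite B: v1 - v2 - v3 = 0.
    S = np.array([[1.0, -1.0, -1.0]])

    # Interval "measurements": v1 in [9, 11], v2 in [3, 5]; v3 unmeasured, v3 >= 0.
    bounds = [(9.0, 11.0), (3.0, 5.0), (0.0, None)]

    interval = []
    for sign in (+1.0, -1.0):                 # +1 minimizes v3, -1 maximizes it
        c = np.array([0.0, 0.0, sign])
        res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
        interval.append(sign * res.fun)

    print(f"v3 lies in [{interval[0]:.1f}, {interval[1]:.1f}]")   # expected [4.0, 8.0]
    ```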

  16. Carbon footprint estimator, phase II : volume II - technical appendices.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  17. Carbon footprint estimator, phase II : volume I - GASCAP model.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  18. Nonparametric EROC analysis for observer performance evaluation on joint detection and estimation tasks

    NASA Astrophysics Data System (ADS)

    Wunderlich, Adam; Goossens, Bart

    2014-03-01

    The majority of the literature on task-based image quality assessment has focused on lesion detection tasks, using the receiver operating characteristic (ROC) curve, or related variants, to measure performance. However, since many clinical image evaluation tasks involve both detection and estimation (e.g., estimation of kidney stone composition, estimation of tumor size), there is a growing interest in performance evaluation for joint detection and estimation tasks. To evaluate observer performance on such tasks, Clarkson introduced the estimation ROC (EROC) curve, and the area under the EROC curve as a summary figure of merit. In the present work, we propose nonparametric estimators for practical EROC analysis from experimental data, including estimators for the area under the EROC curve and its variance. The estimators are illustrated with a practical example comparing MRI images reconstructed from different k-space sampling trajectories.

  19. Evidence-based mapping of design heterogeneity prior to meta-analysis: a systematic review and evidence synthesis.

    PubMed

    Althuis, Michelle D; Weed, Douglas L; Frankenfeld, Cara L

    2014-07-23

    Assessment of design heterogeneity conducted prior to meta-analysis is infrequently reported; it is often presented post hoc to explain statistical heterogeneity. However, design heterogeneity determines the mix of included studies and how they are analyzed in a meta-analysis, which in turn can importantly influence the results. The goal of this work is to introduce ways to improve the assessment and reporting of design heterogeneity prior to statistical summarization of epidemiologic studies. In this paper, we use an assessment of sugar-sweetened beverages (SSB) and type 2 diabetes (T2D) as an example to show how a technique called 'evidence mapping' can be used to organize studies and evaluate design heterogeneity prior to meta-analysis. Employing a systematic and reproducible approach, we evaluated the following elements across 11 selected cohort studies: variation in definitions of SSB, T2D, and co-variables; design features and population characteristics associated with specific definitions of SSB; and diversity in modeling strategies. Evidence mapping strategies effectively organized complex data and clearly depicted design heterogeneity. For example, across 11 studies of SSB and T2D, 7 measured diet only once (with 7 to 16 years of disease follow-up), 5 included primarily low SSB consumers, and 3 defined the study variable (SSB) as consumption of either sugar- or artificially-sweetened beverages. This exercise also identified diversity in analysis strategies, such as adjustment for 11 to 17 co-variables and a large degree of fluctuation in SSB-T2D risk estimates depending on variables selected for multivariable models (2 to 95% change in the risk estimate from the age-adjusted model). Meta-analysis seeks to understand heterogeneity in addition to computing a summary risk estimate. This strategy effectively documents design heterogeneity, thus improving the practice of meta-analysis by aiding in: 1) protocol and analysis planning, 2) transparent reporting of differences in study designs, and 3) interpretation of pooled estimates. We recommend expanding the practice of meta-analysis reporting to include a table that summarizes design heterogeneity. This would provide readers with more evidence to interpret the summary risk estimates.

  20. Evidence-based mapping of design heterogeneity prior to meta-analysis: a systematic review and evidence synthesis

    PubMed Central

    2014-01-01

    Background Assessment of design heterogeneity conducted prior to meta-analysis is infrequently reported; it is often presented post hoc to explain statistical heterogeneity. However, design heterogeneity determines the mix of included studies and how they are analyzed in a meta-analysis, which in turn can importantly influence the results. The goal of this work is to introduce ways to improve the assessment and reporting of design heterogeneity prior to statistical summarization of epidemiologic studies. Methods In this paper, we use an assessment of sugar-sweetened beverages (SSB) and type 2 diabetes (T2D) as an example to show how a technique called ‘evidence mapping’ can be used to organize studies and evaluate design heterogeneity prior to meta-analysis. Employing a systematic and reproducible approach, we evaluated the following elements across 11 selected cohort studies: variation in definitions of SSB, T2D, and co-variables; design features and population characteristics associated with specific definitions of SSB; and diversity in modeling strategies. Results Evidence mapping strategies effectively organized complex data and clearly depicted design heterogeneity. For example, across 11 studies of SSB and T2D, 7 measured diet only once (with 7 to 16 years of disease follow-up), 5 included primarily low SSB consumers, and 3 defined the study variable (SSB) as consumption of either sugar- or artificially-sweetened beverages. This exercise also identified diversity in analysis strategies, such as adjustment for 11 to 17 co-variables and a large degree of fluctuation in SSB-T2D risk estimates depending on variables selected for multivariable models (2 to 95% change in the risk estimate from the age-adjusted model). Conclusions Meta-analysis seeks to understand heterogeneity in addition to computing a summary risk estimate. This strategy effectively documents design heterogeneity, thus improving the practice of meta-analysis by aiding in: 1) protocol and analysis planning, 2) transparent reporting of differences in study designs, and 3) interpretation of pooled estimates. We recommend expanding the practice of meta-analysis reporting to include a table that summarizes design heterogeneity. This would provide readers with more evidence to interpret the summary risk estimates. PMID:25055879

  1. Optimizing Human Input in Social Network Analysis

    DTIC Science & Technology

    2018-01-23


  2. Saugus River and Tributaries Flood Damage Reduction Study: Lynn, Malden, Revere and Saugus, Massachusetts. Section 1. Feasibility Report.

    DTIC Science & Technology

    1989-12-01

    Table of contents excerpt: Table 5, Sensitivity Analysis - Point of Pines LPP; Table 6, Plan Comparison; Table 7, NED Plan Project Costs; Table 8, Estimated Operation...Costs; Table 13, Selected Plan/Estimated Annual Benefits; Table 14, Comparative Impacts - NED Regional Floodgate Plan; Table 15, Economic Analysis. Includes detailed descriptions, plans and profiles and design considerations of the selected plan; coastal analysis of the shorefront; detailed project

  3. Documentation of the analysis of the benefits and costs of aeronautical research and technology models, volume 1

    NASA Technical Reports Server (NTRS)

    Bobick, J. C.; Braun, R. L.; Denny, R. E.

    1979-01-01

    The analysis of the benefits and costs of aeronautical research and technology (ABC-ART) models is documented. These models were developed by NASA for use in analyzing the economic feasibility of applying advanced aeronautical technology to future civil aircraft. The methodology is composed of three major modules: fleet accounting module, airframe manufacturing module, and air carrier module. The fleet accounting module is used to estimate the number of new aircraft required as a function of time to meet demand. This estimation is based primarily upon the expected retirement age of existing aircraft and the expected change in revenue passenger miles demanded. Fuel consumption estimates are also generated by this module. The airframe manufacturing module is used to analyze the feasibility of manufacturing the new aircraft demanded. The module includes logic for production scheduling and estimating manufacturing costs. For a series of aircraft selling prices, a cash flow analysis is performed and a rate of return on investment is calculated. The air carrier module provides a tool for analyzing the financial feasibility of an airline purchasing and operating the new aircraft. This module includes a methodology for computing the air carrier direct and indirect operating costs, performing a cash flow analysis, and estimating the internal rate of return on investment for a set of aircraft purchase prices.
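
    Both the airframe manufacturing and air carrier modules rest on a cash flow analysis and a rate of return on investment. As a generic illustration of that calculation, and not the ABC-ART code itself, the sketch below finds the internal rate of return of a hypothetical purchase-and-operation cash flow stream by bisection on the net present value.

    ```python
    # Generic cash flow / internal rate of return sketch (hypothetical numbers,
    # not the ABC-ART implementation): find the discount rate at which the
    # net present value (NPV) of the cash flow stream is zero.
    def npv(rate, cash_flows):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
        # Bisection on NPV; assumes a single sign change between lo and hi.
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0.0:
                hi = mid
            else:
                lo = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    # Hypothetical aircraft purchase: initial outlay, then ten years of net revenue.
    flows = [-50.0] + [9.0] * 10          # millions of dollars per year
    print(f"NPV at 8%: {npv(0.08, flows):.1f} M$, IRR: {100 * irr(flows):.1f}%")
    ```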

  4. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  5. Current situation of oil refinery in Bulgaria

    NASA Astrophysics Data System (ADS)

    Vershkova, Elena; Petkova, Petinka; Grinkevich, Anastasia

    2016-09-01

    This article examines classification approaches used for oil refineries in international practice. Criteria for assessing a refinery, including estimation of its financial status, are investigated. The object of the analysis is the activity of “Lukoil Neftochim Bourgas” AD (LNCHB), a leading enterprise in Bulgaria. The analysis of LNCHB operations covers the energy intensity index, the operating cost index, and the return on investment index.

  6. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    PubMed

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop group-contribution(+) (GC(+)) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic), have been modeled and analyzed. The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes, giving useful insights into the quality and reliability of the design of sustainable processes.
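
    The uncertainty-analysis step described above (parameter covariance, standard errors in predicted properties, confidence intervals) follows standard least-squares statistics. The sketch below illustrates it for a linear group-contribution-style fit on synthetic data; the group counts and property values are invented and do not reproduce the GC+ models or the EPA/USEtox data sets.

    ```python
    # Illustrative least-squares parameter estimation with covariance-based
    # uncertainty analysis (synthetic data; not the actual GC+ models).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_chem, n_groups = 40, 3
    X = rng.integers(0, 5, size=(n_chem, n_groups)).astype(float)   # group counts
    true_contrib = np.array([1.2, -0.4, 0.7])                       # true contributions
    y = X @ true_contrib + rng.normal(0.0, 0.2, size=n_chem)        # "measured" property

    # Least-squares estimates, residual variance, and parameter covariance matrix.
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = n_chem - n_groups
    sigma2 = np.sum((y - X @ theta) ** 2) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    se = np.sqrt(np.diag(cov))

    # 95% confidence intervals from Student's t distribution.
    t_crit = stats.t.ppf(0.975, dof)
    for j, (est, s) in enumerate(zip(theta, se)):
        print(f"group {j}: {est:+.3f} +/- {t_crit * s:.3f} (95% CI half-width)")
    ```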

  7. Estimating two-way tables based on forest surveys

    Treesearch

    Charles T. Scott

    2000-01-01

    Forest survey analysts usually are interested in tables of values rather than single point estimates. A common error is to include only plots on which nonzero values of the attribute were observed when computing the variance of a mean. Similarly, analysts often exclude nonforest plots from the analysis. The development of the correct estimates of forest area, attribute...

  8. Instrumental Variable Methods for Continuous Outcomes That Accommodate Nonignorable Missing Baseline Values.

    PubMed

    Ertefaie, Ashkan; Flory, James H; Hennessy, Sean; Small, Dylan S

    2017-06-15

    Instrumental variable (IV) methods provide unbiased treatment effect estimation in the presence of unmeasured confounders under certain assumptions. To provide valid estimates of treatment effect, treatment effect confounders that are associated with the IV (IV-confounders) must be included in the analysis, and not including observations with missing values may lead to bias. Missing covariate data are particularly problematic when the probability that a value is missing is related to the value itself, which is known as nonignorable missingness. In such cases, imputation-based methods are biased. Using health-care provider preference as an IV method, we propose a 2-step procedure with which to estimate a valid treatment effect in the presence of baseline variables with nonignorable missing values. First, the provider preference IV value is estimated by performing a complete-case analysis using a random-effects model that includes IV-confounders. Second, the treatment effect is estimated using a 2-stage least squares IV approach that excludes IV-confounders with missing values. Simulation results are presented, and the method is applied to an analysis comparing the effects of sulfonylureas versus metformin on body mass index, where the variables baseline body mass index and glycosylated hemoglobin have missing values. Our result supports the association of sulfonylureas with weight gain. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
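
    The second stage of the procedure is a conventional two-stage least squares (2SLS) estimator. The sketch below shows only that generic 2SLS step on simulated data with a single binary instrument; it does not reproduce the authors' provider-preference analysis or their handling of nonignorably missing covariates.

    ```python
    # Generic two-stage least squares (2SLS) sketch with one binary instrument
    # (simulated data; not the authors' provider-preference analysis).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20000
    z = rng.binomial(1, 0.5, n).astype(float)        # instrument
    u = rng.normal(0, 1, n)                          # unmeasured confounder
    treat = (1.0 * z + 0.5 * u + rng.normal(0, 1, n) > 0).astype(float)
    y = 2.0 * treat + 1.5 * u + rng.normal(0, 1, n)  # true treatment effect = 2.0

    def with_intercept(x):
        return np.column_stack([np.ones_like(x), x])

    # Stage 1: regress treatment on the instrument and keep the fitted values.
    b1, *_ = np.linalg.lstsq(with_intercept(z), treat, rcond=None)
    treat_hat = with_intercept(z) @ b1

    # Stage 2: regress the outcome on the fitted treatment values.
    b2, *_ = np.linalg.lstsq(with_intercept(treat_hat), y, rcond=None)
    naive, *_ = np.linalg.lstsq(with_intercept(treat), y, rcond=None)
    print(f"naive OLS effect: {naive[1]:.2f}, 2SLS effect: {b2[1]:.2f} (truth 2.0)")
    ```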

  9. Fixed reproducible tangible wealth in the United States, 1925-94

    DOT National Transportation Integrated Search

    1999-08-01

    This volume presents estimates of fixed reproducible tangible wealth in the United States for 1925-94 that were prepared by the Bureau of Economic Analysis (BEA). It includes the investment series that were used to construct these estimates; for mo...

  10. Measuring and modeling carbon stock change estimates for US forests and uncertainties from apparent inter-annual variability

    Treesearch

    James E. Smith; Linda S. Heath

    2015-01-01

    Our approach is based on a collection of models that convert or augment the USDA Forest Inventory and Analysis program survey data to estimate all forest carbon component stocks, including live and standing dead tree aboveground and belowground biomass, forest floor (litter), down deadwood, and soil organic carbon, for each inventory plot. The data, which include...

  11. Estimating the Global Incidence of Aneurysmal Subarachnoid Hemorrhage: A Systematic Review for Central Nervous System Vascular Lesions and Meta-Analysis of Ruptured Aneurysms.

    PubMed

    Hughes, Joshua D; Bond, Kamila M; Mekary, Rania A; Dewan, Michael C; Rattani, Abbas; Baticulon, Ronnie; Kato, Yoko; Azevedo-Filho, Hildo; Morcos, Jacques J; Park, Kee B

    2018-04-09

    There is increasing acknowledgement that surgical care is important in global health initiatives. In particular, neurosurgical care is as limited as 1 per 10 million people in parts of the world. We performed a systematic literature review to examine the worldwide incidence of central nervous system vascular lesions and a meta-analysis of aneurysmal subarachnoid hemorrhage (aSAH) to define the disease burden and inform neurosurgical global health efforts. A systematic review and meta-analysis were conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to estimate the global epidemiology of central nervous system vascular lesions, including unruptured and ruptured aneurysms, arteriovenous malformations, cavernous malformations, dural arteriovenous fistulas, developmental venous anomalies, and vein of Galen malformations. Results were organized by World Health Organization regions. After literature review, because of a lack of data from particular World Health Organization regions, we determined we could only provide an estimate of aSAH. Using data from studies with aSAH and 12 high-quality stroke studies from regions lacking data, we meta-analyzed the yearly crude incidence of aSAH per 100,000 persons. Estimates were generated via random-effects models. From an initial yield of 1492 studies, 46 manuscripts on aSAH incidence were included. The final meta-analysis included 58 studies from 31 different countries. We estimated the global crude incidence for aSAH to be 6.67 per 100,000 persons with a wide variation across WHO regions from 0.71 to 12.38 per 100,000 persons. Worldwide, almost 500,000 individuals will suffer from aSAH each year, with almost two-thirds in low- and middle-income countries. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Estimation and Psychometric Analysis of Component Profile Scores via Multivariate Generalizability Theory

    ERIC Educational Resources Information Center

    Grochowalski, Joseph H.

    2015-01-01

    Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…

  13. Clinical outcomes after estimated versus calculated activity of radioiodine for the treatment of hyperthyroidism: systematic review and meta-analysis.

    PubMed

    de Rooij, A; Vandenbroucke, J P; Smit, J W A; Stokkel, M P M; Dekkers, O M

    2009-11-01

    Despite the long experience with radioiodine for hyperthyroidism, controversy remains regarding the optimal method to determine the activity that is required to achieve long-term euthyroidism. To compare the effect of estimated versus calculated activity of radioiodine in hyperthyroidism. Design Systematic review and meta-analysis. We searched the databases Medline, EMBASE, Web of Science, and Cochrane Library for randomized and nonrandomized studies, comparing the effect of activity estimation methods with dosimetry for hyperthyroidism. The main outcome measure was the frequency of treatment success, defined as persistent euthyroidism after radioiodine treatment at the end of follow-up in the dose estimated and calculated dosimetry group. Furthermore, we assessed the cure rates of hyperthyroidism. Three randomized and five nonrandomized studies, comparing the effect of estimated versus calculated activity of radioiodine on clinical outcomes for the treatment of hyperthyroidism, were included. The weighted mean relative frequency of successful treatment outcome (euthyroidism) was 1.03 (95% confidence interval (CI) 0.91-1.16) for estimated versus calculated activity; the weighted mean relative frequency of cure of hyperthyroidism (eu- or hypothyroidism) was 1.03 (95% CI 0.96-1.10). Subgroup analysis showed a relative frequency of euthyroidism of 1.03 (95% CI 0.84-1.26) for Graves' disease and of 1.05 (95% CI 0.91-1.19) for toxic multinodular goiter. The two main methods used to determine the activity in the treatment of hyperthyroidism with radioiodine, estimated and calculated, resulted in an equally successful treatment outcome. However, the heterogeneity of the included studies is a strong limitation that prevents a definitive conclusion from this meta-analysis.

  14. The French-Canadian data set of Demirjian for dental age estimation: a systematic review and meta-analysis.

    PubMed

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2013-07-01

    Estimation of age of an individual can be performed by evaluating the pattern of dental development. A dataset for age estimation based on the dental maturity of a French-Canadian population was published over 35 years ago and has become the most widely accepted dataset. The applicability of this dataset has been tested on different population groups. To estimate the observed differences between Chronological age (CA) and Dental age (DA) when the French Canadian dataset was used to estimate the age of different population groups. A systematic search of literature for papers utilizing the French Canadian dataset for age estimation was performed. All language articles from PubMed, Embase and Cochrane databases were electronically searched for terms 'Demirjian' and 'Dental age' published between January 1973 and December 2011. A hand search of articles was also conducted. A total of 274 studies were identified from which 34 studies were included for qualitative analysis and 12 studies were included for quantitative assessment and meta-analysis. When synthesizing the estimation results from different population groups, on average, the Demirjian dataset overestimated the age of females by 0.65 years (-0.10 years to +2.82 years) and males by 0.60 years (-0.23 years to +3.04 years). The French Canadian dataset overestimates the age of the subjects by more than six months and hence this dataset should be used only with considerable caution when estimating age of group of subjects of any global population. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images

    PubMed Central

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He

    2018-01-01

    Purpose We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and the ellipsoidal and Linskey’s formulas, significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey’s formulas, allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424

  16. Interrupted time-series analysis yielded an effect estimate concordant with the cluster-randomized controlled trial result.

    PubMed

    Fretheim, Atle; Soumerai, Stephen B; Zhang, Fang; Oxman, Andrew D; Ross-Degnan, Dennis

    2013-08-01

    We reanalyzed the data from a cluster-randomized controlled trial (C-RCT) of a quality improvement intervention for prescribing antihypertensive medication. Our objective was to estimate the effectiveness of the intervention using both interrupted time-series (ITS) and RCT methods, and to compare the findings. We first conducted an ITS analysis using data only from the intervention arm of the trial because our main objective was to compare the findings from an ITS analysis with the findings from the C-RCT. We used segmented regression methods to estimate changes in level or slope coincident with the intervention, controlling for baseline trend. We analyzed the C-RCT data using generalized estimating equations. Last, we estimated the intervention effect by including data from both study groups and by conducting a controlled ITS analysis of the difference between the slope and level changes in the intervention and control groups. The estimates of absolute change resulting from the intervention were ITS analysis, 11.5% (95% confidence interval [CI]: 9.5, 13.5); C-RCT, 9.0% (95% CI: 4.9, 13.1); and the controlled ITS analysis, 14.0% (95% CI: 8.6, 19.4). ITS analysis can provide an effect estimate that is concordant with the results of a cluster-randomized trial. A broader range of comparisons from other RCTs would help to determine whether these are generalizable results. Copyright © 2013 Elsevier Inc. All rights reserved.
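
    Segmented regression for a single interrupted time series, as used in the reanalysis above, estimates a baseline level and trend plus a level change and slope change at the intervention point. The sketch below fits that model by ordinary least squares to simulated monthly data; the numbers are invented, and serial correlation is ignored for brevity.

    ```python
    # Segmented regression sketch for a single interrupted time series (simulated):
    # y_t = b0 + b1*time + b2*post + b3*time_since_intervention + error,
    # where b2 is the level change and b3 the slope change at the interruption.
    import numpy as np

    rng = np.random.default_rng(2)
    n_months, t0 = 36, 18                          # intervention after month 18
    time = np.arange(n_months, dtype=float)
    post = (time >= t0).astype(float)
    time_since = np.where(time >= t0, time - t0, 0.0)
    y = 40.0 + 0.2 * time + 10.0 * post + 0.5 * time_since + rng.normal(0, 1.5, n_months)

    X = np.column_stack([np.ones(n_months), time, post, time_since])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    cov = (resid @ resid) / (n_months - X.shape[1]) * np.linalg.inv(X.T @ X)
    se = np.sqrt(np.diag(cov))

    for name, b, s in zip(["intercept", "baseline slope", "level change", "slope change"],
                          beta, se):
        print(f"{name:15s} {b:6.2f} (SE {s:.2f})")
    ```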

  17. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano; Kaspi, Yohai

    2017-07-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  18. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galanti, Eli; Kaspi, Yohai; Durante, Daniele

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  19. Cost and Savings Estimates the Air Force Used to Decide Against Relocating the Electromagnetic Compatibility Analysis Center from Annapolis, Maryland, to Duluth, Minnesota.

    DTIC Science & Technology

    1983-03-09

    that maximize electromagnetic compatibility potential. -- Providing direct assistance on a reimbursable basis to DOD and other Government agencies on...value, we estimated that reimbursable real estate expenses would average about $6,458 rather than $4,260 included in the Air Force estimate. When the...of estimated reimbursement was assumed to be necessary to encourage the relocation of more professional employees and increase their estimated

  20. Sex estimation from sternal measurements using multidetector computed tomography.

    PubMed

    Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur

    2014-12-01

    We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods for sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was performed on 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years [range: 30-60 years]) were included in the study. Manubrium length (ML), mesosternum length (MSL), sternebra 1 width (S1W), and sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values were significantly higher than those of females (P < 0.001), while SI was significantly lower in males (P < 0.001). In discriminant analysis, MSL had a high accuracy rate of 80.2% in females and 80.9% in males. MSL also had the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in three stepwise discriminant analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) had the highest accuracy rate in stepwise discriminant analysis, with 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum might provide important information for sex estimation.
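
    Discriminant analysis of the kind reported above can be illustrated with a linear discriminant classifier on simulated sternal measurements. The means, spreads, and resulting accuracy below are invented for illustration and do not reproduce the study's data.

    ```python
    # Illustrative linear discriminant analysis for sex estimation from sternal
    # measurements (simulated values; not the study's data).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    def simulate(n, means, sds):
        return np.column_stack([rng.normal(m, s, n) for m, s in zip(means, sds)])

    # Columns: manubrium length, mesosternum length, S1 width, S3 width (mm, invented).
    females = simulate(200, means=[48, 85, 28, 23], sds=[5, 8, 3, 3])
    males = simulate(240, means=[53, 97, 31, 26], sds=[5, 8, 3, 3])
    X = np.vstack([females, males])
    y = np.array([0] * len(females) + [1] * len(males))   # 0 = female, 1 = male

    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, X, y, cv=5).mean()    # cross-validated accuracy
    print(f"cross-validated classification accuracy: {accuracy:.1%}")
    ```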

  1. Earth resources data analysis program, phase 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The efforts and findings of the Earth Resources Data Analysis Program are summarized. Results of a detailed study of the needs of EOD with respect to an applications development system (ADS) for the analysis of remotely sensed data, including an evaluation of four existing systems with respect to these needs, are described. Recommendations as to possible courses for EOD to follow to obtain a viable ADS are presented. Algorithmic development comprised of several subtasks is discussed. These subtasks include the following: (1) two algorithms for multivariate density estimation; (2) a data smoothing algorithm; (3) a method for optimally estimating prior probabilities of unclassified data; and (4) further applications of the modified Cholesky decomposition in various calculations. Little effort was expended on task 3; however, two reports were reviewed.
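
    One of the algorithmic subtasks listed above is multivariate density estimation. A common nonparametric approach is Gaussian kernel density estimation, sketched below on synthetic two-dimensional data; the report's own algorithms are not specified in this summary, so the sketch is generic rather than a reconstruction of them.

    ```python
    # Gaussian kernel density estimation sketch for two-dimensional data
    # (synthetic samples; not the report's own algorithms).
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(4)
    # Synthetic two-band samples drawn from a mixture of two Gaussians.
    samples = np.vstack([
        rng.multivariate_normal([0.2, 0.5], [[0.010, 0.000], [0.000, 0.020]], 300),
        rng.multivariate_normal([0.6, 0.3], [[0.020, 0.005], [0.005, 0.010]], 300),
    ])

    kde = gaussian_kde(samples.T)            # gaussian_kde expects (n_dims, n_samples)
    query_points = np.array([[0.2, 0.5], [0.6, 0.3], [0.9, 0.9]]).T
    print("estimated densities:", np.round(kde(query_points), 2))
    ```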

  2. Endogenous pain modulation in chronic orofacial pain: a systematic review and meta-analysis.

    PubMed

    Moana-Filho, Estephan J; Herrero Babiloni, Alberto; Theis-Mahon, Nicole R

    2018-06-15

    Abnormal endogenous pain modulation was suggested as a potential mechanism for chronic pain, i.e., increased pain facilitation and/or impaired pain inhibition underlying symptom manifestation. Endogenous pain modulation function can be tested using psychophysical methods such as temporal summation of pain (TSP) and conditioned pain modulation (CPM), which assess pain facilitation and inhibition, respectively. Several studies have investigated endogenous pain modulation function in patients with nonparoxysmal orofacial pain (OFP) and reported mixed results. This study aimed to provide, through a qualitative and quantitative synthesis of the available literature, overall estimates for TSP/CPM responses in patients with OFP relative to controls. MEDLINE, Embase, and the Cochrane databases were searched, and references were screened independently by 2 raters. Twenty-six studies were included for qualitative review, and 22 studies were included for meta-analysis. Traditional meta-analysis and robust variance estimation were used to synthesize overall estimates for standardized mean difference. The overall standardized estimate for TSP was 0.30 (95% confidence interval: 0.11-0.49; P = 0.002), with moderate between-study heterogeneity (Q [df = 17] = 41.8, P = 0.001; I² = 70.2%). Conditioned pain modulation's estimated overall effect size was large but above the significance threshold (estimate = 1.36; 95% confidence interval: -0.09 to 2.81; P = 0.066), with very large heterogeneity (Q [df = 8] = 108.3, P < 0.001; I² = 98.0%). Sensitivity analyses did not affect the overall estimate for TSP; for CPM, the overall estimate became significant if specific random-effect models were used or if the most influential study was removed. Publication bias was not present for TSP studies, whereas it substantially influenced CPM's overall estimate. These results suggest increased pain facilitation and a trend toward pain inhibition impairment in patients with nonparoxysmal OFP.
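
    The pooled standardized mean differences above come from a conventional random-effects synthesis. The sketch below illustrates the DerSimonian-Laird version of that calculation on invented per-study effect sizes and variances (not the review's data), returning the overall estimate, its 95% confidence interval, and the Q, I², and tau² heterogeneity statistics.

    ```python
    # DerSimonian-Laird random-effects meta-analysis sketch (invented study data).
    import numpy as np

    effects = np.array([0.25, 0.40, 0.10, 0.55, 0.30])    # per-study SMDs (hypothetical)
    variances = np.array([0.02, 0.05, 0.03, 0.08, 0.04])  # per-study variances

    w_fixed = 1.0 / variances
    mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

    # Heterogeneity: Cochran's Q, I^2, and the DL between-study variance tau^2.
    q = np.sum(w_fixed * (effects - mu_fixed) ** 2)
    dof = len(effects) - 1
    i2 = max(0.0, (q - dof) / q) * 100.0
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - dof) / c)

    # Random-effects pooled estimate and 95% confidence interval.
    w_re = 1.0 / (variances + tau2)
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled SMD = {mu_re:.2f} "
          f"(95% CI {mu_re - 1.96 * se_re:.2f} to {mu_re + 1.96 * se_re:.2f}); "
          f"Q = {q:.1f}, I^2 = {i2:.0f}%, tau^2 = {tau2:.3f}")
    ```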

  3. Poster — Thur Eve — 44: Linearization of Compartmental Models for More Robust Estimates of Regional Hemodynamic, Metabolic and Functional Parameters using DCE-CT/PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blais, AR; Dekaban, M; Lee, T-Y

    2014-08-15

    Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and they do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
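
    The linearized models described above are solved with a non-negative least squares algorithm. The sketch below shows only that solver step on a synthetic linear system; the actual linearized DCE-CT/PET compartmental equations are not reproduced here.

    ```python
    # Non-negative least squares sketch (synthetic system; not the actual
    # linearized DCE-CT/PET compartmental model).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    n_times, n_params = 60, 3
    A = np.abs(rng.normal(size=(n_times, n_params)))      # design matrix (e.g. integrated inputs)
    true_params = np.array([0.8, 0.0, 0.3])               # non-negative kinetic-style parameters
    y = A @ true_params + rng.normal(0.0, 0.05, n_times)  # noisy "tissue curve"

    estimate, residual_norm = nnls(A, y)                  # solution constrained to be >= 0
    print("estimated parameters:", np.round(estimate, 3),
          "residual norm:", round(residual_norm, 3))
    ```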

  4. Inverse Analysis of Irradiated Nuclear Material Gamma Spectra via Nonlinear Optimization

    NASA Astrophysics Data System (ADS)

    Dean, Garrett James

    Nuclear forensics is the collection of technical methods used to identify the provenance of nuclear material interdicted outside of regulatory control. Techniques employed in nuclear forensics include optical microscopy, gas chromatography, mass spectrometry, and alpha, beta, and gamma spectrometry. This dissertation focuses on the application of inverse analysis to gamma spectroscopy to estimate the history of pulse irradiated nuclear material. Previous work in this area has (1) utilized destructive analysis techniques to supplement the nondestructive gamma measurements, and (2) been applied to samples composed of spent nuclear fuel with long irradiation and cooling times. Previous analyses have employed local nonlinear solvers, simple empirical models of gamma spectral features, and simple detector models of gamma spectral features. The algorithm described in this dissertation uses a forward model of the irradiation and measurement process within a global nonlinear optimizer to estimate the unknown irradiation history of pulse irradiated nuclear material. The forward model includes a detector response function for photopeaks only. The algorithm uses a novel hybrid global and local search algorithm to quickly estimate the irradiation parameters, including neutron fluence, cooling time and original composition. Sequential, time correlated series of measurements are used to reduce the uncertainty in the estimated irradiation parameters. This algorithm allows for in situ measurements of interdicted irradiated material. The increase in analysis speed comes with a decrease in information that can be determined, but the sample fluence, cooling time, and composition can be determined within minutes of a measurement. Furthermore, pulse irradiated nuclear material has a characteristic feature that irradiation time and flux cannot be independently estimated. The algorithm has been tested against pulse irradiated samples of pure special nuclear material with cooling times of four minutes to seven hours. The algorithm described is capable of determining the cooling time and fluence the sample was exposed to within 10% as well as roughly estimating the relative concentrations of nuclides present in the original composition.

  5. Scope Complexity Options Risks Excursions (SCORE) Factor Mathematical Description.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Samberson, Jonell Nicole; Shettigar, Subhasini

    The purpose of the Scope, Complexity, Options, Risks, Excursions (SCORE) model is to estimate the relative complexity of design variants of future warhead options, resulting in scores. SCORE factors extend this capability by providing estimates of complexity relative to a base system (i.e., all design options are normalized to one weapon system). First, a clearly defined set of scope elements for a warhead option is established. The complexity of each scope element is estimated by Subject Matter Experts (SMEs), including a level of uncertainty, relative to a specific reference system. When determining factors, complexity estimates for a scope element can be directly tied to the base system or chained together via comparable scope elements in a string of reference systems that ends with the base system. The SCORE analysis process is a growing multi-organizational Nuclear Security Enterprise (NSE) effort, under the management of the NA-12 led Enterprise Modeling and Analysis Consortium (EMAC). Historically, it has provided the data elicitation, integration, and computation needed to support the out-year Life Extension Program (LEP) cost estimates included in the Stockpile Stewardship Management Plan (SSMP).

  6. Description of data on the Nimbus 7 LIMS map archive tape: Temperature and geopotential height

    NASA Technical Reports Server (NTRS)

    Haggard, K. V.; Remsberg, E. E.; Grose, W. L.; Russell, J. M., III; Marshall, B. T.; Lingenfelser, G.

    1986-01-01

    The process by which the analysis of the Limb Infrared Monitor of the Stratosphere (LIMS) experiment data was used to produce estimates of synoptic maps of temperature and geopotential height is described. In addition to a detailed description of the analysis procedure, several interesting features in the data are discussed, and these features are used to demonstrate how the analysis procedure produced the final maps and how one can estimate the uncertainties in the maps. In addition, features in the analysis are noted that would influence how one might use, or interpret, the results. These include subjects such as smoothing and the interpretation of wave components. While some suggestions are made for an improved analysis of the data, it is shown that, in general, the maps are an excellent estimation of the synoptic fields.

  7. Updated Estimates of Glacier Mass Change for Western North America

    NASA Astrophysics Data System (ADS)

    Menounos, B.; Gardner, A. S.; Howat, I.; Berthier, E.; Dehecq, A.; Noh, M. J.; Pelto, B. M.

    2017-12-01

    Alpine glaciers are critical components in Western North America's hydrologic cycle. We use varied remotely sensed datasets to provide updated mass change estimates for Region 2 of the Randolph Glacier Inventory (RGI-02 - all North American glaciers outside of Alaska). Our datasets include: i) aerial laser altimetry surveys completed over many thousands of square kilometers; and ii) multiple terabytes of high resolution optical stereo imagery (WorldView-1 to -3 and Pleiades). Our data from the period 2014-2017 include the majority of glaciers in RGI-02, specifically those ice masses in the Rocky Mountains (US and Canada), Interior Ranges in British Columbia, and the Cascade Mountains (Washington). We co-registered and bias corrected the recent surface models to the Shuttle Radar Topography Mission (SRTM) data acquired in February, 2000. In British Columbia, our estimates of mass change are within the uncertainty estimates obtained for the period 1985-2000, but estimates from some regions indicate accelerated mass loss. Work is also underway to update glacier mass change estimates for glaciers in Washington and Montana. Finally, we use re-analysis data (ERA-Interim and ERA5) to evaluate the meteorological drivers that explain the temporal and spatial variability of mass change evident in our analysis.

  8. Guidelines for the analysis of free energy calculations

    PubMed Central

    Klimovich, Pavel V.; Shirts, Michael R.; Mobley, David L.

    2015-01-01

    Free energy calculations based on molecular dynamics (MD) simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub at https://github.com/choderalab/pymbar-examples, that implements the analysis practices reviewed here for several reference simulation packages, which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope these tools and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations. PMID:25808134
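
    Of the estimators mentioned above, thermodynamic integration is the simplest to illustrate: the free energy difference is the integral over the coupling parameter of the ensemble average of dU/dlambda. The sketch below approximates that integral with trapezoidal weights using invented per-window averages; it is a generic illustration, not alchemical-analysis.py itself.

    ```python
    # Thermodynamic integration sketch: Delta G is approximated by integrating
    # the per-window averages of dU/dlambda with trapezoidal weights.
    # The per-window values and their standard errors are invented.
    import numpy as np

    lambdas = np.linspace(0.0, 1.0, 11)                    # coupling-parameter windows
    dudl_mean = np.array([12.1, 10.4, 8.8, 7.1, 5.6, 4.2,  # <dU/dlambda> per window
                          3.0, 2.1, 1.3, 0.7, 0.2])        # (kcal/mol, hypothetical)
    dudl_sem = np.full_like(dudl_mean, 0.15)               # standard error per window

    weights = np.gradient(lambdas)                         # spacing-based weights
    weights[0] *= 0.5
    weights[-1] *= 0.5                                     # trapezoidal endpoint weights
    delta_g = np.sum(weights * dudl_mean)
    # Simple error propagation assuming independent windows.
    delta_g_err = np.sqrt(np.sum((weights * dudl_sem) ** 2))

    print(f"Delta G ~ {delta_g:.2f} +/- {delta_g_err:.2f} kcal/mol")
    ```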

  9. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
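
    To make the Gibbs energy adjustment concrete, the following minimal sketch applies the standard relation ΔG' = ΔG°' + RT ln Q with purely hypothetical concentrations and a hypothetical standard transformed Gibbs energy; it is not taken from the review:

```python
# Hedged sketch: adjust a standard transformed Gibbs energy to physiological
# concentrations. All numerical values are illustrative, not from the paper.
import math

R = 8.314e-3           # kJ mol^-1 K^-1
T = 310.15             # K (~37 degC)
dG0_prime = -14.2      # hypothetical standard transformed Gibbs energy (kJ/mol)

# Reaction quotient Q = product concentrations / substrate concentrations (mol/L)
substrates = {"A": 5.0e-3, "B": 1.0e-2}
products   = {"C": 1.2e-4}
Q = math.prod(products.values()) / math.prod(substrates.values())

dG_prime = dG0_prime + R * T * math.log(Q)
print(f"dG' = {dG_prime:.2f} kJ/mol (negative favours the forward direction)")
```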

  10. Scientific analysis is essential to assess biofuel policy effects: in response to the paper by Kim and Dale on "Indirect land use change for biofuels: Testing predictions and improving analytical methodologies"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kline, Keith L; Oladosu, Gbadebo A; Dale, Virginia H

    2011-01-01

    Vigorous debate on the effects of biofuels derives largely from the changes in land use estimated using economic models designed mainly for the analysis of agricultural trade and markets. The models referenced for land-use change (LUC) analysis in the U.S. Environmental Protection Agency Final Rule on the Renewable Fuel Standard include GTAP, FAPRI-CARD, and FASOM. To address bioenergy impacts, these models were expanded and modified to facilitate simulations of hypothesized LUC. However, even when models use similar basic assumptions and data, the range of LUC results can vary by ten-fold or more. While the market dynamics simulated in these models include processes that are important in estimating effects of biofuel policies, the models have not been validated for estimating land-use changes and employ crucial assumptions and simplifications that contradict empirical evidence.

  11. An Analysis of the Marine Corps Selection Process: Does Increased Competition Lead to Increased Quality

    DTIC Science & Technology

    2018-03-01


  12. Production cost analysis of Euphorbia lathyris. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendel, D.A.; Schooley, F.A.; Dickenson, R.L.

    1979-08-01

    The purpose of SRI's study was to estimate the costs of producing Euphorbia in commercial quantities in five regions of the United States, which include both irrigated and nonirrigated areas. The study assumed that a uniform crop yield could be achieved in the five regions by varying the quantities of production inputs. Therefore, the production cost estimates, which are based on fourth quarter 1978 dollars, include both fixed and variable costs for each region. Doane's Machinery Custom Rates for 1978 were used to estimate all variable costs except materials, which were estimated separately. Custom rates are determined by members of the Doane Countywide Farm Panel, a group of farmers specifically selected to represent the various sizes and types of commercial farms found throughout the country. The rates reported are the most recent rates the panel members had either paid, charged, or known for certain a second party had paid or charged. Custom rates for any particular operation include equipment operating costs (fuel, lubrication, and repairs), equipment ownership costs (depreciation, taxes, interest), as well as a labor charge for the operator. Custom rates are regionally specific and thereby assist the accuracy of this analysis. Fixed costs include land, management, and transportation of the plant material to a conversion facility. When appropriate, fixed costs were regionally specific. Changes in total production costs over future time periods were not addressed. The total estimated production costs of Euphorbia in each region were compared with production costs for corn and alfalfa in the same regions. Finally, the effects on yield and costs of changes in the production inputs were estimated.

  13. Estimation of whole body fat from appendicular soft tissue from peripheral quantitative computed tomography in adolescent girls

    PubMed Central

    Lee, Vinson R.; Blew, Rob M.; Farr, Josh N.; Tomas, Rita; Lohman, Timothy G.; Going, Scott B.

    2013-01-01

    Objective: Assess the utility of peripheral quantitative computed tomography (pQCT) for estimating whole body fat in adolescent girls. Research Methods and Procedures: Our sample included 458 girls (aged 10.7 ± 1.1 y, mean BMI = 18.5 ± 3.3 kg/m2) who had DXA scans for whole body percent fat (DXA %Fat). Soft tissue analysis of pQCT scans provided thigh and calf subcutaneous percent fat and thigh and calf muscle density (muscle fat content surrogates). Anthropometric variables included weight, height and BMI. Indices of maturity included age and maturity offset. The total sample was split into validation (VS; n = 304) and cross-validation (CS; n = 154) samples. Linear regression was used to develop prediction equations for estimating DXA %Fat from anthropometric variables and pQCT-derived soft tissue components in VS, and the best prediction equation was applied to CS. Results: Thigh and calf SFA %Fat were positively correlated with DXA %Fat (r = 0.84 to 0.85; p < 0.001) and thigh and calf muscle densities were inversely related to DXA %Fat (r = −0.30 to −0.44; p < 0.001). The best equation for estimating %Fat included thigh and calf SFA %Fat and thigh and calf muscle density (adj. R2 = 0.90; SEE = 2.7%). Bland-Altman analysis in CS showed accurate estimates of percent fat (adj. R2 = 0.89; SEE = 2.7%) with no bias. Discussion: Peripheral QCT-derived indices of adiposity can be used to accurately estimate whole body percent fat in adolescent girls. PMID:25147482
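
    A minimal sketch of the validation/cross-validation regression workflow described above, using a synthetic stand-in for the pQCT and DXA variables (the real data are not reproduced here), might look like this:

```python
# Hedged sketch: fit a prediction equation on a validation sample and check it
# on a cross-validation sample. Predictors and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 458
X = rng.normal(size=(n, 4))    # stand-ins for thigh/calf SFA %Fat and muscle densities
y = 25 + X @ np.array([6.0, 5.5, -2.0, -1.5]) + rng.normal(scale=2.7, size=n)

X_vs, X_cs, y_vs, y_cs = train_test_split(X, y, test_size=154 / 458, random_state=1)
model = LinearRegression().fit(X_vs, y_vs)          # develop equation in VS
pred = model.predict(X_cs)                          # apply to CS

bias = np.mean(pred - y_cs)                         # Bland-Altman style mean difference
print(f"CS R2 = {r2_score(y_cs, pred):.2f}, mean bias = {bias:.2f} %Fat")
```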

  14. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
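
    As a generic illustration of the series-system idea (not SRFYDO's actual model, which also handles age and usage covariates), the sketch below propagates Beta-posterior uncertainty from hypothetical component pass/fail tests to a system-level reliability estimate:

```python
# Hedged sketch: Monte Carlo series-system reliability from component test data.
import numpy as np

rng = np.random.default_rng(42)
component_tests = [(48, 50), (29, 30), (95, 100)]   # (successes, trials), illustrative

draws = []
for s, n in component_tests:
    # Beta(1, 1) prior updated with binomial test results
    draws.append(rng.beta(1 + s, 1 + n - s, size=10_000))

system = np.prod(np.column_stack(draws), axis=1)    # series system: product of components
lo, med, hi = np.percentile(system, [2.5, 50, 97.5])
print(f"system reliability ~ {med:.3f} (95% interval {lo:.3f}-{hi:.3f})")
```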

  15. A Study on Active Disaster Management System for Standardized Emergency Action Plan using BIM and Flood Damage Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Jeong, C.; Om, J.; Hwang, J.; Joo, K.; Heo, J.

    2013-12-01

    In recent years, the frequency of extreme floods has increased due to climate change and global warming. Severe flood damage is mainly caused by the collapse of flood-control structures such as dams and dikes. In order to reduce these disasters, disaster management systems (DMS) based on flood forecasting, inundation mapping, and Emergency Action Plans (EAPs) have been studied. The estimation of inundation damage and a practical EAP are especially crucial to the DMS. However, it is difficult to predict inundation and take proper action through a DMS in a real emergency because the techniques for inundation damage estimation are not integrated and the EAP is supplied in the form of a paper document in Korea. In this study, an integrated simulation system including rainfall frequency analysis, rainfall-runoff modeling, inundation prediction, surface runoff analysis, and inland flood analysis was developed. Using this system coupled with standard GIS data, inundation damage can be estimated comprehensively and automatically. A standard EAP based on BIM (Building Information Modeling) was also established in this system. It is therefore expected that, through this study, inundation damage over the entire area, including buildings, can be predicted and managed.

  16. Managing the stands of the future based on the lessons of the past: estimating Western timber species product recovery by using historical data.

    Treesearch

    James A. Stevens; R. James Barbour

    2000-01-01

    Researchers at the Pacific Northwest Research Station have completed over 100 forest product recovery studies over the past 40 years. Tree, log, and product data from these studies have been entered into a database, which will allow further analysis within, between, and across studies. Opportunities for analysis include stand-to-log-to-final product estimates of volume...

  17. Broadband Studies of Seismic Sources at Regional and Teleseismic Distances Using Advanced Time Series Analysis Methods. Volume 1.

    DTIC Science & Technology

    1991-03-21

    discussion of spectral factorability and motivations for broadband analysis, the report is subdivided into four main sections. In Section 1.0, we...estimates. The motivation for developing our multi-channel deconvolution method was to gain information about seismic sources, most notably, nuclear...with complex constraints for estimating the rupture history. Such methods (applied mostly to data sets that also include strong motion data), were

  18. Intelligence/Electronic Warfare (IEW) direction-finding and fix estimation analysis report. Volume 2: Trailblazer

    NASA Technical Reports Server (NTRS)

    Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce

    1985-01-01

    An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.

  19. Systemic Console: Advanced analysis of exoplanetary data

    NASA Astrophysics Data System (ADS)

    Meschiari, Stefano; Wolf, Aaron S.; Rivera, Eugenio; Laughlin, Gregory; Vogt, Steve; Butler, Paul

    2012-10-01

    Systemic Console is a tool for advanced analysis of exoplanetary data. It comprises a graphical tool for fitting radial velocity and transit datasets and a library of routines for non-interactive calculations. Among its features are interactive plotting of RV curves and transits, combined fitting of RV and transit timing (primary and secondary), interactive periodograms and FAP estimation, and bootstrap and MCMC error estimation. The console package includes public radial velocity and transit data.
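
    As a loose illustration of the bootstrap error estimation feature (a generic sketch, not Systemic Console code), one can refit a simple circular-orbit radial-velocity model to resampled residuals and read off the parameter scatter:

```python
# Hedged sketch: residual-bootstrap uncertainties for a sinusoidal RV fit.
# Epochs, amplitudes and noise level below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 200, 60))                       # observation epochs (days)
rv = 35.0 * np.sin(2 * np.pi * t / 12.5) + rng.normal(scale=3.0, size=t.size)

def model(t, K, P, phi):
    return K * np.sin(2 * np.pi * t / P + phi)

best, _ = curve_fit(model, t, rv, p0=[30.0, 12.0, 0.0])
resid = rv - model(t, *best)

boot = []
for _ in range(500):
    rv_b = model(t, *best) + rng.choice(resid, size=resid.size, replace=True)
    try:
        boot.append(curve_fit(model, t, rv_b, p0=best)[0])
    except RuntimeError:
        continue                                            # skip non-converged refits

print("bootstrap std of (K, P, phi):", np.array(boot).std(axis=0))
```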

  20. Statistical theory and methodology for remote sensing data analysis with special emphasis on LACIE

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1975-01-01

    Crop proportion estimators for determining crop acreage through the use of remote sensing were evaluated. Several studies of these estimators were conducted, including an empirical comparison of the different estimators (using actual data) and an empirical study of the sensitivity (robustness) of the class of mixture estimators. The effect of missing data upon crop classification procedures is discussed in detail including a simulation of the missing data effect. The final problem addressed is that of taking yield data (bushels per acre) gathered at several yield stations and extrapolating these values over some specified large region. Computer programs developed in support of some of these activities are described.

  1. Planning for Downtown Circulation Systems. Volume 2. Analysis Techniques.

    DOT National Transportation Integrated Search

    1983-10-01

    This volume contains the analysis and refinement stages of downtown circulator planning. Included are sections on methods for estimating patronage, costs, revenues, and impacts, and a section on methods for performing micro-level analyses.

  2. Bank Regulation: Analysis of the Failure of Superior Bank, FSB, Hinsdale, Illinois

    DTIC Science & Technology

    2002-02-07

    statement of financial position based on the fair value. The best evidence of fair value is a quoted market price in an active market, but if there is no...market price, the value must be estimated. In estimating the fair value of retained interests, valuation techniques include estimating the present...about interest rates, default, prepayment, and volatility. In 1999, FASB explained that when estimating the fair value for FAS No. 140: Accounting for

  3. Estimation of plant disease severity visually, by digital photography and image analysis, and by hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    Reliable, precise and accurate estimates of disease severity are important for predicting yield loss, monitoring and forecasting epidemics, for assessing crop germplasm for disease resistance, and for understanding fundamental biological processes including co-evolution. In some situations poor qual...

  4. Program review presentation to Level 1, Interagency Coordination Committee

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Progress in the development of crop inventory technology is reported. Specific topics include the results of a thematic mapper analysis, variable selection studies/early season estimator improvements, the agricultural information system simulator, large unit proportion estimation, and development of common features for multi-satellite information extraction.

  5. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate population criterion-related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, the three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but they had a low mean for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility. Key Points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but they have a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field test alternative to estimate hamstring extensibility, but not to estimate lumbar extensibility. PMID:24570599
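
    The bare-bones Hunter-Schmidt steps can be sketched as a sample-size-weighted mean correlation, a correction for attenuation, and a residual-variance check; the study-level numbers below are hypothetical, not values from this meta-analysis:

```python
# Hedged sketch of bare-bones Hunter-Schmidt pooling (illustrative inputs only).
import numpy as np

r   = np.array([0.55, 0.62, 0.48, 0.70])    # observed validity coefficients per study
n   = np.array([40, 120, 65, 85])           # study sample sizes
rxx, ryy = 0.90, 0.95                       # assumed reliabilities (test, criterion)

r_bar = np.sum(n * r) / np.sum(n)           # sample-size-weighted mean correlation
r_p   = r_bar / np.sqrt(rxx * ryy)          # corrected (disattenuated) correlation

var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)      # observed variance of r
var_err = (1 - r_bar ** 2) ** 2 / (n.mean() - 1)        # approximate sampling-error variance
var_res = max(var_obs - var_err, 0.0)                   # residual ("true") variance

print(f"r_bar = {r_bar:.2f}, corrected rp = {r_p:.2f}, residual SD = {np.sqrt(var_res):.3f}")
```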

  6. Including information about comorbidity in estimates of disease burden: Results from the WHO World Mental Health Surveys

    PubMed Central

    Alonso, Jordi; Vilagut, Gemma; Chatterji, Somnath; Heeringa, Steven; Schoenbaum, Michael; Üstün, T. Bedirhan; Rojas-Farreras, Sonia; Angermeyer, Matthias; Bromet, Evelyn; Bruffaerts, Ronny; de Girolamo, Giovanni; Gureje, Oye; Haro, Josep Maria; Karam, Aimee N.; Kovess, Viviane; Levinson, Daphna; Liu, Zhaorui; Mora, Maria Elena Medina; Ormel, J.; Posada-Villa, Jose; Uda, Hidenori; Kessler, Ronald C.

    2010-01-01

    Background The methodology commonly used to estimate disease burden, featuring ratings of severity of individual conditions, has been criticized for ignoring comorbidity. A methodology that addresses this problem is proposed and illustrated here with data from the WHO World Mental Health Surveys. Although the analysis is based on self-reports about one’s own conditions in a community survey, the logic applies equally well to analysis of hypothetical vignettes describing comorbid condition profiles. Methods Face-to-face interviews in 13 countries (six developing, nine developed; n = 31,067; response rate = 69.6%) assessed 10 classes of chronic physical and 9 of mental conditions. A visual analog scale (VAS) was used to assess overall perceived health. Multiple regression analysis with interactions for comorbidity was used to estimate associations of conditions with VAS. Simulation was used to estimate condition-specific effects. Results The best-fitting model included condition main effects and interactions of types by numbers of conditions. Neurological conditions, insomnia, and major depression were rated most severe. Adjustment for comorbidity reduced condition-specific estimates with substantial between-condition variation (.24–.70 ratios of condition-specific estimates with and without adjustment for comorbidity). The societal-level burden rankings were quite different from the individual-level rankings, with the highest societal-level rankings associated with conditions having high prevalence rather than high individual-level severity. Conclusions Plausible estimates of disorder-specific effects on VAS can be obtained using methods that adjust for comorbidity. These adjustments substantially influence condition-specific ratings. PMID:20553636

  7. What Are the Real Procedural Costs of Bariatric Surgery? A Systematic Literature Review of Published Cost Analyses.

    PubMed

    Doble, Brett; Wordsworth, Sarah; Rogers, Chris A; Welbourn, Richard; Byrne, James; Blazeby, Jane M

    2017-08-01

    This review aims to evaluate the current literature on the procedural costs of bariatric surgery for the treatment of severe obesity. Using a published framework for the conduct of micro-costing studies for surgical interventions, existing cost estimates from the literature are assessed for their accuracy, reliability and comprehensiveness based on their consideration of seven 'important' cost components. MEDLINE, PubMed, key journals and reference lists of included studies were searched up to January 2017. Eligible studies had to report per-case, total procedural costs for any type of bariatric surgery broken down into two or more individual cost components. A total of 998 citations were screened, of which 13 studies were included for analysis. Included studies were mainly conducted from a US hospital perspective, assessed either gastric bypass or adjustable gastric banding procedures and considered a range of different cost components. The mean total procedural costs for all included studies was US$14,389 (range, US$7423 to US$33,541). No study considered all of the recommended 'important' cost components and estimation methods were poorly reported. The accuracy, reliability and comprehensiveness of the existing cost estimates are, therefore, questionable. There is a need for a comparative cost analysis of the different approaches to bariatric surgery, with the most appropriate costing approach identified to be micro-costing methods. Such an analysis will not only be useful in estimating the relative cost-effectiveness of different surgeries but will also ensure appropriate reimbursement and budgeting by healthcare payers to ensure barriers to access this effective treatment by severely obese patients are minimised.

  8. Sex Estimation From Sternal Measurements Using Multidetector Computed Tomography

    PubMed Central

    Ekizoglu, Oguzhan; Hocaoglu, Elif; Inci, Ercan; Bilgili, Mustafa Gokhan; Solmaz, Dilek; Erdil, Irem; Can, Ismail Ozgur

    2014-01-01

    We aimed to show the utility and reliability of sternal morphometric analysis for sex estimation. Sex estimation is a very important step in forensic identification, and skeletal surveys are the main methods for sex estimation studies. Morphometric analysis of the sternum may provide highly accurate data for sex discrimination. In this study, morphometric analysis of the sternum was evaluated in 1 mm chest computed tomography scans for sex estimation. Four hundred forty-three subjects (202 female, 241 male; mean age: 44 ± 8.1 years [range: 30–60 years]) were included in the study. Manubrium length (ML), mesosternum length (MSL), Sternebra 1 width (S1W), and Sternebra 3 width (S3W) were measured, and the sternal index (SI) was calculated. Differences between sexes were evaluated by Student's t-test. Predictive factors of sex were determined by discriminant analysis and receiver operating characteristic (ROC) analysis. Male sternal measurement values are significantly higher than those of females (P < 0.001), while SI is significantly lower in males (P < 0.001). In discriminant analysis, MSL has a high accuracy rate, with 80.2% in females and 80.9% in males. MSL also has the best sensitivity (75.9%) and specificity (87.6%) values. Accuracy rates were above 80% in three stepwise discriminant analyses for both sexes. Stepwise 1 (ML, MSL, S1W, S3W) has the highest accuracy rate in stepwise discriminant analysis, with 86.1% in females and 83.8% in males. Our study showed that morphometric computed tomography analysis of the sternum might provide important information for sex estimation. PMID:25501090
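
    The discriminant/ROC workflow can be illustrated with a small sketch on synthetic sternal measurements (the study's CT data are not available here, so means and spreads are invented):

```python
# Hedged sketch: linear discriminant analysis plus ROC AUC for sex estimation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_f, n_m = 202, 241
# columns: ML, MSL, S1W, S3W (mm); group means are illustrative only
females = rng.normal([48, 88, 28, 25], 5, size=(n_f, 4))
males   = rng.normal([53, 100, 31, 28], 5, size=(n_m, 4))
X = np.vstack([females, males])
y = np.r_[np.zeros(n_f), np.ones(n_m)]

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"resubstitution accuracy = {lda.score(X, y):.2f}")
print(f"ROC AUC = {roc_auc_score(y, lda.decision_function(X)):.2f}")
```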

  9. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808

  10. An analysis of simulated and observed storm characteristics

    NASA Astrophysics Data System (ADS)

    Benestad, R. E.

    2010-09-01

    A calculus-based cyclone identification (CCI) method has been applied to the most recent re-analysis (ERAINT) from the European Centre for Medium-range Weather Forecasts and to results from regional climate model (RCM) simulations. The storm frequency for events with central pressure below a threshold value of 960-990 hPa was examined, and the gradient winds from the simulated storm systems were compared with corresponding estimates from the re-analysis. The analysis also yielded estimates of the spatial extent of the storm systems, which was also included in the regional climate model cyclone evaluation. A comparison is presented between a number of RCMs and the ERAINT re-analysis in terms of their description of the gradient winds, number of cyclones, and spatial extent. Furthermore, a comparison between geostrophic winds estimated through triangles of interpolated or station measurements of SLP is presented. Wind still represents one of the more challenging variables to model realistically.
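
    The geostrophic-wind-from-triangles idea mentioned at the end can be sketched by fitting a plane to sea-level pressure at three stations and applying geostrophic balance; station positions and pressures below are hypothetical:

```python
# Hedged sketch: geostrophic wind from an SLP triangle (illustrative values).
import numpy as np

rho = 1.25        # air density (kg m^-3)
f   = 1.2e-4      # Coriolis parameter (~55 deg N), s^-1

xy = np.array([[0.0, 0.0], [150e3, 20e3], [60e3, 180e3]])   # station coords (m)
p  = np.array([100_500.0, 100_200.0, 99_800.0])             # SLP (Pa)

# Fit p = a + b*x + c*y; b and c are the horizontal pressure gradients.
A = np.column_stack([np.ones(3), xy])
a, dpdx, dpdy = np.linalg.solve(A, p)

u_g = -dpdy / (rho * f)      # geostrophic balance
v_g =  dpdx / (rho * f)
print(f"geostrophic wind: u = {u_g:.1f} m/s, v = {v_g:.1f} m/s")
```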

  11. Identification of modal parameters including unmeasured forces and transient effects

    NASA Astrophysics Data System (ADS)

    Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.

    2003-08-01

    In this paper, a frequency-domain method to estimate modal parameters from short data records with known input (measured) forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. A traditional experimental and operational modal analysis in the frequency domain starts respectively, from frequency response functions and spectral density functions. To estimate these functions accurately sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.

  12. Cost Analysis of an Air Brayton Receiver for a Solar Thermal Electric Power System in Selected Annual Production Volumes

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Pioneer Engineering and Manufacturing Company estimated the cost of manufacturing an Air Brayton Receiver for a Solar Thermal Electric Power System as designed by the AiResearch Division of the Garrett Corporation. Production costs were estimated at annual volumes of 100; 1,000; 5,000; 10,000; 50,000; 100,000 and 1,000,000 units. These costs included direct labor, direct material and manufacturing burden. A make-or-buy analysis was made of each part at each volume. At high volumes special fabrication concepts were used to reduce operation cycle times. All costs were estimated at an assumed 100% plant capacity. Economic feasibility determined the level of production at which special concepts were to be introduced. Estimated costs were based on the economics of the last half of 1980. Tooling and capital equipment costs were estimated for each volume. Infrastructure and personnel requirements were also estimated.

  13. Application of Observed Precipitation in NCEP Global and Regional Data Assimilation Systems, Including Reanalysis and Land Data Assimilation

    NASA Astrophysics Data System (ADS)

    Mitchell, K. E.

    2006-12-01

    The Environmental Modeling Center (EMC) of the National Centers for Environmental Prediction (NCEP) applies several different analyses of observed precipitation in both the data assimilation and validation components of NCEP's global and regional numerical weather and climate prediction/analysis systems (including in NCEP global and regional reanalysis). This invited talk will survey these data assimilation and validation applications and methodologies, as well as the temporal frequency, spatial domains, spatial resolution, data sources, data density and data quality control in the precipitation analyses that are applied. Some of the precipitation analyses applied by EMC are produced by NCEP's Climate Prediction Center (CPC), while others are produced by the River Forecast Centers (RFCs) of the National Weather Service (NWS), or by automated algorithms of the NWS WSR-88D Radar Product Generator (RPG). Depending on the specific type of application in data assimilation or model forecast validation, the temporal resolution of the precipitation analyses may be hourly, daily, or pentad (5-day) and the domain may be global, continental U.S. (CONUS), or Mexico. The data sources for precipitation include ground-based gauge observations, radar-based estimates, and satellite-based estimates. The precipitation analyses over the CONUS are analyses of either hourly, daily or monthly totals of precipitation, and they are of two distinct types: gauge-only or primarily radar-estimated. The gauge-only CONUS analysis of daily precipitation utilizes an orographic-adjustment technique (based on the well-known PRISM precipitation climatology of Oregon State University) developed by the NWS Office of Hydrologic Development (OHD). The primary NCEP global precipitation analysis is the pentad CPC Merged Analysis of Precipitation (CMAP), which blends both gauge observations and satellite estimates. The presentation will include a brief comparison between the CMAP analysis and other global precipitation analyses by other institutions. Other global precipitation analyses produced by other methodologies are also used by EMC in certain applications, such as CPC's well-known satellite-IR based technique known as "GPI", and satellite-microwave based estimates from NESDIS or NASA. Finally, the presentation will cover the three assimilation methods used by EMC to assimilate precipitation data, including 1) 3D-VAR variational assimilation in NCEP's Global Data Assimilation System (GDAS), 2) direct insertion of precipitation-inferred vertical latent heating profiles in NCEP's N. American Data Assimilation System (NDAS) and its N. American Regional Reanalysis (NARR) counterpart, and 3) direct use of observed precipitation to drive the Noah land model component of NCEP's Global and N. American Land Data Assimilation Systems (GLDAS and NLDAS). In the applications of precipitation analyses in data assimilation at NCEP, the analyses are temporally disaggregated to hourly or less using time-weights calculated from A) either radar-based estimates or an analysis of hourly gauge-observations for the CONUS-domain daily precipitation analyses, or B) global model forecasts of 6-hourly precipitation (followed by linear interpolation to hourly or less) for the global CMAP precipitation analysis.
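
    The temporal-disaggregation step described in the last sentences can be sketched very simply: a daily analysis total is split into hourly amounts using weights from an hourly (e.g., radar-based) estimate, falling back to a uniform split when the weights are all zero. The numbers below are illustrative only:

```python
# Hedged sketch of weight-based temporal disaggregation of a daily precipitation total.
import numpy as np

daily_analysis_mm = 24.0
hourly_radar_mm = np.array([0, 0, 1, 3, 6, 5, 4, 2, 1, 1, 0.5, 0.5] + [0] * 12)

if hourly_radar_mm.sum() > 0:
    weights = hourly_radar_mm / hourly_radar_mm.sum()
else:
    weights = np.full(24, 1 / 24)          # uniform split if no sub-daily information

hourly_analysis_mm = daily_analysis_mm * weights
assert np.isclose(hourly_analysis_mm.sum(), daily_analysis_mm)
```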

  14. The Version 2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979-Present)

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Huffman, George J.; Chang, Alfred; Ferraro, Ralph; Xie, Ping-Ping; Janowiak, John; Rudolf, Bruno; Schneider, Udo; Curtis, Scott; Bolvin, David

    2003-01-01

    The Global Precipitation Climatology Project (GPCP) Version 2 Monthly Precipitation Analysis is described. This globally complete, monthly analysis of surface precipitation at 2.5 degrees x 2.5 degrees latitude-longitude resolution is available from January 1979 to the present. It is a merged analysis that incorporates precipitation estimates from low-orbit-satellite microwave data, geosynchronous-orbit-satellite infrared data, and rain gauge observations. The merging approach utilizes the higher accuracy of the low-orbit microwave observations to calibrate, or adjust, the more frequent geosynchronous infrared observations. The data set is extended back into the pre-microwave era (before 1987) by using infrared-only observations calibrated to the microwave-based analysis of the later years. The combined satellite-based product is adjusted by the rain gauge analysis. This monthly analysis is the foundation for the GPCP suite of products, including those at finer temporal resolution, satellite estimates, and error estimates for each field. The 23-year GPCP climatology is characterized, along with time and space variations of precipitation.

  15. Integrated Software for Analyzing Designs of Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Philips, Alan D.

    2003-01-01

    Launch Vehicle Analysis Tool (LVA) is a computer program for preliminary design structural analysis of launch vehicles. Before LVA was developed, in order to analyze the structure of a launch vehicle, it was necessary to estimate its weight, feed this estimate into a program to obtain pre-launch and flight loads, then feed these loads into structural and thermal analysis programs to obtain a second weight estimate. If the first and second weight estimates differed, it was necessary to reiterate these analyses until the solution converged. This process generally took six to twelve person-months of effort. LVA incorporates a text-to-structural-layout converter, configuration drawing, mass properties generation, pre-launch and flight loads analysis, loads output plotting, direct-solution structural analysis, and thermal analysis subprograms. These subprograms are integrated in LVA so that solutions can be iterated automatically. LVA incorporates expert-system software that makes fundamental design decisions without intervention by the user. It also includes unique algorithms based on extensive research. The total integration of analysis modules drastically reduces the need for interaction with the user. A typical solution can be obtained in 30 to 60 minutes. Subsequent runs can be done in less than two minutes.

  16. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
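
    A minimal sketch of the leave-one-site-out (jackknife) comparison, with synthetic catchment descriptors standing in for the Prairie dataset and plain sklearn models in place of the paper's exact implementations, might look like this:

```python
# Hedged sketch: jackknife comparison of regression models for quantile estimation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n_sites = 32
X = rng.normal(size=(n_sites, 3))                       # catchment characteristics (synthetic)
q_obs = 10 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.8, size=n_sites)

def jackknife_rmse(make_model):
    errs = []
    for i in range(n_sites):
        train = np.arange(n_sites) != i
        m = make_model().fit(X[train], q_obs[train])
        errs.append(m.predict(X[~train])[0] - q_obs[i])
    return np.sqrt(np.mean(np.square(errs)))

print("MLR RMSE:", jackknife_rmse(LinearRegression))
print("SVR RMSE:", jackknife_rmse(lambda: SVR(kernel="rbf", C=10.0)))
```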

  17. Fish assemblages

    USGS Publications Warehouse

    McGarvey, Daniel J.; Falke, Jeffrey A.; Li, Hiram W.; Li, Judith; Hauer, F. Richard; Lamberti, G.A.

    2017-01-01

    Methods to sample fishes in stream ecosystems and to analyze the raw data, focusing primarily on assemblage-level (all fish species combined) analyses, are presented in this chapter. We begin with guidance on sample site selection, permitting for fish collection, and information-gathering steps to be completed prior to conducting fieldwork. Basic sampling methods (visual surveying, electrofishing, and seining) are presented with specific instructions for estimating population sizes via visual, capture-recapture, and depletion surveys, in addition to new guidance on environmental DNA (eDNA) methods. Steps to process fish specimens in the field including the use of anesthesia and preservation of whole specimens or tissue samples (for genetic or stable isotope analysis) are also presented. Data analysis methods include characterization of size-structure within populations, estimation of species richness and diversity, and application of fish functional traits. We conclude with three advanced topics in assemblage-level analysis: multidimensional scaling (MDS), ecological networks, and loop analysis.

  18. Applications of harvesting system simulation to timber management and utilization analyses

    Treesearch

    John E. Baumgras; Chris B. LeDoux

    1990-01-01

    Applications of timber harvesting system simulation to the economic analysis of forest management and wood utilization practices are presented. These applications include estimating thinning revenue by stand age, estimating impacts of minimum merchantable tree diameter on harvesting revenue, and evaluating wood utilization alternatives relative to pulpwood quotas and...

  19. Planning level assessment of greenhouse gas emissions for alternative transportation construction projects : carbon footprint estimator, phase II, volume I - GASCAP model.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  20. Plantation thinning systems in the Southern United States

    Treesearch

    Bryce J. Stokes; William F. Watson

    1996-01-01

    This paper reviews southern pine management and thinning practices, describes three harvesting systems for thinning, and presents production and cost estimates, and utilization rates. The costs and product recoveries were developed from published sources using a spreadsheet analysis. Systems included tree-length, flail/chip, and cut-to-length. The estimated total...

  1. Analysis of vehicle classification data, including monthly and seasonal ADT factors, hourly distribution factors, and lane distribution factors

    DOT National Transportation Integrated Search

    1998-11-01

    This report documents the development of monthly and seasonal average daily traffic (ADT) factors for estimating AADTs. It appears that seasonal factors can estimate AADT as well as monthly factors, and it is recommended that seasonal fact...

  2. MARINE CORPS ASIA PACIFIC REALIGNMENT: DOD Should Resolve Capability Deficiencies and Infrastructure Risks and Revise Cost Estimates

    DTIC Science & Technology

    2017-04-01

    environmental mitigation and the costs associated with it. In April 2015, the Navy released a draft environmental impact statement to the public that...economic analysis for decision making indicates that, as part of assessing the costs and benefits of alternatives, an economic analysis should include...Page 48 GAO-17-415 Marine Corps Asia-Pacific Realignment comprehensive estimates of the expected costs and benefits that are incident to achieving

  3. An application of the suction analog for the analysis of asymmetric flow situations

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.

    1976-01-01

    A recent extension of the suction analogy for estimation of vortex loads on asymmetric configurations is reviewed. This extension includes asymmetric augmented vortex lift and the forward sweep effect on side edge suction. Application of this extension to a series of skewed wings has resulted in an improved estimating capability for a wide range of asymmetric flow situations. Hence, the suction analogy concept now has more general applicability for subsonic lifting surface analysis.

  4. Procedure-related risk of miscarriage following amniocentesis and chorionic villus sampling: a systematic review and meta-analysis.

    PubMed

    Akolekar, R; Beta, J; Picciarelli, G; Ogilvie, C; D'Antonio, F

    2015-01-01

    To estimate procedure-related risks of miscarriage following amniocentesis and chorionic villus sampling (CVS) based on a systematic review of the literature and a meta-analysis. A search of MEDLINE, EMBASE, CINAHL and The Cochrane Library (2000-2014) was performed to review relevant citations reporting procedure-related complications of amniocentesis and CVS. Only studies reporting data on more than 1000 procedures were included in this review to minimize the effect of bias from smaller studies. Heterogeneity between studies was estimated using Cochran's Q, the I² statistic and Egger bias. Meta-analysis of proportions was used to derive weighted pooled estimates for the risk of miscarriage before 24 weeks' gestation. Incidence-rate difference meta-analysis was used to estimate pooled procedure-related risks. The weighted pooled risks of miscarriage following invasive procedures were estimated from analysis of controlled studies including 324 losses in 42 716 women who underwent amniocentesis and 207 losses in 8899 women who underwent CVS. The risk of miscarriage prior to 24 weeks in women who underwent amniocentesis and CVS was 0.81% (95% CI, 0.58-1.08%) and 2.18% (95% CI, 1.61-2.82%), respectively. The background rates of miscarriage in women from the control group that did not undergo any procedures were 0.67% (95% CI, 0.46-0.91%) for amniocentesis and 1.79% (95% CI, 0.61-3.58%) for CVS. The weighted pooled procedure-related risks of miscarriage for amniocentesis and CVS were 0.11% (95% CI, -0.04 to 0.26%) and 0.22% (95% CI, -0.71 to 1.16%), respectively. The procedure-related risks of miscarriage following amniocentesis and CVS are much lower than are currently quoted. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
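
    For orientation only, a fixed-effect, inverse-variance pooled risk difference can be sketched as below; the per-study counts are hypothetical, and this is not the proportion/incidence-rate-difference model actually used in the review:

```python
# Hedged sketch: inverse-variance pooled risk difference across studies.
import numpy as np

# (losses_procedure, n_procedure, losses_control, n_control) -- illustrative only
studies = [(12, 1500, 10, 1400), (30, 5000, 25, 4800), (8, 1200, 9, 1300)]

rd, w = [], []
for a, n1, c, n0 in studies:
    p1, p0 = a / n1, c / n0
    var = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0   # variance of the risk difference
    rd.append(p1 - p0)
    w.append(1 / var)

rd, w = np.array(rd), np.array(w)
pooled = np.sum(w * rd) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled risk difference = {100*pooled:.2f}% (95% CI {100*lo:.2f} to {100*hi:.2f}%)")
```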

  5. The economic burden of schizophrenia in Canada in 2004.

    PubMed

    Goeree, R; Farahati, F; Burke, N; Blackhouse, G; O'Reilly, D; Pyne, J; Tarride, J-E

    2005-12-01

    To estimate the financial burden of schizophrenia in Canada in 2004. A prevalence-based cost-of-illness (COI) approach was used. The primary sources of information for the study included a review of the published literature, a review of published reports and documents, secondary analysis of administrative datasets, and information collected directly from various federal and provincial government programs and services. The literature review included publications up to April 2005 reported in MEDLINE, EMBASE and PsycINFO. Where specific information from a province was not available, the method of mean substitution from other provinces was used. Costs incurred by various levels/departments of government were separated into healthcare and non-healthcare costs. Also included in the analysis was the value of lost productivity for premature mortality and morbidity associated with schizophrenia. Sensitivity analysis was used to test major cost assumptions used in the analysis. Where possible, all resource utilization estimates for the financial burden of schizophrenia were obtained for 2004 and are expressed in 2004 Canadian dollars (CAN dollars). The estimated number of persons with schizophrenia in Canada in 2004 was 234 305 (95% CI, 136 201-333 402). The direct healthcare and non-healthcare costs were estimated to be 2.02 billion CAN dollars in 2004. There were 374 deaths attributed to schizophrenia. This, combined with the high unemployment rate due to schizophrenia, resulted in an additional productivity morbidity and mortality loss estimate of 4.83 billion CAN dollars, for a total cost estimate in 2004 of 6.85 billion CAN dollars. By far the largest component of the total cost estimate was for productivity losses associated with morbidity in schizophrenia (70% of total costs), and the results showed that total cost estimates were most sensitive to alternative assumptions regarding the additional unemployment due to schizophrenia in Canada. Despite significant improvements in the past decade in pharmacotherapy, programs and services available for patients with schizophrenia, the economic burden of schizophrenia in Canada remains high. The most significant factor affecting the cost of schizophrenia in Canada is lost productivity due to morbidity. Programs targeted at improving patient symptoms and functioning to increase workforce participation have the potential to make a significant contribution in reducing the cost of this severe mental illness in Canada.

  6. Advanced transportation system studies technical area 2 (TA-2): Heavy lift launch vehicle development. volume 3; Program Cost estimates

    NASA Technical Reports Server (NTRS)

    McCurry, J. B.

    1995-01-01

    The purpose of the TA-2 contract was to provide advanced launch vehicle concept definition and analysis to assist NASA in the identification of future launch vehicle requirements. Contracted analysis activities included vehicle sizing and performance analysis, subsystem concept definition, propulsion subsystem definition (foreign and domestic), ground operations and facilities analysis, and life cycle cost estimation. The basic period of performance of the TA-2 contract was from May 1992 through May 1993. No-cost extensions were exercised on the contract from June 1993 through July 1995. This document is part of the final report for the TA-2 contract. The final report consists of three volumes: Volume 1 is the Executive Summary, Volume 2 is Technical Results, and Volume 3 is Program Cost Estimates. The document-at-hand, Volume 3, provides a work breakdown structure dictionary, user's guide for the parametric life cycle cost estimation tool, and final report developed by ECON, Inc., under subcontract to Lockheed Martin on TA-2 for the analysis of heavy lift launch vehicle concepts.

  7. Sensitivity Analysis of Repeat Track Estimation Techniques for Detection of Elevation Change in Polar Ice Sheets

    NASA Astrophysics Data System (ADS)

    Harpold, R. E.; Urban, T. J.; Schutz, B. E.

    2008-12-01

    Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.

  8. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
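
    The kurtosis-based selection of a CO-like component can be mimicked on mock data with an off-the-shelf FastICA; the mixing matrix and component models below are invented for illustration and do not reproduce the PLANCK/NANTEN analysis:

```python
# Hedged sketch: separate a sparse, highly non-Gaussian component from mock maps.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
npix = 10_000
cmb  = rng.normal(size=npix)                                   # Gaussian component
dust = rng.normal(size=npix) ** 2                              # non-Gaussian foreground
co   = rng.exponential(size=npix) * (rng.random(npix) < 0.05)  # sparse "CO-like" emission

# Mock 100/143/217 GHz maps as linear mixtures of [cmb, dust, co]
A = np.array([[1.0, 0.3, 1.0],
              [1.0, 0.6, 0.1],
              [1.0, 1.2, 0.8]])
maps = A @ np.vstack([cmb, dust, co])

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(maps.T).T          # estimated independent components

# Pick the component with the largest excess kurtosis as the CO-like candidate.
centred = sources - sources.mean(axis=1, keepdims=True)
kurt = (centred ** 4).mean(axis=1) / sources.var(axis=1) ** 2 - 3
print("excess kurtosis per component:", np.round(kurt, 1))
```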

  9. CO component estimation based on the independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  10. Improving The Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility. This paper presents the plans for the newly established role. Described is how the Independent Program Assessment Office, working with all NASA Centers, NASA Headquarters, other Government agencies, and industry, is focused on creating cost estimation and analysis as a professional discipline that will be recognized equally with the technical disciplines needed to design new space and aeronautics activities. Investments in selected, new analysis tools, creating advanced training opportunities for analysts, and developing career paths for future analysts engaged in the discipline are all elements of the plan. Plans also include increasing the human resources available to conduct independent cost analysis of Agency programs during their formulation, to improve near-term capability to conduct economic cost-benefit assessments, to support NASA management's decision process, and to provide cost analysis results emphasizing "full-cost" and "full-life cycle" considerations. The Agency cost analysis improvement plan has been approved for implementation starting this calendar year. Adequate financial and human resources are being made available to accomplish the goals of this important effort, and all indications are that NASA's cost estimation and analysis core competencies will be substantially improved within the foreseeable future.

  11. Guidelines for the analysis of free energy calculations.

    PubMed

    Klimovich, Pavel V; Shirts, Michael R; Mobley, David L

    2015-05-01

    Free energy calculations based on molecular dynamics simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub as part of the pymbar package (located at http://github.com/choderalab/pymbar), that implements the analysis practices reviewed here for several reference simulation packages, which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope this tool and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations.

  12. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
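
    The core building block of such a reconstruction, a single extended Kalman filter measurement update, can be sketched generically (this is not the Ares I-X code, and the radar-range example at the end is hypothetical):

```python
# Hedged sketch: one EKF measurement update step.
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """x, P: state and covariance; z: measurement; h: measurement model;
    H_jac: Jacobian of h at x; R: measurement noise covariance."""
    H = H_jac(x)
    y = z - h(x)                               # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: a radar range measurement of a 2-D position state (illustrative numbers).
x = np.array([1000.0, 2000.0])
P = np.diag([50.0 ** 2, 50.0 ** 2])
h = lambda s: np.array([np.hypot(s[0], s[1])])
H_jac = lambda s: np.array([[s[0], s[1]]]) / np.hypot(s[0], s[1])
x, P = ekf_update(x, P, z=np.array([2260.0]), h=h, H_jac=H_jac, R=np.array([[25.0]]))
```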

  13. Life Cycle Cost Analysis of Shuttle-Derived Launch Vehicles, Volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The design, performance, and programmatic definition of shuttle derived launch vehicles (SDLV) established by two different contractors were assessed and the relative life cycle costs of space transportation systems using the shuttle alone were compared with costs for a mix of shuttles and SDLV's. The ground rules and assumptions used in the evaluation are summarized and the work breakdown structure is included. Approaches used in deriving SDLV costs, including calibration factors and historical data are described. Both SDLV cost estimates and SDLV/STS cost comparisons are summarized. Standard formats are used to report comprehensive SDLV life cycle estimates. Hardware cost estimates (below subsystem level) obtained using the RCA PRICE 84 cost model are included along with other supporting data.

  14. Age Estimation of Infants Through Metric Analysis of Developing Anterior Deciduous Teeth.

    PubMed

    Viciano, Joan; De Luca, Stefano; Irurita, Javier; Alemán, Inmaculada

    2018-01-01

    This study provides regression equations for estimation of age of infants from the dimensions of their developing deciduous teeth. The sample comprises 97 individuals of known sex and age (62 boys, 35 girls), aged between 2 days and 1,081 days. The age-estimation equations were obtained for the sexes combined, as well as for each sex separately, thus including "sex" as an independent variable. The values of the correlations and determination coefficients obtained for each regression equation indicate good fits for most of the equations obtained. The "sex" factor was statistically significant when included as an independent variable in seven of the regression equations. However, the "sex" factor provided an advantage for age estimation in only three of the equations, compared to those that did not include "sex" as a factor. These data suggest that the ages of infants can be accurately estimated from measurements of their developing deciduous teeth. © 2017 American Academy of Forensic Sciences.

  15. The dependability of medical students' performance ratings as documented on in-training evaluations.

    PubMed

    van Barneveld, Christina

    2005-03-01

    To demonstrate an approach to obtain an unbiased estimate of the dependability of students' performance ratings during training, when the data-collection design includes nesting of student in rater, unbalanced nest sizes, and dependent observations. In 2003, two variance components analyses of in-training evaluation (ITE) report data were conducted using urGENOVA software. In the first analysis, the dependability for the nested and unbalanced data-collection design was calculated. In the second analysis, an approach using multiple generalizability studies was used to obtain an unbiased estimate of the student variance component, resulting in an unbiased estimate of dependability. Results suggested that there is bias in estimates of the dependability of students' performance on ITEs that is attributable to the data-collection design. When the bias was corrected, the results indicated that the dependability of ratings of student performance was almost zero. The combination of the multiple generalizability studies method and the use of specialized software provides an unbiased estimate of the dependability of ratings of student performance on ITE scores for data-collection designs that include nesting of student in rater, unbalanced nest sizes, and dependent observations.

  16. Documentation of spreadsheets for the analysis of aquifer-test and slug-test data

    USGS Publications Warehouse

    Halford, Keith J.; Kuniansky, Eve L.

    2002-01-01

    Several spreadsheets have been developed for the analysis of aquifer-test and slug-test data. Each spreadsheet incorporates analytical solution(s) of the partial differential equation for ground-water flow to a well for a specific type of condition or aquifer. The derivations of the analytical solutions were previously published. Thus, this report abbreviates the theoretical discussion, but includes practical information about each method and the important assumptions for the applications of each method. These spreadsheets were written in Microsoft Excel 9.0 (use of trade names does not constitute endorsement by the USGS). Storage properties should not be estimated with many of the spreadsheets because most are for analyzing single-well tests. Estimation of storage properties from single-well tests is generally discouraged because single-well tests are affected by wellbore storage and by well construction. These non-ideal effects frequently cause estimates of storage to be erroneous by orders of magnitude. Additionally, single-well tests are not sensitive to aquifer-storage properties. Single-well tests include all slug tests (Bouwer and Rice Method, Cooper-Bredehoeft-Papadopulos Method, and van der Kamp Method), the Cooper-Jacob straight-line Method, Theis recovery-data analysis, the Jacob-Lohman method for flowing wells in a confined aquifer, and the step-drawdown test. Multi-well test spreadsheets included in this report are the Hantush-Jacob Leaky Aquifer Method and Distance-Drawdown Methods. The distance-drawdown method is an equilibrium or steady-state method; thus, storage cannot be estimated.
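
    For context, the sketch below applies one of the single-well methods named above, the Cooper-Jacob straight-line method, to hypothetical drawdown data: transmissivity follows from the drawdown change per log cycle of time as T = 2.3*Q/(4*pi*delta_s). The pumping rate and drawdown values are invented for illustration and are not taken from the report's spreadsheets.

    import numpy as np

    # Hypothetical single-well pumping test: drawdown s (m) observed at times t (min).
    t = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500], dtype=float)      # minutes
    s = np.array([0.31, 0.40, 0.52, 0.61, 0.70, 0.82, 0.91, 1.00, 1.12])  # meters
    Q = 500.0  # pumping rate, m^3/day (assumed)

    # Cooper-Jacob: late-time drawdown is linear in log10(t); the slope is the
    # drawdown change per log cycle of time.
    slope, intercept = np.polyfit(np.log10(t), s, 1)
    delta_s = slope  # drawdown per log cycle (m)

    # Transmissivity from T = 2.3 * Q / (4 * pi * delta_s), in m^2/day.
    T = 2.3 * Q / (4.0 * np.pi * delta_s)
    print(f"drawdown per log cycle: {delta_s:.3f} m -> T ~ {T:.0f} m^2/day")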

  17. Coal gasification systems engineering and analysis. Appendix D: Cost and economic studies

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The detailed cost estimate documentation for the designs prepared in this study is presented. The designs include: (1) Koppers-Totzek, (2) Texaco, (3) Babcock and Wilcox, (4) BGC-Lurgi, and (5) Lurgi. The alternate product cost estimates include: (1) Koppers-Totzek and Texaco single-product facilities (methane, methanol, gasoline, hydrogen), (2) Koppers-Totzek SNG and MBG, (3) Koppers-Totzek and Texaco SNG and MBG, and (4) Lurgi-methane, and Lurgi-methane and methanol.

  18. Improved Estimation of Electron Temperature from Rocket-borne Impedance Probes

    NASA Astrophysics Data System (ADS)

    Rowland, D. E.; Wolfinger, K.; Stamm, J. D.

    2017-12-01

    The impedance probe technique is a well known method for determining high accuracy measurements of electron number density in the Earth's ionosphere. We present analysis of impedance probe data from several sounding rockets at low, mid-, and auroral latitudes, including high cadence estimates of the electron temperature, derived from analytical fits to the antenna impedance curves. These estimates compare favorably with independent estimates from Langmuir Probes, but at much higher temporal and spatial resolution, providing a capability to resolve small-scale temperature fluctuations. We also present some considerations for the design of impedance probes, including assessment of the effects of resonance damping due to rocket motion, effects of wake and spin modulation, and aspect angle to the magnetic field.

  19. A Comparative Assessment of Greek Universities' Efficiency Using Quantitative Analysis

    ERIC Educational Resources Information Center

    Katharaki, Maria; Katharakis, George

    2010-01-01

    In part due to the increased demand for higher education, typical evaluation frameworks for universities often address the key issue of available resource utilisation. This study seeks to estimate the efficiency of 20 public universities in Greece through quantitative analysis (including performance indicators, data envelopment analysis (DEA) and…
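
    The abstract above is truncated, but the DEA it names can be sketched as a linear program. The example below solves the input-oriented CCR envelopment model for each decision-making unit with SciPy; the two inputs, two outputs, and five hypothetical institutions are placeholders rather than the paper's indicators or sample.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical inputs (staff, budget) and outputs (graduates, publications) for 5 units.
    X = np.array([[120, 8.0], [200, 15.0], [90, 6.5], [150, 11.0], [300, 22.0]], float)  # inputs
    Y = np.array([[900, 40], [1500, 95], [700, 30], [1300, 60], [2100, 150]], float)     # outputs

    def ccr_efficiency(X, Y, k):
        """Input-oriented CCR efficiency of unit k: minimize theta subject to
        sum_j lam_j * x_j <= theta * x_k, sum_j lam_j * y_j >= y_k, lam >= 0."""
        n, m = X.shape
        _, s = Y.shape
        # Decision variables: [theta, lam_1 .. lam_n]
        c = np.zeros(n + 1)
        c[0] = 1.0
        # Input constraints: sum_j lam_j * X[j, i] - theta * X[k, i] <= 0
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # Output constraints: -sum_j lam_j * Y[j, r] <= -Y[k, r]
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[k]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(0, None)] * (n + 1), method="highs")
        return res.x[0]

    for k in range(len(X)):
        print(f"unit {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")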

  20. SSL Pricing and Efficacy Trend Analysis for Utility Program Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuenge, J. R.

    2013-10-01

    Report to help utilities and energy efficiency organizations forecast the order in which important SSL applications will become cost-effective and estimate when each "tipping point" will be reached. Includes performance trend analysis from DOE's LED Lighting Facts® and CALiPER programs plus cost analysis from various sources.

  1. HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.

    PubMed

    Schroeder, Mark J; Perreault, Bill; Ewert, Daniel L; Koenig, Steven C

    2004-07-01

    A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework through which Good Laboratory Practice (GLP) compliance can be achieved. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.
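
    To illustrate one of the analyses listed above, the sketch below computes a vascular impedance spectrum as the ratio of the Fourier transforms of simultaneously sampled pressure and flow waveforms, evaluated at the heart-rate harmonics. The synthetic waveforms, sampling rate, and heart rate are assumptions; HEART's actual Matlab implementation is not reproduced here.

    import numpy as np

    # Synthetic one-second pressure (mmHg) and flow (mL/s) waveforms sampled at 500 Hz.
    fs = 500.0
    t = np.arange(0, 1.0, 1.0 / fs)
    hr = 1.0  # heart rate in Hz, i.e. 60 beats/min (assumed)
    pressure = 90 + 20 * np.sin(2 * np.pi * hr * t) + 8 * np.sin(4 * np.pi * hr * t + 0.4) \
               + 3 * np.sin(6 * np.pi * hr * t + 0.9)
    flow = 70 + 45 * np.sin(2 * np.pi * hr * t - 0.2) + 18 * np.sin(4 * np.pi * hr * t + 0.1) \
           + 7 * np.sin(6 * np.pi * hr * t + 0.6)

    # Vascular impedance spectrum: Z(f) = P(f) / Q(f) at the heart-rate harmonics.
    P = np.fft.rfft(pressure)
    Q = np.fft.rfft(flow)
    freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

    for f in np.arange(1, 4) * hr:
        idx = np.argmin(np.abs(freqs - f))
        z = P[idx] / Q[idx]
        print(f"{f:4.1f} Hz: |Z| = {abs(z):6.2f} mmHg.s/mL, phase = {np.degrees(np.angle(z)):6.1f} deg")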

  2. Beyond total treatment effects in randomised controlled trials: Baseline measurement of intermediate outcomes needed to reduce confounding in mediation investigations.

    PubMed

    Landau, Sabine; Emsley, Richard; Dunn, Graham

    2018-06-01

    Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: The first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables. The third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. Estimates of causal mediation effects derived by approach (A) will be biased if one of the three processes involving baseline measures of intermediate or clinical outcomes is operating. Necessary assumptions for the change score approach (B) to provide unbiased estimates under either process include the independence of baseline measures and change scores of the intermediate variable. Finally, estimates provided by the analysis of covariance approach (C) were found to be unbiased under all the three processes considered here. When applied to the example, there was evidence of mediation under all methods but the estimate of the indirect effect depended on the approach used with the proportion mediated varying from 57% to 86%. Trialists planning mediation analyses should measure baseline values of putative mediators as well as of continuous clinical outcomes. An analysis of covariance approach is recommended to avoid potential biases due to confounding processes involving baseline measures of intermediate or clinical outcomes, and not simply for increased precision.
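
    The analysis of covariance approach recommended above can be sketched as two regressions whose coefficients combine into an indirect (mediated) effect. The example below simulates data and fits both models with statsmodels; the variable names (treat, m0, y0, m1, y1) and effect sizes are hypothetical and do not represent the trial analyzed in the paper.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    treat = rng.integers(0, 2, n)                                  # randomised treatment indicator
    m0 = rng.normal(size=n)                                        # baseline mediator
    y0 = 0.5 * m0 + rng.normal(size=n)                             # baseline clinical outcome
    m1 = 0.4 * treat + 0.6 * m0 + rng.normal(size=n)               # post-treatment mediator
    y1 = 0.3 * treat + 0.5 * m1 + 0.4 * y0 + rng.normal(size=n)    # post-treatment outcome
    df = pd.DataFrame(dict(treat=treat, m0=m0, y0=y0, m1=m1, y1=y1))

    # ANCOVA-style mediation: condition on baseline measures of both mediator and outcome.
    med_model = smf.ols("m1 ~ treat + m0 + y0", data=df).fit()        # a-path
    out_model = smf.ols("y1 ~ treat + m1 + m0 + y0", data=df).fit()   # b-path and direct effect

    a = med_model.params["treat"]
    b = out_model.params["m1"]
    direct = out_model.params["treat"]
    indirect = a * b
    print(f"indirect = {indirect:.3f}, direct = {direct:.3f}, total = {indirect + direct:.3f}")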

  3. Relationship between Race and the Effect of Fluids on Long-term Mortality after Acute Respiratory Distress Syndrome. Secondary Analysis of the National Heart, Lung, and Blood Institute Fluid and Catheter Treatment Trial.

    PubMed

    Jolley, Sarah E; Hough, Catherine L; Clermont, Gilles; Hayden, Douglas; Hou, Suqin; Schoenfeld, David; Smith, Nicholas L; Thompson, Boyd Taylor; Bernard, Gordon R; Angus, Derek C

    2017-09-01

    Short-term follow-up in the Fluid and Catheter Treatment Trial (FACTT) suggested differential mortality by race with conservative fluid management, but no significant interaction. In a post hoc analysis of FACTT including 1-year follow-up, we sought to estimate long-term mortality by race and test for an interaction between fluids and race. We performed a post hoc analysis of FACTT and the Economic Analysis of Pulmonary Artery Catheters (EAPAC) study (which included 655 of the 1,000 FACTT patients with near-complete 1-year follow up). We fit a multistate Markov model to estimate 1-year mortality for all non-Hispanic black and white randomized FACTT subjects. The model estimated the distribution of time from randomization to hospital discharge or hospital death (available on all patients) and estimated the distribution of time from hospital discharge to death using data on patients after hospital discharge for patients in EAPAC. The 1-year mortality was found by combining these estimates. Non-Hispanic black (n = 217, 25%) or white identified subjects (n = 641, 75%) were included. There was a significant interaction between race and fluid treatment (P = 0.012). One-year mortality was lower for black subjects assigned to conservative fluids (38 vs. 54%; mean mortality difference, 16%; 95% confidence interval, 2-30%; P = 0.027 between conservative and liberal). Conversely, 1-year mortality for white subjects was 35% versus 30% for conservative versus liberal arms (mean mortality difference, -4.8%; 95% confidence interval, -13% to 3%; P = 0.23). In our cohort, conservative fluid management may have improved 1-year mortality for non-Hispanic black patients with ARDS. However, we found no long-term benefit of conservative fluid management in white subjects.

  4. Cost analysis in support of minimum energy standards for clothes washers and dryers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-02-02

    The results of the cost analysis of energy conservation design options for laundry products are presented. The analysis was conducted using two approaches. The first is directed toward the development of industrial engineering cost estimates for each energy conservation option. This approach results in the estimation of manufacturers' costs. The second approach is directed toward determining the market price differential of energy conservation features. The results of this approach are shown. The market cost represents the cost to the consumer. It is the final cost, and therefore includes distribution costs as well as manufacturing costs.

  5. Parameter Transient Behavior Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob

    2003-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of an FTC system based on estimated fault-parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.

  6. Mars Exploration Rovers Landing Dispersion Analysis

    NASA Technical Reports Server (NTRS)

    Knocke, Philip C.; Wawrzyniak, Geoffrey G.; Kennedy, Brian M.; Desai, Prasun N.; Parker, Timothy J.; Golombek, Matthew P.; Duxbury, Thomas C.; Kass, David M.

    2004-01-01

    Landing dispersion estimates for the Mars Exploration Rover missions were key elements in the site targeting process and in the evaluation of landing risk. This paper addresses the process and results of the landing dispersion analyses performed for both Spirit and Opportunity. The several contributors to landing dispersions (navigation and atmospheric uncertainties, spacecraft modeling, winds, and margins) are discussed, as are the analysis tools used. JPL's MarsLS program, a MATLAB-based landing dispersion visualization and statistical analysis tool, was used to calculate the probability of landing within hazardous areas. By convolving this with the probability of landing within flight system limits (in-spec landing) for each hazard area, a single overall measure of landing risk was calculated for each landing ellipse. In-spec probability contours were also generated, allowing a more synoptic view of site risks, illustrating the sensitivity to changes in landing location, and quantifying the possible consequences of anomalies such as incomplete maneuvers. Data and products required to support these analyses are described, including the landing footprints calculated by NASA Langley's POST program and JPL's AEPL program, cartographically registered base maps and hazard maps, and flight system estimates of in-spec landing probabilities for each hazard terrain type. Various factors encountered during operations, including evolving navigation estimates and changing atmospheric models, are discussed and final landing points are compared with approach estimates.
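
    A toy version of the risk computation described above: sample landing points from a dispersion ellipse, estimate the probability of coming down in a hazard region, and combine it with assumed in-spec landing probabilities for each terrain type. The ellipse axes, hazard geometry, and probabilities below are invented, not mission values; the actual analysis used MarsLS with footprints from POST and AEPL and mapped hazard areas.

    import numpy as np

    rng = np.random.default_rng(42)

    # Landing dispersion modelled as a bivariate normal (km), with assumed 1-sigma axes.
    n = 200_000
    sigma_downtrack, sigma_crosstrack = 40.0, 5.0
    pts = rng.normal(size=(n, 2)) * np.array([sigma_downtrack, sigma_crosstrack])

    # Hypothetical hazard: a circular crater of radius 8 km centred 20 km downtrack.
    in_crater = np.hypot(pts[:, 0] - 20.0, pts[:, 1]) < 8.0

    # Assumed in-spec landing probabilities by terrain type (hazard vs. benign plains).
    p_inspec_crater, p_inspec_plains = 0.80, 0.98

    p_crater = in_crater.mean()
    p_safe = p_crater * p_inspec_crater + (1.0 - p_crater) * p_inspec_plains
    print(f"P(land in crater) ~ {p_crater:.3f}; overall P(in-spec landing) ~ {p_safe:.3f}")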

  7. A general model for attitude determination error analysis

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Seidewitz, ED; Nicholson, Mark

    1988-01-01

    An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.

  8. Postmortem eyefluid analysis in dogs, cats and cattle as an estimate of antemortem serum chemistry profiles.

    PubMed Central

    Hanna, P E; Bellamy, J E; Donald, A

    1990-01-01

    This study was carried out to determine the diagnostic usefulness of postmortem eyefluid analysis in estimating antemortem concentrations of serochemical constituents. A total of 31 cattle, 18 dogs and 22 cats were selected from routine elective euthanasia submissions to a diagnostic laboratory. For all cases, a biochemical profile, including determinations for electrolytes, glucose, urea, creatinine, enzymes, cholesterol, bilirubin, protein and osmolality was performed on antemortem serum, and postmortem aqueous and vitreous humors at 0 and 24 h incubation periods. The association between serum and postmortem eyefluid chemistry values was examined using simple linear regression. A strong correlation between serum and postmortem eyefluid urea and creatinine concentrations was demonstrated in the three species examined over a 24 h postmortem interval. We concluded that an accurate estimate of antemortem serum urea or creatinine can be made from the analysis of aqueous or vitreous fluid at necropsy. An estimation of antemortem serum electrolytes (including calcium in cattle) cannot be made with a high degree of accuracy due to the amount of variability in the relationship between serum and eyefluid electrolyte values. For large molecules such as proteins, enzymes, cholesterol and bilirubin there was very poor correlation between serum and eyefluid values. PMID:2249181

  9. Study of plasma environments for the integrated Space Station electromagnetic analysis system

    NASA Technical Reports Server (NTRS)

    Singh, Nagendra

    1992-01-01

    The final report includes an analysis of various plasma effects on the electromagnetic environment of the Space Station Freedom. Effects of arcing are presented. Concerns regarding control of arcing by a plasma contactor are highlighted. Generation of waves by contaminant ions is studied and amplitude levels of the waves are estimated. Generation of electromagnetic waves by currents in the structure of the space station, driven by motional EMF, is analyzed and the radiation level is estimated.

  10. Prevalence of UTI among Iranian infants with prolonged jaundice, and its main causes: A systematic review and meta-analysis study.

    PubMed

    Tola, H H; Ranjbaran, M; Omani-Samani, R; Sadeghi, M

    2018-04-01

    An extremely variable and high prevalence of urinary tract infection (UTI) in infants with prolonged jaundice has been reported in Iran. However, no research from the area has attempted to estimate a pooled prevalence of UTI from this considerably diverse evidence. Therefore, this systematic review and meta-analysis study aimed to estimate the prevalence of UTI in infants with prolonged jaundice who were admitted into clinics or hospitals in Iran. A systematic review and meta-analysis was conducted of published articles on UTI prevalence in infants with prolonged jaundice in Iran. Electronic databases were searched, including Web of Sciences, PubMed/Medline, Scopus, Iranian Scientific Information Database (SID) and Iranmedex, for both English and Persian language articles published between January 2000 and March 2017. All possible combinations of the following keywords were used: jaundice, icterus, hyperbilirubinemia during infancy, infection and neonatal. Nine studies that reported prevalence of UTI in infants with prolonged jaundice were included. The overall prevalence of UTI was estimated using random-effects meta-analysis models. A total of 1750 infants were pooled to estimate the overall prevalence of UTI in infants with prolonged jaundice. The prevalence reported by the studies included in this literature review was extremely variable, ranging from 0.6% to 53.9%. The overall prevalence was 11% (95% Confidence Interval (CI): 5.0-18.0), and Escherichia coli was found to be the main cause of UTI. The overall prevalence of UTI was 11%, and E. coli was the main cause of UTI in infants with prolonged jaundice. Screening for UTI should be considered for infants with prolonged jaundice. Copyright © 2018 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
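
    In outline, the pooled estimate described above can be reproduced with a DerSimonian-Laird random-effects model. The study counts below are invented placeholders, not the nine Iranian studies, so the numbers printed will not match the paper's 11%.

    import numpy as np

    # Hypothetical (events, sample size) pairs for k studies of UTI in prolonged jaundice.
    events = np.array([3, 12, 5, 40, 7, 2, 15, 9, 20])
    n = np.array([60, 150, 90, 110, 80, 200, 130, 100, 75])

    p = events / n
    v = p * (1 - p) / n                    # within-study variance of a proportion
    w = 1.0 / v                            # fixed-effect weights

    # DerSimonian-Laird between-study variance (tau^2).
    p_fixed = np.sum(w * p) / np.sum(w)
    Q = np.sum(w * (p - p_fixed) ** 2)
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(p) - 1)) / C)

    # Random-effects pooled prevalence and 95% confidence interval.
    w_re = 1.0 / (v + tau2)
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled prevalence = {p_re:.3f} (95% CI {p_re - 1.96*se:.3f} to {p_re + 1.96*se:.3f})")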

  11. Estimating the Effects of the Terminal Area Productivity Program

    NASA Technical Reports Server (NTRS)

    Lee, David A.; Kostiuk, Peter F.; Hemm, Robert V., Jr.; Wingrove, Earl R., III; Shapiro, Gerald

    1997-01-01

    The report describes methods and results of an analysis of the technical and economic benefits of the systems to be developed in the NASA Terminal Area Productivity (TAP) program. A runway capacity model using parameters that reflect the potential impact of the TAP technologies is described. The runway capacity model feeds airport specific models which are also described. The capacity estimates are used with a queuing model to calculate aircraft delays, and TAP benefits are determined by calculating the savings due to reduced delays. The report includes benefit estimates for Boston Logan and Detroit Wayne County airports. An appendix includes a description and listing of the runway capacity model.

  12. Investigation of safety analysis methods using computer vision techniques

    NASA Astrophysics Data System (ADS)

    Shirazi, Mohammad Shokrolah; Morris, Brendan Tran

    2017-09-01

    This work investigates safety analysis methods using computer vision techniques. A vision-based tracking system is developed to provide the trajectories of road users, including vehicles and pedestrians. Safety analysis methods are developed to estimate time to collision (TTC) and post-encroachment time (PET), which are two important safety measures. The corresponding algorithms are presented, and their advantages and drawbacks are shown through their success in capturing conflict events in real time. The performance of the tracking system is evaluated first, and probability density estimates of TTC and PET are shown for 1 h of monitoring of a Las Vegas intersection. Finally, the idea of an intersection safety map is introduced, and TTC values for two different intersections are estimated for 1 day from 8:00 a.m. to 6:00 p.m.
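
    A stripped-down version of the two measures named above: TTC for a follower closing on a lead vehicle, and PET as the gap between one road user clearing a conflict point and another arriving at it. The trajectories and timings are synthetic, and the vision-based tracker that would supply them in practice is not shown.

    import numpy as np

    dt = 0.1  # s, tracker frame interval (assumed)
    t = np.arange(0.0, 5.0, dt)

    # Time to collision (TTC): follower closing on a lead vehicle (1-D positions, m).
    lead = 40.0 + 8.0 * t          # lead vehicle at 8 m/s
    follow = 10.0 + 14.0 * t       # follower at 14 m/s
    gap = lead - follow            # spacing between the two vehicles
    closing_speed = 14.0 - 8.0
    ttc = np.where(gap > 0, gap / closing_speed, np.nan)
    print(f"minimum TTC over the episode: {np.nanmin(ttc):.2f} s")

    # Post-encroachment time (PET): vehicle reaches a crossing point after a pedestrian clears it.
    t_ped_clears_conflict = 2.8                          # s, from the pedestrian track (assumed)
    vehicle_pos = 45.0 - 11.0 * t                        # vehicle distance to the crossing point (m)
    t_vehicle_arrives = t[np.argmax(vehicle_pos <= 0.0)]
    pet = t_vehicle_arrives - t_ped_clears_conflict
    print(f"PET at the crossing: {pet:.2f} s")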

  13. School Cost Functions: A Meta-Regression Analysis

    ERIC Educational Resources Information Center

    Colegrave, Andrew D.; Giles, Margaret J.

    2008-01-01

    The education cost literature includes econometric studies attempting to determine economies of scale, or estimate an optimal school or district size. Not only do their results differ, but the studies use dissimilar data, techniques, and models. To derive value from these studies requires that the estimates be made comparable. One method to do…

  14. Optical remote sensing for forest area estimation

    Treesearch

    Randolph H. Wynne; Richard G. Oderwald; Gregory A. Reams; John A. Scrivani

    2000-01-01

    The air photo dot-count method is now widely and successfully used for estimating operational forest area in the USDA Forest Inventory and Analysis (FIA) program. Possible alternatives that would provide for more frequent updates, spectral change detection, and maps of forest area include the AVHRR calibration center technique and various Landsat TM classification...

  15. Estimating economic impacts of timber-based industry expansion in northeastern Minnesota.

    Treesearch

    Daniel L. Erkkila; Dietmar W. Rose; Allen L. Lundgren

    1982-01-01

    Analysis of current and projected timber supplies in northeastern Minnesota indicates that expanded timber-based industrial activity could be supported. The impacts of a hypothetical industrial development scenario, including construction of waferboard plants and a wood-fueled power plant, were estimated using an input-output model. Development had noticeable impacts...
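
    As a schematic of the input-output approach mentioned above, the sketch below computes total (direct plus indirect) output requirements from the Leontief inverse, x = (I - A)^(-1) d. The two-sector technical-coefficient matrix and the final-demand change are invented and bear no relation to the Minnesota model's actual sectors or coefficients.

    import numpy as np

    # Hypothetical technical coefficients A[i, j]: input from sector i per dollar of sector j output.
    # Sectors: 0 = forestry/logging, 1 = wood products manufacturing.
    A = np.array([[0.05, 0.30],
                  [0.02, 0.10]])

    # Change in final demand (million $): e.g., new waferboard plant output of $50M in sector 1.
    d = np.array([0.0, 50.0])

    # Total output needed across sectors, including indirect requirements.
    leontief_inverse = np.linalg.inv(np.eye(2) - A)
    x = leontief_inverse @ d
    print("total output change by sector (million $):", np.round(x, 2))
    print("output multiplier for sector 1:", round(leontief_inverse[:, 1].sum(), 3))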

  16. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  17. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    ERIC Educational Resources Information Center

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  18. Lake Powell management alternatives and values: CVM estimates of recreation benefits

    USGS Publications Warehouse

    Douglas, A.J.; Harpman, D.A.

    2004-01-01

    This paper presents data analyses based on information gathered from a recreation survey distributed during the spring of 1997 at Lake Powell. Recreation-linked management issues are the foci of the survey and this discussion. Survey responses to contingent valuation method (CVM) queries included in the questionnaire quantify visitor recreation values. The CVM estimates of the benefits provided by potential resource improvements are compared with the costs of the improvements in a benefit-cost analysis. The CVM questions covered three resources management issues including water quality improvement, sport fish harvest enhancement, and archeological site protection and restoration. The estimated benefits are remarkably high relative to the costs and range from $6 to $60 million per year. The dichotomous choice format was used in each of three resource CVM question scenarios. There were two levels of enhancement for each resource. There are, therefore, several consistency requirements—some of them unique to the dichotomous choice format—that the data and benefit estimates must satisfy. These consistency tests are presented in detail in the ensuing analysis.

  19. Impacts of Climate Variability and Change on Flood Frequency Analysis for Transportation Design

    DOT National Transportation Integrated Search

    2010-09-01

    Planning for construction of roads and bridges over rivers or floodplains includes a hydrologic analysis of rainfall amount and intensity for a defined period. Infrastructure design must be based on accurate rainfall estimates: how much (intensi...

  20. Filter parameter tuning analysis for operational orbit determination support

    NASA Technical Reports Server (NTRS)

    Dunham, J.; Cox, C.; Niklewski, D.; Mistretta, G.; Hart, R.

    1994-01-01

    The use of an extended Kalman filter (EKF) for operational orbit determination support is being considered by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). To support that investigation, analysis was performed to determine how an EKF can be tuned for operational support of a set of earth-orbiting spacecraft. The objectives of this analysis were to design and test a general purpose scheme for filter tuning, evaluate the solution accuracies, and develop practical methods to test the consistency of the EKF solutions in an operational environment. The filter was found to be easily tuned to produce estimates that were consistent, agreed with results from batch estimation, and compared well among the common parameters estimated for several spacecraft. The analysis indicates that there is not a sharply defined 'best' tunable parameter set, especially when considering only the position estimates over the data arc. The comparison of the EKF estimates for the user spacecraft showed that the filter is capable of high-accuracy results and can easily meet the current accuracy requirements for the spacecraft included in the investigation. The conclusion is that the EKF is a viable option for FDD operational support.

  1. Data analysis in emission tomography using emission-count posteriors

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2012-11-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.

  2. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  3. The Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger M.; Hall, Diane M.; Harbo, Peter N.; He, Xiangqun Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael S.; Van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2010-07-01

    The Chandra Source Catalog (CSC) is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public Advanced CCD Imaging Spectrometer imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲30''. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of ≲1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected. In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.

  4. Age estimation in the living: Transition analysis on developing third molars.

    PubMed

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels; Willems, Guy; Boldsen, Jesper

    2015-12-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA in the living on a full set of third molar scores. A cross sectional sample of 854 panoramic radiographs, homogenously distributed by sex and age (15.0-24.0 years), were randomly split in two; a reference sample for obtaining age estimates including a 95% prediction interval according to TA; and a validation sample to test the age estimates against actual age. The mean inaccuracy of the age estimates was 1.82 years (±1.35) in males and 1.81 years (±1.44) in females. The mean bias was 0.55 years (±2.20) in males and 0.31 years (±2.30) in females. Of the actual ages, 93.7% of the males and 95.9% of the females (validation sample) fell within the 95% prediction interval. Moreover, at a sensitivity and specificity of 0.824 and 0.937 in males and 0.814 and 0.827 in females, TA performs well in differentiating between being a minor as opposed to an adult. Although accuracy does not outperform other methods, TA provides unbiased age estimates which minimize the risk of wrongly estimating minors as adults. Furthermore, when corrected ad hoc, TA produces appropriate prediction intervals. As TA allows expansion with additional traits, i.e. stages of development of the left hand-wrist and the clavicle, it has a great potential for future more accurate and reproducible age estimates, including an estimated probability of having attained the legal age limit of 18 years. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Analysis of scanner data for crop inventories

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Cicone, R. C.; Kauth, R. J.; Malila, W. A.; Pont, W.; Thelen, B.; Sellman, A.

    1981-01-01

    Accomplishments for a machine-oriented small grains labeler T&E, and for Argentina ground data collection are reported. Features of the small grains labeler include temporal-spectral profiles, which characterize continuous patterns of crop spectral development, and crop calendar shift estimation, which adjusts for planting date differences of fields within a crop type. Corn and soybean classification technology development for area estimation for foreign commodity production forecasting is reported. Presentations supporting quarterly project management reviews and a quarterly technical interchange meeting are also included.

  6. [Forensic age determination in living individuals at the Institute of Legal Medicine in Berlin (Charité): analysis of the expert reports from 2001 to 2007].

    PubMed

    Schmidt, Sven; Knüfermann, Raidun; Tsokos, Michael; Schmeling, Andreas

    2009-01-01

    The analysis included the age reports provided by the Institute of Legal Medicine in Berlin (Charité) in the period from 2001 to 2007. A total of 416 age estimations were carried out, 289 in criminal and 127 in civil proceedings. 357 of the examined individuals were male, 59 were female. The vast majority of the individuals came from Vietnam. In 112 cases, there were no deviations between the indicated age and the estimated minimum age, while the actual age of the individuals was partly clearly above the estimated age. In 300 cases, there were discrepancies of up to 11 years between the indicated age and the estimated age. The study demonstrates that forensic age estimation in living individuals can make an important contribution to legal certainty.

  7. Estimating life expectancies for US small areas: a regression framework

    NASA Astrophysics Data System (ADS)

    Congdon, Peter

    2014-01-01

    Analysis of area mortality variations and estimation of area life tables raise methodological questions relevant to assessing spatial clustering, and socioeconomic inequalities in mortality. Existing small area analyses of US life expectancy variation generally adopt ad hoc amalgamations of counties to alleviate potential instability of mortality rates involved in deriving life tables, and use conventional life table analysis which takes no account of correlated mortality for adjacent areas or ages. The alternative strategy here uses structured random effects methods that recognize correlations between adjacent ages and areas, and allows retention of the original county boundaries. This strategy generalizes to include effects of area category (e.g. poverty status, ethnic mix), allowing estimation of life tables according to area category, and providing additional stabilization of estimated life table functions. This approach is used here to estimate stabilized mortality rates, derive life expectancies in US counties, and assess trends in clustering and in inequality according to county poverty category.

  8. Approaches to Refining Estimates of Global Burden and Economics of Dengue

    PubMed Central

    Shepard, Donald S.; Undurraga, Eduardo A.; Betancourt-Cravioto, Miguel; Guzmán, María G.; Halstead, Scott B.; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O.; Tapia-Conyer, Roberto; Gubler, Duane J.

    2014-01-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions. PMID:25412506

  9. Approaches to refining estimates of global burden and economics of dengue.

    PubMed

    Shepard, Donald S; Undurraga, Eduardo A; Betancourt-Cravioto, Miguel; Guzmán, María G; Halstead, Scott B; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O; Tapia-Conyer, Roberto; Gubler, Duane J

    2014-11-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions.

  10. Comparative Lifecycle Energy Analysis: Theory and Practice.

    ERIC Educational Resources Information Center

    Morris, Jeffrey; Canzoneri, Diana

    1992-01-01

    Explores the position that more energy is conserved through recycling secondary materials than is generated from municipal solid waste incineration. Discusses one component of a lifecycle analysis--a comparison of energy requirements for manufacturing competing products. Includes methodological issues, energy cost estimates, and difficulties…

  11. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-05-01

    MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
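
    MIXREG itself is a standalone program; as a rough modern analogue, the sketch below fits a random-intercept mixed-effects regression to unbalanced longitudinal data with statsmodels. It does not reproduce MIXREG's autocorrelated-error structures (AR(1), MA(1), ARMA(1,1)), and the simulated data are purely illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)

    # Simulated unbalanced longitudinal data: subjects measured at differing numbers of timepoints.
    rows = []
    for subj in range(60):
        n_obs = rng.integers(2, 6)                                   # 2-5 observations per subject
        times = np.sort(rng.choice(np.arange(0, 10), size=n_obs, replace=False))
        intercept_i = rng.normal(scale=1.5)                          # subject-specific random intercept
        for tm in times:
            y = 2.0 + 0.5 * tm + intercept_i + rng.normal()
            rows.append((subj, tm, y))
    df = pd.DataFrame(rows, columns=["subject", "time", "y"])

    # Random-intercept mixed-effects regression (REML fit by default).
    model = smf.mixedlm("y ~ time", data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())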

  12. Description of data on the Nimbus 7 LIMS map archive tape: Water vapor and nitrogen dioxide

    NASA Technical Reports Server (NTRS)

    Haggard, Kenneth V.; Marshall, B. T.; Kurzeja, Robert J.; Remsberg, Ellis E.; Russell, James M., III

    1988-01-01

    Described is the process by which the Limb Infrared Monitor of the Stratosphere (LIMS) experiment data were analyzed to produce estimates of synoptic maps of water vapor and nitrogen dioxide. In addition to a detailed description of the analysis procedure, several interesting features in the data are discussed and used to demonstrate how the analysis procedure produced the final maps and how one can estimate the uncertainties in the maps. In addition, features in the analysis are noted that would influence how one might use, or interpret, the results. These include subjects such as smoothing and the interpretation of wave components.

  13. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches to error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and, for spruce, the significance of nutrient conditions.

  14. Inference for High-dimensional Differential Correlation Matrices.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
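
    A toy version of the thresholding idea: estimate the difference of two sample correlation matrices and zero out entries below a data-scaled cutoff. The universal threshold constant and the simulated data below are illustrative only; the paper's adaptive, entry-specific thresholding rule and its guarantees are not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)
    n1, n2, p = 200, 220, 30

    # Two groups with identical correlation except one strengthened pair in group 2.
    Sigma1 = np.eye(p)
    Sigma2 = np.eye(p)
    Sigma2[0, 1] = Sigma2[1, 0] = 0.6
    X1 = rng.multivariate_normal(np.zeros(p), Sigma1, size=n1)
    X2 = rng.multivariate_normal(np.zeros(p), Sigma2, size=n2)

    R1 = np.corrcoef(X1, rowvar=False)
    R2 = np.corrcoef(X2, rowvar=False)
    D = R2 - R1

    # Simple universal threshold of order sqrt(log p * (1/n1 + 1/n2)); the paper's
    # procedure uses an adaptive, entry-wise threshold instead.
    thresh = 2.0 * np.sqrt(np.log(p) * (1.0 / n1 + 1.0 / n2))
    D_hat = np.where(np.abs(D) >= thresh, D, 0.0)

    flagged = np.argwhere(np.triu(np.abs(D_hat), k=1) > 0)
    print("threshold:", round(thresh, 3))
    print("entries flagged as differentially correlated:", flagged.tolist())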

  15. A Functional Varying-Coefficient Single-Index Model for Functional Response Data

    PubMed Central

    Li, Jialiang; Huang, Chao; Zhu, Hongtu

    2016-01-01

    Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540

  16. A Functional Varying-Coefficient Single-Index Model for Functional Response Data.

    PubMed

    Li, Jialiang; Huang, Chao; Zhu, Hongtu

    2017-01-01

    Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer's Disease Neuroimaging Initiative (ADNI) study.

  17. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126

  18. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
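
    A minimal sketch of the multistage idea, approximating a concave penalty by recursively reweighted l1 (adaptive Lasso) steps, is shown below for linear regression; the SCAD-derivative weighting, the fixed penalty level, and the simulated data are illustrative assumptions, not the paper's recommended settings.

        # Sketch: multistage adaptive Lasso via column rescaling (weighted l1 penalty).
        import numpy as np
        from sklearn.linear_model import Lasso

        def multistage_lasso(X, y, lam=0.1, a=3.7, stages=3):
            n, p = X.shape
            w = np.ones(p)                       # stage 1: ordinary Lasso weights
            beta = np.zeros(p)
            for _ in range(stages):
                Xw = X / w                       # weighted l1 via column rescaling
                fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(Xw, y)
                beta = fit.coef_ / w
                # SCAD-type derivative as the next-stage weight (bounded away from 0).
                absb = np.abs(beta)
                w = np.where(absb <= lam, 1.0,
                             np.clip((a * lam - absb) / ((a - 1) * lam), 1e-3, 1.0))
            return beta

        # Illustrative sparse regression problem (assumption): p >> n, 5 active predictors.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((100, 200))
        beta_true = np.zeros(200)
        beta_true[:5] = 2.0
        y = X @ beta_true + rng.standard_normal(100)
        print(np.flatnonzero(multistage_lasso(X, y)))    # indices of selected predictors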

  19. COMDYN: Software to study the dynamics of animal communities using a capture-recapture approach

    USGS Publications Warehouse

    Hines, J.E.; Boulinier, T.; Nichols, J.D.; Sauer, J.R.; Pollock, K.H.

    1999-01-01

    COMDYN is a set of programs developed for estimation of parameters associated with community dynamics using count data from two locations or time periods. It is Internet-based, allowing remote users either to input their own data, or to use data from the North American Breeding Bird Survey for analysis. COMDYN allows probability of detection to vary among species and among locations and time periods. The basic estimator for species richness underlying all estimators is the jackknife estimator proposed by Burnham and Overton. Estimators are presented for quantities associated with temporal change in species richness, including rate of change in species richness over time, local extinction probability, local species turnover and number of local colonizing species. Estimators are also presented for quantities associated with spatial variation in species richness, including relative richness at two locations and proportion of species present in one location that are also present at a second location. Application of the estimators to species richness estimation has been previously described and justified. The potential applications of these programs are discussed.
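
    The basic estimator mentioned above can be illustrated with a short sketch of the first-order Burnham and Overton jackknife for species richness; the species-by-occasion detection matrix and simulated detections are assumptions for illustration and do not reflect COMDYN's own input formats or its higher-order corrections.

        # Sketch: first-order jackknife estimator of species richness.
        import numpy as np

        def jackknife_richness(counts):
            """counts: array of shape (n_species, n_occasions), detections (0/1 or counts)."""
            detected = counts > 0
            k = detected.shape[1]                         # number of sampling occasions
            s_obs = int(np.any(detected, axis=1).sum())   # observed species richness
            f1 = int((detected.sum(axis=1) == 1).sum())   # species seen on exactly one occasion
            return s_obs + f1 * (k - 1) / k

        # Illustrative detection histories (assumption): 40 species, 5 occasions.
        rng = np.random.default_rng(2)
        counts = (rng.random((40, 5)) < 0.3).astype(int)
        print(round(jackknife_richness(counts), 1))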

  20. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
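
    The effective sample size notion can be sketched for the simplest case, an indirect comparison of treatments A and B through a common comparator C; the harmonic-mean-style formula below is one common way to operationalize it (the number of patients a direct pairwise meta-analysis would need for comparable precision) and should be read as an illustrative assumption rather than the paper's exact derivation.

        # Sketch: effective sample size of an A-vs-B indirect comparison via comparator C.
        def effective_sample_size(n_ac, n_bc):
            """n_ac, n_bc: patients informing the A-vs-C and B-vs-C comparisons."""
            return (n_ac * n_bc) / (n_ac + n_bc)

        # Illustrative example (assumption): 1200 patients in A-vs-C trials, 800 in B-vs-C trials.
        print(round(effective_sample_size(1200, 800)))   # 480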

  1. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  2. Statistical Analysis of Big Data on Pharmacogenomics

    PubMed Central

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905

  3. Prediction and Estimation of Scaffold Strength with different pore size

    NASA Astrophysics Data System (ADS)

    Muthu, P.; Mishra, Shubhanvit; Sri Sai Shilpa, R.; Veerendranath, B.; Latha, S.

    2018-04-01

    This paper emphasizes the significance of predicting and estimating the mechanical strength of 3D functional scaffolds before the manufacturing process. Prior evaluation of the mechanical strength and structural properties of the scaffold reduces fabrication cost and eases the design process. Detailed analysis and investigation of various mechanical properties, including shear stress equivalence, helped to estimate the effect of porosity and pore size on the functionality of the scaffold. The influence of variation in porosity was examined by a computational approach using finite element analysis (FEA) in the ANSYS application software. The results indicate the suitability of the evolutionary method for the regulation and optimization of the intricate engineering design process.

  4. Implementation of MCA Method for Identification of Factors for Conceptual Cost Estimation of Residential Buildings

    NASA Astrophysics Data System (ADS)

    Juszczyk, Michał; Leśniak, Agnieszka; Zima, Krzysztof

    2013-06-01

    Conceptual cost estimation is important for construction projects. Either underestimation or overestimation of the cost of erecting a building may lead to the failure of a project. In the paper, the authors present an application of multicriteria comparative analysis (MCA) to select factors influencing the cost of erecting a residential building. The aim of the analysis is to indicate key factors useful for conceptual cost estimation in the early design stage. The key factors are investigated on the basis of elementary information about the function, form and structure of the building, and the primary assumptions about the technological and organizational solutions applied in the construction process. These factors are considered as variables of a model whose aim is to make conceptual cost estimation fast and satisfactorily accurate. The whole analysis included three steps: preliminary research, choice of a set of potential variables, and reduction of this set to select the final set of variables. Multicriteria comparative analysis is applied in the problem solution. The analysis performed allowed a group of factors, defined well enough at the conceptual stage of the design process, to be selected as the describing variables of the model.

  5. Processing and analysis techniques involving in-vessel material generation

    DOEpatents

    Schabron, John F [Laramie, WY; Rovani, Jr., Joseph F.

    2011-01-25

    In at least one embodiment, the inventive technology relates to in-vessel generation of a material from a solution of interest as part of a processing and/or analysis operation. Preferred embodiments of the in-vessel material generation (e.g., in-vessel solid material generation) include precipitation; in certain embodiments, analysis and/or processing of the solution of interest may include dissolution of the material, perhaps as part of a successive dissolution protocol using solvents of increasing ability to dissolve. Applications include, but are by no means limited to estimation of a coking onset and solution (e.g., oil) fractionating.

  6. Processing and analysis techniques involving in-vessel material generation

    DOEpatents

    Schabron, John F [Laramie, WY; Rovani, Jr., Joseph F.

    2012-09-25

    In at least one embodiment, the inventive technology relates to in-vessel generation of a material from a solution of interest as part of a processing and/or analysis operation. Preferred embodiments of the in-vessel material generation (e.g., in-vessel solid material generation) include precipitation; in certain embodiments, analysis and/or processing of the solution of interest may include dissolution of the material, perhaps as part of a successive dissolution protocol using solvents of increasing ability to dissolve. Applications include, but are by no means limited to estimation of a coking onset and solution (e.g., oil) fractionating.

  7. Quantifying the bias in the estimated treatment effect in randomized trials having interim analyses and a rule for early stopping for futility.

    PubMed

    Walter, S D; Han, H; Briel, M; Guyatt, G H

    2017-04-30

    In this paper, we consider the potential bias in the estimated treatment effect obtained from clinical trials, the protocols of which include the possibility of interim analyses and an early termination of the study for reasons of futility. In particular, by considering the conditional power at an interim analysis, we derive analytic expressions for various parameters of interest: (i) the underestimation or overestimation of the treatment effect in studies that stop for futility; (ii) the impact of the interim analyses on the estimation of treatment effect in studies that are completed, i.e. that do not stop for futility; (iii) the overall estimation bias in the estimated treatment effect in a single study with such a stopping rule; and (iv) the probability of stopping at an interim analysis. We evaluate these general expressions numerically for typical trial scenarios. Results show that the parameters of interest depend on a number of factors, including the true underlying treatment effect, the difference that the trial is designed to detect, the study power, the number of planned interim analyses and what assumption is made about future data to be observed after an interim analysis. Because the probability of stopping early is small for many practical situations, the overall bias is often small, but a more serious issue is the potential for substantial underestimation of the treatment effect in studies that actually stop for futility. We also consider these ideas using data from an illustrative trial that did stop for futility at an interim analysis. Copyright © 2017 John Wiley & Sons, Ltd.
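
    A minimal sketch of conditional power at an interim analysis, the quantity on which such futility rules and the bias expressions are built, is given below; the current-trend extrapolation and the illustrative futility threshold are assumptions for demonstration, not the paper's settings.

        # Sketch: conditional power at information fraction t, given the interim z-statistic.
        from scipy.stats import norm

        def conditional_power(z_obs, t, theta_std=None, alpha=0.025):
            """theta_std is the assumed standardized drift theta * sqrt(I_max); if None,
            the observed trend z_obs / sqrt(t) is extrapolated ('current trend')."""
            if theta_std is None:
                theta_std = z_obs / t**0.5
            z_alpha = norm.ppf(1 - alpha)
            return norm.cdf((z_obs * t**0.5 + theta_std * (1 - t) - z_alpha) / (1 - t) ** 0.5)

        # Illustrative example (assumption): halfway through the trial, weak interim signal.
        cp = conditional_power(z_obs=0.5, t=0.5)
        print(f"conditional power = {cp:.3f}")   # a low value would trigger a futility stop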

  8. Lifecycle Cost Analysis of Green Infrastructure. U.S. EPA National Stormwater Calculator: Low Impact Development Stormwater Control Cost Estimation Module & Future Enhancements

    EPA Science Inventory

    This presentation will cover the new cost estimation module of the US EPA National Stormwater Calculator and future enhancements, including a new mobile web app version of the tool. The presentation mainly focuses on how the calculator may be used to provide planning level capita...

  9. Robust Magnetotelluric Impedance Estimation

    NASA Astrophysics Data System (ADS)

    Sutarno, D.

    2010-12-01

    Robust magnetotelluric (MT) response function estimators are now in standard use by the induction community. Properly devised and applied, they have the ability to reduce the influence of unusual data (outliers). The estimators always yield impedance estimates that are better than conventional least squares (LS) estimation because 'real' MT data almost never satisfy the statistical assumptions of Gaussianity and stationarity upon which normal spectral analysis is based. This paper discusses the development and application to MT data of robust estimation procedures that can be classified as M-estimators. Starting with a description of the estimators, special attention is given to the recent development of bounded-influence robust estimation, including utilization of the Hilbert Transform (HT) operation on causal MT impedance functions. The resulting robust performance is illustrated using synthetic as well as real MT data.
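
    The M-estimation idea can be illustrated with a short real-valued sketch using Huber weights and iteratively reweighted least squares; actual MT impedance estimation operates on complex-valued spectra with bounded-influence and Hilbert-transform extensions, so this is only a simplified analogue.

        # Sketch: Huber M-estimation of a linear regression via iteratively reweighted least squares.
        import numpy as np

        def huber_irls(X, y, c=1.345, n_iter=50):
            beta = np.linalg.lstsq(X, y, rcond=None)[0]          # LS starting values
            for _ in range(n_iter):
                r = y - X @ beta
                scale = np.median(np.abs(r)) / 0.6745 + 1e-12    # median absolute residual as robust scale
                u = np.abs(r) / scale
                w = np.where(u <= c, 1.0, c / u)                 # Huber weights downweight outliers
                beta = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0]
            return beta

        # Illustrative simulated data (assumption): a few gross outliers contaminate the response.
        rng = np.random.default_rng(3)
        X = np.column_stack([np.ones(100), rng.standard_normal(100)])
        y = X @ np.array([1.0, 2.0]) + rng.standard_normal(100)
        y[:5] += 20.0
        print(huber_irls(X, y))                                  # close to [1, 2] despite outliers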

  10. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  11. Discrete Choice Experiments: A Guide to Model Specification, Estimation and Software.

    PubMed

    Lancsar, Emily; Fiebig, Denzil G; Hole, Arne Risa

    2017-07-01

    We provide a user guide on the analysis of data (including best-worst and best-best data) generated from discrete-choice experiments (DCEs), comprising a theoretical review of the main choice models followed by practical advice on estimation and post-estimation. We also provide a review of standard software. In providing this guide, we endeavour to not only provide guidance on choice modelling but to do so in a way that provides a 'way in' for researchers to the practicalities of data analysis. We argue that choice of modelling approach depends on the research questions, study design and constraints in terms of quality/quantity of data and that decisions made in relation to analysis of choice data are often interdependent rather than sequential. Given the core theory and estimation of choice models is common across settings, we expect the theoretical and practical content of this paper to be useful to researchers not only within but also beyond health economics.
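
    As an illustration of the workhorse model in this literature, the sketch below estimates a conditional (multinomial) logit by maximum likelihood on simulated choice data; the data layout and attribute values are assumptions, and a real analysis would also report standard errors and consider the richer models discussed in the guide.

        # Sketch: conditional logit for discrete-choice data, estimated by maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize

        def neg_loglik(beta, X, choice):
            v = X @ beta                                      # utilities, shape (sets, alternatives)
            v -= v.max(axis=1, keepdims=True)                 # numerical stability
            logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
            return -logp[np.arange(len(choice)), choice].sum()

        # Illustrative simulated choice sets (assumption): 500 sets, 3 alternatives, 4 attributes.
        rng = np.random.default_rng(4)
        n_sets, n_alts, n_attr = 500, 3, 4
        X = rng.standard_normal((n_sets, n_alts, n_attr))
        beta_true = np.array([1.0, -0.5, 0.8, 0.0])
        util = X @ beta_true + rng.gumbel(size=(n_sets, n_alts))
        choice = util.argmax(axis=1)                          # simulated observed choices

        res = minimize(neg_loglik, np.zeros(n_attr), args=(X, choice), method="BFGS")
        print(np.round(res.x, 2))                             # approximately beta_true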

  12. Methods for estimating the magnitude and frequency of floods for urban and small, rural streams in Georgia, South Carolina, and North Carolina, 2011

    USGS Publications Warehouse

    Feaster, Toby D.; Gotvald, Anthony J.; Weaver, J. Curtis

    2014-01-01

    Reliable estimates of the magnitude and frequency of floods are essential for the design of transportation and water-conveyance structures, flood-insurance studies, and flood-plain management. Such estimates are particularly important in densely populated urban areas. In order to increase the number of streamflow-gaging stations (streamgages) available for analysis, expand the geographical coverage that would allow for application of regional regression equations across State boundaries, and build on a previous flood-frequency investigation of rural U.S. Geological Survey streamgages in the Southeast United States, a multistate approach was used to update methods for determining the magnitude and frequency of floods in urban and small, rural streams that are not substantially affected by regulation or tidal fluctuations in Georgia, South Carolina, and North Carolina. The at-site flood-frequency analysis of annual peak-flow data for urban and small, rural streams (through September 30, 2011) included 116 urban streamgages and 32 small, rural streamgages, defined in this report as basins draining less than 1 square mile. The regional regression analysis included annual peak-flow data from an additional 338 rural streamgages previously included in U.S. Geological Survey flood-frequency reports and 2 additional rural streamgages in North Carolina that were not included in the previous Southeast rural flood-frequency investigation for a total of 488 streamgages included in the urban and small, rural regression analysis. The at-site flood-frequency analyses for the urban and small, rural streamgages included the expected moments algorithm, which is a modification of the Bulletin 17B log-Pearson type III method for fitting the statistical distribution to the logarithms of the annual peak flows. Where applicable, the flood-frequency analysis also included low-outlier and historic information. Additionally, the application of a generalized Grubbs-Becks test allowed for the detection of multiple potentially influential low outliers. Streamgage basin characteristics were determined using geographical information system techniques. Initial ordinary least squares regression simulations reduced the number of basin characteristics on the basis of such factors as statistical significance, coefficient of determination, Mallows’ Cp statistic, and ease of measurement of the explanatory variable. Application of generalized least squares regression techniques produced final predictive (regression) equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability flows for urban and small, rural ungaged basins for three hydrologic regions (HR1, Piedmont–Ridge and Valley; HR3, Sand Hills; and HR4, Coastal Plain), which previously had been defined from exploratory regression analysis in the Southeast rural flood-frequency investigation. Because of the limited availability of urban streamgages in the Coastal Plain of Georgia, South Carolina, and North Carolina, additional urban streamgages in Florida and New Jersey were used in the regression analysis for this region. Including the urban streamgages in New Jersey allowed for the expansion of the applicability of the predictive equations in the Coastal Plain from 3.5 to 53.5 square miles.
    Average standard error of prediction for the predictive equations, which is a measure of the average accuracy of the regression equations when predicting flood estimates for ungaged sites, ranges from 25.0 percent for the 10-percent annual exceedance probability regression equation for the Piedmont–Ridge and Valley region to 73.3 percent for the 0.2-percent annual exceedance probability regression equation for the Sand Hills region.

  13. Association of Hypertensive Disorders of Pregnancy With Risk of Neurodevelopmental Disorders in Offspring: A Systematic Review and Meta-analysis.

    PubMed

    Maher, Gillian M; O'Keeffe, Gerard W; Kearney, Patricia M; Kenny, Louise C; Dinan, Timothy G; Mattsson, Molly; Khashan, Ali S

    2018-06-06

    Although research suggests an association between hypertensive disorders of pregnancy (HDP) and autism spectrum disorder (ASD), attention-deficit/hyperactivity disorder (ADHD), and other neurodevelopmental disorders in offspring, consensus is lacking. Given the increasing prevalence of hypertension in pregnancy, it is important to examine the association of HDP with neurodevelopmental outcome. To synthesize the published literature on the association between HDP and risk of neurodevelopmental disorders in offspring in a systematic review and meta-analysis. On the basis of a preprepared protocol, a systematic search of PubMed, CINAHL, Embase, PsycINFO, and Web of Science was performed from inception through June 7, 2017, supplemented by hand searching of reference lists. Two investigators independently reviewed titles, abstracts, and full-text articles. English-language cohort and case-control studies were included in which HDP and neurodevelopmental disorders were reported. Data extraction and quality appraisal were performed independently by 2 reviewers. Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines were followed throughout. Random-effects meta-analyses of estimated pooled odds ratios (ORs) for HDP and ASD and for HDP and ADHD. Stand-alone estimates were reported for all other neurodevelopmental disorders. Of 1166 studies identified, 61 unique articles met inclusion criteria. Twenty studies reported estimates for ASD. Eleven of these (including 777 518 participants) reported adjusted estimates, with a pooled adjusted OR of 1.35 (95% CI, 1.11-1.64). Ten studies reported estimates for ADHD. Six of these (including 1 395 605 participants) reported adjusted estimates, with a pooled adjusted OR of 1.29 (95% CI, 1.22-1.36). Subgroup analyses according to type of exposure (ie, preeclampsia or other HDP) showed no statistically significant differences for ASD or ADHD. Thirty-one studies met inclusion criteria for all other neurodevelopmental disorders. Individual estimates reported for these were largely inconsistent, with few patterns of association observed. Exposure to HDP may be associated with an increase in the risk of ASD and ADHD. These findings highlight the need for greater pediatric surveillance of infants exposed to HDP to allow early intervention that may improve neurodevelopmental outcome.
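
    A minimal sketch of the pooling step used for such adjusted odds ratios, a DerSimonian-Laird random-effects meta-analysis on the log odds ratio scale, is shown below; the input ORs and confidence intervals are made-up numbers for illustration, not the studies in the review.

        # Sketch: DerSimonian-Laird random-effects pooling of adjusted odds ratios.
        import numpy as np

        def pool_random_effects(or_, lcl, ucl):
            y = np.log(or_)                                # log odds ratios
            se = (np.log(ucl) - np.log(lcl)) / (2 * 1.96)  # SE recovered from the 95% CI
            w = 1 / se**2
            q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
            c = w.sum() - np.sum(w**2) / w.sum()
            tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
            w_re = 1 / (se**2 + tau2)
            mu = np.sum(w_re * y) / w_re.sum()
            se_mu = np.sqrt(1 / w_re.sum())
            return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

        # Illustrative made-up study estimates (assumption), not data from the review.
        or_ = np.array([1.2, 1.5, 1.1, 1.4])
        lcl = np.array([0.9, 1.1, 0.8, 1.0])
        ucl = np.array([1.6, 2.0, 1.5, 1.9])
        print([round(v, 2) for v in pool_random_effects(or_, lcl, ucl)])  # pooled OR and 95% CI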

  14. Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.

    PubMed

    Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil

    2014-08-20

    In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of spherical positioning technique and unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system with average error of 2.7% for all the estimated gait parameters. The influence of walking speed on the measurement accuracy of proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.

  15. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    PubMed

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling

    DOE PAGES

    Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...

    2014-07-14

    Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes (including locked mode precursors), automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations, and fishbones.

  17. A new approach for estimating the Jupiter and Saturn gravity fields using Juno and Cassini measurements, trajectory estimation analysis, and a dynamical wind model optimization

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Durante, Daniele; Iess, Luciano; Kaspi, Yohai

    2017-04-01

    The ongoing Juno spacecraft measurements are improving our knowledge of Jupiter's gravity field. Similarly, the Cassini Grand Finale will improve the gravity estimate of Saturn. The analysis of the Juno and Cassini Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity fields of Jupiter and Saturn, additional information needs to be incorporated into the analysis, especially with regard to the planets' wind structures. In this work we propose a new iterative approach for the estimation of Jupiter and Saturn gravity fields, using simulated measurements, a trajectory estimation model, and an adjoint-based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model is used to obtain the gravitational moments. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an optimization method, the likely penetration depth of the winds is computed, and its uncertainty is evaluated. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an estimate of their uncertainties, to be used as a priori information for a new calculation of the gravity field. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that by using this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the added information from the dynamical model which includes the wind structure and its depth. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity moments estimated from the Juno and Cassini radio science experiments.

  18. Can price get the monkey off our back? A meta-analysis of illicit drug demand.

    PubMed

    Gallet, Craig A

    2014-01-01

    Because of the increased availability of price data over the past 15 years, several studies have estimated the demand for illicit drugs, providing 462 estimates of the price elasticity. Results from estimating several meta-regressions reveal that these price elasticity estimates are influenced by a number of study characteristics. For instance, the price elasticity differs across drugs, with its absolute value being smallest for marijuana, compared with cocaine and heroin. Furthermore, price elasticity estimates are sensitive to whether demand is modeled in the short-run or the long-run, measures of quantity and price, whether or not alcohol and other illicit drugs are included in the specification of demand, and the location of demand. However, a number of other factors, including the functional form of demand, several specification issues, the type of data and method used to estimate demand, and the quality of the publication outlet, have less influence on the price elasticity. Copyright © 2013 John Wiley & Sons, Ltd.

  19. An Analysis of Factor Extraction Strategies: A Comparison of the Relative Strengths of Principal Axis, Ordinary Least Squares, and Maximum Likelihood in Research Contexts That Include Both Categorical and Continuous Variables

    ERIC Educational Resources Information Center

    Coughlin, Kevin B.

    2013-01-01

    This study is intended to provide researchers with empirically derived guidelines for conducting factor analytic studies in research contexts that include dichotomous and continuous levels of measurement. This study is based on the hypotheses that ordinary least squares (OLS) factor analysis will yield more accurate parameter estimates than…

  20. Earth resources data analysis program, phase 3

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Tasks were performed in two areas: (1) systems analysis and (2) algorithmic development. The major effort in the systems analysis task was the development of a recommended approach to the monitoring of resource utilization data for the Large Area Crop Inventory Experiment (LACIE). Other efforts included participation in various studies concerning the LACIE Project Plan, the utility of the GE Image 100, and the specifications for a special purpose processor to be used in the LACIE. In the second task, the major effort was the development of improved algorithms for estimating proportions of unclassified remotely sensed data. Also, work was performed on optimal feature extraction and optimal feature extraction for proportion estimation.

  1. Crustal dynamics project data analysis, 1988: VLBI geodetic results, 1979 - 1987

    NASA Technical Reports Server (NTRS)

    Ma, C.; Ryan, J. W.; Caprette, D.

    1989-01-01

    The results obtained by the Goddard VLBI (very long baseline interferometry) Data Analysis Team from the analysis of 712 Mark 3 VLBI geodetic data sets acquired from fixed and mobile observing sites through the end of 1987 are reported. A large solution, GLB401, was used to obtain earth rotation parameters and site velocities. A second large solution, GLB405, was used to obtain baseline evolutions. Radio source positions were estimated globally while nutation offsets were estimated from each data set. Site positions are tabulated on a yearly basis from 1979 through 1988. The results include 55 sites and 270 baselines.

  2. Overcoming bias in estimating the volume-outcome relationship.

    PubMed

    Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D

    2006-02-01

    To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
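
    The instrumental-variables step can be sketched as a simple two-stage least squares estimator with a distance-based instrument for hospital volume; the simulated data, variable names, and effect sizes are assumptions for illustration only.

        # Sketch: two-stage least squares (2SLS) with a distance-based instrument for volume.
        import numpy as np

        def two_sls(y, x_endog, X_exog, Z):
            """Regress the endogenous regressor on instruments plus exogenous controls,
            then use its fitted values in the outcome equation."""
            W1 = np.column_stack([Z, X_exog])                        # first-stage design
            x_hat = W1 @ np.linalg.lstsq(W1, x_endog, rcond=None)[0]
            W2 = np.column_stack([x_hat, X_exog])                    # second-stage design
            return np.linalg.lstsq(W2, y, rcond=None)[0]

        # Illustrative simulated data (assumption): unobserved severity confounds volume.
        rng = np.random.default_rng(5)
        n = 5000
        dist = rng.exponential(10, n)               # relative distance to a high-volume hospital
        u = rng.standard_normal(n)                  # unobserved severity (confounder)
        volume = 5 - 0.2 * dist - 0.5 * u + rng.standard_normal(n)
        risk = rng.standard_normal(n)               # observed clinical risk adjuster
        y = 0.1 - 0.02 * volume + 0.05 * risk + 0.05 * u + 0.1 * rng.standard_normal(n)

        X_exog = np.column_stack([np.ones(n), risk])
        print(two_sls(y, volume, X_exog, dist[:, None]))   # first coefficient is approx -0.02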

  3. Irrigation water demand: A meta-analysis of price elasticities

    NASA Astrophysics Data System (ADS)

    Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.

    2006-01-01

    Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
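
    A minimal sketch of a metaregression of reported elasticity estimates on study characteristics is shown below; the toy covariates (an econometric-method indicator and the water price at which the elasticity was evaluated) and simulated values are assumptions for illustration, not the 24 reviewed studies.

        # Sketch: metaregression of elasticity estimates on study characteristics.
        import numpy as np
        import statsmodels.api as sm

        # Illustrative simulated "study-level" data (assumption).
        rng = np.random.default_rng(8)
        n = 60
        econometric = rng.integers(0, 2, n)          # 1 = econometric study, 0 = programming/field
        price = rng.uniform(10, 100, n)              # irrigation water price in the study
        elasticity = 0.3 + 0.15 * econometric + 0.002 * price + 0.1 * rng.standard_normal(n)

        X = sm.add_constant(np.column_stack([econometric, price]))
        fit = sm.OLS(elasticity, X).fit()
        print(fit.params.round(3))                   # intercept, method effect, price effect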

  4. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  5. Estimation Model of Spacecraft Parameters and Cost Based on a Statistical Analysis of COMPASS Designs

    NASA Technical Reports Server (NTRS)

    Gerberich, Matthew W.; Oleson, Steven R.

    2013-01-01

    The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate the overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with regard to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.

  6. A Geography-Specific Approach to Estimating the Distributional Impact of Highway Tolls: An Application to the Puget Sound Region of Washington State

    PubMed Central

    Plotnick, Robert D.; Romich, Jennifer; Thacker, Jennifer; Dunbar, Matthew

    2011-01-01

    This study contributes to the debate about tolls’ equity impacts by examining the potential economic costs of tolling for low-income and non-low-income households. Using data from the Puget Sound metropolitan region in Washington State and GIS methods to map driving routes from home to work, we examine car ownership and transportation patterns among low-income and non-low-income households. We follow standard practice of estimating tolls’ potential impact only on households with workers who would drive on tolled and non-tolled facilities. We then redo the analysis including broader groups of households. We find that the degree of regressivity is quite sensitive to the set of households included in the analysis. The results suggest that distributional analyses of tolls should estimate impacts on all households in the relevant region in addition to impacts on just users of roads that are currently tolled or likely to be tolled. PMID:21818172

  7. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

    Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are routinely ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.

  8. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    PubMed

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between the two methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement between the methods consisted of underestimation in 3.8% of meals and overestimation in 17.8%. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  9. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests in designs with one factor at two levels, including sample size estimation formulas and their realization through the formulas and the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
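
    A minimal sketch of the corresponding normal-approximation formula for a two-sided difference test between two independent groups is given below; the example numbers are illustrative, and the article's own formulas and the SAS POWER procedure should be consulted for exact values.

        # Sketch: sample size per group for a two-sided test of a difference in means.
        from math import ceil
        from scipy.stats import norm

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            """Sample size per group to detect mean difference delta with within-group SD sigma."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

        # Illustrative example (assumption): detect a 5-unit difference with SD 10.
        print(n_per_group(delta=5, sigma=10))   # about 63 per group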

  10. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
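
    A minimal sketch of the technique described above is shown below: the decay model y = a*exp(-b*t) is linearized by a first-order Taylor expansion, nominal estimates come from a linear fit of the logged data, and the least-squares correction is iterated until the parameter change is small; the single-exponential model and simulated data are assumptions for illustration.

        # Sketch: Gauss-Newton fit of an exponential decay model by iterated linearization.
        import numpy as np

        def fit_exponential(t, y, tol=1e-10, max_iter=50):
            # Initial nominal estimates from the linearized model log(y) = log(a) - b*t.
            slope, intercept = np.polyfit(t, np.log(y), 1)
            a, b = np.exp(intercept), -slope
            for _ in range(max_iter):
                f = a * np.exp(-b * t)
                J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # d/da, d/db
                delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)               # correction step
                a, b = a + delta[0], b + delta[1]
                if np.max(np.abs(delta)) < tol:
                    break
            return a, b

        # Illustrative simulated decay data (assumption).
        rng = np.random.default_rng(6)
        t = np.linspace(0, 5, 60)
        y = 3.0 * np.exp(-0.7 * t) + 0.01 * rng.standard_normal(60)
        print(fit_exponential(t, y))   # close to (3.0, 0.7)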

  11. Industrial Development: Citizen's Workbook for Assessing Economic and Public Finance Impacts.

    ERIC Educational Resources Information Center

    Morse, George; And Others

    A do-it-yourself workbook, this guide is designed to help small, rural communities estimate the impact of new industry on the local economy. Included in this workbook are: (1) an introductory section presenting impact analysis rationale, workbook directions, an exemplary analysis, and explanations re: computer analysis; (2) a list of the 45…

  12. Effect Sizes for Growth-Modeling Analysis for Controlled Clinical Trials in the Same Metric as for Classical Analysis

    ERIC Educational Resources Information Center

    Feingold, Alan

    2009-01-01

    The use of growth-modeling analysis (GMA)--including hierarchical linear models, latent growth models, and general estimating equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the…

  13. Analysis of low flows and selected methods for estimating low-flow characteristics at partial-record and ungaged stream sites in western Washington

    USGS Publications Warehouse

    Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.

    2012-01-01

    Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.

  14. Meta-analysis of the association between short-term exposure to ambient ozone and respiratory hospital admissions

    NASA Astrophysics Data System (ADS)

    Ji, Meng; Cohan, Daniel S.; Bell, Michelle L.

    2011-04-01

    Ozone is associated with health impacts including respiratory outcomes; however, results differ across studies. Meta-analysis is an increasingly important approach to synthesizing evidence across studies. We conducted meta-analysis of short-term ozone exposure and respiratory hospitalizations to evaluate variation across studies and explore some of the challenges in meta-analysis. We identified 136 estimates from 96 studies and investigated how estimates differed by age, ozone metric, season, lag, region, disease category, and hospitalization type. Overall results indicate associations between ozone and various kinds of respiratory hospitalizations; however, study characteristics affected risk estimates. Estimates were similar, but higher, for the elderly compared to all ages and for previous day exposure compared to same day exposure. Comparison across studies was hindered by variation in definitions of disease categories, as some (e.g., asthma) were identified through >= 3 different sets of ICD codes. Although not all analyses exhibited evidence of publication bias, adjustment for publication bias generally lowered overall estimates. Emergency hospitalizations for total respiratory disease increased by 4.47% (95% interval: 2.48, 6.50%) per 10 ppb 24 h ozone among the elderly without adjustment for publication bias and 2.97% (1.05, 4.94%) with adjustment. Comparison of multi-city study results and meta-analysis based on single-city studies further suggested publication bias.

  15. U.S. Geological Survey groundwater toolbox, a graphical and mapping interface for analysis of hydrologic data (version 1.0): user guide for estimation of base flow, runoff, and groundwater recharge from streamflow data

    USGS Publications Warehouse

    Barlow, Paul M.; Cunningham, William L.; Zhai, Tong; Gray, Mark

    2015-01-01

    This report is a user guide for the streamflow-hydrograph analysis methods provided with version 1.0 of the U.S. Geological Survey (USGS) Groundwater Toolbox computer program. These include six hydrograph-separation methods to determine the groundwater-discharge (base-flow) and surface-runoff components of streamflow—the Base-Flow Index (BFI; Standard and Modified), HYSEP (Fixed Interval, Sliding Interval, and Local Minimum), and PART methods—and the RORA recession-curve displacement method and associated RECESS program to estimate groundwater recharge from streamflow data. The Groundwater Toolbox is a customized interface built on the nonproprietary, open source MapWindow geographic information system software. The program provides graphing, mapping, and analysis capabilities in a Microsoft Windows computing environment. In addition to the four hydrograph-analysis methods, the Groundwater Toolbox allows for the retrieval of hydrologic time-series data (streamflow, groundwater levels, and precipitation) from the USGS National Water Information System, downloading of a suite of preprocessed geographic information system coverages and meteorological data from the National Oceanic and Atmospheric Administration National Climatic Data Center, and analysis of data with several preprocessing and postprocessing utilities. With its data retrieval and analysis tools, the Groundwater Toolbox provides methods to estimate many of the components of the water budget for a hydrologic basin, including precipitation; streamflow; base flow; runoff; groundwater recharge; and total, groundwater, and near-surface evapotranspiration.

  16. LFSTAT - An R-Package for Low-Flow Analysis

    NASA Astrophysics Data System (ADS)

    Koffler, D.; Laaha, G.

    2012-04-01

    When analysing daily streamflow data with a focus on low flow and drought, the state of the art is well documented in the Manual on Low-Flow Estimation and Prediction [1] published by the WMO. While it is clear what has to be done, it is not so clear how to perform the analysis and make the calculation as reproducible as possible. Our software solution extends the high-performing statistical open-source software package R to analyse daily streamflow data with a focus on low flows. As command-line based programs are not everyone's preference, we also offer a plug-in for the R-Commander, an easy-to-use graphical user interface (GUI) for analysing data in R. Functionality includes estimation of the most important low-flow indices. Besides the standard flow indices, the BFI and recession constants can be computed. The main applications of L-moment-based extreme value analysis and regional frequency analysis (RFA) are available. Calculation of streamflow deficits is another important feature. The most common graphics are prepared and can easily be modified according to the user's preferences. Graphics include hydrographs for different periods, flexible streamflow deficit plots, baseflow visualisation, flow duration curves, and double mass curves, just to name a few. The package uses an S3 class called lfobj (low-flow objects). Once these objects are created, the analysis can be performed by mouse-click, and a script can be saved to make the analysis easily reproducible. At the moment we offer implementations of all major methods proposed in the WMO Manual on Low-Flow Estimation and Prediction. Future plans include, for example, report export to odt files using odf-weave. We hope to offer a tool that eases and structures the analysis of streamflow data focusing on low flows and makes the analysis transparent and communicable. The package is designed for hydrological research and water management practice, but can also be used for teaching students the first steps in low-flow hydrology.
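
    For illustration (in Python rather than R), the sketch below computes one core low-flow index the package provides, the annual series of 7-day minimum flows and its mean (MAM7); the synthetic daily record is an assumption, and lfstat itself works on its own low-flow objects within R.

        # Sketch: annual 7-day minimum flows and their mean (MAM7) from a daily record.
        import numpy as np
        import pandas as pd

        def annual_7day_minima(flow):
            """flow: pandas Series of daily discharge indexed by date."""
            q7 = flow.rolling(window=7, min_periods=7).mean()   # 7-day moving average
            return q7.groupby(q7.index.year).min()              # annual minima of the 7-day flow

        # Illustrative synthetic daily discharge record (assumption): seasonal signal plus noise.
        dates = pd.date_range("2000-01-01", "2009-12-31", freq="D")
        rng = np.random.default_rng(7)
        flow = pd.Series(
            5 + 3 * np.sin(2 * np.pi * dates.dayofyear / 365) + rng.gamma(2, 0.5, len(dates)),
            index=dates,
        )
        am7 = annual_7day_minima(flow)
        print(am7.round(2).to_dict())           # annual 7-day minima
        print("MAM7 =", round(am7.mean(), 2))   # mean annual 7-day minimum flow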

  17. Mean population salt intake estimated from 24-h urine samples and spot urine samples: a systematic review and meta-analysis.

    PubMed

    Huang, Liping; Crino, Michelle; Wu, Jason H Y; Woodward, Mark; Barzi, Federica; Land, Mary-Anne; McLean, Rachael; Webster, Jacqui; Enkhtungalag, Batsaikhan; Neal, Bruce

    2016-02-01

    Estimating equations based on spot urine samples have been identified as a possible alternative approach to 24-h urine collections for determining mean population salt intake. This review compares estimates of mean population salt intake based upon spot and 24-h urine samples. We systematically searched for all studies that reported estimates of daily salt intake based upon both spot and 24-h urine samples for the same population. The associations between the two were quantified and compared overall and in subsets of studies. A total of 538 records were identified, 108 were assessed as full text and 29 were included. The included studies involved 10,414 participants from 34 countries and made 71 comparisons available for the primary analysis. Overall average population salt intake estimated from 24-h urine samples was 9.3 g/day compared with 9.0 g/day estimated from the spot urine samples. Estimates based upon spot urine samples had excellent sensitivity (97%) and specificity (100%) at classifying mean population salt intake as above or below the World Health Organization maximum target of 5 g/day. Compared with the 24-h samples, estimates based upon spot urine overestimated intake at lower levels of consumption and underestimated intake at higher levels of consumption. Estimates of mean population salt intake based upon spot urine samples can provide countries with a good indication of mean population salt intake and whether action on salt consumption is required. Published by Oxford University Press on behalf of the International Epidemiological Association 2015. This work is written by US Government employees and is in the public domain in the US.

  18. Absolute probability estimates of lethal vessel strikes to North Atlantic right whales in Roseway Basin, Scotian Shelf.

    PubMed

    van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T

    2012-10-01

    Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.

  19. Structural weights analysis of advanced aerospace vehicles using finite element analysis

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.; Lentz, Christopher A.; Rehder, John J.; Naftel, J. Chris; Cerro, Jeffrey A.

    1989-01-01

    A conceptual/preliminary level structural design system has been developed for structural integrity analysis and weight estimation of advanced space transportation vehicles. The system includes a three-dimensional interactive geometry modeler, a finite element pre- and post-processor, a finite element analyzer, and a structural sizing program. Inputs to the system include the geometry, surface temperature, material constants, construction methods, and aerodynamic and inertial loads. The results are a sized vehicle structure capable of withstanding the static loads incurred during assembly, transportation, operations, and missions, and a corresponding structural weight. An analysis of the Space Shuttle external tank is included in this paper as a validation and benchmark case of the system.

  20. Risk of myocardial infarction and stroke in bipolar disorder: a systematic review and exploratory meta-analysis

    PubMed Central

    Prieto, M.L.; Cuéllar-Barboza, A.B.; Bobo, W.V.; Roger, V.L.; Bellivier, F.; Leboyer, M.; West, C.P.; Frye, M.A.

    2016-01-01

    Objective To review the evidence on and estimate the risk of myocardial infarction and stroke in bipolar disorder. Method A systematic search using MEDLINE, EMBASE, PsycINFO, Web of Science, Scopus, Cochrane Database of Systematic Reviews, and bibliographies (1946 – May, 2013) was conducted. Case-control and cohort studies of bipolar disorder patients age 15 or older with myocardial infarction or stroke as outcomes were included. Two independent reviewers extracted data and assessed quality. Estimates of effect were summarized using random-effects meta-analysis. Results Five cohort studies including 13 115 911 participants (27 092 bipolar) were included. Due to the use of registers, different statistical methods, and inconsistent adjustment for confounders, there was significant methodological heterogeneity among studies. The exploratory meta-analysis yielded no evidence for a significant increase in the risk of myocardial infarction: [relative risk (RR): 1.09, 95% CI 0.96–1.24, P = 0.20; I2 = 6%]. While there was evidence of significant study heterogeneity, the risk of stroke in bipolar disorder was significantly increased (RR 1.74, 95% CI 1.29–2.35; P = 0.0003; I2 = 83%). Conclusion There may be a differential risk of myocardial infarction and stroke in patients with bipolar disorder. Confidence in these pooled estimates was limited by the small number of studies, significant heterogeneity and dissimilar methodological features. PMID:24850482
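
    For readers unfamiliar with the pooling step used here, the sketch below shows a standard DerSimonian-Laird random-effects meta-analysis of log relative risks; the study estimates are invented for illustration and are not the data from this review.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Random-effects pooling of log relative risks (DerSimonian-Laird)."""
    w = 1.0 / se**2                                 # inverse-variance (fixed-effect) weights
    theta_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - theta_fe)**2)          # Cochran's Q
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = 1.0 / (se**2 + tau2)
    theta_re = np.sum(w_re * log_rr) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return theta_re, se_re, i2

# Hypothetical log(RR) estimates and standard errors for five cohort studies.
log_rr = np.log(np.array([1.9, 1.4, 2.3, 1.2, 1.7]))
se = np.array([0.15, 0.20, 0.25, 0.10, 0.30])
theta, se_pooled, i2 = dersimonian_laird(log_rr, se)
lo, hi = np.exp(theta - 1.96 * se_pooled), np.exp(theta + 1.96 * se_pooled)
print(f"pooled RR = {np.exp(theta):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```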

  1. Potential impacts of the Alberta fetal alcohol spectrum disorder service networks on secondary disabilities: a cost-benefit analysis.

    PubMed

    Thanh, Nguyen Xuan; Moffatt, Jessica; Jacobs, Philip; Chuck, Anderson W; Jonsson, Egon

    2013-01-01

    To estimate the break-even effectiveness of the Alberta Fetal Alcohol Spectrum Disorder (FASD) Service Networks in reducing occurrences of secondary disabilities associated with FASD. The secondary disabilities addressed within this study include crime, homelessness, mental health problems, and school disruption (for children) or unemployment (for adults). We used a cost-benefit analysis approach in which the benefits of the service networks were the cost difference between two scenarios: having the 12 service networks in place across Alberta, and having no service network. We used a threshold analysis to estimate the break-even effectiveness (i.e. the effectiveness level at which the service networks become cost-saving). If no network were in place throughout the province, the secondary disabilities would cost $22.85 million per year (including $8.62 million for adults and $14.24 million for children). Given that the cost of the networks was $6.12 million per year, the break-even effectiveness was estimated at 28% (range: 25% to 32%). Although not all benefits associated with the service networks are included (the primary benefit to those experiencing FASD, the benefits to FASD caregivers, and the preventative benefits are excluded), the networks will "pay off" against the economic and social burden associated with secondary disabilities if their effectiveness in reducing those disabilities reaches 28%.
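
    The break-even logic can be checked from the figures in the abstract: the networks pay for themselves when the fraction of the secondary-disability burden they avert is at least the network cost divided by the total burden. The quick check below gives roughly 27%; the published 28% (range 25% to 32%) presumably reflects the full threshold analysis rather than this single ratio.

```python
# Figures quoted in the abstract (millions of dollars per year).
burden_without_networks = 22.85   # annual cost of secondary disabilities with no network
network_cost = 6.12               # annual cost of running the 12 service networks

# Break-even effectiveness: the fraction of that burden the networks must avert
# for the averted costs to equal the cost of running the networks.
break_even = network_cost / burden_without_networks
print(f"break-even effectiveness ~ {break_even:.0%}")   # about 27%, close to the reported 28%
```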

  2. Defining Tsunami Magnitude as Measure of Potential Impact

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Tang, L.

    2016-12-01

    The goal of tsunami forecasting, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, estimating tsunami energy from available measurements at coastal sea-level stations carries significant uncertainties and has been virtually impossible in real time, before a tsunami impacts coastlines. The slow process of estimating tsunami magnitude, including the collection of vast amounts of coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. The uncertainties of these estimates also made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact, since seismic data are available for real-time processing and ample seismic data are available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative impact estimates, since the relation between the earthquake and the generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful in tsunami warning as a quick estimate of tsunami impact, and in post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present applications of the magnitude analysis to several historical events for inter-comparison with existing methods.

  3. Microcephaly Prevalence in Infants Born to Zika Virus-Infected Women: A Systematic Review and Meta-Analysis

    PubMed Central

    2017-01-01

    Zika virus is an emergent flavivirus transmitted by Aedes genus mosquitoes that recently reached the Americas and was soon implicated in an increase of microcephaly incidence. The objective of the present study is to systematically review the published data and perform a meta-analysis to estimate the prevalence of microcephaly in babies born to Zika virus-infected women during pregnancy. We searched PubMed and Cochrane databases, included cohort studies, and excluded case reports and case series publications. We extracted sample sizes and the number of microcephaly cases from eight studies, which permitted a calculation of prevalence rates that were pooled in a random-effects model meta-analysis. We estimated a prevalence of microcephaly of 2.3% (95% CI = 1.0–5.3%) among all pregnancies. Limitations include mixed samples of women infected at different stages of pregnancy, since it is known that infection in the first trimester is associated with a higher risk of congenital anomalies. The estimates are deceptively low, given the devastating impact the infection causes on children and their families. We hope our study contributes to public health knowledge to fight Zika virus epidemics to protect mothers and their newborns. PMID:28783051

  4. Factors Associated With Changes in Body Composition Shortly After Orthotopic Liver Transplantation: The Potential Influence of Immunosuppressive Agents.

    PubMed

    Brito-Costa, Ana; Pereira-da-Silva, Luís; Papoila, Ana Luísa; Alves, Marta; Mateus, Élia; Nolasco, Fernando; Barroso, Eduardo

    2016-08-01

    This study aimed to determine factors associated with body composition changes shortly after liver transplantation (LTx), including the influence of immunosuppressive agents. The combined resting energy expenditure (REE) and handgrip strength provided a valuable assessment in the interpretation of body composition data. This observational single-center study included a cohort of consecutive end-stage liver disease patients with indications for LTx over 2 years. Cyclosporine was preferred for diabetic, hepatitis C-infected, and human immunodeficiency virus-infected patients per the transplant center protocol. Subjective Global Assessment, handgrip strength, multifrequency bioelectrical impedance analysis, and REE measurements were collected. The assessments were performed before LTx (T0) and at medians of 9 (T1) and 36 (T2) days after LTx. The fat mass index (FMI) and lean mass index (LMI) were surrogates of adiposity and skeletal muscle, respectively. Multiple linear regression analysis was used. Fifty-six patients with a mean age of 53.7 (8.5) years were included; 87.5% were men. Preoperative undernourishment by Subjective Global Assessment (β estimate = 17.9; P = 0.004) and absence of drug addiction (β estimate = 14.6; P = 0.049) were associated with FMI increase. Higher REE at T1 (per 100 kcal) was associated with LMI increase (β estimate = 1.70; P = 0.012) and body cell mass increase (β estimate = 1.60; P = 0.049). The cyclosporine-based regimen was associated with FMI decrease (β estimate = -25.64; P < 0.001) and LMI increase (β estimate = 23.76; P < 0.001) when compared with a tacrolimus-based regimen. Steroids did not affect body composition. The cyclosporine-based regimen was independently associated with decreased adiposity and increased skeletal muscle compared with the tacrolimus-based regimen. Future randomized controlled trials are needed to confirm these findings.

  5. Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, Michael; Schellenberg, Josh; Blundell, Marshall

    2015-01-01

    This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
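
    As background on the model-selection step mentioned above, the sketch below shows a generic backwards stepwise procedure that drops a predictor whenever doing so lowers the AIC; it uses an ordinary least-squares fit from statsmodels on made-up data and is not the two-part interruption-cost model itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(y, X):
    """Drop one predictor at a time as long as doing so lowers the AIC."""
    cols = list(X.columns)
    best_aic = sm.OLS(y, sm.add_constant(X[cols])).fit().aic
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for col in list(cols):
            trial = [c for c in cols if c != col]
            aic = sm.OLS(y, sm.add_constant(X[trial])).fit().aic
            if aic < best_aic:
                best_aic, cols, improved = aic, trial, True
    return cols, best_aic

# Hypothetical data: only x1 and x2 actually drive the outcome.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(200, 4)), columns=["x1", "x2", "x3", "x4"])
y = 2.0 * X["x1"] - 1.5 * X["x2"] + rng.normal(size=200)
print(backward_stepwise(y, X))
```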

  6. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  7. Estimating acreage by double sampling using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)

    1982-01-01

    Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure resulted in a more cost effective combination when it was used with the lower cost/higher variance representative of the existing procedures.
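
    The cost/variance trade-off evaluated here follows textbook double-sampling theory: with a cheap procedure applied to a large first-phase sample of size n' and an expensive procedure applied to a subsample of size n, the regression estimator has variance of roughly S^2(1-rho^2)/n + S^2 rho^2/n', and the cost-optimal allocation splits the budget according to the correlation rho and the unit costs. The sketch below applies that standard result with illustrative numbers; it is not the LACIE segment data or the authors' exact formulation.

```python
import numpy as np

def optimal_double_sampling(rho, c_expensive, c_cheap, budget, s2=1.0):
    """Cost-optimal second-phase (n) and first-phase (n_prime) sizes for the
    double-sampling regression estimator, V ~ s2*(1-rho^2)/n + s2*rho^2/n_prime."""
    k_n = np.sqrt((1 - rho**2) / c_expensive)       # n proportional to this
    k_np = np.sqrt(rho**2 / c_cheap)                # n_prime proportional to this
    scale = budget / (c_expensive * k_n + c_cheap * k_np)
    n, n_prime = scale * k_n, scale * k_np
    var = s2 * (1 - rho**2) / n + s2 * rho**2 / n_prime
    return n, n_prime, var

# Illustrative: a cheap LANDSAT-style procedure versus an expensive reference procedure.
n, n_prime, var = optimal_double_sampling(rho=0.8, c_expensive=10.0, c_cheap=1.0, budget=1000.0)
print(f"n = {n:.0f} expensive units, n' = {n_prime:.0f} cheap units, variance ~ {var:.4f}")
```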

  8. 37 CFR 351.10 - Evidence.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...” include still photographs, video tapes, and motion pictures. (2) Separation of irrelevant portions... considered in the analysis, the techniques of data collection, the techniques of estimation and testing, and...

  9. 37 CFR 351.10 - Evidence.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...” include still photographs, video tapes, and motion pictures. (2) Separation of irrelevant portions... considered in the analysis, the techniques of data collection, the techniques of estimation and testing, and...

  10. 37 CFR 351.10 - Evidence.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...” include still photographs, video tapes, and motion pictures. (2) Separation of irrelevant portions... considered in the analysis, the techniques of data collection, the techniques of estimation and testing, and...

  11. 37 CFR 351.10 - Evidence.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...” include still photographs, video tapes, and motion pictures. (2) Separation of irrelevant portions... considered in the analysis, the techniques of data collection, the techniques of estimation and testing, and...

  12. Computer program for design analysis of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1976-01-01

    A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.

  13. Satellite Power Systems (SPS) space transportation cost analysis and evaluation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A picture of Satellite Power System (SPS) space transportation costs at the present time is given with respect to the accuracy of the stated estimates, the reasonableness of the methods used, the assumptions made, and the uncertainty associated with the estimates. The approach used consists of examining space transportation costs from several perspectives to perform a variety of sensitivity analyses or reviews and to examine the findings in terms of internal consistency and external comparison with analogous systems. These approaches are summarized as a theoretical and historical review, including a review of stated and unstated assumptions used to derive the costs, and a performance or technical review. These reviews cover the overall transportation program as well as the individual vehicles proposed. The review of overall cost assumptions is the principal means used for estimating the cost uncertainty derived. The cost estimates used as the best current estimate are included.

  14. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  15. Crustal dynamics project data analysis, 1987. Volume 2: Mobile VLBI geodetic results, 1982-1986

    NASA Technical Reports Server (NTRS)

    Ma, C.; Ryan, J. W.

    1987-01-01

    The Goddard VLBI group reports the results of analyzing 101 Mark III data sets acquired from mobile observing sites through the end of 1986 and available to the Crustal Dynamics Project. The fixed VLBI observations at Hat Creek, Ft. Davis, Mojave, and OVRO are included as they participate heavily in the mobile schedules. One large solution GLB171 was used to obtain baseline length and transverse evolutions. Radio source positions were estimated globally, while nutation offsets were estimated from each data set. The results include 28 mobile sites.

  16. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  17. Per Capita Alcohol Consumption and Suicide Rates in the U.S., 1950-2002

    ERIC Educational Resources Information Center

    Landberg, Jonas

    2009-01-01

    The aim of this paper was to estimate how suicide rates in the United States are affected by changes in per capita alcohol consumption during the postwar period. The analysis included annual suicide rates and per capita alcohol consumption data (total and beverage specific) for the period 1950-2002. Gender- and age-specific models were estimated using the…

  18. Induced abortion rate in Iran: a meta-analysis.

    PubMed

    Motaghi, Zahra; Poorolajal, Jalal; Keramat, Afsaneh; Shariati, Mohammad; Yunesian, Masud; Masoumi, Seyyedeh Zahra

    2013-10-01

    About 44 million induced abortions take place worldwide annually, of which 50% are unsafe. The results of studies investigating the induced abortion rate in Iran are inconsistent. The aim of this meta-analysis was to estimate the incidence rate of induced abortion in Iran. National and international electronic databases, as well as conference databases, were searched up to July 2012. Reference lists of articles were screened and the studies' authors were contacted for additional unpublished studies. Cross-sectional studies addressing induced abortion in Iran were included in this meta-analysis. The primary outcome of interest was the induced abortion rate (the number of abortions per 1000 women aged 15-44 years in a year) or ratio (the number of abortions per 100 live births in a year). The secondary outcome of interest was the prevalence of unintended pregnancies (the number of mistimed, unplanned, or unwanted pregnancies per total pregnancies). Data were analyzed using random-effects models. Of the 603 studies retrieved using the search strategy, 10 studies involving 102,394 participants were eventually included in the meta-analysis. The induced abortion rate and ratio were estimated as 8.9 per 1000 women aged 15-44 years (95% CI: 5.46, 12.33) and 5.34 per 100 live births (95% CI: 3.61, 7.07), respectively. The prevalence of unintended pregnancy was estimated as 27.94 per 100 pregnant women (95% CI: 23.46, 32.42). The results of this meta-analysis provide a better understanding of the incidence of induced abortion in Iran compared with other developing countries in Asia. However, additional sources of data on abortion other than medical records and survey studies are needed to estimate the true rate of unsafe abortion in Iran.

  19. Economic analysis of measles elimination program in the Republic of Korea, 2001: a cost benefit analysis study.

    PubMed

    Bae, Geun-Ryang; Choe, Young June; Go, Un Yeong; Kim, Yong-Ik; Lee, Jong-Koo

    2013-05-31

    In this study, we modeled the cost-benefit analysis for different measles vaccination strategies based upon three different measles-containing vaccines in Korea, 2001. We employed an economic analysis model using vaccination coverage data and population-based measles surveillance data, along with available estimates of the costs for the different strategies. In addition, we included an analysis of the benefit of reducing complications from mumps and rubella. We evaluated four different strategies: strategy 1, keep-up program with a second dose measles-mumps-rubella (MMR) vaccine at 4-6 years without catch-up campaign; strategy 2, additional catch-up campaign with measles (M) vaccine; strategy 3, catch-up campaign with measles-rubella (MR) vaccine; and strategy 4, catch-up campaign with MMR vaccine. The cost of vaccination included the cost of vaccines, vaccination practices and other administrative expenses. The direct benefit was estimated using data from the National Health Insurance Company, a government-operated system that reimburses all medical costs spent on designated illnesses in Korea. With the routine one-dose MMR vaccination program, we estimated a baseline of 178,560 measles cases over 20 years; when the catch-up campaign with M, MR or MMR vaccines was conducted, we estimated that the measles cases would decrease to 5936 cases. Among all strategies, the two-dose MMR keep-up program with MR catch-up campaign showed the highest benefit-cost ratio of 1.27 with a net benefit of 51.6 billion KRW. Across different vaccination strategies, our findings suggest that an MR catch-up campaign in conjunction with a two-dose MMR keep-up program was the most appropriate option in terms of economic costs and public health effects associated with a measles elimination strategy in Korea. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Power Watch: Increasing Transparency and Accessibility of Data in the Global Power Sector to Accelerate the Transition to a Lower Carbon Economy

    NASA Astrophysics Data System (ADS)

    Hennig, R. J.; Friedrich, J.; Malaguzzi Valeri, L.; McCormick, C.; Lebling, K.; Kressig, A.

    2016-12-01

    The Power Watch project will offer open data on the global electricity sector starting with power plants and their impacts on climate and water systems; it will also offer visualizations and decision making tools. Power Watch will create the first comprehensive, open database of power plants globally by compiling data from national governments, public and private utilities, transmission grid operators, and other data providers to create a core dataset that has information on over 80% of global installed capacity for electrical generation. Power plant data will at a minimum include latitude and longitude, capacity, fuel type, emissions, water usage, ownership, and annual generation. By providing data that is both comprehensive and publicly available, this project will support decision making and analysis by actors across the economy and in the research community. The Power Watch research effort focuses on creating a global standard for power plant information, gathering and standardizing data from multiple sources, matching information from multiple sources on a plant level, testing cross-validation approaches (regional statistics, crowdsourcing, satellite data, and others) and developing estimation methodologies for generation, emissions, and water usage. When not available from official reports, emissions, annual generation, and water usage will be estimated. Water use estimates of power plants will be based on capacity, fuel type and satellite imagery to identify cooling types. This analysis is being piloted in several states in India and will then be scaled up to a global level. Other planned applications of the Power Watch data include improving understanding of energy access, air pollution, emissions estimation, stranded asset analysis, life cycle analysis, tracking of proposed plants and curtailment analysis.

  1. An introduction to Bayesian statistics in health psychology.

    PubMed

    Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske

    2017-09-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.
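
    As a concrete illustration of the prior-to-posterior step the article describes, the sketch below performs a conjugate normal update for a mean blood-pressure change with an assumed known sampling variance, and compares two priors, which also mimics the kind of prior sensitivity check the article recommends. The prior settings and data are invented for illustration and are not the article's example.

```python
import numpy as np

def normal_posterior(prior_mean, prior_var, data, sigma2):
    """Posterior of a normal mean: normal prior, known sampling variance sigma2."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / sigma2)
    return post_mean, post_var

# Hypothetical change in systolic blood pressure (mmHg) after an acute stressor.
rng = np.random.default_rng(42)
data = rng.normal(loc=8.0, scale=6.0, size=25)

# Compare a weakly informative prior with a sceptical prior centred at no change.
for label, (m0, v0) in {"weak": (0.0, 100.0), "sceptical": (0.0, 4.0)}.items():
    mean, var = normal_posterior(m0, v0, data, sigma2=36.0)
    print(f"{label:>9} prior -> posterior mean {mean:.2f} mmHg, sd {np.sqrt(var):.2f}")
```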

  2. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.

  3. Compatibility check of measured aircraft responses using kinematic equations and extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Klein, V.; Schiess, J. R.

    1977-01-01

    An extended Kalman filter smoother and a fixed point smoother were used for estimation of the state variables in the six degree of freedom kinematic equations relating measured aircraft responses and for estimation of unknown constant bias and scale factor errors in measured data. The computing algorithm includes an analysis of residuals which can improve the filter performance and provide estimates of measurement noise characteristics for some aircraft output variables. The technique developed was demonstrated using simulated and real flight test data. Improved accuracy of measured data was obtained when the data were corrected for estimated bias errors.
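
    The core idea of estimating a constant measurement bias by augmenting the state vector can be shown with a plain linear Kalman filter. The sketch below is a toy one-dimensional example (known velocity input, position measurement with an unknown constant bias), not the six-degree-of-freedom kinematic formulation or the extended filter used in the report; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n_steps = 0.1, 300
true_bias, meas_sd = 2.5, 0.5

# Truth: position integrates a known velocity input; measurement = position + bias + noise.
u = np.sin(0.2 * np.arange(n_steps))              # known velocity input
x_true = np.cumsum(u * dt)
z = x_true + true_bias + rng.normal(0.0, meas_sd, n_steps)

# Augmented state [position, bias]; the bias is modelled as a constant.
F = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[dt], [0.0]])
H = np.array([[1.0, 1.0]])
Q = np.diag([1e-4, 1e-8])                          # small process noise
R = np.array([[meas_sd**2]])

x = np.zeros((2, 1))
P = np.diag([1.0, 10.0])
for k in range(n_steps):
    # Predict
    x = F @ x + B * u[k]
    P = F @ P @ F.T + Q
    # Update
    y = z[k] - (H @ x)                             # innovation (residual)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated bias {x[1, 0]:.2f} (true {true_bias})")
```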

  4. Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research

    PubMed Central

    Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Diane, Zheng D.

    2017-01-01

    Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557

  5. Estimation of treatment effects in all-comers randomized clinical trials with a predictive marker.

    PubMed

    Choai, Yuki; Matsui, Shigeyuki

    2015-03-01

    Recent advances in genomics and biotechnologies have accelerated the development of molecularly targeted treatments and accompanying markers to predict treatment responsiveness. However, it is common at the initiation of a definitive phase III clinical trial that there is no compelling biological basis or early trial data for a candidate marker regarding its capability in predicting treatment effects. In this case, it is reasonable to include all patients as eligible for randomization, but to plan for prospective subgroup analysis based on the marker. One analysis plan in such all-comers designs is the so-called fallback approach that first tests for overall treatment efficacy and then proceeds to testing in a biomarker-positive subgroup if the first test is not significant. In this approach, owing to the adaptive nature of the analysis and a correlation between the two tests, a bias will arise in estimating the treatment effect in the biomarker-positive subgroup after a non-significant first overall test. In this article, we formulate the bias function and show a difficulty in obtaining unbiased estimators for a whole range of an associated parameter. To address this issue, we propose bias-corrected estimation methods, including those based on an approximation of the bias function under a bounded range of the parameter using polynomials. We also provide an interval estimation method based on a bivariate doubly truncated normal distribution. Simulation experiments demonstrated a success in bias reduction. Application to a phase III trial for lung cancer is provided. © 2014, The International Biometric Society.

  6. Annual Review of Research Under the Joint Services Electronics Program.

    DTIC Science & Technology

    1978-10-01

    Electronic Science at Texas Tech University. Specific topics covered include fault analysis, stochastic control and estimation, nonlinear control, multidimensional system theory, optical noise, and pattern recognition.

  7. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m/s. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.
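
    A minimal version of the undecimated ("à trous") decomposition with the B3 spline scaling function can be written directly with NumPy. The 1-D sketch below (SAR images would use the separable 2-D form) produces the wavelet planes as differences between successive smoothings; it illustrates the transform only, not the full direction-retrieval pipeline, and the test signal is synthetic.

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3 spline scaling kernel

def a_trous(signal, n_scales):
    """Undecimated 'a trous' wavelet transform: returns wavelet planes + residual."""
    c = signal.astype(float)
    planes = []
    for j in range(n_scales):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps ("holes").
        step = 2**j
        kernel = np.zeros((len(B3) - 1) * step + 1)
        kernel[::step] = B3
        c_next = np.convolve(c, kernel, mode="same")
        planes.append(c - c_next)                     # detail at scale j
        c = c_next
    return planes, c                                  # c is the coarse residual

# Illustrative signal: low-frequency trend plus oscillation plus noise.
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 512)
signal = np.sin(0.5 * x) + 0.3 * np.sin(8 * x) + 0.1 * rng.normal(size=x.size)
planes, residual = a_trous(signal, n_scales=4)
print([f"scale {j}: var {p.var():.4f}" for j, p in enumerate(planes)])

# Perfect reconstruction: the planes plus the residual sum back to the signal.
assert np.allclose(sum(planes) + residual, signal)
```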

  8. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

    This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC), called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimensions unchanged. Accurate and fast convergence of the AC/DC system can be achieved with the adaptive weighting function method. The method also provides technical support for simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
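
    For context, conventional power-system state estimation is a weighted least-squares problem with solution x_hat = (H'WH)^-1 H'W z, where the measurement weights are collected in W. The sketch below shows that generic linear WLS step on a toy measurement model; the paper's contribution, adapting those weights iteratively without changing the matrix dimensions, is not reproduced here, and the numbers are illustrative.

```python
import numpy as np

def wls_state_estimate(H, z, weights):
    """Linear(ised) weighted least-squares state estimate x = (H'WH)^-1 H'W z."""
    W = np.diag(weights)
    G = H.T @ W @ H                                  # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy example: 2 states observed through 4 measurements of differing accuracy.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [1.0, 1.0]])
x_true = np.array([1.05, 0.98])
rng = np.random.default_rng(11)
sigmas = np.array([0.01, 0.01, 0.05, 0.05])          # accurate and less accurate meters
z = H @ x_true + rng.normal(0.0, sigmas)
weights = 1.0 / sigmas**2                             # higher weight for better meters
print(wls_state_estimate(H, z, weights))
```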

  9. Estimating gene function with least squares nonnegative matrix factorization.

    PubMed

    Wang, Guoli; Ochs, Michael F

    2007-01-01

    Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
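
    The abstract does not give the update rules, but a common way to build per-measurement uncertainties into NMF is to minimise a weighted Frobenius objective, sum_ij W_ij (V_ij - (AB)_ij)^2 with W_ij = 1/sigma_ij^2, using weighted versions of the Lee-Seung multiplicative updates. The sketch below implements that generic weighted scheme on synthetic data; it is not necessarily the exact algorithm of the paper.

```python
import numpy as np

def weighted_nmf(V, W, rank, n_iter=500, eps=1e-9, seed=0):
    """Weighted NMF: minimise sum_ij W_ij * (V_ij - (A @ B)_ij)**2 with A, B >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    A = rng.random((n, rank)) + eps
    B = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        AB = A @ B
        A *= ((W * V) @ B.T) / (((W * AB) @ B.T) + eps)
        AB = A @ B
        B *= (A.T @ (W * V)) / ((A.T @ (W * AB)) + eps)
    return A, B

# Illustrative data: two underlying expression patterns plus heteroscedastic noise.
rng = np.random.default_rng(1)
true_A = rng.random((30, 2))
true_B = rng.random((2, 10))
sigma = 0.05 + 0.1 * rng.random((30, 10))            # per-entry uncertainty estimates
V = np.clip(true_A @ true_B + rng.normal(0.0, sigma), 0, None)
A, B = weighted_nmf(V, W=1.0 / sigma**2, rank=2)
chi2 = np.sum((1.0 / sigma**2) * (V - A @ B) ** 2) / V.size
print(f"normalised chi-square per entry: {chi2:.3f}")
```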

  10. Estimating a test's accuracy using tailored meta-analysis-How setting-specific data may aid study selection.

    PubMed

    Willis, Brian H; Hyde, Christopher J

    2014-05-01

    To determine a plausible estimate for a test's performance in a specific setting using a new method for selecting studies. It is shown how routine data from practice may be used to define an "applicable region" for studies in receiver operating characteristic space. After qualitative appraisal, studies are selected based on the probability that their study accuracy estimates arose from parameters lying in this applicable region. Three methods for calculating these probabilities are developed and used to tailor the selection of studies for meta-analysis. The Pap test applied to the UK National Health Service (NHS) Cervical Screening Programme provides a case example. The meta-analysis for the Pap test included 68 studies, but at most 17 studies were considered applicable to the NHS. For conventional meta-analysis, the sensitivity and specificity (with 95% confidence intervals) were estimated to be 72.8% (65.8, 78.8) and 75.4% (68.1, 81.5) compared with 50.9% (35.8, 66.0) and 98.0% (95.4, 99.1) from tailored meta-analysis using a binomial method for selection. Thus, for a cervical intraepithelial neoplasia (CIN) 1 prevalence of 2.2%, the post-test probability for CIN 1 would increase from 6.2% to 36.6% between the two methods of meta-analysis. Tailored meta-analysis provides a method for augmenting study selection based on the study's applicability to a setting. As such, the summary estimate is more likely to be plausible for a setting and could improve diagnostic prediction in practice. Copyright © 2014 Elsevier Inc. All rights reserved.
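
    The shift in post-test probability quoted at the end of the abstract follows directly from Bayes' theorem; the short check below approximately reproduces the 6.2% and 36.6% figures (up to rounding) from the stated sensitivities, specificities, and the 2.2% CIN 1 prevalence.

```python
def post_test_probability(sens, spec, prevalence):
    """Probability of disease given a positive test (positive predictive value)."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prev = 0.022
print(f"conventional meta-analysis: {post_test_probability(0.728, 0.754, prev):.1%}")  # ~6.2%
print(f"tailored meta-analysis:     {post_test_probability(0.509, 0.980, prev):.1%}")  # ~36%
```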

  11. Grey literature in meta-analyses.

    PubMed

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  12. Allowing for Correlations between Correlations in Random-Effects Meta-Analysis of Correlation Matrices

    ERIC Educational Resources Information Center

    Prevost, A. Toby; Mason, Dan; Griffin, Simon; Kinmonth, Ann-Louise; Sutton, Stephen; Spiegelhalter, David

    2007-01-01

    Practical meta-analysis of correlation matrices generally ignores covariances (and hence correlations) between correlation estimates. The authors consider various methods for allowing for covariances, including generalized least squares, maximum marginal likelihood, and Bayesian approaches, illustrated using a 6-dimensional response in a series of…

  13. 24 CFR 1006.101 - Housing plan requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... disposition; (C) A financial analysis of the proposed demolition/disposition; and (D) Any additional... including an analysis of the manner in which the activities will enable the DHHL to meet its mission, goals... the estimated housing needs for all families to be served by the DHHL. (3) Financial resources. An...

  14. 24 CFR 1006.101 - Housing plan requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... disposition; (C) A financial analysis of the proposed demolition/disposition; and (D) Any additional... including an analysis of the manner in which the activities will enable the DHHL to meet its mission, goals... the estimated housing needs for all families to be served by the DHHL. (3) Financial resources. An...

  15. Multivariate Longitudinal Analysis with Bivariate Correlation Test

    PubMed Central

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model’s parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692

  16. Multivariate Longitudinal Analysis with Bivariate Correlation Test.

    PubMed

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.

  17. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  18. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation method to obtain parameter estimates with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
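
    The linear-quadratic fit that RAD-ADAPT performs can be sketched in a few lines: surviving fraction S(D) = exp(-alpha*D - beta*D^2), expected colony count = cells seeded x plating efficiency x S(D), and alpha, beta and the plating efficiency are found by maximising a Poisson likelihood. The example below uses SciPy on made-up counts and is an independent sketch, not the RAD-ADAPT code.

```python
import numpy as np
from scipy.optimize import minimize

doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])        # Gy
seeded = np.array([100, 100, 200, 500, 2000, 10000])     # cells plated per dish
counts = np.array([52, 38, 55, 70, 90, 65])              # colonies counted (hypothetical)

def neg_log_likelihood(params):
    alpha, beta, log_pe = params
    surviving = np.exp(-alpha * doses - beta * doses**2)
    lam = seeded * np.exp(log_pe) * surviving             # expected Poisson counts
    return np.sum(lam - counts * np.log(lam))             # -log-likelihood up to a constant

fit = minimize(neg_log_likelihood, x0=[0.3, 0.03, np.log(0.5)], method="Nelder-Mead")
alpha, beta, log_pe = fit.x
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, plating efficiency = {np.exp(log_pe):.2f}")
```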

  19. Geostatistical applications in environmental remediation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, R.N.; Purucker, S.T.; Lyon, B.F.

    1995-02-01

    Geostatistical analysis refers to a collection of statistical methods for addressing data that vary in space. By incorporating spatial information into the analysis, geostatistics has advantages over traditional statistical analysis for problems with a spatial context. Geostatistics has a history of success in earth science applications, and its popularity is increasing in other areas, including environmental remediation. Due to recent advances in computer technology, geostatistical algorithms can be executed at a speed comparable to many standard statistical software packages. When used responsibly, geostatistics is a systematic and defensible tool that can be used in various decision frameworks, such as the Data Quality Objectives (DQO) process. At every point in the site, geostatistics can estimate both the concentration level and the probability or risk of exceeding a given value. Using these probability maps can assist in identifying clean-up zones. Given any decision threshold and an acceptable level of risk, the probability maps identify those areas that are estimated to be above or below the acceptable risk. Those areas that are above the threshold are of the most concern with regard to remediation. In addition to estimating clean-up zones, geostatistics can assist in designing cost-effective secondary sampling schemes. Those areas of the probability map with high levels of estimated uncertainty are areas where more secondary sampling should occur. In addition, geostatistics has the ability to incorporate soft data directly into the analysis. These data include historical records, a highly correlated secondary contaminant, or expert judgment. Geostatistics is thus a tool that, in conjunction with other methods, can provide a common forum for building consensus in environmental remediation.

  20. Reference Model 5 (RM5): Oscillating Surge Wave Energy Converter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. H.; Jenne, D. S.; Thresher, R.

    This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an oscillating surge wave energy converter (OSWEC) reference model design in a complementary manner to Reference Models 1-4 contained in the above report. A conceptual design for a taut-moored oscillating surge wave energy converter was developed. The design had an average annual electrical power of 108 kilowatts (kW), a rated power of 360 kW, and was intended for deployment at water depths between 50 m and 100 m. The study includes structural analysis, power output estimation, a hydraulic power conversion chain system, and mooring designs. The results were used to estimate device capital cost and annual operation and maintenance costs. The device performance and costs were used for the economic analysis, following the methodology presented in SAND2013-9040, which included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays of up to 100 devices. The levelized cost of energy estimated for the Reference Model 5 OSWEC, presented in this report, was for a single device and arrays of 10, 50, and 100 units, which enabled the economic analysis to account for cost reductions associated with economies of scale. The baseline commercial levelized cost of energy estimate for the Reference Model 5 device in an array comprised of 10 units is $1.44/kilowatt-hour (kWh), and the value drops to approximately $0.69/kWh for an array of 100 units.

  1. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials.

    PubMed

    Machado, L A C; Kamper, S J; Herbert, R D; Maher, C G; McAuley, J H

    2009-05-01

    Estimates of treatment effects reported in placebo-controlled randomized trials are less subject to bias than those estimates provided by other study designs. The objective of this meta-analysis was to estimate the analgesic effects of treatments for non-specific low back pain reported in placebo-controlled randomized trials. Medline, Embase, Cinahl, PsychInfo and Cochrane Central Register of Controlled Trials databases were searched for eligible trials from earliest records to November 2006. Continuous pain outcomes were converted to a common 0-100 scale and pooled using a random effects model. A total of 76 trials reporting on 34 treatments were included. Fifty percent of the investigated treatments had statistically significant effects, but for most the effects were small or moderate: 47% had point estimates of effects of <10 points on the 100-point scale, 38% had point estimates from 10 to 20 points and 15% had point estimates of >20 points. Treatments reported to have large effects (>20 points) had been investigated only in a single trial. This meta-analysis revealed that the analgesic effects of many treatments for non-specific low back pain are small and that they do not differ in populations with acute or chronic symptoms.

  2. Incorporating indirect costs into a cost-benefit analysis of laparoscopic adjustable gastric banding.

    PubMed

    Finkelstein, Eric A; Allaire, Benjamin T; Dibonaventura, Marco Dacosta; Burgess, Somali M

    2012-01-01

    The objective of this study was to estimate the time to breakeven and 5-year net costs of laparoscopic adjustable gastric banding (LAGB) taking both direct and indirect costs and cost savings into account. Estimates of direct cost savings from LAGB were available from the literature. Although longitudinal data on indirect cost savings were not available, these estimates were generated by quantifying the relationship between medical expenditures and absenteeism and between medical expenditures and presenteeism (reduced on-the-job productivity) and combining these elasticity estimates with estimates of the direct cost savings to generate total savings. These savings were then combined with the direct and indirect costs of the procedure to quantify net savings. By including indirect costs, the time to breakeven was reduced by half a year, from 16 to 14 quarters. After 5 years, net savings in medical expenditures from a gastric banding procedure were estimated to be $4970 (±$3090). Including absenteeism increased savings to $6180 (±$3550). Savings were further increased to $10,960 (±$5864) when both absenteeism and presenteeism estimates were included. This study presented a novel approach for including absenteeism and presenteeism estimates in cost-benefit analyses. Application of the approach to gastric banding among surgery-eligible obese employees revealed that the inclusion of indirect costs and cost savings improves the business case for the procedure. This approach can easily be extended to other populations and treatments. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. The Andrews’ Principles of Risk, Need, and Responsivity as Applied in Drug Abuse Treatment Programs: Meta-Analysis of Crime and Drug Use Outcomes

    PubMed Central

    Prendergast, Michael L.; Pearson, Frank S.; Podus, Deborah; Hamilton, Zachary K.; Greenwell, Lisa

    2013-01-01

    Objectives The purpose of the present meta-analysis was to answer the question: Can the Andrews principles of risk, needs, and responsivity, originally developed for programs that treat offenders, be extended to programs that treat drug abusers? Methods Drawing from a dataset that included 243 independent comparisons, we conducted random-effects meta-regression and ANOVA-analog meta-analyses to test the Andrews principles by averaging crime and drug use outcomes over a diverse set of programs for drug abuse problems. Results For crime outcomes, in the meta-regressions the point estimates for each of the principles were substantial, consistent with previous studies of the Andrews principles. There was also a substantial point estimate for programs exhibiting a greater number of the principles. However, almost all of the 95% confidence intervals included the zero point. For drug use outcomes, in the meta-regressions the point estimates for each of the principles was approximately zero; however, the point estimate for programs exhibiting a greater number of the principles was somewhat positive. All of the estimates for the drug use principles had confidence intervals that included the zero point. Conclusions This study supports previous findings from primary research studies targeting the Andrews principles that those principles are effective in reducing crime outcomes, here in meta-analytic research focused on drug treatment programs. By contrast, programs that follow the principles appear to have very little effect on drug use outcomes. Primary research studies that experimentally test the Andrews principles in drug treatment programs are recommended. PMID:24058325

  4. Survival rate of AIDS disease and mortality in HIV-infected patients: a meta-analysis.

    PubMed

    Poorolajal, J; Hooshmand, E; Mahjub, H; Esmailnasab, N; Jenabi, E

    2016-10-01

    The life expectancy of patients with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) reported by several epidemiological studies is inconsistent. This meta-analysis was conducted to estimate the survival rate from HIV diagnosis to AIDS onset and from AIDS onset to death. The electronic databases PubMed, Web of Science and Scopus were searched to February 2016. In addition, the reference lists of included studies were checked to identify further references, and the database of the International AIDS Society was also searched. Cohort studies addressing the survival rate in patients diagnosed with HIV/AIDS were included in this meta-analysis. The outcomes of interest were the survival rate of patients diagnosed with HIV progressing to AIDS, and the survival rate of patients with AIDS dying from AIDS-related causes with or without highly active antiretroviral therapy (HAART). The survival rate (P) was estimated with 95% confidence intervals based on random-effects models. In total, 27,862 references were identified, and 57 studies involving 294,662 participants were included in this meta-analysis. Two, 4-, 6-, 8-, 10- and 12-year survival probabilities of progression from HIV diagnosis to AIDS onset were estimated to be 82%, 72%, 64%, 57%, 26% and 19%, respectively. Two, 4-, 6-, 8- and 10-year survival probabilities of progression from AIDS onset to AIDS-related death in patients who received HAART were estimated to be 87%, 86%, 78%, 78%, and 61%, respectively, and 2-, 4- and 6-year survival probabilities of progression from AIDS onset to AIDS-related death in patients who did not receive HAART were estimated to be 48%, 26% and 18%, respectively. Evidence of considerable heterogeneity was found. The majority of the studies had a moderate to high risk of bias. The majority of HIV-positive patients progress to AIDS within the first decade of diagnosis. Most patients who receive HAART will survive for >10 years after the onset of AIDS, whereas the majority of the patients who do not receive HAART die within 2 years of the onset of AIDS. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
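
    The survival probabilities above are pooled across cohorts with random-effects models. The sketch below shows one common way to do this, a DerSimonian-Laird pooled proportion on the logit scale; the event counts are invented and the transformation choice is an assumption, not necessarily the one used in the review.

      # Minimal sketch of a DerSimonian-Laird random-effects pooled proportion,
      # the kind of model used to combine survival probabilities across cohorts.
      # The input data are invented for illustration.
      import math

      def pooled_proportion(events, totals):
          """Pool proportions on the logit scale with a DL random-effects model."""
          # per-study logit and its variance (0.5 continuity correction)
          y, v = [], []
          for e, n in zip(events, totals):
              e_adj, n_adj = e + 0.5, n + 1.0
              p = e_adj / n_adj
              y.append(math.log(p / (1 - p)))
              v.append(1.0 / e_adj + 1.0 / (n_adj - e_adj))
          w = [1.0 / vi for vi in v]
          y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
          # DerSimonian-Laird between-study variance
          Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (Q - (len(y) - 1)) / c)
          w_re = [1.0 / (vi + tau2) for vi in v]
          y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
          se = math.sqrt(1.0 / sum(w_re))
          inv = lambda x: 1.0 / (1.0 + math.exp(-x))
          return inv(y_re), inv(y_re - 1.96 * se), inv(y_re + 1.96 * se)

      # hypothetical example: 4-year survival in five cohorts
      print(pooled_proportion(events=[80, 150, 60, 200, 45],
                              totals=[110, 210, 85, 260, 70]))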

  5. Medical costs and quality-adjusted life years associated with smoking: a systematic review.

    PubMed

    Feirman, Shari P; Glasser, Allison M; Teplitskaya, Lyubov; Holtgrave, David R; Abrams, David B; Niaura, Raymond S; Villanti, Andrea C

    2016-07-27

    Estimated medical costs ("T") and QALYs ("Q") associated with smoking are frequently used in cost-utility analyses of tobacco control interventions. The goal of this study was to understand how researchers have addressed the methodological challenges involved in estimating these parameters. Data were collected as part of a systematic review of tobacco modeling studies. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Studies were eligible for the current analysis if they were U.S.-based, provided an estimate for Q, and used a societal perspective and lifetime analytic horizon to estimate T. We identified common methods and frequently cited sources used to obtain these estimates. Across all 18 studies included in this review, 50 % cited a 1992 source to estimate the medical costs associated with smoking and 56 % cited a 1996 study to derive the estimate for QALYs saved by quitting or preventing smoking. Approaches for estimating T varied dramatically among the studies included in this review. T was valued as a positive number, negative number and $0; five studies did not include estimates for T in their analyses. The most commonly cited source for Q based its estimate on the Health Utilities Index (HUI). Several papers also cited sources that based their estimates for Q on the Quality of Well-Being Scale and the EuroQol five dimensions questionnaire (EQ-5D). Current estimates of the lifetime medical care costs and the QALYs associated with smoking are dated and do not reflect the latest evidence on the health effects of smoking, nor the current costs and benefits of smoking cessation and prevention. Given these limitations, we recommend that researchers conducting economic evaluations of tobacco control interventions perform extensive sensitivity analyses around these parameter estimates.

  6. Glacier volume estimation of Cascade Volcanoes—an analysis and comparison with other methods

    USGS Publications Warehouse

    Driedger, Carolyn L.; Kennard, P.M.

    1986-01-01

    During the 1980 eruption of Mount St. Helens, the occurrence of floods and mudflows made apparent a need to assess mudflow hazards on other Cascade volcanoes. A basic requirement for such analysis is information about the volume and distribution of snow and ice on these volcanoes. An analysis was made of the volume-estimation methods developed by previous authors and a volume estimation method was developed for use in the Cascade Range. A radio echo-sounder, carried in a backpack, was used to make point measurements of ice thickness on major glaciers of four Cascade volcanoes (Mount Rainier, Washington; Mount Hood and the Three Sisters, Oregon; and Mount Shasta, California). These data were used to generate ice-thickness maps and bedrock topographic maps for developing and testing volume-estimation methods. Subsequently, the methods were applied to the unmeasured glaciers on those mountains and, as a test of the geographical extent of applicability, to glaciers beyond the Cascades having measured volumes. Two empirical relationships were required in order to predict volumes for all the glaciers. Generally, for glaciers less than 2.6 km in length, volume was found to be estimated best by using glacier area, raised to a power. For longer glaciers, volume was found to be estimated best by using a power law relationship, including slope and shear stress. The necessary variables can be estimated from topographic maps and aerial photographs.
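
    The two-branch estimator described above can be written compactly as code. In the sketch below the coefficients, exponents, and the assumed basal shear stress are placeholders, not the values fitted by Driedger and Kennard; only the overall form (a volume-area power law for glaciers shorter than 2.6 km, and a power law built from area and a slope/shear-stress thickness for longer ones) follows the abstract.

      # Hedged sketch of the two-branch glacier volume estimate. All coefficients,
      # exponents and the basal shear stress are hypothetical placeholders.
      import math

      def glacier_volume_km3(length_km, area_km2, slope_deg,
                             c_area=0.03, b_area=1.36,
                             c_long=1.0, b_long=1.05, tau_bar_kpa=100.0):
          if length_km < 2.6:
              # short glaciers: volume estimated from area raised to a power
              return c_area * area_km2 ** b_area
          # longer glaciers: power law including surface slope and basal shear stress
          g, rho = 9.81, 900.0                      # m/s^2, kg/m^3
          h_mean_km = (tau_bar_kpa * 1e3) / (rho * g * math.sin(math.radians(slope_deg))) / 1e3
          return c_long * (area_km2 * h_mean_km) ** b_long

      print(glacier_volume_km3(1.8, 1.2, 20.0))   # short-glacier branch
      print(glacier_volume_km3(5.4, 6.5, 12.0))   # long-glacier branch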

  7. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
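
    A small Monte Carlo experiment conveys the essence of the multiple-rule bias: when several rules have noisy error estimates, the minimum estimated error is optimistically low relative to the selected rule's true error. The Gaussian noise model below is a deliberate idealisation, not the paper's analysis.

      # Monte Carlo sketch of the multiple-rule selection bias: the rule with the
      # minimum *estimated* error tends to have a true error higher than that
      # estimate. Idealised Gaussian noise stands in for real error estimators.
      import random

      def selection_bias(n_rules=10, true_err=0.25, est_sd=0.05, n_trials=20000):
          gap = 0.0
          for _ in range(n_trials):
              estimates = [random.gauss(true_err, est_sd) for _ in range(n_rules)]
              best_est = min(estimates)          # pick rule with min estimated error
              gap += true_err - best_est         # how much we were too optimistic
          return gap / n_trials

      random.seed(0)
      print("mean optimistic bias with 1 rule  :", round(selection_bias(n_rules=1), 4))
      print("mean optimistic bias with 10 rules:", round(selection_bias(n_rules=10), 4))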

  8. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. However, the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in a severely nonlinear system case.
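
    For readers who want a concrete starting point, the sketch below fits a truncated first- plus second-order discrete Volterra model to synthetic data by ordinary least squares. This is only a time-domain baseline for kernel estimation; it is not the complex-valued orthogonal least squares algorithm developed in the paper.

      # Baseline sketch: fit a truncated (first- plus second-order) discrete
      # Volterra model by ordinary least squares for a synthetic weakly nonlinear
      # system. Not the paper's frequency-domain algorithm, only an illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      N, M = 2000, 3                      # samples, memory length
      u = rng.standard_normal(N)
      # synthetic system: y = 0.8 u[n] - 0.3 u[n-1] + 0.5 u[n] u[n-1] + noise
      y = 0.8 * u + 0.5 * u * np.roll(u, 1)
      y[1:] += -0.3 * u[:-1]
      y += 0.01 * rng.standard_normal(N)

      # regression matrix of lagged inputs and their pairwise products
      rows = []
      for n in range(M, N):
          lags = u[n - M + 1:n + 1][::-1]                 # u[n], u[n-1], u[n-2]
          quad = [lags[i] * lags[j] for i in range(M) for j in range(i, M)]
          rows.append(np.concatenate([lags, quad]))
      X = np.asarray(rows)
      theta, *_ = np.linalg.lstsq(X, y[M:], rcond=None)
      print("first-order kernel estimate :", np.round(theta[:M], 3))
      print("second-order kernel estimate:", np.round(theta[M:], 3))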

  9. Assimilation of TOPEX Sea Level Measurements with a Reduced-Gravity, Shallow Water Model of the Tropical Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro

    1995-01-01

    Sea surface height variability measured by TOPEX is analyzed in the tropical Pacific Ocean by way of assimilation into a wind-driven, reduced-gravity, shallow water model using an approximate Kalman filter and smoother. The analysis results in an optimal fit of the dynamic model to the observations, providing a dynamically consistent interpolation of sea level and estimation of the circulation. Nearly 80% of the expected signal variance is accounted for by the model within 20 deg of the equator, and estimation uncertainty is substantially reduced by the voluminous observations. Notable features resolved by the analysis include seasonal changes associated with the North Equatorial Countercurrent and equatorial Kelvin and Rossby waves. Significant discrepancies are also found between the estimate and TOPEX measurements, especially near the eastern boundary. Improvements in the estimate made by the assimilation are validated by comparisons with independent tide gauge and current meter observations. The employed filter and smoother are based on approximately computed estimation error covariance matrices, utilizing a spatial transformation and an asymptotic approximation. The analysis demonstrates the practical utility of a quasi-optimal filter and smoother.

  10. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.

  11. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    PubMed

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (BI index) for each of the whole body and the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
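
    The prediction-equation step amounts to a simple regression of DXA-measured FFM on the BI index. The sketch below shows that step on fabricated data; the lengths, impedances, and FFM values are not from the study.

      # Minimal sketch of the prediction-equation step: regress DXA-measured FFM on
      # the bioelectrical impedance index (segment length squared over impedance).
      # The data points are fabricated for illustration.
      import numpy as np

      length_cm = np.array([52.0, 55.0, 60.0, 63.0, 67.0, 70.0, 74.0])   # e.g. leg length
      impedance_ohm = np.array([310.0, 295.0, 270.0, 255.0, 240.0, 228.0, 210.0])
      ffm_dxa_kg = np.array([3.1, 3.5, 4.2, 4.7, 5.3, 5.8, 6.6])

      bi_index = length_cm ** 2 / impedance_ohm        # (length)^2 / Z
      slope, intercept = np.polyfit(bi_index, ffm_dxa_kg, 1)
      pred = slope * bi_index + intercept
      see = np.sqrt(np.sum((ffm_dxa_kg - pred) ** 2) / (len(pred) - 2))
      print(f"FFM = {slope:.3f} * L^2/Z + {intercept:.3f}, SEE = {see:.2f} kg")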

  12. Estimation and validation of the stability and control derivatives of the nonlinear dynamic model of a fixed-wing UAV

    NASA Astrophysics Data System (ADS)

    Courchesne, Samuel

    Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic elements of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms, which are obtained through a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate tasks from multidisciplinary fields such as modeling, parameter estimation, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear six-degree-of-freedom model with a linear aerodynamic model. Time-domain techniques are used for identification of the drone. First, the equation error method is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab parameter-estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone because of the presence of both process and measurement noise. The aerodynamic coefficients are validated using a numerical vortex-method analysis. In addition, a simulation model incorporating the estimated parameters is used to compare predicted behavior with the measured states. Finally, good agreement between the results is demonstrated despite a limited number of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.

  13. Excitations for Rapidly Estimating Flight-Control Parameters

    NASA Technical Reports Server (NTRS)

    Moes, Tim; Smith, Mark; Morelli, Gene

    2006-01-01

    A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the derivatives.
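
    A PreSISE-style input can be approximated by giving each control surface a sum of sinusoids at its own frequencies, so all surfaces move simultaneously yet remain separable in the frequency domain. The frequencies, amplitudes, and surface labels below are illustrative assumptions, not the flight-test values.

      # Sketch of simultaneous multisine excitations: each surface gets its own
      # set of frequencies. All numbers here are illustrative placeholders.
      import numpy as np

      def multisine(t, freqs_hz, amplitude_deg):
          return amplitude_deg * sum(np.sin(2 * np.pi * f * t) for f in freqs_hz) / len(freqs_hz)

      t = np.arange(0.0, 15.0, 0.02)          # 15 s maneuver sampled at 50 Hz
      surfaces = {
          "symmetric canard":    ([0.6, 1.9], 2.0),
          "differential canard": ([0.8, 2.3], 2.0),
          "aileron":             ([1.1, 2.9], 3.0),
          "rudder":              ([1.4, 3.4], 3.0),
      }
      excitations = {name: multisine(t, f, a) for name, (f, a) in surfaces.items()}
      for name, sig in excitations.items():
          print(f"{name:20s} peak deflection {np.max(np.abs(sig)):.2f} deg")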

  14. System-of-Systems Technology-Portfolio-Analysis Tool

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel; Mankins, John; Feingold, Harvey; Johnson, Wayne

    2012-01-01

    Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications that run on both Windows and Macintosh computers.

  15. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The combined principal component analysis (PCA) and region matching method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.

  16. Lost productivity due to premature mortality in developed and emerging countries: an application to smoking cessation.

    PubMed

    Menzin, Joseph; Marton, Jeno P; Menzin, Jordan A; Willke, Richard J; Woodward, Rebecca M; Federico, Victoria

    2012-06-25

    Researchers and policy makers have determined that accounting for productivity costs, or "indirect costs," may be as important as including direct medical expenditures when evaluating the societal value of health interventions. These costs are also important when estimating the global burden of disease. The estimation of indirect costs is commonly done on a country-specific basis. However, there are few studies that evaluate indirect costs across countries using a consistent methodology. Using the human capital approach, we developed a model that estimates productivity costs as the present value of lifetime earnings (PVLE) lost due to premature mortality. Applying this methodology, the model estimates productivity costs for 29 selected countries, both developed and emerging. We also provide an illustration of how the inclusion of productivity costs contributes to an analysis of the societal burden of smoking. A sensitivity analysis is undertaken to assess productivity costs on the basis of the friction cost approach. PVLE estimates were higher for certain subpopulations, such as men, younger people, and people in developed countries. In the case study, productivity cost estimates from our model showed that productivity loss was a substantial share of the total cost burden of premature mortality due to smoking, accounting for over 75 % of total lifetime costs in the United States and 67 % of total lifetime costs in Brazil. Productivity costs were much lower using the friction cost approach among those of working age. Our PVLE model is a novel tool allowing researchers to incorporate the value of lost productivity due to premature mortality into economic analyses of treatments for diseases or health interventions. We provide PVLE estimates for a number of emerging and developed countries. Including productivity costs in a health economics study allows for a more comprehensive analysis, and, as demonstrated by our illustration, can have important effects on the results and conclusions.
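
    The core PVLE computation under the human capital approach is a discounted, survival-weighted sum of expected earnings from the age at death to retirement. The sketch below uses invented earnings, survival, and discount inputs purely to show the mechanics.

      # Hedged sketch of a human-capital PVLE calculation. Earnings, survival and
      # discount inputs are invented, not the model's country-specific values.
      def pvle(age_at_death, retirement_age=65, annual_earnings=40000.0,
               earnings_growth=0.01, survival_prob=0.99, discount_rate=0.03):
          total = 0.0
          for k, age in enumerate(range(age_at_death, retirement_age)):
              earnings = annual_earnings * (1 + earnings_growth) ** k
              total += (survival_prob ** k) * earnings / (1 + discount_rate) ** k
          return total

      for age in (35, 45, 55):
          print(f"PVLE lost for death at age {age}: ${pvle(age):,.0f}")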

  17. Lost productivity due to premature mortality in developed and emerging countries: an application to smoking cessation

    PubMed Central

    2012-01-01

    Background Researchers and policy makers have determined that accounting for productivity costs, or “indirect costs,” may be as important as including direct medical expenditures when evaluating the societal value of health interventions. These costs are also important when estimating the global burden of disease. The estimation of indirect costs is commonly done on a country-specific basis. However, there are few studies that evaluate indirect costs across countries using a consistent methodology. Methods Using the human capital approach, we developed a model that estimates productivity costs as the present value of lifetime earnings (PVLE) lost due to premature mortality. Applying this methodology, the model estimates productivity costs for 29 selected countries, both developed and emerging. We also provide an illustration of how the inclusion of productivity costs contributes to an analysis of the societal burden of smoking. A sensitivity analysis is undertaken to assess productivity costs on the basis of the friction cost approach. Results PVLE estimates were higher for certain subpopulations, such as men, younger people, and people in developed countries. In the case study, productivity cost estimates from our model showed that productivity loss was a substantial share of the total cost burden of premature mortality due to smoking, accounting for over 75 % of total lifetime costs in the United States and 67 % of total lifetime costs in Brazil. Productivity costs were much lower using the friction cost approach among those of working age. Conclusions Our PVLE model is a novel tool allowing researchers to incorporate the value of lost productivity due to premature mortality into economic analyses of treatments for diseases or health interventions. We provide PVLE estimates for a number of emerging and developed countries. Including productivity costs in a health economics study allows for a more comprehensive analysis, and, as demonstrated by our illustration, can have important effects on the results and conclusions. PMID:22731620

  18. Notes on a New Coherence Estimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickel, Douglas L.

    This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including presenting the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to knowledge of the noise value. ACKNOWLEDGEMENTS: The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  19. Site-specific estimation of peak-streamflow frequency using generalized least-squares regression for natural basins in Texas

    USGS Publications Warehouse

    Asquith, William H.; Slade, R.M.

    1999-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a computer program to estimate peak-streamflow frequency for ungaged sites in natural basins in Texas. Peak-streamflow frequency refers to the peak streamflows for recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Peak-streamflow frequency estimates are needed by planners, managers, and design engineers for flood-plain management; for objective assessment of flood risk; for cost-effective design of roads and bridges; and also for the design of culverts, dams, levees, and other flood-control structures. The program estimates peak-streamflow frequency using a site-specific approach and a multivariate generalized least-squares linear regression. A site-specific approach differs from a traditional regional regression approach by developing unique equations to estimate peak-streamflow frequency specifically for the ungaged site. The stations included in the regression are selected using an informal cluster analysis that compares the basin characteristics of the ungaged site to the basin characteristics of all the stations in the data base. The program provides several choices for selecting the stations. Selecting the stations using cluster analysis ensures that the stations included in the regression will have the most pertinent information about flooding characteristics of the ungaged site and therefore provide the basis for potentially improved peak-streamflow frequency estimation. An evaluation of the site-specific approach in estimating peak-streamflow frequency for gaged sites indicates that the site-specific approach is at least as accurate as a traditional regional regression approach.

  20. Test-retest reliability of the diagnosis of schizoaffective disorder in childhood and adolescence - A systematic review and meta-analysis.

    PubMed

    Salamon, Sarah; Santelmann, Hanno; Franklin, Jeremy; Baethge, Christopher

    2018-04-01

    Reliability of schizoaffective disorder (SAD) diagnoses is low in adults but unclear in children and adolescents (CAD). We estimate the test-retest reliability of SAD and its key differential diagnoses (schizophrenia, bipolar disorder, and unipolar depression). Systematic literature search of Medline, Embase, and PsycInfo for studies on test-retest reliability of SAD in CAD. Cohen's kappa was extracted from studies. We performed meta-analysis for kappa, including subgroup and sensitivity analysis (PROSPERO protocol: CRD42013006713). Out of > 4000 records screened, seven studies were included. We estimated kappa values of 0.27 [95% CI: 0.07; 0.47] for SAD, 0.56 [0.29; 0.83] for schizophrenia, 0.64 [0.55; 0.74] for bipolar disorder, and 0.66 [0.52; 0.81] for unipolar depression. In 5/7 studies the kappa of SAD was lower than that of schizophrenia; similar trends emerged for bipolar disorder (4/5) and unipolar depression (2/3). Estimates of positive agreement of SAD diagnoses supported these results. The number of studies and patients included is low. The point-estimate of the test-retest reliability of schizoaffective disorder is only fair, and lower than that of its main differential diagnoses. All kappa values under study were lower in child and adolescent samples than those reported for adults. Clinically, schizoaffective disorder should be diagnosed in strict adherence to the operationalized criteria and ought to be re-evaluated regularly. Should larger studies confirm the insufficient reliability of schizoaffective disorder in children and adolescents, the clinical value of the diagnosis is highly doubtful. Copyright © 2017. Published by Elsevier B.V.
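
    The pooling step can be illustrated with a DerSimonian-Laird random-effects average of study-level kappas and their standard errors. The kappa values and standard errors below are invented, not the review's data.

      # Sketch of DerSimonian-Laird random-effects pooling of Cohen's kappa.
      # Inputs are hypothetical study-level kappas and standard errors.
      import math

      def pool_kappa(kappas, ses):
          v = [s ** 2 for s in ses]
          w = [1.0 / vi for vi in v]
          k_fixed = sum(wi * ki for wi, ki in zip(w, kappas)) / sum(w)
          Q = sum(wi * (ki - k_fixed) ** 2 for wi, ki in zip(w, kappas))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (Q - (len(kappas) - 1)) / c)
          w_re = [1.0 / (vi + tau2) for vi in v]
          k_re = sum(wi * ki for wi, ki in zip(w_re, kappas)) / sum(w_re)
          se_re = math.sqrt(1.0 / sum(w_re))
          return k_re, k_re - 1.96 * se_re, k_re + 1.96 * se_re

      print(pool_kappa(kappas=[0.21, 0.35, 0.18, 0.40], ses=[0.10, 0.12, 0.09, 0.15]))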

  1. Studies of the net surface radiative flux from satellite radiances during FIFE

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    1993-01-01

    Studies of the net surface radiative flux from satellite radiances during First ISLSCP Field Experiment (FIFE) are presented. Topics covered include: radiative transfer model validation; calibration of VISSR and AVHRR solar channels; development and refinement of algorithms to estimate downward solar and terrestrial irradiances at the surface, including photosynthetically available radiation (PAR) and surface albedo; verification of these algorithms using in situ measurements; production of maps of shortwave irradiance, surface albedo, and related products; analysis of the temporal variability of shortwave irradiance over the FIFE site; development of a spectroscopy technique to estimate atmospheric total water vapor amount; and study of optimum linear combinations of visible and near-infrared reflectances for estimating the fraction of PAR absorbed by plants.

  2. Current estimates of the cure fraction: a feasibility study of statistical cure for breast and colorectal cancer.

    PubMed

    Stedman, Margaret R; Feuer, Eric J; Mariotto, Angela B

    2014-11-01

    The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction compared to parametric mixture models. Log normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional and distant stage colorectal cancer but not for breast cancer. 2008 colorectal cure projections show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data. Published by Oxford University Press 2014.
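
    A minimal version of the parametric mixture cure model compared in the study treats survival as S(t) = pi + (1 - pi) * S_u(t), with pi the cure fraction and S_u a Weibull survival function for the uncured, fitted by maximum likelihood to right-censored data. The sketch below uses simulated data rather than SEER records.

      # Minimal sketch of a Weibull mixture cure model fitted by maximum
      # likelihood to simulated right-censored data (not SEER data).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n, true_cure = 2000, 0.4
      cured = rng.random(n) < true_cure
      event_time = rng.weibull(1.5, n) * 5.0            # uncured latency (years)
      censor_time = rng.uniform(0, 25, n)               # administrative censoring
      time = np.where(cured, censor_time, np.minimum(event_time, censor_time))
      event = (~cured) & (event_time <= censor_time)

      def neg_loglik(params):
          logit_pi, log_lam, log_k = params
          pi = 1 / (1 + np.exp(-logit_pi))
          lam, k = np.exp(log_lam), np.exp(log_k)
          z = (time / lam) ** k
          surv = pi + (1 - pi) * np.exp(-z)                          # mixture survival
          dens = (1 - pi) * (k / lam) * (time / lam) ** (k - 1) * np.exp(-z)
          return -np.sum(np.where(event, np.log(dens + 1e-300), np.log(surv + 1e-300)))

      fit = minimize(neg_loglik, x0=[0.0, np.log(5.0), 0.0], method="Nelder-Mead")
      cure_fraction = 1 / (1 + np.exp(-fit.x[0]))
      print(f"estimated cure fraction: {cure_fraction:.3f} (true {true_cure})")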

  3. Measurements of Reynolds stress profiles in unstratified tidal flow

    USGS Publications Warehouse

    Stacey, M.T.; Monismith, Stephen G.; Burau, J.R.

    1999-01-01

    In this paper we present a method for measuring profiles of turbulence quantities using a broadband acoustic Doppler current profiler (ADCP). The method follows previous work on the continental shelf and extends the analysis to develop estimates of the errors associated with the estimation methods. ADCP data were collected in an unstratified channel and the results of the analysis are compared to theory. This comparison shows that the method provides an estimate of the Reynolds stresses, which is unbiased by Doppler noise, and an estimate of the turbulent kinetic energy (TKE), which is biased by an amount proportional to the Doppler noise. The noise in each of these quantities as well as the bias in the TKE match well with the theoretical values produced by the error analysis. The quantification of profiles of Reynolds stresses simultaneous with the measurement of mean velocity profiles allows for extensive analysis of the turbulence of the flow. In this paper, we examine the relation between the turbulence and the mean flow through the calculation of u*, the friction velocity, and Cd, the coefficient of drag. Finally, we calculate quantities of particular interest in turbulence modeling and analysis, the characteristic lengthscales, including a lengthscale which represents the stream-wise scale of the eddies which dominate the Reynolds stresses. Copyright 1999 by the American Geophysical Union.
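
    The idea behind the variance technique can be shown in a few lines: for an opposing pair of ADCP beams tilted at an angle from the vertical, differencing the along-beam velocity variances cancels the (uncorrelated) Doppler noise and leaves a term proportional to the Reynolds stress. The geometry below is simplified to a single beam pair and one depth bin, and all numbers are synthetic; treat it as a reconstruction of the standard beam-differencing formula rather than the paper's exact derivation.

      # Synthetic illustration of the variance (beam-differencing) technique for
      # <u'w'>: Doppler noise adds equally to both beam variances and cancels.
      import numpy as np

      rng = np.random.default_rng(2)
      theta = np.radians(20.0)                 # beam angle from vertical
      n = 5000
      true_uw = -2.0e-4                        # target <u'w'> in m^2/s^2

      # correlated u', w' samples with the target covariance, plus Doppler noise
      u = rng.normal(0.0, 0.05, n)
      w = (true_uw / 0.05 ** 2) * u + rng.normal(0.0, 0.01, n)
      noise = 0.03                              # per-beam Doppler noise std (m/s)
      b1 = -u * np.sin(theta) - w * np.cos(theta) + rng.normal(0.0, noise, n)
      b2 = +u * np.sin(theta) - w * np.cos(theta) + rng.normal(0.0, noise, n)

      uw_est = (np.var(b1) - np.var(b2)) / (4.0 * np.sin(theta) * np.cos(theta))
      print(f"true <u'w'> = {true_uw:.2e}, variance-method estimate = {uw_est:.2e}")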

  4. Initial dynamic load estimates during configuration design

    NASA Technical Reports Server (NTRS)

    Schiff, Daniel

    1987-01-01

    This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.

  5. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  6. Inference for High-dimensional Differential Correlation Matrices *

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2015-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed. PMID:26500380

  7. Marine mammal tracks from two-hydrophone acoustic recordings made with a glider

    NASA Astrophysics Data System (ADS)

    Küsel, Elizabeth T.; Munoz, Tessa; Siderius, Martin; Mellinger, David K.; Heimlich, Sara

    2017-04-01

    A multinational oceanographic and acoustic sea experiment was carried out in the summer of 2014 off the western coast of the island of Sardinia, Mediterranean Sea. During this experiment, an underwater glider fitted with two hydrophones was evaluated as a potential tool for marine mammal population density estimation studies. An acoustic recording system was also tested, comprising an inexpensive, off-the-shelf digital recorder installed inside the glider. Detection and classification of sounds produced by whales and dolphins, and sometimes tracking and localization, are inherent components of population density estimation from passive acoustics recordings. In this work we discuss the equipment used as well as analysis of the data obtained, including detection and estimation of bearing angles. A human analyst identified the presence of sperm whale (Physeter macrocephalus) regular clicks as well as dolphin clicks and whistles. Cross-correlating clicks recorded on both data channels allowed for the estimation of the direction (bearing) of clicks, and realization of animal tracks. Insights from this bearing tracking analysis can aid in population density estimation studies by providing further information (bearings), which can improve estimates.
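
    The bearing step reduces to estimating a time-difference of arrival by cross-correlating the two channels and converting it to a conical angle via cos(angle) = c * tdoa / d. The click waveform, hydrophone spacing, and sound speed below are illustrative values.

      # Sketch of bearing estimation from a two-hydrophone recording: cross-correlate
      # a simulated click to find the TDOA, then convert to an angle off the array axis.
      import numpy as np

      fs = 96_000.0                 # sample rate (Hz)
      c = 1500.0                    # sound speed (m/s)
      d = 0.60                      # hydrophone separation (m)

      # simulate a short click arriving at the second hydrophone 5 samples later
      t = np.arange(0, 0.002, 1 / fs)
      click = np.exp(-((t - 0.001) * 8000.0) ** 2) * np.sin(2 * np.pi * 12000.0 * t)
      delay_samples = 5
      ch1 = np.concatenate([click, np.zeros(64)])
      ch2 = np.concatenate([np.zeros(delay_samples), click, np.zeros(64 - delay_samples)])
      ch1 += 0.02 * np.random.default_rng(3).standard_normal(ch1.size)
      ch2 += 0.02 * np.random.default_rng(4).standard_normal(ch2.size)

      xcorr = np.correlate(ch2, ch1, mode="full")
      lag = np.argmax(xcorr) - (ch1.size - 1)       # positive lag: ch2 arrives later
      tdoa = lag / fs
      bearing = np.degrees(np.arccos(np.clip(c * tdoa / d, -1.0, 1.0)))
      print(f"estimated TDOA = {tdoa*1e6:.1f} us, bearing = {bearing:.1f} deg off axis")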

  8. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study.

    PubMed

    Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour

    2017-12-01

    Finite element models for estimation of intraoperative brain shift suffer from huge computational cost. In these models, image registration and finite element analysis are two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm. In this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole brain mesh is iteratively calculated by geometrical extension of a local load vector which is computed by FED. While the processing time of the FED-based method including registration and finite element analysis was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than that of similar state-of-the-art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Potential for Bias When Estimating Critical Windows for Air Pollution in Children's Health.

    PubMed

    Wilson, Ander; Chiu, Yueh-Hsiu Mathilda; Hsu, Hsiao-Hsien Leon; Wright, Robert O; Wright, Rosalind J; Coull, Brent A

    2017-12-01

    Evidence supports an association between maternal exposure to air pollution during pregnancy and children's health outcomes. Recent interest has focused on identifying critical windows of vulnerability. An analysis based on a distributed lag model (DLM) can yield estimates of a critical window that are different from those from an analysis that regresses the outcome on each of the 3 trimester-average exposures (TAEs). Using a simulation study, we assessed bias in estimates of critical windows obtained using 3 regression approaches: 1) 3 separate models to estimate the association with each of the 3 TAEs; 2) a single model to jointly estimate the association between the outcome and all 3 TAEs; and 3) a DLM. We used weekly fine-particulate-matter exposure data for 238 births in a birth cohort in and around Boston, Massachusetts, and a simulated outcome and time-varying exposure effect. Estimates using separate models for each TAE were biased and identified incorrect windows. This bias arose from seasonal trends in particulate matter that induced correlation between TAEs. Including all TAEs in a single model reduced bias. DLM produced unbiased estimates and added flexibility to identify windows. Analysis of body mass index z score and fat mass in the same cohort highlighted inconsistent estimates from the 3 methods. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
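
    A constrained distributed lag model can be sketched by letting weekly exposures enter a single regression while forcing the week-specific coefficients onto a smooth basis over lag. The simulation below uses a polynomial basis and a true effect confined to weeks 10-20, so the fitted lag curve should concentrate the estimated effect around that window; the data and the basis choice are illustrative assumptions.

      # Sketch of a basis-constrained distributed lag model on simulated data.
      import numpy as np

      rng = np.random.default_rng(5)
      n, n_weeks = 500, 37
      X = rng.normal(10.0, 3.0, size=(n, n_weeks))          # weekly exposures
      true_beta = np.where((np.arange(n_weeks) >= 10) & (np.arange(n_weeks) <= 20), 0.05, 0.0)
      y = X @ true_beta + rng.normal(0.0, 1.0, n)

      # polynomial basis over lag (degree 4) constrains beta(lag) to be smooth
      lags = np.arange(n_weeks) / (n_weeks - 1)
      B = np.vander(lags, 5, increasing=True)                # (n_weeks, 5) basis
      Z = X @ B                                              # reduced design matrix
      gamma, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Z]), y, rcond=None)
      beta_hat = B @ gamma[1:]                               # back-transform to weekly effects
      print("estimated effect, weeks 0-9  :", np.round(beta_hat[:10].mean(), 3))
      print("estimated effect, weeks 10-20:", np.round(beta_hat[10:21].mean(), 3))
      print("estimated effect, weeks 21-36:", np.round(beta_hat[21:].mean(), 3))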

  10. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    NASA Astrophysics Data System (ADS)

    Qi, Wei

    2017-11-01

    Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation has relied on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. This approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost and design life periods. Two basins, with 54-year and 104-year flood records respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods over different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect increases in cost to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore only one design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from the intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.

  11. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885

  12. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.

  13. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research.

    PubMed

    Schmucker, Christine M; Blümle, Anette; Schell, Lisa K; Schwarzer, Guido; Oeller, Patrick; Cabrera, Laura; von Elm, Erik; Briel, Matthias; Meerpohl, Joerg J

    2017-01-01

    A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature. However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination. If missing outcome data differ systematically from published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention effect. As part of the EU-funded OPEN project (www.open-project.eu) we conducted a systematic review that assessed whether the inclusion of data that were not published at all and/or published only in the grey literature influences pooled effect estimates in meta-analyses and leads to different interpretation. Systematic review of published literature (methodological research projects). Four bibliographic databases were searched up to February 2016 without restriction of publication year or language. Methodological research projects were considered eligible for inclusion if they reviewed a cohort of meta-analyses which (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status of data or (ii) examined whether the inclusion of unpublished or grey literature data impacts the result of a meta-analysis. Seven methodological research projects including 187 meta-analyses comparing pooled treatment effect estimates according to different publication status were identified. Two research projects showed that published data showed larger pooled treatment effects in favour of the intervention than unpublished or grey literature data (Ratio of ORs 1.15, 95% CI 1.04-1.28 and 1.34, 95% CI 1.09-1.66). In the remaining research projects pooled effect estimates and/or overall findings were not significantly changed by the inclusion of unpublished and/or grey literature data. The precision of the pooled estimate was increased with narrower 95% confidence interval. Although we may anticipate that systematic reviews and meta-analyses not including unpublished or grey literature study results are likely to overestimate the treatment effects, current empirical research shows that this is only the case in a minority of reviews. Therefore, currently, a meta-analyst should particularly consider time, effort and costs when adding such data to their analysis. Future research is needed to identify which reviews may benefit most from including unpublished or grey data.

  14. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research

    PubMed Central

    Blümle, Anette; Schell, Lisa K.; Schwarzer, Guido; Oeller, Patrick; Cabrera, Laura; von Elm, Erik; Briel, Matthias; Meerpohl, Joerg J.

    2017-01-01

    Background A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature. However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination. If missing outcome data differ systematically from published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention effect. As part of the EU-funded OPEN project (www.open-project.eu) we conducted a systematic review that assessed whether the inclusion of data that were not published at all and/or published only in the grey literature influences pooled effect estimates in meta-analyses and leads to different interpretation. Methods and findings Systematic review of published literature (methodological research projects). Four bibliographic databases were searched up to February 2016 without restriction of publication year or language. Methodological research projects were considered eligible for inclusion if they reviewed a cohort of meta-analyses which (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status of data or (ii) examined whether the inclusion of unpublished or grey literature data impacts the result of a meta-analysis. Seven methodological research projects including 187 meta-analyses comparing pooled treatment effect estimates according to different publication status were identified. Two research projects showed that published data showed larger pooled treatment effects in favour of the intervention than unpublished or grey literature data (Ratio of ORs 1.15, 95% CI 1.04–1.28 and 1.34, 95% CI 1.09–1.66). In the remaining research projects pooled effect estimates and/or overall findings were not significantly changed by the inclusion of unpublished and/or grey literature data. The precision of the pooled estimate was increased with narrower 95% confidence interval. Conclusions Although we may anticipate that systematic reviews and meta-analyses not including unpublished or grey literature study results are likely to overestimate the treatment effects, current empirical research shows that this is only the case in a minority of reviews. Therefore, currently, a meta-analyst should particularly consider time, effort and costs when adding such data to their analysis. Future research is needed to identify which reviews may benefit most from including unpublished or grey data. PMID:28441452

  15. Global precipitation estimates based on a technique for combining satellite-based estimates, rain gauge analysis, and NWP model precipitation information

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.

    1995-01-01

    The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
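
    The combination rule at the heart of the SGM technique is an inverse-error-variance weighted average, with model values filling cells where no observationally based estimate exists. The sketch below applies that rule to tiny invented grids; the error variances are placeholders, not the values used in the paper.

      # Sketch of inverse-error-variance weighted merging of precipitation fields,
      # with model values filling cells where neither observation is available.
      import numpy as np

      sat = np.array([[3.1, 2.4, np.nan], [1.0, np.nan, np.nan]])      # mm/day
      gauge = np.array([[2.8, np.nan, 4.2], [np.nan, np.nan, 0.7]])
      model = np.array([[3.0, 2.5, 4.0], [1.2, 0.9, 0.8]])
      var_sat, var_gauge = 1.0, 0.4            # assumed relative error variances

      w_sat = np.where(np.isnan(sat), 0.0, 1.0 / var_sat)
      w_gauge = np.where(np.isnan(gauge), 0.0, 1.0 / var_gauge)
      num = w_sat * np.nan_to_num(sat) + w_gauge * np.nan_to_num(gauge)
      den = w_sat + w_gauge
      combined = np.where(den > 0, num / np.where(den > 0, den, 1.0), model)
      print(combined)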

  16. On the performance of surface renewal analysis to estimate sensible heat flux over two growing rice fields under the influence of regional advection

    NASA Astrophysics Data System (ADS)

    Castellví, F.; Snyder, R. L.

    2009-09-01

    High-frequency temperature data were recorded at one height and used in Surface Renewal (SR) analysis to estimate sensible heat flux during the full growing season of two rice fields located north-northeast of Colusa, CA (in the Sacramento Valley). One of the fields was seeded into a flooded paddy and the other was drill seeded before flooding. To minimize fetch requirements, the measurement height was selected to be close to the maximum expected canopy height. The roughness sub-layer depth was estimated to determine whether the temperature data came from the inertial or roughness sub-layer. The equation to estimate the roughness sub-layer depth was derived by combining simple mixing-length theory, mixing-layer analogy, equations to account for stable atmospheric surface layer conditions, and semi-empirical canopy-architecture relationships. The potential for SR analysis as a method that operates in the full surface boundary layer was tested using data collected over growing vegetation at a site influenced by regional advection of sensible heat flux. The inputs used to estimate the sensible heat fluxes included air temperature sampled at 10 Hz, the mean and variance of the horizontal wind speed, the canopy height, and the plant area index for a given intermediate height of the canopy. Regardless of the stability conditions and measurement height above the canopy, sensible heat flux estimates using SR analysis gave results that were similar to those measured with the eddy covariance method. Under unstable cases, it was shown that the performance was sensitive to estimation of the roughness sub-layer depth. However, an expression was provided to select the crucial scale required for its estimation.

  17. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform

    PubMed Central

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-01-01

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317

  18. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    PubMed

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
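
As a rough illustration of the instantaneous-frequency tracking that underlies this family of estimators, the sketch below extracts the IF of a synthetic chirp as the ridge of a plain short-time Fourier transform and recovers the phase coefficients from it. The fixed-length window is a deliberate simplification of the adaptive window and S-transform machinery described in the abstract; signal parameters are invented.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Quadratic-phase (chirp) signal: phase = 2*pi*(a1*t + a2*t^2), a1 = 100, a2 = 75
x = np.cos(2 * np.pi * (100 * t + 75 * t**2)) + 0.1 * np.random.default_rng(7).normal(size=t.size)

f, tt, Z = stft(x, fs=fs, nperseg=128, noverlap=96)
if_est = f[np.abs(Z).argmax(axis=0)]          # spectrogram ridge = IF estimate per frame

# For a quadratic phase, IF(t) = a1 + 2*a2*t, so a line fit to the IF track
# recovers the two phase coefficients
slope, intercept = np.polyfit(tt, if_est, 1)
a1_hat, a2_hat = intercept, slope / 2.0       # expected roughly 100 and 75
print(f"a1 ~ {a1_hat:.1f}, a2 ~ {a2_hat:.1f}")
```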

  19. Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.

    PubMed

    Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof

    2009-04-01

    Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
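
A minimal Monte Carlo sketch of this style of calculation is shown below: failure probabilities and downtimes for a few basic events are drawn from assumed distributions and aggregated into an expected Customer Minutes Lost figure. The tree structure, distributions and numbers are invented for illustration and are not taken from the Swedish case study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                       # Monte Carlo iterations

# Uncertain annual failure probabilities for three basic events (beta distributions
# stand in for the mix of hard data and expert judgement described in the abstract)
p_source = rng.beta(2, 50, n)     # raw-water source unavailable   (quantity failure)
p_treat  = rng.beta(3, 40, n)     # treatment barrier fails        (quality failure)
p_pipe   = rng.beta(5, 30, n)     # major distribution pipe burst  (quantity failure)

# Downtime per failure in minutes (lognormal expert judgements)
d_source = rng.lognormal(np.log(600), 0.5, n)
d_treat  = rng.lognormal(np.log(240), 0.4, n)
d_pipe   = rng.lognormal(np.log(120), 0.6, n)

# Fraction of customers affected by each event
frac = np.array([1.0, 1.0, 0.05])

# Expected Customer Minutes Lost per customer and year, summed over the OR-ed events
cml = (p_source * d_source * frac[0]
       + p_treat * d_treat * frac[1]
       + p_pipe * d_pipe * frac[2])

print(f"mean CML = {cml.mean():.1f} min/customer/yr, "
      f"90% interval {np.percentile(cml, 5):.1f}-{np.percentile(cml, 95):.1f}")
```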

  20. Prevalence of treponema species detected in endodontic infections: systematic review and meta-regression analysis.

    PubMed

    Leite, Fábio R M; Nascimento, Gustavo G; Demarco, Flávio F; Gomes, Brenda P F A; Pucci, Cesar R; Martinho, Frederico C

    2015-05-01

    This systematic review and meta-regression analysis aimed to calculate a combined prevalence estimate and evaluate the prevalence of different Treponema species in primary and secondary endodontic infections, including symptomatic and asymptomatic cases. The MEDLINE/PubMed, Embase, Scielo, Web of Knowledge, and Scopus databases were searched without starting date restriction up to and including March 2014. Only reports in English were included. The selected literature was reviewed by 2 authors and classified as suitable or not to be included in this review. Lists were compared, and, in case of disagreements, decisions were made after a discussion based on inclusion and exclusion criteria. A pooled prevalence of Treponema species in endodontic infections was estimated. Additionally, a meta-regression analysis was performed. Among the 265 articles identified in the initial search, only 51 were included in the final analysis. The studies were classified into 2 different groups according to the type of endodontic infection and whether it was an exclusively primary/secondary study (n = 36) or a primary/secondary comparison (n = 15). The pooled prevalence of Treponema species was 41.5% (95% confidence interval, 35.9-47.0). In the multivariate model of meta-regression analysis, primary endodontic infections (P < .001), acute apical abscess, symptomatic apical periodontitis (P < .001), and concomitant presence of 2 or more species (P = .028) explained the heterogeneity regarding the prevalence rates of Treponema species. Our findings suggest that Treponema species are important pathogens involved in endodontic infections, particularly in cases of primary and acute infections. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  1. The SRI-WEFA Soviet Econometric Model: Phase One Documentation

    DTIC Science & Technology

    1975-03-01

    established prices. We also have an estimated equation for an end-use residual category which conceptually includes state grain reserves, other undis...forecasting. An important virtue of the econometric discipline is that it requires one first to conceptualize and estimate regularities of behavior...any descriptive analysis. Within the framework of an econometric model, the analyst is able to discriminate among these "special events

  2. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data...collection of practical techniques to address these issues for a modeling methodology. Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes

  3. Spectral Analysis for DIAL and Lidar Detection of TATP

    DTIC Science & Technology

    2008-08-13

    REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 The public reporting burden for this collection of information is estimated to average 1...hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and...completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of

  4. Arkansas, 2009 forest inventory and analysis factsheet

    Treesearch

    James F. Rosson

    2011-01-01

    The summary includes estimates of forest land area (table 1), ownership (table 2), forest-type groups (table 3), volume (tables 4 and 5), biomass (tables 6 and 7), and pine plantation area (table 8) along with maps of Arkansas’ survey units (fig. 1), percent forest by county (fig. 2), and distribution of pine plantations (fig. 3). The estimates are presented by survey...

  5. A Mathematical View of Water Table Fluctuations in a Shallow Aquifer in Brazil.

    PubMed

    Neto, Dagmar C; Chang, Hung K; van Genuchten, Martinus Th

    2016-01-01

    Detailed monitoring of the groundwater table can provide important data about both short- and long-term aquifer processes, including information useful for estimating recharge and facilitating groundwater modeling and remediation efforts. In this paper, we present results of 4 years (2002 to 2005) of monitoring groundwater levels in the Rio Claro Aquifer using observation wells drilled at the Rio Claro campus of São Paulo State University in Brazil. The data were used to follow natural periodic fluctuations in the water table, specifically those resulting from earth tides and seasonal recharge cycles. Statistical analyses included methods of time-series analysis using Fourier analysis, cross-correlation, and R/S analysis. Relationships could be established between rainfall and well recovery, as well as the persistence and degree of autocorrelation of the water table variations. We further used numerical solutions of the Richards equation to obtain estimates of the recharge rate and seasonal groundwater fluctuations. Seasonal soil moisture transit times through the vadose zone obtained with the numerical solution were very close to those obtained with the cross-correlation analysis. We also employed a little-used deep drainage boundary condition to obtain estimates of seasonal water table fluctuations, which were found to be consistent with observed transient groundwater levels during the period of study. © 2015, National Ground Water Association.
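
The cross-correlation step used to relate rainfall to well recovery can be sketched in a few lines of numpy; the daily rainfall and water-level series below are synthetic, with an imposed delay standing in for the vadose-zone transit time.

```python
import numpy as np

rng = np.random.default_rng(2)
days = 4 * 365
rain = rng.gamma(0.3, 8.0, days)                       # synthetic daily rainfall (mm)

# Synthetic water-table response: smoothed rainfall delayed by ~60 days plus noise
lag_true = 60
kernel = np.exp(-np.arange(120) / 20.0)
recharge = np.convolve(rain, kernel / kernel.sum(), mode="full")[:days]
level = np.r_[np.zeros(lag_true), recharge[:-lag_true]] + rng.normal(0, 0.05, days)

# Normalised cross-correlation for positive lags (rainfall leading the water table)
r = (rain - rain.mean()) / rain.std()
l = (level - level.mean()) / level.std()
max_lag = 200
cc = np.array([np.mean(r[:days - k] * l[k:]) for k in range(max_lag)])
print("lag of maximum cross-correlation (days):", int(cc.argmax()))
```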

  6. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  7. Improving the accuracy of burn-surface estimation.

    PubMed

    Nichter, L S; Williams, J; Bryant, C A; Edlich, R F

    1985-09-01

    A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to accurately sketch TBSAB, significant burn size overestimation (p < 0.01) and large interrater variability of potential consequence were noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to meaningfully compare different treatment protocols.

  8. Improved analysis of ground vibrations produced by man-made sources.

    PubMed

    Ainalis, Daniel; Ducarne, Loïc; Kaufmann, Olivier; Tshibangu, Jean-Pierre; Verlinden, Olivier; Kouroussis, Georges

    2018-03-01

    Man-made sources of ground vibration must be carefully monitored in urban areas in order to ensure that structural damage and discomfort to residents are prevented or minimised. The research presented in this paper provides a comparative evaluation of various methods used to analyse a series of tri-axial ground vibration measurements generated by rail, road, and explosive blasting. The first part of the study is focused on comparing various techniques to estimate the dominant frequency, including time-frequency analysis. The comparative evaluation of the various methods to estimate the dominant frequency revealed that, depending on the method used, there can be significant variation in the estimates obtained. A new and improved analysis approach using the continuous wavelet transform is also presented, using the time-frequency distribution to estimate the localised dominant frequency and peak particle velocity. The technique can be used to accurately identify the level and frequency content of a ground vibration signal as it varies with time, and identify the number of times the threshold limits of damage are exceeded. Copyright © 2017 Elsevier B.V. All rights reserved.
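
A minimal sketch of the wavelet-based idea is given below, assuming the PyWavelets package is available: a Morlet CWT of a synthetic velocity trace yields a localised dominant frequency, and the peak particle velocity is taken as the largest absolute amplitude. The signal, scales and units are illustrative only.

```python
import numpy as np
import pywt

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic ground-vibration trace: a 30 Hz blast-like burst followed by a 10 Hz tail
v = (np.exp(-((t - 0.4) / 0.05) ** 2) * np.sin(2 * np.pi * 30 * t)
     + 0.4 * np.exp(-((t - 1.2) / 0.15) ** 2) * np.sin(2 * np.pi * 10 * t))

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(v, scales, "morl", sampling_period=1.0 / fs)

dominant_freq = freqs[np.abs(coefs).argmax(axis=0)]   # localised dominant frequency (Hz)
ppv = np.abs(v).max()                                  # peak particle velocity
print(f"PPV = {ppv:.2f} (same units as v), "
      f"dominant frequency at the PPV instant = {dominant_freq[np.abs(v).argmax()]:.1f} Hz")
```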

  9. Statistical analysis of the calibration procedure for personnel radiation measurement instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, W.J.; Bengston, S.J.; Kalbeitzer, F.L.

    1980-11-01

    Thermoluminescent analyzer (TLA) calibration procedures were used to estimate personnel radiation exposure levels at the Idaho National Engineering Laboratory (INEL). A statistical analysis is presented herein based on data collected over a six-month period in 1979 on four TLA's located in the Department of Energy (DOE) Radiological and Environmental Sciences Laboratory at the INEL. The data were collected according to the day-to-day procedure in effect at that time. Both gamma and beta radiation models are developed. Observed TLA readings of thermoluminescent dosimeters are correlated with known radiation levels. This correlation is then used to predict unknown radiation doses from future analyzer readings of personnel thermoluminescent dosimeters. The statistical techniques applied in this analysis include weighted linear regression, estimation of systematic and random error variances, prediction interval estimation using Scheffe's theory of calibration, the estimation of the ratio of the means of two normal bivariate distributed random variables and their corresponding confidence limits according to Kendall and Stuart, tests of normality, experimental design, a comparison between instruments, and quality control.
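
Of the techniques listed, the calibration step itself can be sketched compactly: a weighted linear regression of analyzer reading on known dose, followed by inverse prediction of an unknown dose from a new reading. The doses, readings and weights below are invented, and the Scheffé-type prediction intervals used in the study are not reproduced.

```python
import numpy as np

# Calibration data: known delivered doses (mrem) and analyzer readings, with per-point
# weights taken as the inverse variance of replicate readings (illustrative values)
dose = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
reading = np.array([52, 104, 196, 410, 805, 1620], dtype=float)
weights = 1.0 / np.array([4.0, 4.0, 9.0, 16.0, 36.0, 100.0])

# Weighted least squares for reading = b0 + b1 * dose
W = np.diag(weights)
X = np.column_stack([np.ones_like(dose), dose])
b0, b1 = np.linalg.solve(X.T @ W @ X, X.T @ W @ reading)

# Inverse prediction (classical calibration): dose estimate for a new reading
new_reading = 300.0
dose_hat = (new_reading - b0) / b1
print(f"fit: reading = {b0:.2f} + {b1:.4f} * dose;  predicted dose = {dose_hat:.1f} mrem")
```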

  10. Kernel canonical-correlation Granger causality for multiple time series

    NASA Astrophysics Data System (ADS)

    Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu

    2011-04-01

    Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.

  11. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
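
The geometric core of the estimator can be illustrated simply: for a planar wavefront, each activation time is an affine function of electrode position, so a least-squares plane fit (the maximum-likelihood solution for equal-variance Gaussian AT errors) yields the slowness vector and hence the conduction speed. The sketch below uses an invented electrode layout and omits the unknown synchronization offsets that the paper handles for sequential mapping.

```python
import numpy as np

# Electrode positions (mm) and extracted activation times (ms) for one wavefront
xy = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0],
               [0.0, 2.0], [2.0, 2.0], [4.0, 2.0]])
true_v, theta = 0.8, np.deg2rad(30)            # 0.8 mm/ms propagating at 30 degrees
slowness = np.array([np.cos(theta), np.sin(theta)]) / true_v
at = xy @ slowness + 5.0 + np.random.default_rng(3).normal(0, 0.1, len(xy))

# Least-squares fit of AT = t0 + sx*x + sy*y  (ML under iid Gaussian AT errors)
A = np.column_stack([np.ones(len(xy)), xy])
t0, sx, sy = np.linalg.lstsq(A, at, rcond=None)[0]

ccv = 1.0 / np.hypot(sx, sy)                   # conduction velocity (mm/ms)
direction = np.degrees(np.arctan2(sy, sx))     # propagation direction (degrees)
print(f"CCV ~ {ccv:.2f} mm/ms, direction ~ {direction:.1f} deg")
```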

  12. Low-Temperature Hydrothermal Resource Potential

    DOE Data Explorer

    Katherine Young

    2016-06-30

    Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.

  13. Weight Assessment for Fuselage Shielding on Aircraft With Open-Rotor Engines and Composite Blade Loss

    NASA Technical Reports Server (NTRS)

    Carney, Kelly; Pereira, Michael; Kohlman, Lee; Goldberg, Robert; Envia, Edmane; Lawrence, Charles; Roberts, Gary; Emmerling, William

    2013-01-01

    The Federal Aviation Administration (FAA) has been engaged in discussions with airframe and engine manufacturers concerning regulations that would apply to new technology fuel efficient "open-rotor" engines. Existing regulations for the engines and airframe did not envision features of these engines that include eliminating the fan blade containment systems and including two rows of counter-rotating blades. Damage to the airframe from a failed blade could potentially be catastrophic. Therefore, the feasibility of using aircraft fuselage shielding was investigated. In order to establish the feasibility of this shielding, a study was conducted to provide an estimate for the fuselage shielding weight required to provide protection from an open-rotor blade loss. This estimate was generated using a two-step procedure. First, a trajectory analysis was performed to determine the blade orientation and velocity at the point of impact with the fuselage. The trajectory analysis also showed that a blade dispersion angle of 3 deg bounded the probable dispersion pattern and so was used for the weight estimate. Next, a finite element impact analysis was performed to determine the required shielding thickness to prevent fuselage penetration. The impact analysis was conducted using an FAA-provided composite blade geometry. The fuselage geometry was based on a medium-sized passenger composite airframe. In the analysis, both the blade and fuselage were assumed to be constructed from a T700S/PR520 triaxially-braided composite architecture. Sufficient test data on T700S/PR520 is available to enable reliable analysis, and also demonstrate its good impact resistance properties. This system was also used in modeling the surrogate blade. The estimated additional weight required for fuselage shielding for a wing-mounted counter-rotating open-rotor blade is 236 lb per aircraft. This estimate is based on the shielding material serving the dual use of shielding and fuselage structure. If the shielding material is not used for dual purpose, and is only used for shielding, then the additional weight per aircraft is estimated to be 428 lb. This weight estimate is based upon a number of assumptions that would need to be revised when applying this concept to an actual airplane design. For example, the weight savings that will result when there is no fan blade containment system, manufacturing limitations which may increase the weight where variable thicknesses were assumed, engine placement on the wing versus aft fuselage, etc.

  14. A lower and more constrained estimate of climate sensitivity using updated observations and detailed radiative forcing time series

    NASA Astrophysics Data System (ADS)

    Skeie, R. B.; Berntsen, T.; Aldrin, M.; Holden, M.; Myhre, G.

    2012-04-01

    A key question in climate science is to quantify the sensitivity of the climate system to perturbation in the radiative forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly constrained with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is estimated based on observed near-surface temperature change from the instrumental record, changes in ocean heat content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms are estimated and the cloud lifetime effect and the semi-direct effect, which are not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations of ocean heat content and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9˚C with 90% credible interval (C.I.) ranging from 1.2 to 2.9˚C, which is tighter than previously published estimates. Observational data up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of previously published studies that have used the instrumental record in attempts to constrain the ECS. We show that the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in the analysis, the 90% C.I. is 1.4 to 10.6˚C with a pronounced heavy tail in line with previous estimates of ECS constrained by observations in the 20th century. Also the transient climate response (TCR) is estimated in this study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1˚C, while the 90% C.I. is significantly broader ranging from 1.1 to 3.4 ˚C if only data up to and including year 2000 is used.

  15. Fourier analysis of multitracer cosmological surveys

    NASA Astrophysics Data System (ADS)

    Abramo, L. Raul; Secco, Lucas F.; Loureiro, Arthur

    2016-02-01

    We present optimal quadratic estimators for the Fourier analysis of cosmological surveys that detect several different types of tracers of large-scale structure. Our estimators can be used to simultaneously fit the matter power spectrum and the biases of the tracers - as well as redshift-space distortions (RSDs), non-Gaussianities (NGs), or any other effects that are manifested through differences between the clusterings of distinct species of tracers. Our estimators reduce to the one by Feldman, Kaiser & Peacock (FKP) in the case of a survey consisting of a single species of tracer. We show that the multitracer estimators are unbiased, and that their covariance is given by the inverse of the multitracer Fisher matrix. When the biases, RSDs and NGs are fixed to their fiducial values, and one is only interested in measuring the underlying power spectrum, our estimators are projected into the estimator found by Percival, Verde & Peacock. We have tested our estimators on simple (lognormal) simulated galaxy maps, and we show that they perform as expected, being either equivalent or superior to the FKP method in all cases we analysed. Finally, we have shown how to extend the multitracer technique to include the one-halo term of the power spectrum.

  16. Population drinking and fatal injuries in Eastern Europe: a time-series analysis of six countries.

    PubMed

    Landberg, Jonas

    2010-01-01

    To estimate to what extent injury mortality rates in 6 Eastern European countries are affected by changes in population drinking during the post-war period. The analysis included injury mortality rates and per capita alcohol consumption in Russia, Belarus, Poland, Hungary, Bulgaria and the former Czechoslovakia. Total population and gender-specific models were estimated using autoregressive integrated moving average (ARIMA) time-series modelling. The estimates for the total population were generally positive and significant. For Russia and Belarus, a 1-litre increase in per capita consumption was associated with an increase in injury mortality of 7.5 and 5.5 per 100,000 inhabitants, respectively. The estimates for the remaining countries ranged between 1.4 and 2.0. The gender-specific estimates displayed national variations similar to the total population estimates although the estimates for males were higher than for females in all countries. The results suggest that changes in per capita consumption have a significant impact on injury mortality in these countries, but the strength of the association tends to be stronger in countries where intoxication-oriented drinking is more common. Copyright 2009 S. Karger AG, Basel.

  17. Hierarchical models and Bayesian analysis of bird survey information

    USGS Publications Warehouse

    Sauer, J.R.; Link, W.A.; Royle, J. Andrew; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Summary of bird survey information is a critical component of conservation activities, but often our summaries rely on statistical methods that do not accommodate the limitations of the information. Prioritization of species requires ranking and analysis of species by magnitude of population trend, but often magnitude of trend is a misleading measure of actual decline when trend is poorly estimated. Aggregation of population information among regions is also complicated by varying quality of estimates among regions. Hierarchical models provide a reasonable means of accommodating concerns about aggregation and ranking of quantities of varying precision. In these models the need to consider multiple scales is accommodated by placing distributional assumptions on collections of parameters. For collections of species trends, this allows probability statements to be made about the collections of species-specific parameters, rather than about the estimates. We define and illustrate hierarchical models for two commonly encountered situations in bird conservation: (1) Estimating attributes of collections of species estimates, including ranking of trends, estimating number of species with increasing populations, and assessing population stability with regard to predefined trend magnitudes; and (2) estimation of regional population change, aggregating information from bird surveys over strata. User-friendly computer software makes hierarchical models readily accessible to scientists.

  18. Factor analysis of an instrument to measure the impact of disease on daily life.

    PubMed

    Pedrosa, Rafaela Batista Dos Santos; Rodrigues, Roberta Cunha Matheus; Padilha, Kátia Melissa; Gallani, Maria Cecília Bueno Jayme; Alexandre, Neusa Maria Costa

    2016-01-01

    To verify the structure of factors of an instrument to measure the Heart Valve Disease Impact on Daily Life (IDCV) when applied to coronary artery disease patients. The study included 153 coronary artery disease patients undergoing outpatient follow-up care. The IDCV structure of factors was initially assessed by means of confirmatory factor analysis and, subsequently, by exploratory factor analysis. The Varimax rotation method was used to estimate the main components of the analysis, with eigenvalues greater than one for extraction of factors and factor loadings greater than 0.40 for selection of items. Internal consistency was estimated using Cronbach's alpha coefficient. Confirmatory factor analysis did not confirm the original structure of factors of the IDCV. Exploratory factor analysis showed three dimensions, which together explained 78% of the measurement variance. Future studies with expansion of case selection are necessary to confirm the new IDCV structure of factors.
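
The extraction rules quoted above (principal components of the item correlation matrix, eigenvalues greater than one, varimax rotation, loadings above 0.40, Cronbach's alpha) can be sketched with numpy alone; the item responses below are random placeholders rather than IDCV data.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (standard iterative algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return loadings @ R

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

X = np.random.default_rng(4).normal(size=(153, 14))       # placeholder item responses
Rcorr = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(Rcorr)
keep = eigval > 1.0                                        # Kaiser criterion
loadings = eigvec[:, keep] * np.sqrt(eigval[keep])         # unrotated component loadings
rotated = varimax(loadings)
salient = np.abs(rotated) > 0.40                           # items retained per factor
print("factors kept:", keep.sum(), " alpha:", round(cronbach_alpha(X), 2),
      " salient items per factor:", salient.sum(axis=0))
```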

  19. Suicide among agricultural, forestry, and fishery workers.

    PubMed

    Shiri, Rahman

    2018-01-01

    In their meta-analysis, Klingelschmidt and her associates (1) found that agricultural, forestry, and fishery workers are at 48% higher risk of suicide than the working-age population. Moreover, they found that the excess risk is even greater among Japanese agricultural workers than workers from other high-income countries. There are several concerns regarding this meta-analysis. It appears that the excess risk has been overestimated for these workers. Furthermore, the excess risk in Japan is not different from that in other high-income countries. First, in a systematic review, the literature search should be comprehensive. A search of a single database is unlikely to identify most relevant studies, and these types of reviews are therefore not considered systematic reviews (2). In this review, a specialized database (PsycINFO) or a European database (EMBASE or Scopus) was not searched. Second, following the PRISMA guidelines, the critical appraisal of included studies (quality assessment) is a requirement for a systematic review. In a meta-analysis of observational studies, selection bias and confounding should be ruled out. Third, the reviewers did not correctly extract confidence intervals (CI) for the estimates of several studies such as Hassler 2004, Fleming 1999, and Fragar 2011. Moreover, some studies reported both the least- and maximally adjusted risk estimates. The reviewers, however, extracted age- or the least-adjusted risk estimate. A confounder-adjusted estimate is a more appropriate estimate of the true association. In some studies [eg, Kposowa (3), Agerbo (4)], the excess risk dropped by 52-71% after adjustment for confounders. As a sensitivity analysis, the reviewers could limit their meta-analysis to a subgroup of studies controlled for confounders. Fourth, the reviewers did not calculate an overall risk estimate for each study. They included the estimates of 2-6 subgroups for 22 studies in forest and funnel plots. A fixed-effect meta-analysis is a more appropriate model to combine the subgroups of a single study. Moreover, for the assessment of publication bias, it is not appropriate to include several subsamples of a single study in a funnel plot. Using estimates of subgroups can change a large study into several smaller studies. Fifth, some of the included studies compared agricultural, forestry, and fishery workers with a specific occupational group. The reviewers could calculate a risk estimate using all other occupational groups as a comparison group and exclude those studies that did not provide sufficient data for estimating such a risk estimate. In some studies, the excess risk for agricultural, forestry, and fishery workers disappears after comparing with other occupational groups [eg, adjusted risk ratios (RR) for Kposowa (3) = 1.02, 95% CI 0.41-2.54]. This is a main reason for the observed higher excess risk in Japanese workers. Wada et al (5) compared Japanese agricultural workers with sales workers and Suzuki et al (6) compared Japanese agricultural, forestry, and fishery workers with production process and related workers. Using all other occupational groups as a reference group, age-adjusted RR dropped from 3.53 (95% CI 2.84-4.38) to 2.61 (95% CI 2.10-3.25) for Wada et al (5) and from 3.24 (CI 2.95-3.57, both sexes combined) to 1.31 (CI 1.27-1.35 age-adjusted OR after excluding unemployed people) for Suzuki et al (6). The pooled estimate of these two register-based studies was 1.33 (95% CI 1.29-1.37) using a fixed model and 1.83 (95% CI 0.93-3.60) using a random model. 
Sixth, most of the included studies used register data, which had little information on the background characteristics of the participants. A majority of these studies controlled the estimates for age and sex only. Moreover, in this review, prospective cohort studies did not support the observed association. A meta-analysis of 11 case-control and prospective cohort studies shows no significant excess risk of suicide for agricultural, forestry, and fishery workers (pooled estimate = 1.02, 95% CI 0.71-1.47 for 6 cohort studies and 1.13, 95% CI 0.92-1.39, I2 = 91% for 11 case control and cohort studies, combining maximally adjusted risk estimates and comparing agricultural, forestry, and fishery workers with all other occupational groups where possible). The excess risk found in this review (1) can thus largely be due to confounding. References 1. Klingelschmidt J, Milner A, Khireddine-Medouni I, Witt K, Alexopoulos EC, Toivanen S, LaMontagne AD, Chastang JF, Niedhammer I. Suicide among agricultural, forestry, and fishery workers: a systematic literature review and meta-analysis. Scand J Work Environ Health. 2018;44(1):3-15. https://doi.org/10.5271/sjweh.3682.  2. Puljak L. If there is only one author or only one database was searched, a study should not be called a systematic review. J Clin Epidemiol. 2017;91:4-5. https://doi.org/10.1016/j.jclinepi.2017.08.002.  3. Kposowa AJ. Suicide mortality in the United States: differentials by industrial and occupational groups. Am J Ind Med. 1999;36:645-52. https://doi.org/10.1002/(SICI)1097-0274(199912)36:63.0.CO;2-T. 4. Agerbo E, Gunnell D, Bonde JP, Mortensen PB, Nordentoft M. Suicide and occupation: the impact of socio-economic, demographic and psychiatric differences. Psychol Med. 2007;37:1131-40. https://doi.org/10.1017/S0033291707000487.  5. Wada K, Gilmour S. Inequality in mortality by occupation related to economic crisis from 1980 to 2010 among working-age Japanese males. Sci Rep. 2016;6:22255. https://doi.org/10.1038/srep22255. 6. Suzuki E, Kashima S, Kawachi I, Subramanian SV. Social and geographical inequalities in suicide in Japan from 1975 through 2005: a census-based longitudinal analysis. PLoS One. 2013;8:e63443. https://doi.org/10.1371/journal.pone.0063443.
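
The fixed-versus-random-effects contrast quoted for the two Japanese register-based studies can be reproduced with a generic inverse-variance pooling of log risk ratios, using the DerSimonian-Laird estimate of between-study variance; standard errors are back-calculated from the reported confidence intervals.

```python
import numpy as np

def pool(rr, ci_low, ci_high):
    """Fixed-effect and DerSimonian-Laird random-effects pooling of risk ratios."""
    y = np.log(rr)                                        # log risk ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from 95% CI width
    w = 1.0 / se**2                                        # fixed-effect weights

    fixed = np.exp((w * y).sum() / w.sum())

    # DerSimonian-Laird between-study variance tau^2
    q = (w * (y - (w * y).sum() / w.sum())**2).sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    wr = 1.0 / (se**2 + tau2)                              # random-effects weights
    rand = np.exp((wr * y).sum() / wr.sum())
    return fixed, rand

# The two register-based Japanese estimates discussed in the letter
rr = np.array([2.61, 1.31])
lo = np.array([2.10, 1.27])
hi = np.array([3.25, 1.35])
print(pool(rr, lo, hi))   # roughly (1.33, 1.83), matching the fixed/random contrast above
```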

  20. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  1. Energy Analysis of Offshore Systems | Wind | NREL

    Science.gov Websites

    successful research to understand and improve the cost of wind generation technology. Research approaches used to estimate direct and indirect economic impacts of offshore wind. Chart of cost data for report on cost trends. Recent studies include: Analysis of capital cost trends for planned and installed

  2. Conjoint Analysis: A Study of the Effects of Using Person Variables.

    ERIC Educational Resources Information Center

    Fraas, John W.; Newman, Isadore

    Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…

  3. Financial planning for major initiatives: a framework for success.

    PubMed

    Harris, John M

    2007-11-01

    A solid framework for assessing a major strategic initiative consists of four broad steps: Initial considerations, including level of analysis required and resources that will be brought to bear. Preliminary financial estimates for board approval to further assess the initiative. Assessment of potential partners' interest in the project. Feasibility analysis for board green light.

  4. Some Supplementary Methods for the Analysis of the Delis-Kaplan Executive Function System

    ERIC Educational Resources Information Center

    Crawford, John R.; Garthwaite, Paul H.; Sutherland, David; Borland, Nicola

    2011-01-01

    Supplementary methods for the analysis of the Delis-Kaplan Executive Function System (Delis, Kaplan, & Kramer, 2001) are made available, including (a) quantifying the number of abnormally low achievement scores exhibited by an individual and accompanying this with an estimate of the percentage of the normative population expected to exhibit at…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.

    The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation procedures, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.

  6. Estimating the spatial distribution of soil organic matter density and geochemical properties in a polygonal shaped Arctic Tundra using core sample analysis and X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Soom, F.; Ulrich, C.; Dafflon, B.; Wu, Y.; Kneafsey, T. J.; López, R. D.; Peterson, J.; Hubbard, S. S.

    2016-12-01

    The Arctic tundra with its permafrost dominated soils is one of the regions most affected by global climate change, and in turn, can also influence the changing climate through biogeochemical processes, including greenhouse gas release or storage. Characterization of shallow permafrost distribution and characteristics are required for predicting ecosystem feedbacks to a changing climate over decadal to century timescales, because they can drive active layer deepening and land surface deformation, which in turn can significantly affect hydrological and biogeochemical responses, including greenhouse gas dynamics. In this study, part of the Next-Generation Ecosystem Experiment (NGEE-Arctic), we use X-ray computed tomography (CT) to estimate wet bulk density of cores extracted from a field site near Barrow AK, which extend 2-3m through the active layer into the permafrost. We use multi-dimensional relationships inferred from destructive core sample analysis to infer organic matter density, dry bulk density and ice content, along with some geochemical properties from nondestructive CT-scans along the entire length of the cores, which was not obtained by the spatially limited destructive laboratory analysis. Multi-parameter cross-correlations showed good agreement between soil properties estimated from CT scans versus properties obtained through destructive sampling. Soil properties estimated from cores located in different types of polygons provide valuable information about the vertical distribution of soil and permafrost properties as a function of geomorphology.

  7. Laboratory evaluation of a field-portable sealed source X-ray fluorescence spectrometer for determination of metals in air filter samples.

    PubMed

    Lawryk, Nicholas J; Feng, H Amy; Chen, Bean T

    2009-07-01

    Recent advances in field-portable X-ray fluorescence (FP XRF) spectrometer technology have made it a potentially valuable screening tool for the industrial hygienist to estimate worker exposures to airborne metals. Although recent studies have shown that FP XRF technology may be better suited for qualitative or semiquantitative analysis of airborne lead in the workplace, these studies have not extensively addressed its ability to measure other elements. This study involved a laboratory-based evaluation of a representative model FP XRF spectrometer to measure elements commonly encountered in workplace settings that may be collected on air sample filter media, including chromium, copper, iron, manganese, nickel, lead, and zinc. The evaluation included assessments of (1) response intensity with respect to location on the probe window, (2) limits of detection for five different filter media, (3) limits of detection as a function of analysis time, and (4) bias, precision, and accuracy estimates. Teflon, polyvinyl chloride, polypropylene, and mixed cellulose ester filter media all had similarly low limits of detection for the set of elements examined. Limits of detection, bias, and precision generally improved with increasing analysis time. Bias, precision, and accuracy estimates generally improved with increasing element concentration. Accuracy estimates met the National Institute for Occupational Safety and Health criterion for nearly all the element and concentration combinations. Based on these results, FP XRF spectrometry shows potential to be useful in the assessment of worker inhalation exposures to other metals in addition to lead.

  8. Crustal dynamics project data analysis, 1987. Volume 1: Fixed station VLBI geodetic results, 1979-1986

    NASA Technical Reports Server (NTRS)

    Ryan, J. W.; Ma, C.

    1987-01-01

    The Goddard VLBI group reports the results of analyzing Mark III data sets from fixed observatories through the end of 1986 and available to the Crustal Dynamics Project. All full-day data from POLARIS/IRIS are included. The mobile VLBI sites at Platteville (Colorado), Penticton (British Columbia), and Yellowknife (Northwest Territories) are also included since these occupations bear on the study of plate stability. Two large solutions, GLB121 and GLB122, were used to obtain Earth rotation parameters and baseline evolutions, respectively. Radio source positions were estimated globally while nutation offsets were estimated from each data set. The results include 25 sites and 108 baselines.

  9. ECONOMIC ANALYSIS FOR THE GROUND WATER RULE ...

    EPA Pesticide Factsheets

    The Ground Water Rule Economic Analysis provides a description of the need for the rule, consideration of regulatory alternatives, baseline analysis including national ground water system profile and an estimate of pathogen and indicator occurrence (Chapter 4), a risk assessment and benefits analysis (Chapter 5), and a cost analysis (Chapter 6). Chapters 4, 5 and 6, selected appendices and sections of other chapters will be peer reviewed. The objective of the Economic Analysis Document is to support the final Ground Water Rule.

  10. Moderation analysis using a two-level regression model.

    PubMed

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
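
The MMR starting point that the paper builds on is a least-squares regression with a product term. A minimal sketch with statsmodels is shown below on simulated, deliberately heteroscedastic data; the two-level NML estimator and the accompanying R package developed in the paper are not reproduced here, and the variable names only echo the General Social Survey example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
education = rng.normal(13, 3, n)                      # predictor (years of education)
race = rng.integers(0, 2, n)                          # moderator (binary, illustrative)
# Heteroscedastic errors: residual variance depends on the moderator
prestige = (20 + 2.0 * education + 3.0 * race + 0.8 * education * race
            + rng.normal(0, 5 + 5 * race, n))

df = pd.DataFrame({"prestige": prestige, "education": education, "race": race})
mmr = smf.ols("prestige ~ education * race", data=df).fit()   # product term = moderation
print(mmr.params)
print("moderation p-value:", mmr.pvalues["education:race"])
```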

  11. Analysis of Meteorological Satellite location and data collection system concepts

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.; Reed, D. L.

    1981-01-01

    A satellite system that employs a spaceborne RF interferometer to determine the location and velocity of data collection platforms attached to meteorological balloons is proposed. This meteorological advanced location and data collection system (MALDCS) is intended to fly aboard a low polar orbiting satellite. The flight instrument configuration includes antennas supported on long deployable booms. The platform location and velocity estimation errors introduced by the dynamic and thermal behavior of the antenna booms and the effects of the presence of the booms on the performance of the spacecraft's attitude control system, and the control system design considerations critical to stable operations are examined. The physical parameters of the Astromast type of deployable boom were used in the dynamic and thermal boom analysis, and the TIROS N system was assumed for the attitude control analysis. Velocity estimation error versus boom length was determined. There was an optimum, minimum error, antenna separation distance. A description of the proposed MALDCS system and a discussion of ambiguity resolution are included.

  12. Augmented Topological Descriptors of Pore Networks for Material Science.

    PubMed

    Ushizima, D; Morozov, D; Weber, G H; Bianchi, A G C; Sethian, J A; Bethel, E W

    2012-12-01

    One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity, and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction and making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.

  13. Inference and Prediction of Metabolic Network Fluxes

    PubMed Central

    Nikoloski, Zoran; Perez-Storey, Richard; Sweetlove, Lee J.

    2015-01-01

    In this Update, we cover the basic principles of the estimation and prediction of the rates of the many interconnected biochemical reactions that constitute plant metabolic networks. This includes metabolic flux analysis approaches that utilize the rates or patterns of redistribution of stable isotopes of carbon and other atoms to estimate fluxes, as well as constraints-based optimization approaches such as flux balance analysis. Some of the major insights that have been gained from analysis of fluxes in plants are discussed, including the functioning of metabolic pathways in a network context, the robustness of the metabolic phenotype, the importance of cell maintenance costs, and the mechanisms that enable energy and redox balancing at steady state. We also discuss methodologies to exploit 'omic data sets for the construction of tissue-specific metabolic network models and to constrain the range of permissible fluxes in such models. Finally, we consider the future directions and challenges faced by the field of metabolic network flux phenotyping. PMID:26392262
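
A toy flux balance analysis, one of the constraints-based approaches mentioned, can be written as a small linear program: maximise a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction network below is invented purely to show the mechanics and is not a plant model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network with one internal metabolite A:
#   R1 (uptake):   -> A
#   R2 (biomass):  A ->
#   R3 (overflow): A ->
# Stoichiometric matrix S (rows: metabolites, columns: reactions)
S = np.array([[1.0, -1.0, -1.0]])

bounds = [(0, 10.0),    # uptake limited to 10 flux units
          (0, None),    # biomass flux unbounded above
          (0, 2.0)]     # overflow capped at 2 flux units

c = np.array([0.0, -1.0, 0.0])   # maximise biomass = minimise its negative

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes [uptake, biomass, overflow]:", res.x)   # expected [10, 10, 0]
```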

  14. Best practices for roundabouts on state highways.

    DOT National Transportation Integrated Search

    2013-07-01

    This report presents a series of research findings from an investigation into roundabout operations. This includes a comparison of several analysis tools for estimating roundabout performance (the Highway Capacity Manual, SIDRA, ARCADY, VISSIM, and...

  15. The relative impact of baryons and cluster shape on weak lensing mass estimates of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Lee, B. E.; Le Brun, A. M. C.; Haq, M. E.; Deering, N. J.; King, L. J.; Applegate, D.; McCarthy, I. G.

    2018-05-01

    Weak gravitational lensing depends on the integrated mass along the line of sight. Baryons contribute to the mass distribution of galaxy clusters and the resulting mass estimates from lensing analysis. We use the cosmo-OWLS suite of hydrodynamic simulations to investigate the impact of baryonic processes on the bias and scatter of weak lensing mass estimates of clusters. These estimates are obtained by fitting NFW profiles to mock data using MCMC techniques. In particular, we examine the difference in estimates between dark matter-only runs and those including various prescriptions for baryonic physics. We find no significant difference in the mass bias when baryonic physics is included, though the overall mass estimates are suppressed when feedback from AGN is included. For lowest-mass systems for which a reliable mass can be obtained (M200 ≈ 2 × 1014M⊙), we find a bias of ≈-10 per cent. The magnitude of the bias tends to decrease for higher mass clusters, consistent with no bias for the most massive clusters which have masses comparable to those found in the CLASH and HFF samples. For the lowest mass clusters, the mass bias is particularly sensitive to the fit radii and the limits placed on the concentration prior, rendering reliable mass estimates difficult. The scatter in mass estimates between the dark matter-only and the various baryonic runs is less than between different projections of individual clusters, highlighting the importance of triaxiality.

  16. Mission analysis and performance specification studies report, appendix A

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The Near Term Hybrid Passenger Vehicle Development Program tasks included defining missions, developing distributions of daily travel and composite driving cycles for these missions, providing information necessary to estimate the potential replacement of the existing fleet by hybrids, and estimating acceleration/gradeability performance requirements for safe operation. The data was then utilized to develop mission specifications, define reference vehicles, develop hybrid vehicle performance specifications, and make fuel consumption estimates for the vehicles. The major assumptions which underlie the approach taken to the mission analysis and development of performance specifications are the following: the daily operating range of a hybrid vehicle should not be limited by the stored energy capacity and the performance of such a vehicle should not be strongly dependent on the battery state of charge.

  17. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s−1. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  18. Sampling in freshwater environments: suspended particle traps and variability in the final data.

    PubMed

    Barbizzi, Sabrina; Pati, Alessandra

    2008-11-01

    This paper reports a practical method to estimate measurement uncertainty, including the contribution from sampling, derived from the approach implemented by Ramsey for soil investigations. The methodology has been applied to estimate the measurement uncertainty (sampling and analysis) of (137)Cs activity concentration (Bq kg(-1)) and total carbon content (%) in suspended particle sampling in a freshwater ecosystem. Uncertainty estimates for the between-location, sampling, and analysis components have been evaluated. For the considered measurands, the relative expanded measurement uncertainties are 12.3% for (137)Cs and 4.5% for total carbon. For (137)Cs, the measurement (sampling+analysis) variance gives the major contribution to the total variance, while for total carbon the spatial variance is the dominant contributor to the total variance. The limitations and advantages of this basic method are discussed.
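
    The Ramsey-style approach partitions variance between locations and measurement (sampling plus analysis) using duplicated samples. The sketch below is a minimal classical (non-robust) ANOVA on a balanced duplicate design with invented activity values; it is not the paper's data or its robust variant.

```python
import numpy as np

def duplicate_anova(dup):
    """Classical ANOVA on a balanced duplicate design.

    dup: array of shape (n_locations, 2), one pair of duplicate measurements
    (sampling + analysis combined) per location.
    Returns (between-location variance, measurement variance).
    """
    dup = np.asarray(dup, dtype=float)
    s2_meas = np.mean(np.var(dup, axis=1, ddof=1))        # within-pair variance
    s2_locmeans = np.var(dup.mean(axis=1), ddof=1)        # variance of location means
    s2_between = max(s2_locmeans - s2_meas / 2.0, 0.0)    # Var(mean) = s2_b + s2_m/2
    return s2_between, s2_meas

# Hypothetical duplicate results, e.g. Cs-137 activity (Bq/kg) at 5 locations.
pairs = [[12.1, 11.4], [15.0, 14.2], [9.8, 10.5], [13.3, 12.6], [11.0, 11.9]]
s2_b, s2_m = duplicate_anova(pairs)
grand_mean = np.mean(pairs)
u_rel = 100 * 2 * np.sqrt(s2_m) / grand_mean   # expanded (k=2) relative uncertainty, %
print(f"between-location var = {s2_b:.2f}, measurement var = {s2_m:.2f}, U = {u_rel:.1f}%")
```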

  19. Space-based power conversion and power relay systems: Preliminary analysis of alternate systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The results are presented of nine months of technical study of non-photovoltaic options for the generation of electricity for terrestrial use by satellite power stations (SPS). A concept for the augmentation of ground-based solar power plants by orbital sunlight reflectors was also studied. Three SPS types having a solar energy source and two which used nuclear reactors were investigated. Data derived for each included: (1) configuration definition, including mass statement; (2) information for use in environmental impact assessment; (3) energy balance (ratio of energy produced to that required to achieve operation), and (4) development and other cost estimates. Cost estimates were dependent upon the total program (development, placement and operation of a number of satellites) which was postulated. This postulation was based upon an analysis of national power capacity trends and guidelines received from MSFC.

  20. The importance of operations, risk, and cost assessment to space transfer systems design

    NASA Technical Reports Server (NTRS)

    Ball, J. M.; Komerska, R. J.; Rowell, L. F.

    1992-01-01

    This paper examines several methodologies which contribute to comprehensive subsystem cost estimation. The example of a space-based lunar space transfer vehicle (STV) design is used to illustrate how including both primary and secondary factors into cost affects the decision of whether to use aerobraking or propulsion for earth orbit capture upon lunar return. The expected dominant cost factor in this decision is earth-to-orbit launch cost driven by STV mass. However, to quantify other significant cost factors, this cost comparison included a risk analysis to identify development and testing costs, a Taguchi design of experiments to determine a minimum mass aerobrake design, and a detailed operations analysis. As a result, the predicted cost advantage of aerobraking, while still positive, was subsequently reduced by about 30 percent compared to the simpler mass-based cost estimates.

  1. Risk of myocardial infarction and stroke in bipolar disorder: a systematic review and exploratory meta-analysis.

    PubMed

    Prieto, M L; Cuéllar-Barboza, A B; Bobo, W V; Roger, V L; Bellivier, F; Leboyer, M; West, C P; Frye, M A

    2014-11-01

    To review the evidence on and estimate the risk of myocardial infarction and stroke in bipolar disorder. A systematic search using MEDLINE, EMBASE, PsycINFO, Web of Science, Scopus, Cochrane Database of Systematic Reviews, and bibliographies (1946 - May, 2013) was conducted. Case-control and cohort studies of bipolar disorder patients age 15 or older with myocardial infarction or stroke as outcomes were included. Two independent reviewers extracted data and assessed quality. Estimates of effect were summarized using random-effects meta-analysis. Five cohort studies including 13 115 911 participants (27 092 bipolar) were included. Due to the use of registers, different statistical methods, and inconsistent adjustment for confounders, there was significant methodological heterogeneity among studies. The exploratory meta-analysis yielded no evidence for a significant increase in the risk of myocardial infarction: [relative risk (RR): 1.09, 95% CI 0.96-1.24, P = 0.20; I(2)  = 6%]. While there was evidence of significant study heterogeneity, the risk of stroke in bipolar disorder was significantly increased (RR 1.74, 95% CI 1.29-2.35; P = 0.0003; I(2)  = 83%). There may be a differential risk of myocardial infarction and stroke in patients with bipolar disorder. Confidence in these pooled estimates was limited by the small number of studies, significant heterogeneity and dissimilar methodological features. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
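
    The review pools study-level effects with random-effects meta-analysis. A minimal DerSimonian-Laird sketch on invented log relative risks and standard errors (not the review's data) is shown below.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Random-effects pooling of log relative risks (DerSimonian-Laird)."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                               # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)         # Cochran's Q
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max((q - df) / c, 0.0)                 # between-study variance
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = max((q - df) / q, 0.0) * 100 if q > 0 else 0.0
    rr = np.exp(pooled)
    ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
    return rr, ci, i2

# Hypothetical study-level relative risks and standard errors of log(RR).
log_rr = np.log([1.05, 1.20, 0.95, 1.10])
se = np.array([0.08, 0.15, 0.10, 0.12])
print(dersimonian_laird(log_rr, se))
```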

  2. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
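
    A PARMA model is an ARMA model whose parameters vary with the season. The sketch below fits only the simplest special case, a periodic AR(1) with one mean and one coefficient per month, by a moment estimator on a synthetic series; it is an illustration of the idea, not the maximum-likelihood procedure described in the paper.

```python
import numpy as np

def fit_par1(x, period=12):
    """Fit a periodic AR(1): x_t = mu_s + phi_s * (x_{t-1} - mu_{s-1}) + e_t,
    with a separate mean mu_s and coefficient phi_s for each season s."""
    x = np.asarray(x, dtype=float)
    n = x.size
    seasons = np.arange(n) % period
    mu = np.array([x[seasons == s].mean() for s in range(period)])
    phi = np.zeros(period)
    for s in range(period):
        t = np.arange(n)[(seasons == s) & (np.arange(n) > 0)]
        y = x[t] - mu[s]                      # centred current-season values
        z = x[t - 1] - mu[(s - 1) % period]   # centred previous-season values
        phi[s] = np.sum(z * y) / np.sum(z * z)
    return mu, phi

# Illustrative synthetic "monthly streamflow" series (20 years).
rng = np.random.default_rng(1)
months = np.arange(240)
flow = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
mu, phi = fit_par1(flow)
print(np.round(mu, 1), np.round(phi, 2))
```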

  3. Estimated hepatitis C prevalence and key population sizes in San Francisco: A foundation for elimination.

    PubMed

    Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F

    2018-01-01

    Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%- 4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
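
    Where several data sources informed one subpopulation, the study combined them by inverse-variance weighting. A minimal sketch with invented prevalence numbers (not the San Francisco estimates) is shown below.

```python
import numpy as np

def inverse_variance_pool(estimates, variances):
    """Combine independent estimates of the same quantity by inverse-variance weighting."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances
    pooled = np.sum(w * estimates) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Hypothetical seroprevalence estimates (%) for one subpopulation from three sources,
# with their squared standard errors as variances.
est = [2.1, 2.9, 2.4]
var = [0.09, 0.25, 0.16]
print(inverse_variance_pool(est, var))
```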

  4. The economic impacts of the September 11 terrorist attacks: a computable general equilibrium analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oladosu, Gbadebo A; Rose, Adam; Bumsoo, Lee

    This paper develops a bottom-up approach that focuses on behavioral responses in estimating the total economic impacts of the September 11, 2001, World Trade Center (WTC) attacks. The estimation includes several new features. First is the collection of data on the relocation of firms displaced by the attack, the major source of resilience in muting the direct impacts of the event. Second is a new estimate of the major source of impacts off-site -- the ensuing decline of air travel and related tourism in the U.S. due to the social amplification of the fear of terrorism. Third, the estimation is performed for the first time using Computable General Equilibrium (CGE) analysis, including a new approach to reflecting the direct effects of external shocks. This modeling framework has many advantages in this application, such as the ability to include behavioral responses of individual businesses and households, to incorporate features of inherent and adaptive resilience at the level of the individual decision maker and the market, and to gauge quantity and price interaction effects across sectors of the regional and national economies. We find that the total business interruption losses from the WTC attacks on the U.S. economy were only slightly over $100 billion, or less than 1.0% of Gross Domestic Product. The impacts were only a loss of $14 billion of Gross Regional Product for the New York Metropolitan Area.

  5. A Bayesian approach to multi-messenger astronomy: identification of gravitational-wave host galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, XiLong; Messenger, Christopher; Heng, Ik Siong

    We present a general framework for incorporating astrophysical information into Bayesian parameter estimation techniques used by gravitational wave data analysis to facilitate multi-messenger astronomy. Since the progenitors of transient gravitational wave events, such as compact binary coalescences, are likely to be associated with a host galaxy, improvements to the source sky location estimates through the use of host galaxy information are explored. To demonstrate how host galaxy properties can be included, we simulate a population of compact binary coalescences and show that for ∼8.5% of simulations within 200 Mpc, the top 10 most likely galaxies account for ∼50% of the total probability of hosting a gravitational wave source. The true gravitational wave source host galaxy is in the top 10 galaxy candidates ∼10% of the time. Furthermore, we show that by including host galaxy information, a better estimate of the inclination angle of a compact binary gravitational wave source can be obtained. We also demonstrate the flexibility of our method by incorporating the use of either the B or K band into our analysis.
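
    A toy illustration of the ranking idea: weight each candidate galaxy by the gravitational-wave sky-map probability at its position times a prior for hosting a source, here crudely proxied by luminosity. The numbers and the luminosity prior are assumptions for illustration, not the paper's likelihood.

```python
import numpy as np

def rank_host_galaxies(skymap_prob, luminosity):
    """Posterior weight for each candidate host galaxy, up to normalisation:
    P(galaxy_i | GW data) ∝ P(GW sky location at galaxy_i) * P(host | galaxy_i),
    with luminosity used as a simple proxy prior for hosting probability."""
    skymap_prob = np.asarray(skymap_prob, float)
    luminosity = np.asarray(luminosity, float)
    post = skymap_prob * luminosity
    post /= post.sum()
    order = np.argsort(post)[::-1]
    return order, post[order]

# Invented numbers: sky-map probability at 6 galaxy positions and their luminosities.
p_sky = [0.02, 0.15, 0.07, 0.30, 0.01, 0.09]
lum_B = [1.0, 0.3, 2.5, 0.8, 4.0, 1.2]      # arbitrary units
order, weights = rank_host_galaxies(p_sky, lum_B)
print("ranked galaxy indices:", order, "| cumulative prob of top 3:", weights[:3].sum())
```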

  6. Counting the cost: estimating the economic benefit of pedophile treatment programs.

    PubMed

    Shanahan, M; Donato, R

    2001-04-01

    The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs incorporating both the tangible and intangible cost of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. Estimates are prepared that take into account a number of problematic issues. These include the range of possible recidivism rates for treatment programs; the uncertainty surrounding the number of child sexual molestation offences committed by recidivists; and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in parameter estimates that impact the cost-benefit analysis of pedophile treatment programs, it is found that the potential range of economic costs from child sexual abuse is substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison, cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must include further methodological developments in estimating the quantitative impact of child sexual abuse in the community.

  7. The trade-off between hospital cost and quality of care. An exploratory empirical analysis.

    PubMed

    Morey, R C; Fine, D J; Loree, S W; Retzlaff-Roberts, D L; Tsubakitani, S

    1992-08-01

    The debate concerning quality of care in hospitals, its "value" and affordability, is increasingly of concern to providers, consumers, and purchasers in the United States and elsewhere. We undertook an exploratory study to estimate the impact on hospital-wide costs if quality-of-care levels were varied. To do so, we obtained costs and service output data regarding 300 U.S. hospitals, representing approximately a 5% cross section of all hospitals operating in 1983; both inpatient and outpatient services were included. The quality-of-care measure used for the exploratory analysis was the ratio of actual deaths in the hospital for the year in question to the forecasted number of deaths for the hospital; the hospital mortality forecaster had earlier (and elsewhere) been built from analyses of 6 million discharge abstracts, and took into account each hospital's actual individual admissions, including key patient descriptors for each admission. Such adjusted death rates have increasingly been used as potential indicators of quality, with recent research lending support for the viability of that linkage. The authors then utilized the economic construct of allocative efficiency relying on "best practices" concepts and peer groupings, built using the "envelopment" philosophy of Data Envelopment Analysis and Pareto efficiency. These analytical techniques estimated the efficiently delivered costs required to meet prespecified levels of quality of care. The marginal additional cost per each death deferred in 1983 was estimated to be approximately $29,000 (in 1990 dollars) for the average efficient hospital. Also, over a feasible range, a 1% increase in the level of quality of care delivered was estimated to increase hospital cost by an average of 1.34%. This estimated elasticity of quality on cost also increased with the number of beds in the hospital.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Performance Prototype Trough (PPT) Concentrating Collector consists of four 80-foot modules in a 320-foot row. The collector was analyzed, including cost estimates and manufacturing processes to produce collectors in volumes from 100 to 100,000 modules per year. The four different reflector concepts considered were the sandwich reflector structure, sheet metal reflector structure, molded reflector structure, and glass laminate structure. The sheet metal and glass laminate structures are emphasized with their related structure concepts. A preliminary manufacturing plan is offered that includes: documentation of the manufacturing process with production flow diagrams; labor and material costs at various production levels; machinery and equipment requirements including preliminary design specifications; and capital investment costs for a new plant. Of five reflector designs considered, the two judged best and considered at length are thin annealed glass and steel laminate on steel frame panel and thermally sagged glass. Also discussed are market considerations, costing and selling price estimates, design cost analysis and make/buy analysis. (LEW)

  9. Cost effectiveness of on- and off-field conservation practices designed to reduce nitrogen in downstream water

    USDA-ARS?s Scientific Manuscript database

    The objective of this analysis is to estimate and compare the cost-effectiveness of on- and off-field approaches to reducing nitrogen loadings. On-field practices include improving the timing, rate, and method of nitrogen application. Off-field practices include restoring wetlands and establishing v...

  10. Kansas's forests, 2005: statistics, methods, and quality assurance

    Treesearch

    Patrick D. Miles; W. Keith Moser; Charles J. Barnett

    2011-01-01

    The first full annual inventory of Kansas's forests was completed in 2005 after 8,868 plots were selected and 468 forested plots were visited and measured. This report includes detailed information on forest inventory methods and data quality estimates. Important resource statistics are included in the tables. A detailed analysis of Kansas inventory is presented...

  11. Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions.

    PubMed

    Page, Matthew J; McKenzie, Joanne E; Kirkham, Jamie; Dwan, Kerry; Kramer, Sharon; Green, Sally; Forbes, Andrew

    2014-10-01

    Systematic reviews may be compromised by selective inclusion and reporting of outcomes and analyses. Selective inclusion occurs when there are multiple effect estimates in a trial report that could be included in a particular meta-analysis (e.g. from multiple measurement scales and time points) and the choice of effect estimate to include in the meta-analysis is based on the results (e.g. statistical significance, magnitude or direction of effect). Selective reporting occurs when the reporting of a subset of outcomes and analyses in the systematic review is based on the results (e.g. a protocol-defined outcome is omitted from the published systematic review). To summarise the characteristics and synthesise the results of empirical studies that have investigated the prevalence of selective inclusion or reporting in systematic reviews of randomised controlled trials (RCTs), investigated the factors (e.g. statistical significance or direction of effect) associated with the prevalence and quantified the bias. We searched the Cochrane Methodology Register (to July 2012), Ovid MEDLINE, Ovid EMBASE, Ovid PsycINFO and ISI Web of Science (each up to May 2013), and the US Agency for Healthcare Research and Quality (AHRQ) Effective Healthcare Program's Scientific Resource Center (SRC) Methods Library (to June 2013). We also searched the abstract books of the 2011 and 2012 Cochrane Colloquia and the article alerts for methodological work in research synthesis published from 2009 to 2011 and compiled in Research Synthesis Methods. We included both published and unpublished empirical studies that investigated the prevalence and factors associated with selective inclusion or reporting, or both, in systematic reviews of RCTs of healthcare interventions. We included empirical studies assessing any type of selective inclusion or reporting, such as investigations of how frequently RCT outcome data is selectively included in systematic reviews based on the results, outcomes and analyses are discrepant between protocol and published review or non-significant outcomes are partially reported in the full text or summary within systematic reviews. Two review authors independently selected empirical studies for inclusion, extracted the data and performed a risk of bias assessment. A third review author resolved any disagreements about inclusion or exclusion of empirical studies, data extraction and risk of bias. We contacted authors of included studies for additional unpublished data. Primary outcomes included overall prevalence of selective inclusion or reporting, association between selective inclusion or reporting and the statistical significance of the effect estimate, and association between selective inclusion or reporting and the direction of the effect estimate. We combined prevalence estimates and risk ratios (RRs) using a random-effects meta-analysis model. Seven studies met the inclusion criteria. No studies had investigated selective inclusion of results in systematic reviews, or discrepancies in outcomes and analyses between systematic review registry entries and published systematic reviews. Based on a meta-analysis of four studies (including 485 Cochrane Reviews), 38% (95% confidence interval (CI) 23% to 54%) of systematic reviews added, omitted, upgraded or downgraded at least one outcome between the protocol and published systematic review. The association between statistical significance and discrepant outcome reporting between protocol and published systematic review was uncertain. 
The meta-analytic estimate suggested an increased risk of adding or upgrading (i.e. changing a secondary outcome to primary) when the outcome was statistically significant, although the 95% CI included no association and a decreased risk as plausible estimates (RR 1.43, 95% CI 0.71 to 2.85; two studies, n = 552 meta-analyses). Also, the meta-analytic estimate suggested an increased risk of downgrading (i.e. changing a primary outcome to secondary) when the outcome was statistically significant, although the 95% CI included no association and a decreased risk as plausible estimates (RR 1.26, 95% CI 0.60 to 2.62; two studies, n = 484 meta-analyses). None of the included studies had investigated whether the association between statistical significance and adding, upgrading or downgrading of outcomes was modified by the type of comparison, direction of effect or type of outcome; or whether there is an association between direction of the effect estimate and discrepant outcome reporting. Several secondary outcomes were reported in the included studies. Two studies found that reasons for discrepant outcome reporting were infrequently reported in published systematic reviews (6% in one study and 22% in the other). One study (including 62 Cochrane Reviews) found that 32% (95% CI 21% to 45%) of systematic reviews did not report all primary outcomes in the abstract. Another study (including 64 Cochrane and 118 non-Cochrane reviews) found that statistically significant primary outcomes were more likely to be completely reported in the systematic review abstract than non-significant primary outcomes (RR 2.66, 95% CI 1.81 to 3.90). None of the studies included systematic reviews published after 2009 when reporting standards for systematic reviews (Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement, and Methodological Expectations of Cochrane Intervention Reviews (MECIR)) were disseminated, so the results might not be generalisable to more recent systematic reviews. Discrepant outcome reporting between the protocol and published systematic review is fairly common, although the association between statistical significance and discrepant outcome reporting is uncertain. Complete reporting of outcomes in systematic review abstracts is associated with statistical significance of the results for those outcomes. Systematic review outcomes and analysis plans should be specified prior to seeing the results of included studies to minimise post-hoc decisions that may be based on the observed results. Modifications that occur once the review has commenced, along with their justification, should be clearly reported. Effect estimates and CIs should be reported for all systematic review outcomes regardless of the results. The lack of research on selective inclusion of results in systematic reviews needs to be addressed and studies that avoid the methodological weaknesses of existing research are also needed.

  12. Valuation of National Park System Visitation: The Efficient Use of Count Data Models, Meta-Analysis, and Secondary Visitor Survey Data

    NASA Astrophysics Data System (ADS)

    Neher, Christopher; Duffield, John; Patterson, David

    2013-09-01

    The National Park Service (NPS) currently manages a large and diverse system of park units nationwide which received an estimated 279 million recreational visits in 2011. This article uses park visitor data collected by the NPS Visitor Services Project to estimate a consistent set of count data travel cost models of park visitor willingness to pay (WTP). Models were estimated using 58 different park unit survey datasets. WTP estimates for these 58 park surveys were used within a meta-regression analysis model to predict average and total WTP for NPS recreational visitation system-wide. Estimated WTP per NPS visit in 2011 averaged $102 system-wide, and ranged across park units from $67 to $288. Total 2011 visitor WTP for the NPS system is estimated at $28.5 billion with a 95% confidence interval of $19.7-$43.1 billion. The estimation of a meta-regression model using consistently collected data and identical specification of visitor WTP models greatly reduces problems common to meta-regression models, including sample selection bias, primary data heterogeneity, and heteroskedasticity, as well as some aspects of panel effects. The article provides the first estimate of total annual NPS visitor WTP within the literature directly based on NPS visitor survey data.
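
    In a count-data travel cost model, consumer surplus (WTP) per trip is commonly taken as -1/β on the travel-cost coefficient. The sketch below fits a generic Poisson travel cost model to simulated visitor data with statsmodels; the variables, coefficients, and data are invented and are not the NPS models.

```python
import numpy as np
import statsmodels.api as sm

# Invented micro-data: trips per visitor, travel cost (USD), and income (thousands USD).
rng = np.random.default_rng(2)
n = 500
cost = rng.uniform(10, 300, n)
income = rng.normal(60, 15, n)
lam = np.exp(1.2 - 0.01 * cost + 0.005 * income)    # "true" model used only for simulation
trips = rng.poisson(lam)

# Poisson travel cost model: E[trips] = exp(b0 + b1*cost + b2*income)
X = sm.add_constant(np.column_stack([cost, income]))
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
beta_cost = fit.params[1]
cs_per_trip = -1.0 / beta_cost      # consumer surplus (WTP) per trip
print(f"estimated WTP per trip: ${cs_per_trip:.0f}")
```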

  13. Application of ray-traced tropospheric slant delays to geodetic VLBI analysis

    NASA Astrophysics Data System (ADS)

    Hofmeister, Armin; Böhm, Johannes

    2017-08-01

    The correction of tropospheric influences via so-called path delays is critical for the analysis of observations from space geodetic techniques like the very long baseline interferometry (VLBI). In standard VLBI analysis, the a priori slant path delays are determined using the concept of zenith delays, mapping functions and gradients. The a priori use of ray-traced delays, i.e., tropospheric slant path delays determined with the technique of ray-tracing through the meteorological data of numerical weather models (NWM), serves as an alternative way of correcting the influences of the troposphere on the VLBI observations within the analysis. In the presented research, the application of ray-traced delays to the VLBI analysis of sessions in a time span of 16.5 years is investigated. Ray-traced delays have been determined with program RADIATE (see Hofmeister in Ph.D. thesis, Department of Geodesy and Geophysics, Faculty of Mathematics and Geoinformation, Technische Universität Wien. http://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-3444, 2016) utilizing meteorological data provided by NWM of the European Centre for Medium-Range Weather Forecasts (ECMWF). In comparison with a standard VLBI analysis, which includes the tropospheric gradient estimation, the application of the ray-traced delays to an analysis, which uses the same parameterization except for the a priori slant path delay handling and the used wet mapping factors for the zenith wet delay (ZWD) estimation, improves the baseline length repeatability (BLR) at 55.9% of the baselines at sub-mm level. If no tropospheric gradients are estimated within the compared analyses, 90.6% of all baselines benefit from the application of the ray-traced delays, which leads to an average improvement of the BLR of 1 mm. The effects of the ray-traced delays on the terrestrial reference frame are also investigated. A separate assessment of the RADIATE ray-traced delays is carried out by comparison to the ray-traced delays from the National Aeronautics and Space Administration Goddard Space Flight Center (NASA GSFC) (Eriksson and MacMillan in http://lacerta.gsfc.nasa.gov/tropodelays, 2016) with respect to the analysis performances in terms of BLR results. If tropospheric gradient estimation is included in the analysis, 51.3% of the baselines benefit from the RADIATE ray-traced delays at sub-mm difference level. If no tropospheric gradients are estimated within the analysis, the RADIATE ray-traced delays deliver a better BLR at 63% of the baselines compared to the NASA GSFC ray-traced delays.

  14. The contextual effects of social capital on health: a cross-national instrumental variable analysis.

    PubMed

    Kim, Daniel; Baum, Christopher F; Ganz, Michael L; Subramanian, S V; Kawachi, Ichiro

    2011-12-01

    Past research on the associations between area-level/contextual social capital and health has produced conflicting evidence. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167,344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in both sexes when country population density and corruption were used as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Previous findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within and across countries may be large. Copyright © 2011 Elsevier Ltd. All rights reserved.
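
    A minimal two-stage least squares (2SLS) sketch on simulated data: the endogenous regressor is first regressed on the instruments, then the outcome is regressed on the fitted values. Variable names echo the study (trust instrumented by population density and corruption) purely as labels; the data, coefficients, and confounding structure are invented.

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z, controls=None):
    """Manual 2SLS: first stage x ~ instruments (+ controls), second stage
    y ~ fitted x (+ controls). Returns the coefficient on the instrumented regressor."""
    n = len(y)
    ones = np.ones((n, 1))
    C = ones if controls is None else np.column_stack([ones, controls])
    Z = np.column_stack([C, z])                                  # first-stage design
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]       # fitted endogenous regressor
    X2 = np.column_stack([C, x_hat])                             # second-stage design
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[-1]

# Toy data with an unobserved confounder u biasing the naive OLS estimate.
rng = np.random.default_rng(3)
n = 2000
density = rng.normal(size=n)
corruption = rng.normal(size=n)
u = rng.normal(size=n)
trust = 0.5 * density - 0.6 * corruption + 0.8 * u + rng.normal(size=n)
health = 0.3 * trust - 0.8 * u + rng.normal(size=n)              # "true" effect of trust = 0.3

ols_slope = np.polyfit(trust, health, 1)[0]
iv_slope = two_stage_least_squares(health, trust, np.column_stack([density, corruption]))
print(f"OLS slope: {ols_slope:.2f}, 2SLS slope: {iv_slope:.2f} (true effect 0.3)")
```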

  15. The contextual effects of social capital on health: a cross-national instrumental variable analysis

    PubMed Central

    Kim, Daniel; Baum, Christopher F; Ganz, Michael; Subramanian, S V; Kawachi, Ichiro

    2011-01-01

    Past observational studies of the associations of area-level/contextual social capital with health have revealed conflicting findings. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167 344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in women and men using country population density and corruption as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Past findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within countries may be large. PMID:22078106

  16. LFSTAT - Low-Flow Analysis in R

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor

    2013-04-01

    The calculation of characteristic stream flow during dry conditions is a basic requirement for many problems in hydrology, ecohydrology and water resources management. As opposed to floods, a number of different indices are used to characterise low flows and streamflow droughts. Although these indices and methods of calculation have been well documented in the WMO Manual on Low-flow Estimation and Prediction [1], comprehensive software enabling a fast and standardized calculation of low-flow statistics was missing. We present the new software package lfstat to fill this gap. Our software package is based on the statistical open source software R, and expands it to analyse daily stream flow data records focusing on low flows. As command-line based programs are not everyone's preference, we also offer a plug-in for the R-Commander, an easy-to-use graphical user interface (GUI) for R based on tcl/tk. The functionality of lfstat includes estimation methods for low-flow indices, extreme value statistics, deficit characteristics, and additional graphical methods to control the computation of complex indices and to illustrate the data. Besides the basic low-flow indices, the baseflow index and recession constants can be computed. For extreme value statistics, state-of-the-art methods for L-moment based local and regional frequency analysis (RFA) are available. The tools for deficit characteristics include various pooling and threshold selection methods to support the calculation of drought duration and deficit indices. The most common graphics for low-flow analysis are available, and the plots can be modified according to the user preferences. Graphics include hydrographs for different periods, flexible streamflow deficit plots, baseflow visualisation, recession diagnostics, flow duration curves as well as double mass curves, and many more. From a technical point of view, the package uses an S3 class called lfobj (low-flow object). These objects are ordinary R data frames including date, flow, hydrological year and, optionally, baseflow information. Once these objects are created, analyses can be performed by mouse-click and a script can be saved to make the analysis easily reproducible. At the moment we offer implementations of all major methods proposed in the WMO Manual on Low-flow Estimation and Prediction [1]. Future plans include a dynamic low-flow report in odt-file format using odf-weave, which allows automatic updates if data or analysis change. We hope to offer a tool that eases and structures the analysis of stream flow data focusing on low flows and makes the analysis transparent and communicable. The package can also be used in teaching students the first steps in low-flow hydrology. The software package can be installed from CRAN (latest stable) and R-Forge: http://r-forge.r-project.org (development version). References: [1] Gustard, Alan; Demuth, Siegfried (eds.). Manual on Low-flow Estimation and Prediction. Geneva, Switzerland: World Meteorological Organization (Operational Hydrology Report No. 50, WMO-No. 1029).
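
    lfstat itself is an R package; purely as a language-neutral illustration of two of the low-flow indices it computes (Q95 and the mean annual 7-day minimum, MAM7), here is a short Python sketch on a synthetic daily flow record. The record and its length are invented, and this is not lfstat's implementation.

```python
import numpy as np

def q95(flow):
    """Flow exceeded 95% of the time, i.e. the 5th percentile of daily flows."""
    return np.percentile(flow, 5)

def mam7(flow, days_per_year=365):
    """Mean annual 7-day minimum flow (MAM7)."""
    flow = np.asarray(flow, float)
    n_years = flow.size // days_per_year
    minima = []
    for y in range(n_years):
        year = flow[y * days_per_year:(y + 1) * days_per_year]
        rolling7 = np.convolve(year, np.ones(7) / 7, mode="valid")  # 7-day means
        minima.append(rolling7.min())
    return float(np.mean(minima))

# Synthetic 10-year daily flow record for illustration only.
rng = np.random.default_rng(4)
t = np.arange(3650)
flow = 8 + 5 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 1.0, t.size)
print(f"Q95 = {q95(flow):.2f}, MAM7 = {mam7(flow):.2f}")
```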

  17. A Qualitative Analysis of the Navy’s HSI Billet Structure

    DTIC Science & Technology

    2008-06-01

    ...subspecialty code. The research results support the hypothesis that the work requirements of the July 2007 data set of 4600P-coded billets (billets...

  18. Canopy reflectance modelling of semiarid vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet

    1994-01-01

    Three different types of remote sensing algorithms for estimating vegetation amount and other land surface biophysical parameters were tested for semiarid environments. These included statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical crop land. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimates of shrub size and density for both study sites, with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest including soil cover, vegetation cover, and leaf area index.
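
    Linear spectral mixture analysis models each pixel spectrum as a non-negative combination of endmember spectra. The sketch below solves this with non-negative least squares on invented 4-band endmembers (soil, green vegetation, shade); it illustrates the technique generically, not the study's endmembers or data.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Constrained linear spectral unmixing: solve pixel ≈ E @ f with f >= 0,
    then renormalise the fractions to sum to one (a common simplification)."""
    f, _ = nnls(endmembers, pixel)
    return f / f.sum() if f.sum() > 0 else f

# Hypothetical 4-band endmember spectra (columns: soil, green vegetation, shade).
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.08, 0.02],
              [0.40, 0.45, 0.03],
              [0.45, 0.30, 0.03]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel
print(unmix(pixel, E))   # recovers approximately [0.6, 0.3, 0.1]
```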

  19. Production cost analysis of Euphorbia lathyris. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendel, D.A.

    1979-08-01

    The purpose of this study is to estimate costs of production for Euphorbia lathyris (hereafter referred to as Euphorbia) in commercial-scale quantities. Selection of five US locations for analysis was based on assumed climatic and cultivation requirements. The five areas are: nonirrigated areas (Southeast Kansas and Central Oklahoma, Northeast Louisiana and Central Mississippi, Southern Illinois), and irrigated areas: (San Joaquin Valley and the Imperial Valley, California and Yuma, Arizona). Cost estimates are tailored to reflect each region's requirements and capabilities. Variable costs for inputs such as cultivation, planting, fertilization, pesticide application, and harvesting include material costs, equipment ownership, operating costs, and labor. Fixed costs include land, management, and transportation of the plant material to a conversion facility. Euphorbia crop production costs, on the average, range from $215 per acre in nonirrigated areas to $500 per acre in irrigated areas. Extraction costs for conversion of Euphorbia plant material to oil are estimated at $33.76 per barrel of oil, assuming a plant capacity of 3000 dry ST/D. Estimated Euphorbia crop production costs are competitive with those of corn. Alfalfa production costs per acre are less than those of Euphorbia in the Kansas/Oklahoma and Southern Illinois site, but greater in the irrigated regions. This disparity is accounted for largely by differences in productivity and irrigation requirements.

  20. Dropout from individual psychotherapy for major depression: A meta-analysis of randomized clinical trials.

    PubMed

    Cooper, Andrew A; Conklin, Laren R

    2015-08-01

    Dropout from mental health treatment poses a substantial problem, but rates vary substantially across studies and diagnoses. Focused reviews are needed to provide more detailed estimates for specific areas of research. Randomized clinical trials involving individual psychotherapy for unipolar depression are ubiquitous and important, but empirical data on average dropout rates from these studies is lacking. We conducted a random-effects meta-analysis of 54 such studies (N=5852) including 80 psychotherapy conditions, and evaluated a number of predictors of treatment- and study-level dropout rates. Our overall weighted dropout estimates were 19.9% at the study level, and 17.5% for psychotherapy conditions specifically. Therapy orientation did not significantly account for variance in dropout estimates, but estimates were significantly higher in psychotherapy conditions with more patients of minority racial status or with comorbid personality disorders. Treatment duration was also positively associated with dropout rates at trend level. Studies with an inactive control comparison had higher dropout rates than those without such a condition. Limitations include the inability to test certain potential predictors (e.g., socioeconomic status) due to infrequent reporting. Overall, our findings suggest the need to consider how specific patient and study characteristics may influence dropout rates in clinical research on individual therapy for depression. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    PubMed

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we have implemented this approach in an R package called MareyMap, which includes many functionalities useful to get reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler user-friendly web service version of MareyMap, called MareyMap Online, which allows a user to get recombination rates from her/his own data or from a publicly available database that we offer in a few clicks. When the analysis is done, the user is asked whether her/his curated data can be placed in the database and shared with other users, which we hope will make meta-analysis on recombination rates including many species easy in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
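
    The Marey map idea is that the local slope of genetic position (cM) against physical position (Mb) estimates the local recombination rate in cM/Mb. The sketch below computes sliding-window slopes on a toy marker map; it illustrates the concept only and is not MareyMap's interpolation machinery (loess, splines, etc.).

```python
import numpy as np

def marey_rates(physical_mb, genetic_cm, window_mb=2.0, step_mb=0.5):
    """Local recombination rate (cM/Mb) as the slope of genetic vs. physical
    marker position in sliding windows along a chromosome."""
    phys = np.asarray(physical_mb, float)
    gen = np.asarray(genetic_cm, float)
    order = np.argsort(phys)
    phys, gen = phys[order], gen[order]
    centres, rates = [], []
    start = phys.min()
    while start + window_mb <= phys.max():
        mask = (phys >= start) & (phys <= start + window_mb)
        if mask.sum() >= 3:                          # need a few markers per window
            slope = np.polyfit(phys[mask], gen[mask], 1)[0]
            centres.append(start + window_mb / 2)
            rates.append(max(slope, 0.0))            # rates cannot be negative
        start += step_mb
    return np.array(centres), np.array(rates)

# Toy marker map: 40 markers on a 20-Mb chromosome with a recombination "hot" middle.
rng = np.random.default_rng(5)
phys = np.sort(rng.uniform(0, 20, 40))
rate_per_marker = np.where((phys > 8) & (phys < 12), 3.0, 0.5)
gen = np.cumsum(rate_per_marker * np.diff(phys, prepend=0))
centres, rates = marey_rates(phys, gen)
print(np.round(rates, 2))
```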

  2. A Methodological Approach for Conducting a Business Case Analysis (BCA) of Zephyr Joint Capability Technology Demonstration (JCTD)

    DTIC Science & Technology

    2008-12-01

    ...on Investment (ROI) of the Zephyr system. This is achieved by (1) developing a model to carry out Business Case Analysis (BCA) of JCTDs, including...

  3. The cost of a case of subclinical ketosis in Canadian dairy herds

    PubMed Central

    Gohary, Khaled; Overton, Michael W.; Von Massow, Michael; LeBlanc, Stephen J.; Lissemore, Kerry D.; Duffield, Todd F.

    2016-01-01

    The objective of this study was to develop a model to estimate the cost of a case of subclinical ketosis (SCK) in Canadian dairy herds. Costs were derived from the default inputs, and included increased clinical disease incidence attributable to SCK, $76; longer time to pregnancy, $57; culling and death in early lactation attributable to SCK, $26; milk production loss, $44. Given these figures, the cost of 1 case of SCK was estimated to be $203. Sensitivity analysis showed that the estimated cost of a case of SCK was most sensitive to the herd-level incidence of SCK and the cost of 1 day open. In conclusion, SCK negatively impacts dairy herds and losses are dependent on the herd-level incidence and factors included in the calculation. PMID:27429460

  4. The cost of a case of subclinical ketosis in Canadian dairy herds.

    PubMed

    Gohary, Khaled; Overton, Michael W; Von Massow, Michael; LeBlanc, Stephen J; Lissemore, Kerry D; Duffield, Todd F

    2016-07-01

    The objective of this study was to develop a model to estimate the cost of a case of subclinical ketosis (SCK) in Canadian dairy herds. Costs were derived from the default inputs, and included increased clinical disease incidence attributable to SCK, $76; longer time to pregnancy, $57; culling and death in early lactation attributable to SCK, $26; milk production loss, $44. Given these figures, the cost of 1 case of SCK was estimated to be $203. Sensitivity analysis showed that the estimated cost of a case of SCK was most sensitive to the herd-level incidence of SCK and the cost of 1 day open. In conclusion, SCK negatively impacts dairy herds and losses are dependent on the herd-level incidence and factors included in the calculation.

  5. Estimated prevalence of erosive tooth wear in permanent teeth of children and adolescents: an epidemiological systematic review and meta-regression analysis.

    PubMed

    Salas, M M S; Nascimento, G G; Huysmans, M C; Demarco, F F

    2015-01-01

    The main purpose of this systematic review was to estimate the prevalence of dental erosion in permanent teeth of children and adolescents. An electronic search was performed up to and including March 2014. Eligibility criteria included population-based studies in permanent teeth of children and adolescents aged 8-19 years reporting the prevalence or data that allowed the calculation of prevalence rates of tooth erosion. Data collection assessed information regarding geographic location, type of index used for clinical examination, sample size, year of publication, age, examined teeth and tissue exposure. The estimated prevalence of erosive wear was determined, followed by a meta-regression analysis. Twenty-two papers were included in the systematic review. The overall estimated prevalence of tooth erosion was 30.4% (95% CI 23.8-37.0). In the multivariate meta-regression model, use of the Tooth Wear Index for clinical examination, studies with samples smaller than 1000 subjects, and those conducted in the Middle East and Africa remained associated with higher dental erosion prevalence rates. Our results demonstrated that the estimated prevalence of erosive wear in permanent teeth of children and adolescents is 30.4% with high heterogeneity between studies. Additionally, the correct choice of a clinical index for dental erosion detection and the geographic location play an important role in the large variability of erosive tooth wear in permanent teeth of children and adolescents. The prevalence of tooth erosion observed in permanent teeth of children and adolescents was considerably high. Our results demonstrated that the prevalence rate of erosive wear was influenced by methodological and diagnostic factors. When tooth erosion is assessed, the clinical index should be considered. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Conditional survival estimates improve over time for patients with advanced melanoma: results from a population-based analysis.

    PubMed

    Xing, Yan; Chang, George J; Hu, Chung-Yuan; Askew, Robert L; Ross, Merrick I; Gershenwald, Jeffrey E; Lee, Jeffrey E; Mansfield, Paul F; Lucci, Anthony; Cormier, Janice N

    2010-05-01

    Conditional survival (CS) has emerged as a clinically relevant measure of prognosis for cancer survivors. The objective of this analysis was to provide melanoma-specific CS estimates to help clinicians promote more informed patient decision making. Patients with melanoma and at least 5 years of follow-up were identified from the Surveillance Epidemiology and End Results registry (1988-2000). By using the methods of Kaplan and Meier, stage-specific, 5-year CS estimates were independently calculated for survivors for each year after diagnosis. Stage-specific multivariate Cox regression models including baseline survivor functions were used to calculate adjusted melanoma-specific CS for different subgroups of patients further stratified by age, gender, race, marital status, anatomic tumor location, and tumor histology. Five-year CS estimates for patients with stage I disease remained constant at 97% annually, while for patients with stages II, III, and IV disease, 5-year CS estimates from time 0 (diagnosis) to 5 years improved from 72% to 86%, 51% to 87%, and 19% to 84%, respectively. Multivariate CS analysis revealed that differences in stages II through IV CS based on age, gender, and race decreased over time. Five-year melanoma-specific CS estimates improve dramatically over time for survivors with advanced stages of disease. These prognostic data are critical to patients for both treatment and nontreatment related life decisions. (c) 2010 American Cancer Society.
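
    Conditional survival is defined as CS(x | y) = S(x + y) / S(y): the probability of surviving a further x years given survival to year y. The sketch below computes it from a Kaplan-Meier curve on simulated follow-up data; the data are invented and are not the SEER cohort.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate S(t) at each distinct event time."""
    time, event = np.asarray(time, float), np.asarray(event, bool)
    ts = np.unique(time[event])
    s, surv = 1.0, []
    for t in ts:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & event)
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return ts, np.array(surv)

def conditional_survival(ts, surv, x, y):
    """CS(x | y) = S(x + y) / S(y)."""
    def s_at(t):
        idx = np.searchsorted(ts, t, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]
    return s_at(x + y) / s_at(y)

# Simulated follow-up times (years) and death indicators, for illustration only.
rng = np.random.default_rng(6)
t_death = rng.exponential(8.0, 400)
t_censor = rng.uniform(0, 12, 400)
time = np.minimum(t_death, t_censor)
event = t_death <= t_censor
ts, surv = kaplan_meier(time, event)
print("5-year CS given 3 years already survived:",
      round(conditional_survival(ts, surv, 5, 3), 2))
```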

  7. Methods for estimating peak-flow frequencies at ungaged sites in Montana based on data through water year 2011: Chapter F in Montana StreamStats

    USGS Publications Warehouse

    Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

    The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively.Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression.All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application.Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis.This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described. 
Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.

  8. An oilspill risk analysis for the eastern Gulf of Mexico (proposed sale 65) Outer Continental Shelf lease area

    USGS Publications Warehouse

    Wyant, Timothy; Slack, James R.

    1978-01-01

    An oilspill risk analysis was conducted to determine the relative environmental hazards of developing oil in different regions of the Eastern Gulf of Mexico Outer Continental Shelf lease area. The study analyzed the probability of spill occurrence, likely paths of the spills, and locations in space and time of such objects as recreational and biological resources likely to be vulnerable. These results were combined to yield estimates of the overall oilspill risk associated with development of the proposed lease area. This risk is compared to the oilspill risk from existing leases in the area. The analysis implicitly includes estimates of weathering rates and slick dispersion and an indication of the possible mitigating effects of cleanups.

  9. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  10. Mind the Gap! A Multilevel Analysis of Factors Related to Variation in Published Cost-Effectiveness Estimates within and between Countries.

    PubMed

    Boehler, Christian E H; Lord, Joanne

    2016-01-01

    Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%-19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. Multinational studies were associated with much lower country-level variation than single-country studies. These findings are for a single clinical question and may be atypical. © The Author(s) 2015.
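
    The study's models are bivariate and cross-classified over studies and countries; as a much simpler illustration of the variance-partition idea, the sketch below fits a single-outcome random-intercept model with statsmodels MixedLM on simulated data. All variable names, effect sizes, and the nesting structure are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated incremental net benefit (inb) estimates grouped within countries,
# with one study-level covariate (time horizon in years).
rng = np.random.default_rng(7)
rows = []
for country in range(10):
    country_eff = rng.normal(0, 2.0)              # country-level random effect
    for study in range(6):
        study_eff = rng.normal(0, 1.0)            # extra study-level noise
        for _ in range(20):                       # estimates per study
            horizon = rng.uniform(1, 20)
            y = 5 + 0.2 * horizon + country_eff + study_eff + rng.normal(0, 3)
            rows.append({"inb": y, "horizon": horizon, "country": country})
df = pd.DataFrame(rows)

# Random intercept for country; the "Group Var" in the summary indicates how much
# residual variation (after covariate adjustment) is attributable to countries.
fit = smf.mixedlm("inb ~ horizon", df, groups=df["country"]).fit()
print(fit.summary())
```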

  11. [Hospital production cost of repetitive transcranial magnetic stimulation (rTMS) in the treatment of depression].

    PubMed

    Etcheverrigaray, F; Bulteau, S; Machon, L O; Riche, V P; Mauduit, N; Tricot, R; Sellal, O; Sauvaget, A

    2015-08-01

    Repetitive transcranial magnetic stimulation (rTMS) is an effective and well-tolerated treatment in resistant depression of mild to moderate intensity. This indication has not yet been approved in France. The cost and medico-economic value of rTMS in psychiatry remain unknown. The aim of this preliminary study was to assess the hospital production cost of rTMS as an in-hospital treatment for depression. The methodology, derived from analytical accounting, was validated by a multidisciplinary task force (clinicians, public health doctors, pharmacists, administrative officials, and a health economist). It was pragmatic, based on official and institutional documentary sources and on field practice. It included equipment, staff, and structure costs, to get an estimate as close to reality as possible. First, we estimated the production cost of an rTMS session, based on our annual activity. We then estimated the cost of a cure, which includes 15 sessions. A sensitivity analysis was also performed. The hospital production cost of a cure for treating depression was estimated at € 1932.94 (€ 503.55 for equipment, € 1082.75 for staff, and € 346.65 for structural expenses). This cost estimate resulted from an innovative, pragmatic, and cooperative approach. It is slightly higher but more comprehensive than the costs estimated by the few international studies. However, it is limited by structure-specific factors and activity levels. This work could be repeated in other settings in order to obtain a more general estimate, potentially helpful for determining an official price for the French health care system. Moreover, budgetary constraints and public health choices should be taken into consideration. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
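    The reported cost build-up can be reproduced with simple arithmetic; the short sketch below totals the published components and derives a per-session figure, assuming the 15-session cure described in the abstract.

      # Total the reported cost components for a 15-session rTMS cure (values from the abstract).
      components = {"equipment": 503.55, "staff": 1082.75, "structure": 346.65}
      sessions_per_cure = 15

      cost_per_cure = sum(components.values())          # 1932.95 EUR (abstract reports 1932.94 after rounding)
      cost_per_session = cost_per_cure / sessions_per_cure
      print(f"cost per cure: {cost_per_cure:.2f} EUR, cost per session: {cost_per_session:.2f} EUR")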

  12. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). In the sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  13. Review of economic studies and budget impact analysis of ocriplasmin as a treatment of vitreomacular traction.

    PubMed

    García-Pérez, L; Abreu-González, R; Pérez-Ramos, J; García-Pérez, S; Serrano-Aguilar, P

    2016-06-01

    To review the evidence on the cost-effectiveness of ocriplasmin as a treatment for vitreomacular traction (VMT), and to estimate the impact on the Spanish National Health System (NHS). 1) Systematic review. The following databases were searched in January 2015: MEDLINE, PREMEDLINE, EMBASE, CRD, the Cochrane Library, and key websites. Selection criteria were: full economic evaluations that compared ocriplasmin with usual care ('watch and wait' and/or vitrectomy) in patients with VMT. The outcomes to extract were costs of the alternatives and the incremental cost-effectiveness ratio. Studies of budget impact analysis were also included. The methodological quality was assessed, and a narrative synthesis of the included studies was carried out. 2) Estimation of budget impact. The impact on the budget as a result of the introduction of ocriplasmin in the NHS was estimated, including data from different sources. Six studies were identified, none of them performed in Spain. The two best studies concluded that ocriplasmin is cost-effective in their respective countries (Canada and United Kingdom), but only in patients with certain conditions (without epiretinal membrane, for example). The results of the budget impact analyses differ between countries. The analysis for Spain showed that the introduction of ocriplasmin would mean a saving of over 1 million Euros for the NHS in 5 years. The cost-effectiveness of ocriplasmin has not been demonstrated in Spain. However, good studies performed in other countries found that ocriplasmin is cost-effective in selected patients. Given the current prices in Spain, ocriplasmin could involve a saving for the Spanish NHS. Copyright © 2016 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  14. A two-stage approach for estimating a statewide truck trip table.

    DOT National Transportation Integrated Search

    2014-05-01

    Statewide models, including passenger and freight movements, are frequently used for supporting numerous statewide planning activities. Many states use them for traffic impact studies, air quality conformity analysis, freight planning, economic d...

  15. Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-12-01

    The Energy Policy and Conservation Act as amended (P.L. 94-163) establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in five major areas: An Engineering Analysis, which establishes technical feasibility and product attributes, including costs of design options to improve appliance efficiency. A Consumer Analysis at two levels: national aggregate impacts, and impacts on individuals. The national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures. The individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price. A Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards; their response is quantified by changes in several measures of financial performance for a firm. An Industry Impact Analysis, which shows financial and competitive impacts on the appliance industry. A Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities. An Environmental Effects Analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis, which collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)
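    The consumer-level metrics named above (life-cycle cost, payback period, and cost of conserved energy) follow from standard formulas; the sketch below applies them to hypothetical appliance numbers rather than values from the report.

      # Illustrative consumer metrics for an efficiency design option; all inputs are hypothetical.
      def crf(rate, years):
          """Capital recovery factor: annualizes an upfront cost over `years` at discount `rate`."""
          return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

      price_increase = 40.0       # USD, incremental purchase price of the design option
      annual_kwh_saved = 120.0    # kWh/yr of energy savings
      electricity_price = 0.12    # USD/kWh
      lifetime, discount_rate = 12, 0.06

      annual_bill_savings = annual_kwh_saved * electricity_price
      payback_years = price_increase / annual_bill_savings
      cce = price_increase * crf(discount_rate, lifetime) / annual_kwh_saved                # USD per kWh saved
      lcc_change = price_increase - annual_bill_savings / crf(discount_rate, lifetime)      # price minus PV of savings
      print(f"payback: {payback_years:.1f} yr, CCE: {cce:.3f} USD/kWh, LCC change: {lcc_change:.2f} USD")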

  16. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions of variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from 16 dollars/MWh to 6 dollars/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.
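    A minimal version of the rolling-supply-curve idea is an ordinary regression of hourly price on load fitted within short windows and re-evaluated at a shifted (net) load; the sketch below uses synthetic data and a quadratic curve, which are stand-ins for the data and functional forms used in the report.

      # Rolling supply-curve sketch: fit a price-versus-load curve in each two-week window,
      # then re-evaluate it at a reduced net load to approximate a wholesale price effect.
      # Synthetic data; the window length and quadratic form are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      hours_per_window, n_windows = 24 * 14, 26
      hours = hours_per_window * n_windows

      load = 80 + 30 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 5, hours)   # GW
      price = 15 + 0.002 * load ** 2 + rng.normal(0, 4, hours)                               # USD/MWh

      vre_addition = 5.0     # GW of new variable renewable output, treated as a load reduction
      price_change = []
      for w in range(n_windows):
          sl = slice(w * hours_per_window, (w + 1) * hours_per_window)
          coeffs = np.polyfit(load[sl], price[sl], deg=2)        # one supply curve per window
          base = np.polyval(coeffs, load[sl])
          shifted = np.polyval(coeffs, load[sl] - vre_addition)  # same hours, lower net load
          price_change.append(np.mean(shifted - base))

      print("average price change by two-week window (USD/MWh):")
      print(np.round(price_change, 2))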

  17. Molten Salt Power Tower Cost Model for the System Advisor Model (SAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turchi, C. S.; Heath, G. A.

    2013-02-01

    This report describes a component-based cost model developed for molten-salt power tower solar power plants. The cost model was developed by the National Renewable Energy Laboratory (NREL), using data from several prior studies, including a contracted analysis from WorleyParsons Group, which is included herein as an Appendix. The WorleyParsons analysis also estimated material composition and mass for the plant to facilitate a life cycle analysis of the molten salt power tower technology. Details of the life cycle assessment have been published elsewhere. The cost model provides a reference plant that interfaces with NREL's System Advisor Model (SAM). The reference plant assumes a nominal 100-MWe (net) power tower running with a nitrate salt heat transfer fluid (HTF). Thermal energy storage is provided by direct storage of the HTF in a two-tank system. The design assumes dry cooling. The model includes a spreadsheet that interfaces with SAM via the Excel Exchange option in SAM. The spreadsheet allows users to estimate the costs of different-size plants and to take into account changes in commodity prices. This report and the accompanying Excel spreadsheet can be downloaded at https://sam.nrel.gov/cost.

  18. Fleeing to Fault Zones: Incorporating Syrian Refugees into Earthquake Risk Analysis along the East Anatolian and Dead Sea Rift Fault Zones

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Paradise, T. R.

    2016-12-01

    The influx of millions of Syrian refugees into Turkey has rapidly changed the population distribution along the Dead Sea Rift and East Anatolian Fault zones. In contrast to other countries in the Middle East, where refugees are accommodated in camp environments, the majority of displaced individuals in Turkey are integrated into cities, towns, and villages, placing stress on urban settings and increasing potential exposure to strong shaking. Yet, displaced populations are not traditionally captured in data sources used in earthquake risk analysis or loss estimations. Accordingly, we present a district-level analysis assessing the spatial overlap of earthquake hazards and refugee locations in southeastern Turkey to determine how migration patterns are altering seismic risk in the region. Using migration estimates from the U.S. Humanitarian Information Unit, we create three district-level population scenarios that combine official population statistics, refugee camp populations, and low, median, and high bounds for integrated refugee populations. We perform probabilistic seismic hazard analysis alongside these population scenarios to map spatial variations in seismic risk between 2011 and late 2015. Our results show a significant relative southward increase of seismic risk for this period due to refugee migration. Additionally, we calculate earthquake fatalities for simulated earthquakes using a semi-empirical loss estimation technique to determine the degree of underestimation that results from omitting migration data in loss modeling. We find that including refugee populations increased casualties by 11-12% using median population estimates, and upwards of 20% using high population estimates. These results communicate the ongoing importance of placing environmental hazards in their appropriate regional and temporal context, which unites physical, political, cultural, and socio-economic landscapes. Keywords: Earthquakes, Hazards, Loss-Estimation, Syrian Crisis, Migration, Refugees

  19. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2005-01-01

    In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. The development includes an analysis of the estimator stability given errors in the measured attitude. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given, including scenarios with erroneous measured attitudes. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.
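    A drastically simplified, single-axis illustration of estimating an unmeasured rate from attitude-only measurements is a constant-rate Kalman filter, sketched below with synthetic data; it is a toy stand-in, not the nonlinear momentum estimator or pseudo-linear filter developed in the paper.

      # Single-axis toy problem: estimate an unmeasured angular rate from noisy attitude
      # measurements with a constant-rate Kalman filter (state = [angle, rate]).
      import numpy as np

      dt, n = 1.0, 200
      true_rate = 0.02                                   # rad/s, constant in this toy case
      rng = np.random.default_rng(2)
      true_angle = np.cumsum(np.full(n, true_rate * dt))
      meas = true_angle + rng.normal(0, 0.01, n)         # attitude measurements only

      F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-rate state transition
      H = np.array([[1.0, 0.0]])                         # only the angle is observed
      Q = np.diag([1e-8, 1e-8])                          # process noise
      R = np.array([[0.01 ** 2]])                        # measurement noise

      x, P = np.zeros((2, 1)), np.eye(2)
      for z in meas:
          x = F @ x                                      # predict
          P = F @ P @ F.T + Q
          y = np.array([[z]]) - H @ x                    # update with the attitude measurement
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated rate: {x[1, 0]:.4f} rad/s (true value {true_rate} rad/s)")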

  20. Estimating the Societal Benefits of THA After Accounting for Work Status and Productivity: A Markov Model Approach.

    PubMed

    Koenig, Lane; Zhang, Qian; Austin, Matthew S; Demiralp, Berna; Fehring, Thomas K; Feng, Chaoling; Mather, Richard C; Nguyen, Jennifer T; Saavoss, Asha; Springer, Bryan D; Yates, Adolph J

    2016-12-01

    Demand for total hip arthroplasty (THA) is high and expected to continue to grow during the next decade. Although much of this growth includes working-aged patients, cost-effectiveness studies on THA have not fully incorporated the productivity effects from surgery. We asked: (1) What is the expected effect of THA on patients' employment and earnings? (2) How does accounting for these effects influence the cost-effectiveness of THA relative to nonsurgical treatment? Taking a societal perspective, we used a Markov model to assess the overall cost-effectiveness of THA compared with nonsurgical treatment. We estimated direct medical costs using Medicare claims data and indirect costs (employment status and worker earnings) using regression models and nonparametric simulations. For direct costs, we estimated average spending 1 year before and after surgery. Spending estimates included physician and related services, hospital inpatient and outpatient care, and postacute care. For indirect costs, we estimated the relationship between functional status and productivity, using data from the National Health Interview Survey and regression analysis. Using regression coefficients and patient survey data, we ran a nonparametric simulation to estimate productivity (probability of working multiplied by earnings if working minus the value of missed work days) before and after THA. We used the Australian Orthopaedic Association National Joint Replacement Registry to obtain revision rates because it contained osteoarthritis-specific THA revision rates by age and gender, which were unavailable in other registry reports. Other model assumptions were extracted from a previously published cost-effectiveness analysis that included a comprehensive literature review. We incorporated all parameter estimates into Markov models to assess THA effects on quality-adjusted life years and lifetime costs. We conducted threshold and sensitivity analyses on direct costs, indirect costs, and revision rates to assess the robustness of our Markov model results. Compared with nonsurgical treatments, THA increased average annual productivity of patients by USD 9503 (95% CI, USD 1446-USD 17,812). We found that THA increases average lifetime direct costs by USD 30,365, which were offset by USD 63,314 in lifetime savings from increased productivity. With net societal savings of USD 32,948 per patient, total lifetime societal savings were estimated at almost USD 10 billion from more than 300,000 THAs performed in the United States each year. Using a Markov model approach, we show that THA produces societal benefits that can offset the costs of THA. When comparing THA with other nonsurgical treatments, policymakers should consider the long-term benefits associated with increased productivity from surgery. Level III, economic and decision analysis.
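    The Markov structure described above can be illustrated with a small cohort model; the states, transition probabilities, utilities, and costs below are hypothetical placeholders rather than the published model's parameters.

      # Toy two-strategy Markov cohort model comparing discounted lifetime costs and QALYs.
      # States, transition probabilities, utilities, and costs are hypothetical placeholders.
      import numpy as np

      # States: [well after treatment, revision/complication, dead]
      P_tha = np.array([[0.97, 0.01, 0.02],
                        [0.00, 0.96, 0.04],
                        [0.00, 0.00, 1.00]])
      P_cons = np.array([[0.97, 0.00, 0.03],
                         [0.00, 1.00, 0.00],
                         [0.00, 0.00, 1.00]])

      def run_cohort(P, utility, cost, init_cost=0.0, years=40, disc=0.03):
          dist = np.array([1.0, 0.0, 0.0])          # whole cohort starts in the first state
          qalys, costs = 0.0, init_cost
          for t in range(years):
              d = 1.0 / (1.0 + disc) ** t
              qalys += d * dist @ utility
              costs += d * dist @ cost
              dist = dist @ P                        # advance the cohort one annual cycle
          return qalys, costs

      q_surg, c_surg = run_cohort(P_tha, np.array([0.85, 0.70, 0.0]),
                                  np.array([500.0, 20000.0, 0.0]), init_cost=20000.0)
      q_cons, c_cons = run_cohort(P_cons, np.array([0.65, 0.0, 0.0]),
                                  np.array([2000.0, 0.0, 0.0]))

      icer = (c_surg - c_cons) / (q_surg - q_cons)   # direct costs only; no productivity offset
      print(f"incremental cost per QALY: {icer:,.0f} USD")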

  1. Model diagnostics in reduced-rank estimation

    PubMed Central

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage score and generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches. PMID:28003860

  2. Model diagnostics in reduced-rank estimation.

    PubMed

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage score and generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches.

  3. Hydraulic head applications of flow logs in the study of heterogeneous aquifers

    USGS Publications Warehouse

    Paillet, Frederick L.

    2001-01-01

    Permeability profiles derived from high-resolution flow logs in heterogeneous aquifers provide a limited sample of the most permeable beds or fractures determining the hydraulic properties of those aquifers. This paper demonstrates that flow logs can also be used to infer the large-scale properties of aquifers surrounding boreholes. The analysis is based on the interpretation of the hydraulic head values estimated from the flow log analysis. Pairs of quasi-steady flow profiles obtained under ambient conditions and while either pumping or injecting are used to estimate the hydraulic head in each water-producing zone. Although the analysis yields localized estimates of transmissivity for a few water-producing zones, the hydraulic head estimates apply to the farfield aquifers to which these zones are connected. The hydraulic head data are combined with information from other sources to identify the large-scale structure of heterogeneous aquifers. More complicated cross-borehole flow experiments are used to characterize the pattern of connection between large-scale aquifer units inferred from the hydraulic head estimates. The interpretation of hydraulic heads in situ under steady and transient conditions is illustrated by several case studies, including an example with heterogeneous permeable beds in an unconsolidated aquifer, and four examples with heterogeneous distributions of bedding planes and/or fractures in bedrock aquifers.

  4. Simultaneous bilateral cataract surgery: economic analysis; Helsinki Simultaneous Bilateral Cataract Surgery Study Report 2.

    PubMed

    Leivo, Tiina; Sarikkola, Anna-Ulrika; Uusitalo, Risto J; Hellstedt, Timo; Ess, Sirje-Linda; Kivelä, Tero

    2011-06-01

    To present an economic-analysis comparison of simultaneous and sequential bilateral cataract surgery. Helsinki University Eye Hospital, Helsinki, Finland. Economic analysis. Effects were estimated from data in a study in which patients were randomized to have bilateral cataract surgery on the same day (study group) or sequentially (control group). The main clinical outcomes were corrected distance visual acuity, refraction, complications, Visual Function Index-7 (VF-7) scores, and patient-rated satisfaction with vision. Health-care costs of surgeries and preoperative and postoperative visits were estimated, including the cost of staff, equipment, material, floor space, overhead, and complications. The data were obtained from staff measurements, questionnaires, internal hospital records, and accountancy. Non-health-care costs of travel, home care, and time were estimated based on questionnaires from a random subset of patients. The main economic outcome measures were cost per VF-7 score unit change and cost per patient in simultaneous versus sequential surgery. The study comprised 520 patients (241 of whom were included in the non-health-care and time cost analyses). Surgical outcomes and patient satisfaction were similar in both groups. Simultaneous cataract surgery saved €449 per patient in health-care costs and €739 when travel and paid home-care costs were included. The savings added up to €849 per patient when the cost of lost working time was included. Compared with sequential bilateral cataract surgery, simultaneous bilateral cataract surgery provided comparable clinical outcomes with substantial savings in health-care and non-health-care-related costs. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  5. Three-dimensional imaging of aquifer and aquitard heterogeneity via transient hydraulic tomography at a highly heterogeneous field site

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanfeng; Illman, Walter A.

    2018-04-01

    Previous studies have shown that geostatistics-based transient hydraulic tomography (THT) is robust for subsurface heterogeneity characterization through the joint inverse modeling of multiple pumping tests. However, the hydraulic conductivity (K) and specific storage (Ss) estimates can be smooth or even erroneous for areas where pumping/observation densities are low. This renders the imaging of interlayer and intralayer heterogeneity of highly contrasting materials including their unit boundaries difficult. In this study, we further test the performance of THT by utilizing existing and newly collected pumping test data of longer durations that showed drawdown responses in both aquifer and aquitard units at a field site underlain by a highly heterogeneous glaciofluvial deposit. The robust performance of the THT is highlighted through the comparison of different degrees of model parameterization including: (1) the effective parameter approach; (2) the geological zonation approach relying on borehole logs; and (3) the geostatistical inversion approach considering different prior information (with/without geological data). Results reveal that the simultaneous analysis of eight pumping tests with the geostatistical inverse model yields the best results in terms of model calibration and validation. We also find that the joint interpretation of long-term drawdown data from aquifer and aquitard units is necessary in mapping their full heterogeneous patterns including intralayer variabilities. Moreover, as geological data are included as prior information in the geostatistics-based THT analysis, the estimated K values increasingly reflect the vertical distribution patterns of permeameter-estimated K in both aquifer and aquitard units. Finally, the comparison of various THT approaches reveals that differences in the estimated K and Ss tomograms result in significantly different transient drawdown predictions at observation ports.

  6. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
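    Beale's bias-corrected ratio estimator mentioned above has a compact closed form; the sketch below applies the common simple form of the correction factor to synthetic daily flow and concentration data, so it should be read as a generic illustration rather than the exact implementation evaluated in the study.

      # Beale's bias-corrected ratio estimator of a period load, using the simple form of the
      # correction factor (no finite-population correction). Synthetic data; illustration only.
      import numpy as np

      rng = np.random.default_rng(3)
      n_days = 365
      flow = rng.lognormal(mean=3.0, sigma=0.6, size=n_days)       # daily mean flow (continuous record)
      conc = 0.5 + 0.02 * flow + rng.normal(0, 0.1, n_days)        # daily concentration
      daily_load = flow * conc                                     # proportional to the true daily load

      sampled = rng.choice(n_days, size=24, replace=False)         # ~2 water-quality samples per month
      my, mx = daily_load[sampled].mean(), flow[sampled].mean()    # sampled mean load and flow
      mu_x = flow.mean()                                           # mean flow over the full record
      n = sampled.size
      sxy = np.cov(daily_load[sampled], flow[sampled], ddof=1)[0, 1]
      sxx = np.var(flow[sampled], ddof=1)

      beale_daily = my * (mu_x / mx) * (1 + sxy / (n * my * mx)) / (1 + sxx / (n * mx ** 2))
      print(f"Beale estimate of total load: {beale_daily * n_days:,.0f}")
      print(f"'true' total load:            {daily_load.sum():,.0f}")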

  7. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.

  8. Generalized Full-Information Item Bifactor Analysis

    PubMed Central

    Cai, Li; Yang, Ji Seung; Hansen, Mark

    2011-01-01

    Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of multidimensional item response theory models for an arbitrary mixing of dichotomous, ordinal, and nominal items. The extended item bifactor model also enables the estimation of latent variable means and variances when data from more than one group are present. Generalized user-defined parameter restrictions are permitted within or across groups. We derive an efficient full-information maximum marginal likelihood estimator. Our estimation method achieves substantial computational savings by extending Gibbons and Hedeker’s (1992) bifactor dimension reduction method so that the optimization of the marginal log-likelihood only requires two-dimensional integration regardless of the dimensionality of the latent variables. We use simulation studies to demonstrate the flexibility and accuracy of the proposed methods. We apply the model to study cross-country differences, including differential item functioning, using data from a large international education survey on mathematics literacy. PMID:21534682

  9. The Relationship Between Speech Production and Speech Perception Deficits in Parkinson's Disease.

    PubMed

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-10-01

    This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified through a standardized speech intelligibility assessment, acoustic analysis, and speech intensity measurements. Second, an overall estimation task and an intensity estimation task were addressed to evaluate overall speech perception and speech intensity perception, respectively. Finally, correlation analysis was performed between the speech characteristics of the overall estimation task and the corresponding acoustic analysis. The interaction between speech production and speech intensity perception was investigated by an intensity imitation task. Acoustic analysis and speech intensity measurements demonstrated significant differences in speech production between patients with PD and the HCs. A different pattern in the auditory perception of speech and speech intensity was found in the PD group. Auditory perceptual deficits may influence speech production in patients with PD. The present results suggest a disturbed auditory perception related to an automatic monitoring deficit in PD.

  10. Levelized Cost of Energy Calculator | Energy Analysis | NREL

    Science.gov Websites

    The levelized cost of energy (LCOE) calculator provides a simple calculator for both utility-scale and distributed generation renewable energy technologies; factors such as financing, replacement, and degradation costs would need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.
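    For reference, the simple form of LCOE computed by such calculators annualizes the overnight capital cost and divides by annual generation; the sketch below uses hypothetical plant parameters, not values from NREL's Transparent Cost Database.

      # Simple levelized cost of energy, in the spirit of the calculator described above.
      # Plant parameters are hypothetical; financing, tax, and degradation details are omitted.
      def lcoe(capital_per_kw, fixed_om_per_kw_yr, variable_om_per_mwh,
               capacity_factor, discount_rate, lifetime_yr):
          crf = (discount_rate * (1 + discount_rate) ** lifetime_yr
                 / ((1 + discount_rate) ** lifetime_yr - 1))        # capital recovery factor
          annual_mwh_per_kw = 8760 * capacity_factor / 1000.0
          return (capital_per_kw * crf + fixed_om_per_kw_yr) / annual_mwh_per_kw + variable_om_per_mwh

      print(f"LCOE: {lcoe(1400, 20, 0, 0.28, 0.07, 25):.1f} USD/MWh")   # e.g. a utility-scale PV-like case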

  11. Introduction to LISREL: A Demonstration Using Students' Commitment to an Institution. ASHE 1987 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Stage, Frances K.

    The nature and use of LISREL (LInear Structural RELationships) analysis are considered, including an examination of college students' commitment to a university. LISREL is a fairly new causal analysis technique that has broad application in the social sciences and that employs structural equation estimation. The application examined in this paper…

  12. Much ado about nothing: the misestimation and overinterpretation of violent video game effects in eastern and western nations: comment on Anderson et al. (2010).

    PubMed

    Ferguson, Christopher J; Kilburn, John

    2010-03-01

    The issue of violent video game influences on youth violence and aggression remains intensely debated in the scholarly literature and among the general public. Several recent meta-analyses, examining outcome measures most closely related to serious aggressive acts, found little evidence for a relationship between violent video games and aggression or violence. In a new meta-analysis, C. A. Anderson et al. (2010) questioned these findings. However, their analysis has several methodological issues that limit the interpretability of their results. In their analysis, C. A. Anderson et al. included many studies that do not relate well to serious aggression, an apparently biased sample of unpublished studies, and a "best practices" analysis that appears unreliable and does not consider the impact of unstandardized aggression measures on the inflation of effect size estimates. They also focused on bivariate correlations rather than better controlled estimates of effects. Despite a number of methodological flaws that all appear likely to inflate effect size estimates, the final estimate of r = .15 is still indicative of only weak effects. Contrasts between the claims of C. A. Anderson et al. (2010) and real-world data on youth violence are discussed.

  13. Lunar base thermal management/power system analysis and design

    NASA Technical Reports Server (NTRS)

    Mcghee, Jerry R.

    1992-01-01

    A compilation of several lunar surface thermal management and power system studies completed under contract and IR&D is presented. The work includes analysis and preliminary design of all major components of an integrated thermal management system, including loads determination, active internal acquisition and transport equipment, external transport systems (active and passive), passive insulation, solar shielding, and a range of lunar surface radiator concepts. Several computer codes were utilized in support of this study, including RADSIM to calculate radiation exchange factors and view factors, RADIATOR (developed in-house) for heat rejection system sizing and performance analysis over a lunar day, SURPWER for power system sizing, and CRYSTORE for cryogenic system performance predictions. Although much of the work was performed in support of lunar rover studies, any or all of the results can be applied to a range of surface applications. Output data include thermal loads summaries, subsystem performance data, mass, and volume estimates (where applicable), integrated and worst-case lunar day radiator size/mass and effective sink temperatures for several concepts (shielded and unshielded), and external transport system performance estimates for both single and two-phase (heat pumped) transport loops. Several advanced radiator concepts are presented, along with brief assessments of possible system benefits and potential drawbacks. System point designs are presented for several cases, executed in support of the contract and IR&D studies, although the parametric nature of the analysis is stressed to illustrate applicability of the analysis procedure to a wide variety of lunar surface systems. The reference configuration(s) derived from the various studies will be presented along with supporting criteria. A preliminary design will also be presented for the reference basing scenario, including qualitative data regarding TPS concerns and issues.

  14. Under-reporting of road traffic mortality in developing countries: application of a capture-recapture statistical model to refine mortality estimates.

    PubMed

    Samuel, Jonathan C; Sankhulani, Edward; Qureshi, Javeria S; Baloyi, Paul; Thupi, Charles; Lee, Clara N; Miller, William C; Cairns, Bruce A; Charles, Anthony G

    2012-01-01

    Road traffic injuries are a major cause of preventable death in sub-Saharan Africa. Accurate epidemiologic data are scarce and under-reporting from primary data sources is common. Our objectives were to estimate the incidence of road traffic deaths in Malawi using capture-recapture statistical analysis and determine what future efforts will best improve upon this estimate. Our capture-recapture model combined primary data from both police and hospital-based registries over a one year period (July 2008 to June 2009). The mortality incidences from the primary data sources were 0.075 and 0.051 deaths/1000 person-years, respectively. Using capture-recapture analysis, the combined incidence of road traffic deaths ranged 0.192-0.209 deaths/1000 person-years. Additionally, police data were more likely to include victims who were male, drivers or pedestrians, and victims from incidents with greater than one vehicle involved. We concluded that capture-recapture analysis is a good tool to estimate the incidence of road traffic deaths, and that capture-recapture analysis overcomes limitations of incomplete data sources. The World Health Organization estimated incidence of road traffic deaths for Malawi utilizing a binomial regression model and survey data and found a similar estimate despite strikingly different methods, suggesting both approaches are valid. Further research should seek to improve capture-recapture data through utilization of more than two data sources and improving accuracy of matches by minimizing missing data, application of geographic information systems, and use of names and civil registration numbers if available.
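    With two overlapping sources, the simplest capture-recapture estimate is the Lincoln-Petersen estimator, shown below in its bias-corrected Chapman form with invented counts; the published analysis used matched police and hospital registries and reported incidence per 1000 person-years, so this is only a schematic illustration.

      # Two-source capture-recapture sketch (Chapman's bias-corrected Lincoln-Petersen form).
      # Counts and population are hypothetical, not the Malawi registry data.
      def chapman_estimate(n1, n2, m):
          """n1, n2: deaths recorded by each source; m: deaths matched in both sources."""
          return (n1 + 1) * (n2 + 1) / (m + 1) - 1

      n_police, n_hospital, n_both = 220, 150, 60
      population = 1.3e6                                  # hypothetical catchment population
      total = chapman_estimate(n_police, n_hospital, n_both)
      print(f"estimated total deaths: {total:.0f}")
      print(f"incidence: {1000 * total / population:.3f} deaths per 1000 person-years")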

  15. Under-Reporting of Road Traffic Mortality in Developing Countries: Application of a Capture-Recapture Statistical Model to Refine Mortality Estimates

    PubMed Central

    Samuel, Jonathan C.; Sankhulani, Edward; Qureshi, Javeria S.; Baloyi, Paul; Thupi, Charles; Lee, Clara N.; Miller, William C.; Cairns, Bruce A.; Charles, Anthony G.

    2012-01-01

    Road traffic injuries are a major cause of preventable death in sub-Saharan Africa. Accurate epidemiologic data are scarce and under-reporting from primary data sources is common. Our objectives were to estimate the incidence of road traffic deaths in Malawi using capture-recapture statistical analysis and determine what future efforts will best improve upon this estimate. Our capture-recapture model combined primary data from both police and hospital-based registries over a one year period (July 2008 to June 2009). The mortality incidences from the primary data sources were 0.075 and 0.051 deaths/1000 person-years, respectively. Using capture-recapture analysis, the combined incidence of road traffic deaths ranged 0.192–0.209 deaths/1000 person-years. Additionally, police data were more likely to include victims who were male, drivers or pedestrians, and victims from incidents with greater than one vehicle involved. We concluded that capture-recapture analysis is a good tool to estimate the incidence of road traffic deaths, and that capture-recapture analysis overcomes limitations of incomplete data sources. The World Health Organization estimated incidence of road traffic deaths for Malawi utilizing a binomial regression model and survey data and found a similar estimate despite strikingly different methods, suggesting both approaches are valid. Further research should seek to improve capture-recapture data through utilization of more than two data sources and improving accuracy of matches by minimizing missing data, application of geographic information systems, and use of names and civil registration numbers if available. PMID:22355338

  16. Meeting report: Estimating the benefits of reducing hazardous air pollutants--summary of 2009 workshop and future considerations.

    PubMed

    Gwinn, Maureen R; Craig, Jeneva; Axelrad, Daniel A; Cook, Rich; Dockins, Chris; Fann, Neal; Fegley, Robert; Guinnup, David E; Helfand, Gloria; Hubbell, Bryan; Mazur, Sarah L; Palma, Ted; Smith, Roy L; Vandenberg, John; Sonawane, Babasaheb

    2011-01-01

    Quantifying the benefits of reducing hazardous air pollutants (HAPs, or air toxics) has been limited by gaps in toxicological data, uncertainties in extrapolating results from high-dose animal experiments to estimate human effects at lower doses, limited ambient and personal exposure monitoring data, and insufficient economic research to support valuation of the health impacts often associated with exposure to individual air toxics. To address some of these issues, the U.S. Environmental Protection Agency held the Workshop on Estimating the Benefits of Reducing Hazardous Air Pollutants (HAPs) in Washington, DC, from 30 April to 1 May 2009. Experts from multiple disciplines discussed how best to move forward on air toxics benefits assessment, with a focus on developing near-term capability to conduct quantitative benefits assessment. Proposed methodologies involved analysis of data-rich pollutants and application of this analysis to other pollutants, using dose-response modeling of animal data for estimating benefits to humans, determining dose-equivalence relationships for different chemicals with similar health effects, and analysis similar to that used for criteria pollutants. Limitations and uncertainties in economic valuation of benefits assessment for HAPs were discussed as well. These discussions highlighted the complexities in estimating the benefits of reducing air toxics, and participants agreed that alternative methods for benefits assessment of HAPs are needed. Recommendations included clearly defining the key priorities of the Clean Air Act air toxics program to identify the most effective approaches for HAPs benefits analysis, focusing on susceptible and vulnerable populations, and improving dose-response estimation for quantification of benefits.

  17. Analysis And Augmentation Of Timing Advance Based Geolocation In Lte Cellular Networks

    DTIC Science & Technology

    2016-12-01

    Naval Postgraduate School, Monterey, California. Dissertation: Analysis and Augmentation of Timing Advance-Based Geolocation in LTE Cellular Networks, by John D. Roth.

  18. Data challenges in estimating the capacity value of solar photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.
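    One common approximation of capacity value, the mean solar output over the highest-load periods, makes the hourly-averaging issue easy to see; the sketch below compares that approximation computed from subhourly data and from hourly averages of the same synthetic profiles. It is not the specific reliability-based calculation used in the paper.

      # Capacity-factor approximation of solar capacity value over the highest-load periods,
      # computed from subhourly data and from hourly averages of the same synthetic profiles.
      import numpy as np

      rng = np.random.default_rng(4)
      steps_per_hour, hours = 4, 24 * 365                      # 15-minute resolution for one year
      t = np.arange(hours * steps_per_hour) / steps_per_hour   # time in hours
      solar = np.clip(np.sin(np.pi * ((t % 24) - 6) / 12), 0, None)       # clear-sky diurnal shape
      solar *= 1 - 0.5 * rng.random(solar.size)                           # subhourly variability
      load = 60 + 25 * np.clip(np.sin(np.pi * ((t % 24) - 7) / 12), 0, None) + rng.normal(0, 2, t.size)

      hourly_solar = solar.reshape(hours, steps_per_hour).mean(axis=1)
      hourly_load = load.reshape(hours, steps_per_hour).mean(axis=1)

      def cap_value(gen, demand, top):
          idx = np.argsort(demand)[-top:]      # the `top` highest-demand periods
          return gen[idx].mean()               # mean output per unit of solar capacity

      print(f"subhourly estimate:       {cap_value(solar, load, 100 * steps_per_hour):.3f}")
      print(f"hourly-averaged estimate: {cap_value(hourly_solar, hourly_load, 100):.3f}")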

  19. Data Challenges in Estimating the Capacity Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  20. Data challenges in estimating the capacity value of solar photovoltaics

    DOE PAGES

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    2017-04-30

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  1. Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data

    NASA Astrophysics Data System (ADS)

    Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.

    2015-06-01

    In the analysis of dynamic PET data, compartmental kinetic analysis methods require an accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard among the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative method is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Under the assumption that the true value of one of the parameters of the six-parameter PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.

  2. Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.

    PubMed

    Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi

    2018-03-15

    Recently, online social networks (OSNs) have received considerable attention as a revolutionary platform that offers massive social interaction among users and enables them to be more involved in their own healthcare. OSNs have also promoted increasing interest in the generation of analytical data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habit, physical activity, integral/incidental emotions, and self-consciousness. Based on detailed measurements of the correlation between obesity and the proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas, and the experimental results demonstrate a high estimation success rate. The measurement and estimation findings produced by the proposed obesity analytic model show that online social networks could be used to analyze local spatial obesity problems effectively. Copyright © 2018. Published by Elsevier Inc.

  3. Outcomes following severe hand foot and mouth disease: A systematic review and meta-analysis.

    PubMed

    Jones, Eben; Pillay, Timesh D; Liu, Fengfeng; Luo, Li; Bazo-Alvarez, Juan Carlos; Yuan, Chen; Zhao, Shanlu; Chen, Qi; Li, Yu; Liao, Qiaohong; Yu, Hongjie; Rogier van Doorn, H; Sabanathan, Saraswathy

    2018-04-20

    Hand, foot and mouth disease (HFMD) caused by enterovirus A71 (EV-A71) is associated with acute neurological disease in children. This study aimed to estimate the burden of long-term sequelae and death following severe HFMD. This systematic review and meta-analysis pooled all reports from English and Chinese databases, including MEDLINE and Wanfang, on outbreaks of clinically diagnosed HFMD and/or laboratory-confirmed EV-A71 with at least 7 days' follow-up published between 1st January 1966 and 19th October 2015. Two independent reviewers assessed the literature. We used a random effects meta-analysis to estimate cumulative incidence of neurological sequelae or death. Studies were assessed for methodological and reporting quality. PROSPERO registration number: 10.15124/CRD42015021981. 43 studies were included in the review, and 599 children from 9 studies were included in the primary analysis. Estimated cumulative incidence of death or neurological sequelae at maximum follow-up was 19.8% (95% CI: 10.2%, 31.3%). Heterogeneity (I²) was 88.57%, partly accounted for by year of data collection and reporting quality of studies. Incidence by acute disease severity was 0.00% (0.00, 0.00) for grade IIa; 17.0% (7.9, 28.2) for grade IIb/III; 81.6% (65.1, 94.5) for grade IV (p = 0.00) disease. HFMD with neurological involvement is associated with a substantial burden of long-term neurological sequelae. Grade of acute disease severity was a strong predictor of outcome. Strengths of this study include its bilingual approach and clinical applicability. Future prospective and interventional studies must use rigorous methodology to assess long-term outcomes in survivors. There was no specific funding for this study. See below for researcher funding. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
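    A minimal DerSimonian-Laird random-effects pooling of study-level proportions (on the logit scale) illustrates the kind of calculation behind a pooled cumulative incidence; the study counts below are invented, and the published analysis included quality assessment and severity subgroups.

      # DerSimonian-Laird random-effects pooling of study-level proportions on the logit scale.
      # Event counts are invented; they are not the nine studies pooled in the paper.
      import numpy as np

      events = np.array([5, 12, 20, 8, 30])        # children with sequelae or death, per study
      totals = np.array([40, 70, 90, 50, 120])

      p = (events + 0.5) / (totals + 1.0)           # continuity-corrected proportions
      y = np.log(p / (1 - p))                       # logit transform
      v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)   # approximate logit variances

      w = 1.0 / v                                   # fixed-effect weights
      y_fe = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
      k = len(y)
      tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
      i2 = max(0.0, (Q - (k - 1)) / Q) * 100        # I^2 heterogeneity statistic

      w_re = 1.0 / (v + tau2)                       # random-effects weights
      y_re = np.sum(w_re * y) / np.sum(w_re)
      pooled = 1.0 / (1.0 + np.exp(-y_re))          # back-transform to a proportion
      print(f"pooled cumulative incidence: {pooled:.3f}, I^2 = {i2:.1f}%")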

  4. Visual evaluation of kinetic characteristics of PET probe for neuroreceptors using a two-phase graphic plot analysis.

    PubMed

    Ito, Hiroshi; Ikoma, Yoko; Seki, Chie; Kimura, Yasuyuki; Kawaguchi, Hiroshi; Takuwa, Hiroyuki; Ichise, Masanori; Suhara, Tetsuya; Kanno, Iwao

    2017-05-01

    Objectives: In PET studies of neuroreceptors, tracer kinetics are described by the two-tissue compartment model (2-TCM), and binding parameters, including the total distribution volume (V_T), non-displaceable distribution volume (V_ND), and binding potential (BP_ND), can be determined from model parameters estimated by kinetic analysis. The stability of binding parameter estimates depends on the kinetic characteristics of radioligands. To describe these kinetic characteristics, we previously developed a two-phase graphic plot analysis in which V_ND and V_T can be estimated from the x-intercepts of regression lines for the early and delayed phases, respectively. In this study, we applied this graphic plot analysis to visual evaluation of the kinetic characteristics of radioligands for neuroreceptors, and investigated the relationship between the shape of these graphic plots and the stability of binding parameters estimated by kinetic analysis with 2-TCM in simulated brain tissue time-activity curves (TACs) with various binding parameters. Methods: 90-min TACs were generated with the arterial input function and assumed kinetic parameters according to 2-TCM. Graphic plot analysis was applied to these simulated TACs, and the curvature of the plot for each TAC was evaluated visually. TACs with several noise levels were also generated with various kinetic parameters, and the bias and variation of binding parameters estimated by kinetic analysis were calculated for each TAC. These biases and variations were compared with the shape of the graphic plots. Results: The graphic plots showed larger curvature for TACs with higher specific binding and slower dissociation of specific binding. The quartile deviations of V_ND and BP_ND determined by kinetic analysis were smaller for radioligands with slow dissociation. Conclusions: The larger curvature of graphic plots for radioligands with slow dissociation might indicate a stable determination of V_ND and BP_ND by kinetic analysis. For investigation of the kinetics of radioligands, such kinetic characteristics should be considered.

  5. Advanced statistical methods for improved data analysis of NASA astrophysics missions

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.

    1992-01-01

    The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.

  6. Evaluating Technical Efficiency of Nursing Care Using Data Envelopment Analysis and Multilevel Modeling.

    PubMed

    Min, Ari; Park, Chang Gi; Scott, Linda D

    2016-05-23

    Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.
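
    The first-stage DEA computation can be sketched as one small linear program per facility. The following Python example (not the authors' implementation; the staffing inputs, quality output, and facility values are hypothetical) solves an input-oriented, constant-returns-to-scale envelopment model with scipy:

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(X, Y):
          """Input-oriented CCR efficiency score for each decision-making unit (DMU).
          X: inputs, shape (m_inputs, n_dmus); Y: outputs, shape (s_outputs, n_dmus)."""
          m, n = X.shape
          s = Y.shape[0]
          scores = []
          for o in range(n):
              c = np.zeros(n + 1)
              c[0] = 1.0                          # variables: [theta, lambda_1..lambda_n]; minimize theta
              A_ub = np.zeros((m + s, n + 1))
              b_ub = np.zeros(m + s)
              A_ub[:m, 0] = -X[:, o]              # sum_j lambda_j * x_ij - theta * x_io <= 0
              A_ub[:m, 1:] = X
              A_ub[m:, 1:] = -Y                   # -sum_j lambda_j * y_rj <= -y_ro
              b_ub[m:] = -Y[:, o]
              res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
              scores.append(res.x[0])
          return np.array(scores)

      # Hypothetical example: two staffing inputs and one quality output for four nursing homes.
      X = np.array([[4.0, 6.0, 5.0, 8.0],         # RN hours per resident-day
                    [7.0, 9.0, 6.0, 10.0]])       # total nursing hours per resident-day
      Y = np.array([[90.0, 85.0, 92.0, 88.0]])    # quality-of-care index
      print(dea_ccr_input(X, Y))                  # efficiency scores in (0, 1]; 1 = on the frontier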

  7. Multivariate Copula Analysis Toolbox (MvCAT): Describing dependence and underlying uncertainty using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir

    2017-06-01

    We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves the description of the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.
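
    To illustrate the kind of Bayesian copula inference described above (this is not MvCAT's code: MvCAT is a MATLAB toolbox with a residual-based Gaussian likelihood and a hybrid-evolution MCMC, whereas the sketch below uses a plain copula likelihood and random-walk Metropolis sampling), the following Python example samples the posterior of a Clayton copula parameter from rank-based pseudo-observations:

      import numpy as np

      rng = np.random.default_rng(0)

      def clayton_loglik(theta, u, v):
          """Log-likelihood of the Clayton copula (theta > 0) for pseudo-observations in (0, 1)."""
          if theta <= 0:
              return -np.inf
          t = u ** (-theta) + v ** (-theta) - 1.0
          return np.sum(np.log1p(theta)
                        - (1.0 + theta) * (np.log(u) + np.log(v))
                        - (2.0 + 1.0 / theta) * np.log(t))

      def metropolis(u, v, n_iter=5000, step=0.2, theta0=1.0):
          """Random-walk Metropolis sampler for theta with a flat prior on theta > 0."""
          theta, ll = theta0, clayton_loglik(theta0, u, v)
          draws = []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal()
              ll_prop = clayton_loglik(prop, u, v)
              if np.log(rng.uniform()) < ll_prop - ll:          # accept or reject the proposal
                  theta, ll = prop, ll_prop
              draws.append(theta)
          return np.array(draws)

      # Hypothetical dependent pair, converted to pseudo-observations via ranks.
      x = rng.standard_normal(200)
      y = 0.7 * x + 0.7 * rng.standard_normal(200)
      u = (np.argsort(np.argsort(x)) + 1) / 201.0
      v = (np.argsort(np.argsort(y)) + 1) / 201.0
      samples = metropolis(u, v)
      print("posterior mean of theta:", samples[1000:].mean())  # discard burn-in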

  8. [RS estimation of inventory parameters and carbon storage of moso bamboo forest based on synergistic use of object-based image analysis and decision tree].

    PubMed

    Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie

    2017-10-01

    By synergistically using the object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution information, the indexes (including diameter at breast height, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating the multi-scale image segmentation of the OBIA technique with CART, which connected the image objects at various scales, with a good producer's accuracy of 89.1%. The indexes estimated by the regression tree models constructed from features extracted from the image objects reached moderate or better accuracy, with the crown closure model achieving the best estimation accuracy of 67.9%. The estimation accuracy for diameter at breast height and tree height was relatively low, consistent with the conclusion that estimating these parameters from optical remote sensing cannot achieve satisfactory results. Estimation of AGC reached relatively high accuracy, with accuracy in high-value regions exceeding 80%.
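
    A minimal sketch of the regression-tree step (not the authors' code; the image-object features, their values, and the field-measured crown closure figures are hypothetical) using the CART implementation in scikit-learn:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeRegressor

      # Hypothetical features per image object: mean NIR, mean red, texture, object area (ha).
      X = np.array([[0.42, 0.11, 5.2, 0.8],
                    [0.55, 0.09, 6.1, 1.2],
                    [0.38, 0.14, 4.3, 0.5],
                    [0.60, 0.08, 6.8, 1.5],
                    [0.47, 0.10, 5.6, 0.9],
                    [0.52, 0.12, 5.9, 1.1]])
      y = np.array([0.55, 0.78, 0.40, 0.85, 0.62, 0.70])   # crown closure measured at field plots

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
      tree = DecisionTreeRegressor(max_depth=3, random_state=0)   # CART regression tree
      tree.fit(X_train, y_train)
      print("predicted crown closure:", tree.predict(X_test))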

  9. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
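
    A schematic numpy sketch of the POET construction as summarized above: keep K principal components of the sample covariance and soft-threshold the off-diagonal entries of the residual (principal orthogonal complement) covariance. The data, the number of factors K, and the fixed threshold are illustrative; the paper's adaptive, entry-dependent thresholding is simplified here to a constant level.

      import numpy as np

      def poet(X, K, tau):
          """POET-style covariance estimate from an (n_obs, p) data matrix."""
          n, p = X.shape
          Xc = X - X.mean(axis=0)
          S = Xc.T @ Xc / n                                    # sample covariance
          vals, vecs = np.linalg.eigh(S)
          idx = np.argsort(vals)[::-1][:K]                     # K leading eigenpairs
          low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
          R = S - low_rank                                     # principal orthogonal complement
          R_t = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)  # soft-threshold the residuals
          np.fill_diagonal(R_t, np.diag(R))                    # keep residual variances untouched
          return low_rank + R_t

      rng = np.random.default_rng(1)
      F = rng.standard_normal((200, 3))                        # latent common factors
      B = rng.standard_normal((3, 50))                         # factor loadings
      X = F @ B + rng.standard_normal((200, 50))               # approximate factor model
      Sigma_hat = poet(X, K=3, tau=0.05)
      print(Sigma_hat.shape)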

  10. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.

  11. Challenges of Developing Design Discharge Estimates with Uncertain Data and Information

    NASA Astrophysics Data System (ADS)

    Senarath, S. U. S.

    2016-12-01

    This study focuses on design discharge estimates obtained for gauged basins through flood flow frequency analysis. Bulletin 17B (B17B) guidelines are widely used in the USA for developing these design estimates, which are required for many water resources engineering design applications. The guidelines include a set of options for outlier treatment, use of historical data, and distribution parameter selection, provided as a means of accounting for uncertain data and information, primarily in the flow record. The individual and cumulative effects of these options on design discharge estimates are evaluated in this study using data from several gauges that are part of the United States Geological Survey's Hydro-Climatic Data Network. The results of this study show that despite the availability of rigorous and detailed guidelines for flood frequency analysis, design discharge estimates can still vary substantially from user to user, based on the data and model parameter selection options chosen by each user. Thus, the findings of this study have strong implications for water resources engineers and other professionals who use B17B-based design discharge estimates in their work.
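
    For context, the central B17B computation can be sketched as a log-Pearson Type III fit to annual peak flows using a frequency-factor approximation. The peak-flow series below is hypothetical, and the guideline options the study examines (outlier tests, historical data adjustment, weighted skew) are deliberately omitted here:

      import numpy as np
      from scipy import stats

      def lp3_quantile(peaks, return_period):
          """Design discharge from a simplified log-Pearson Type III fit
          (station skew only; no B17B outlier or skew-weighting steps)."""
          logq = np.log10(np.asarray(peaks, dtype=float))
          mean, std = logq.mean(), logq.std(ddof=1)
          skew = stats.skew(logq, bias=False)                  # station skew of the log flows
          z = stats.norm.ppf(1.0 - 1.0 / return_period)        # standard normal deviate
          k = skew / 6.0                                       # Wilson-Hilferty frequency factor
          K = (2.0 / skew) * ((1.0 + k * z - k ** 2) ** 3 - 1.0) if abs(skew) > 1e-6 else z
          return 10.0 ** (mean + K * std)

      # Hypothetical annual peak flows (cfs) for a gauged basin.
      peaks = [1200, 950, 2100, 1750, 800, 3100, 1400, 1650, 2600, 1900,
               1100, 2300, 980, 1500, 2800, 1250, 1700, 2050, 880, 1600]
      print("estimated 100-year design discharge:", round(lp3_quantile(peaks, 100)))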

  12. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors, based on theoretical calculations, is small and may not be of clinical or research significance.

  13. Antiplatelet Agents for the Secondary Prevention of Ischemic Stroke or Transient Ischemic Attack: A Network Meta-Analysis.

    PubMed

    Wang, Wen; Zhang, Lu; Liu, Weiming; Zhu, Qin; Lan, Qing; Zhao, Jizong

    2016-05-01

    Stroke can cause high morbidity and mortality, and ischemic stroke (IS) and transient ischemic attack (TIA) patients have a high stroke recurrence rate. Antiplatelet agents are the standard therapy for these patients, but it is often difficult for clinicians to select the best therapy from among the multiple treatment options. We therefore performed a network meta-analysis to estimate the efficacy of antiplatelet agents for secondary prevention of recurrent stroke. We systematically searched 3 databases (PubMed, Embase, and Cochrane) for relevant studies published through August 2015. The primary end points of this meta-analysis were overall stroke, hemorrhagic stroke, and fatal stroke. A total of 30 trials were included in our network meta-analysis, and data were abstracted from them. Among the therapies evaluated in the included trials, the estimates for overall stroke and hemorrhagic stroke for cilostazol (Cilo) were significantly better than those for aspirin (odds ratio [OR] = .64, 95% credibility interval [CrI], .45-.91; OR = .23, 95% CrI, .08-.58). The estimate for fatal stroke was highest for Cilo plus aspirin combination therapy, followed by Cilo therapy. The results of our meta-analysis indicate that Cilo significantly reduces overall stroke and hemorrhagic stroke in IS or TIA patients and also reduces fatal stroke, although with low statistical significance. Our results also show that Cilo was significantly more effective than other therapies in Asian patients; therefore, future trials should focus on Cilo treatment for secondary prevention of recurrent stroke in non-Asian patients. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  14. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    PubMed

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises as to how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  15. MEGA-CC: computing core of molecular evolutionary genetics analysis program for automated and iterative data analysis.

    PubMed

    Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro

    2012-10-15

    There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.

  16. Kinetics and mechanism of catalytic hydroprocessing of components of coal-derived liquids. Sixteenth quarterly report, February 16, 1983-May 15, 1983.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gates, B. C.; Olson, H. H.; Schuit, G. C.A.

    1983-08-22

    A new method of structural analysis is applied to a group of hydroliquefied coal samples. The method uses elemental analysis and NMR data to estimate the concentrations of functional groups in the samples. The samples include oil and asphaltene fractions obtained in a series of hydroliquefaction experiments, and a set of 9 fractions separated from a coal-derived oil. The structural characterization of these samples demonstrates that estimates of functional group concentrations can be used to provide detailed structural profiles of complex mixtures and to obtain limited information about reaction pathways. 11 references, 1 figure, 7 tables.

  17. Supersonic through-flow fan assessment

    NASA Technical Reports Server (NTRS)

    Kepler, C. E.; Champagne, G. A.

    1988-01-01

    A study was conducted to assess the performance potential of a supersonic through-flow fan engine for supersonic cruise aircraft. It included a mean-line analysis of fans designed to operate with in-flow velocities ranging from subsonic to high supersonic speeds. The fan performance generated was used to estimate the performance of supersonic fan engines designed for four applications: a Mach 2.3 supersonic transport, a Mach 2.5 fighter, a Mach 3.5 cruise missile, and a Mach 5.0 cruise vehicle. For each application, an engine was conceptualized, fan and engine performance were calculated, weight estimates were made, the engine was installed in a hypothetical vehicle, and a mission analysis was conducted.

  18. Reliability generalization study of the Yale-Brown Obsessive-Compulsive Scale for children and adolescents.

    PubMed

    López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa

    2015-01-01

    The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated with the standard deviation of total scores and sample size.

  19. Determination of variability in leaf biomass densities of conifers and mixed conifers under different environmental conditions in the San Joaquin Valley air basin. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, P.J.; Mutters, R.J.; Adams, C.

    1995-06-01

    Biomass sampling plots were established at 29 locations within the dominant vegetation zones of the study area. Estimates of foliar biomass were made for each plot by three independent methods: regression analysis on the basis of tree diameter, calculation of the amount of light intercepted by the leaf canopy, and extrapolation from branch leaf area. Multivariate regression analysis was used to relate these foliar biomass estimates for oak plots and conifer plots to several independent predictor variables, including elevation, slope, aspect, temperature, precipitation, and soil chemical characteristics.

  20. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters, namely the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic, during an interim analysis.

  1. Retrospective Assessment of Cost Savings From Prevention

    PubMed Central

    Grosse, Scott D.; Berry, Robert J.; Tilford, J. Mick; Kucik, James E.; Waitzman, Norman J.

    2016-01-01

    Introduction: Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997–1998. Methods: Estimates of annual numbers of live-born spina bifida cases in 1995–1996 relative to 1999–2011, based on birth defects surveillance data, were combined during 2015 with published estimates of the present value of lifetime direct costs, updated in 2014 U.S. dollars, for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. Results: The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. Conclusions: The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. PMID:26790341
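
    The best-estimate arithmetic in the abstract can be reproduced in a few lines of Python; the annual fortification cost below is a back-calculated placeholder chosen for illustration, not a figure reported in the abstract:

      # Figures from the abstract, except the fortification cost, which is a hypothetical placeholder.
      cases_avoided = 767                      # annual live-born spina bifida cases prevented (best estimate)
      cost_per_case = 791_900                  # present value of lifetime direct cost per case (2014 USD)
      fortification_cost = 4_400_000           # assumed annual cost of fortification (placeholder)

      avoided_direct_costs = cases_avoided * cost_per_case
      net_savings = avoided_direct_costs - fortification_cost
      print(f"avoided direct costs: ${avoided_direct_costs / 1e6:.0f}M")   # roughly $607M
      print(f"net cost savings:     ${net_savings / 1e6:.0f}M")            # roughly $603M, as reported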

  2. Treatment strategies for pelvic organ prolapse: a cost-effectiveness analysis.

    PubMed

    Hullfish, Kathie L; Trowbridge, Elisa R; Stukenborg, George J

    2011-05-01

    To compare the relative cost effectiveness of treatment decision alternatives for post-hysterectomy pelvic organ prolapse (POP). A Markov decision analysis model was used to assess and compare the relative cost effectiveness of expectant management, use of a pessary, and surgery in terms of quality-adjusted life months obtained over 1 year. Sensitivity analysis was conducted to determine whether the results depended on specific estimates of patient utilities for pessary use, probabilities for complications and other events, and estimated costs. Only two treatment alternatives were found to be efficient choices: initial pessary use and vaginal reconstructive surgery (VRS). Pessary use (including patients who eventually transitioned to surgery) achieved 10.4 quality-adjusted months at a cost of $10,000 per patient, while VRS obtained 11.4 quality-adjusted months at $15,000 per patient. Sensitivity analysis demonstrated that these baseline results depended on several key estimates in the model. This analysis indicates that pessary use and VRS are the most cost-effective treatment alternatives for treating post-hysterectomy vaginal prolapse. Additional research is needed to standardize POP outcomes and complications, so that healthcare providers can best utilize cost information in balancing the risks and benefits of their treatment decisions.
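
    A minimal sketch of a Markov cohort calculation of quality-adjusted months over a 1-year horizon with monthly cycles; the health states, transition probabilities, and utility weights below are hypothetical and are not the values used in the published model:

      import numpy as np

      # Hypothetical three-state model: symptoms controlled, recurrence, post-treatment complication.
      P = np.array([[0.92, 0.06, 0.02],        # monthly transition probabilities (rows sum to 1)
                    [0.40, 0.55, 0.05],
                    [0.50, 0.10, 0.40]])
      utilities = np.array([0.95, 0.70, 0.60]) # quality weight attached to a month in each state
      cohort = np.array([1.0, 0.0, 0.0])       # everyone starts with symptoms controlled

      quality_adjusted_months = 0.0
      for month in range(12):                  # 1-year horizon, monthly cycles
          quality_adjusted_months += float(cohort @ utilities)
          cohort = cohort @ P                  # advance the cohort one cycle
      print(f"quality-adjusted months over 1 year: {quality_adjusted_months:.1f}")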

  3. Prevalence of Orofacial Clefts among Live Births in China: A Systematic Review and Meta-Analysis.

    PubMed

    Wang, Mengying; Yuan, Yuan; Wang, Zifan; Liu, Dongjing; Wang, Zhuqing; Sun, Feng; Wang, Ping; Zhu, Hongping; Li, Jing; Wu, Tao; Beaty, Terri H

    2017-07-17

    Orofacial clefts (OFCs) are common human birth defects in China. However, studies on the prevalence of OFCs present inconsistent results. The overall prevalence and geographic distribution of OFCs are poorly described in China. Thus, we conducted a systematic review and meta-analysis to estimate the prevalence of OFCs. The systematic review and meta-analysis were conducted on the basis of an established protocol (PROSPERO 2015: CRD42015030198). We systematically searched for articles in four electronic databases, including Embase, PubMed, Wanfang Database, and China National Knowledge Infrastructure (CNKI), to identify relevant studies about the prevalence of OFCs in China. Meta-analysis, including subgroup analysis, was conducted to estimate the pooled prevalence. A total of 41 studies published between 1986 and 2015 were included in our analysis. The sample size ranged from 2,586 to 4,611,808 live births. The random-effects model of meta-analysis showed that the overall prevalence of OFCs in China was 1.4 per 1000 live births (95% confidence interval [CI], 1.1-1.7). In subgroup analysis based on geographic regions, we found that OFC prevalence in Southwest China (2.3 per 1000 live births, 95% CI, 1.1-4.7) was higher than that in other regions. There were no significant time trends of OFCs during the study period (p-value = 0.47). The overall prevalence of OFCs in China was 1.4 per 1000 live births. No significant secular trend of prevalence has been found in this analysis. Further studies need to be conducted to explore the etiology of OFCs to better control the risk of this common birth defect. Birth Defects Research 109:1011-1019, 2017. © 2017 Wiley Periodicals, Inc.

  4. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands to millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.

  5. Validating the absolute reliability of a fat free mass estimate equation in hemodialysis patients using near-infrared spectroscopy.

    PubMed

    Kono, Kenichi; Nishida, Yusuke; Moriyama, Yoshihumi; Taoka, Masahiro; Sato, Takashi

    2015-06-01

    The assessment of nutritional states using fat free mass (FFM) measured with near-infrared spectroscopy (NIRS) is clinically useful. This measurement should incorporate the patient's post-dialysis weight ("dry weight") in order to exclude the effects of any change in water mass. We therefore used NIRS to investigate the regression, independent variables, and absolute reliability of FFM in dry weight. The study included 47 outpatients from the hemodialysis unit. Body weight was measured before dialysis, and FFM was measured using NIRS before and after dialysis treatment. Multiple regression analysis was used to estimate the FFM in dry weight as the dependent variable. The measured FFM before dialysis treatment (Mw-FFM) and the difference between measured and dry weight (Mw-Dw) were the independent variables. We performed Bland-Altman analysis to detect errors between the statistically estimated FFM and the measured FFM after dialysis treatment. The multiple regression equation to estimate the FFM in dry weight was: Dw-FFM = 0.038 + (0.984 × Mw-FFM) + (-0.571 × [Mw-Dw]) (R² = 0.99). There was no systematic bias between the estimated and the measured values of FFM in dry weight. Using NIRS, FFM in dry weight can be calculated by an equation including FFM in measured weight and the difference between the measured weight and the dry weight. © 2015 The Authors. Therapeutic Apheresis and Dialysis © 2015 International Society for Apheresis.
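
    Applying the reported equation is a one-line calculation; the patient values below are hypothetical:

      def ffm_dry_weight(mw_ffm, measured_weight, dry_weight):
          """Estimated fat free mass at dry weight using the published equation:
          Dw-FFM = 0.038 + 0.984 * Mw-FFM - 0.571 * (Mw - Dw)."""
          return 0.038 + 0.984 * mw_ffm - 0.571 * (measured_weight - dry_weight)

      # Hypothetical patient: 45.0 kg FFM and 62.5 kg body weight measured pre-dialysis, 60.0 kg dry weight.
      print(round(ffm_dry_weight(45.0, 62.5, 60.0), 2))   # FFM adjusted for the 2.5 kg of excess fluid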

  6. Estimating the Relative Water Content of Leaves in a Cotton Canopy

    NASA Technical Reports Server (NTRS)

    Vanderbilt, Vern; Daughtry, Craig; Kupinski, Meredith; French, Andrew; Chipman, Russell; Dahlgren, Robert

    2017-01-01

    Remotely sensing plant canopy water status remains a long-term goal of remote sensing research. Established approaches to estimating canopy water status, such as the Crop Water Stress Index, the Water Deficit Index, and the Equivalent Water Thickness, involve measurements in the thermal or reflective infrared. Here we report plant water status estimates based upon analysis of polarized visible imagery of a cotton canopy measured by the ground-based Multi-Spectral Polarization Imager (MSPI). Such estimators potentially provide access to the plant hydrological photochemistry that manifests scattering and absorption effects in the visible spectral region. Twice during one day, ±3 hours from solar noon, we collected polarized imagery and relative water content data on a cotton test plot located at the Arid Land Agricultural Research Center, United States Department of Agriculture, Maricopa, AZ. The test plot, a small portion of a large cotton field, contained stressed plants ready for irrigation. The evening prior to data collection we irrigated several rows of plants within the test plot. Thus, ground MSPI imagery from both morning and afternoon included cotton plants with a range of water statuses. Data analysis includes classifying the polarized imagery into sunlit reflecting, sunlit transmitting, shaded foliage, and bare soil. We estimate the leaf surface reflection and interior reflection based upon the per-pixel polarization and sun-view directions. We compare our cotton results with our prior polarization results for corn and soybean leaves measured in the lab and corn leaves measured in the field.

  7. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    PubMed Central

    Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi

    2016-01-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003

  8. Statistical Estimation of Rollover Risk

    DOT National Transportation Integrated Search

    1989-08-01

    This report describes the results of a statistical analysis to determine the probability of a rollover in a single vehicle accident. Over 39,000 accidents, which included 4910 rollovers in the states of Texas, Maryland, and Washington were exam...

  9. Spectral analysis of groove spacing on Ganymede

    NASA Technical Reports Server (NTRS)

    Grimm, R. E.

    1984-01-01

    The technique used to analyze groove spacing on Ganymede is presented. Data from Voyager images are used to determine the surface topography and position of the grooves. Power spectral estimates are statistically analyzed and sample data is included.

  10. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and of a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model, and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit, attitude, and camera bias parameters is given.

  11. Influence diagnostics in meta-regression model.

    PubMed

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weight perturbation scheme, response perturbation scheme, covariate perturbation scheme, and within-variance perturbation scheme are explored. We introduce a method that simultaneously perturbs the responses, covariates, and within-variance to obtain the local influence measure, which has the advantage of allowing the influence magnitudes of influential studies from different perturbations to be compared. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  12. North Dakota's forests, 2005: statistics, methods, and quality assurance

    Treesearch

    Patrick D. Miles; David E. Haugen; Charles J. Barnett

    2011-01-01

    The first full annual inventory of North Dakota's forests was completed in 2005 after 7,622 plots were selected and 164 forested plots were visited and measured. This report includes detailed information on forest inventory methods and data quality estimates. Important resource statistics are included in the tables. A detailed analysis of the North Dakota...

  13. South Dakota's forests, 2005: statistics, methods, and quality assurance

    Treesearch

    Patrick D. Miles; Ronald J. Piva; Charles J. Barnett

    2011-01-01

    The first full annual inventory of South Dakota's forests was completed in 2005 after 8,302 plots were selected and 325 forested plots were visited and measured. This report includes detailed information on forest inventory methods and data quality estimates. Important resource statistics are included in the tables. A detailed analysis of the South Dakota...

  14. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

    Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
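
    As an example of the prospective use recommended above, the following short sketch uses statsmodels to estimate the per-group sample size needed to detect a hypothesized biologically significant effect; the effect size, alpha, and power values are illustrative:

      from statsmodels.stats.power import TTestIndPower

      # Prospective (a priori) power analysis for a two-sample t-test:
      # sample size per group needed to detect Cohen's d = 0.5 with 80% power at alpha = 0.05.
      analysis = TTestIndPower()
      n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                         ratio=1.0, alternative="two-sided")
      print(f"required sample size per group: {n_per_group:.1f}")   # about 64 per group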

  15. Seismic risk analysis for the Babcock and Wilcox facility, Leechburg, Pennsylvania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-10-21

    The results of a detailed seismic risk analysis of the Babcock and Wilcox Plutonium Fuel Fabrication facility at Leechburg, Pennsylvania are presented. This report focuses on earthquakes; the other natural hazards, being addressed in separate reports, are severe weather (strong winds and tornados) and floods. The calculational method used is based on Cornell's work (1968); it has been previously applied to safety evaluations of major projects. The historical seismic record was established after a review of available literature, consultation with operators of local seismic arrays and examination of appropriate seismic data bases. Because of the aseismicity of the region around the site, an analysis different from the conventional closest approach in a tectonic province was adapted. Earthquakes as far from the site as 1,000 km were included, as were the possibility of earthquakes at the site. In addition, various uncertainties in the input were explicitly considered in the analysis. The results of the risk analysis, which include a Bayesian estimate of the uncertainties, are presented, expressed as return period accelerations. The best estimate curve indicates that the Babcock and Wilcox facility will experience 0.05 g every 220 years and 0.10 g every 1400 years. The bounding curves roughly represent the one standard deviation confidence limits about the best estimate, reflecting the uncertainty in certain of the input. Detailed examination of the results show that the accelerations are very insensitive to the details of the source region geometries or the historical earthquake statistics in each region and that each of the source regions contributes almost equally to the cumulative risk at the site. If required for structural analysis, acceleration response spectra for the site can be constructed by scaling the mean response spectrum for alluvium in WASH 1255 by these peak accelerations.

  16. Overview of Recent Flight Flutter Testing Research at NASA Dryden

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Richard C.; Voracek, David F.

    1997-01-01

    In response to the concerns of the aeroelastic community, NASA Dryden Flight Research Center, Edwards, California, is conducting research into improving the flight flutter (including aeroservoelasticity) test process with more accurate and automated techniques for stability boundary prediction. The important elements of this effort so far include the following: (1) excitation mechanisms for enhanced vibration data to reduce uncertainty levels in stability estimates; (2) investigation of a variety of frequency, time, and wavelet analysis techniques for signal processing, stability estimation, and nonlinear identification; and (3) robust flutter boundary prediction to substantially reduce the test matrix for flutter clearance. These are critical research topics addressing the concerns of a recent AGARD Specialists' Meeting on Advanced Aeroservoelastic Testing and Data Analysis. This paper addresses these items using flight test data from the F/A-18 Systems Research Aircraft and the F/A-18 High Alpha Research Vehicle.

  17. Localising semantic and syntactic processing in spoken and written language comprehension: an Activation Likelihood Estimation meta-analysis.

    PubMed

    Rodd, Jennifer M; Vitello, Sylvia; Woollams, Anna M; Adank, Patti

    2015-02-01

    We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of anterior LIFG for semantic processing. In addition, the results emphasise the posterior (but not anterior) temporal lobe for both semantic and syntactic processing. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  18. Haloacetic acids in drinking water and risk for stillbirth

    PubMed Central

    King, W; Dodds, L; Allen, A; Armson, B; Fell, D; Nimrod, C

    2005-01-01

    Aims: To investigate the effects of haloacetic acid (HAA) compounds in drinking water on stillbirth risk. Methods: A population based case-control study was conducted in Nova Scotia and Eastern Ontario, Canada. Estimates of daily exposure to total and specific HAAs were based on household water samples and questionnaire information on water consumption at home and work. Results: The analysis included 112 stillbirth cases and 398 live birth controls. In analysis without adjustment for total THM exposure, a relative risk greater than 2 was observed for an intermediate exposure category for total HAA and dichloroacetic acid measures. After adjustment for total THM exposure, the risk estimates for intermediate exposure categories were diminished, the relative risk associated with the highest category was in the direction of a protective effect, and all confidence intervals included the null value. Conclusions: No association was observed between HAA exposures and stillbirth risk after controlling for THM exposures. PMID:15657195

  19. Analysis and meta-analysis of single-case designs: an introduction.

    PubMed

    Shadish, William R

    2014-04-01

    The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example across articles and presents syntax or macros for how to do them. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know what analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  20. Navy Multiband Terminal (NMT)

    DTIC Science & Technology

    2015-12-01

    [Extract from the December 2015 Selected Acquisition Report for the Navy Multiband Terminal. It notes operation with AEHF satellites and with MILSTAR satellites in the backwards-compatible mode, mission requirements specific to Navy operations including threat levels, the Naval Center for Cost Analysis (NCCA) Component Cost Position (CCP) memo dated December 18, 2015, the confidence level of the cost estimate, and a cost-variance summary (Econ, Qty, Sch, Eng, Est, Oth, Spt, Total, in TY $M) from the current SAR baseline to the current estimate, including PAUC.]

  1. Design and Analysis of Low Frequency Communication System in Persian Gulf

    DTIC Science & Technology

    2008-09-01


  2. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.

  3. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  4. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
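
    A schematic Python sketch of this type of estimator (not the authors' exact estimator; the Gaussian kernel, bandwidth, and simulated data are assumptions made for illustration): the probability that the censoring indicator is non-missing is estimated by Nadaraya-Watson kernel regression on the observed times, and observed events are then inverse-probability weighted in a Nelson-Aalen-type cumulative hazard:

      import numpy as np

      def nw_prob(t_grid, times, xi, bandwidth):
          """Nadaraya-Watson kernel estimate of P(censoring indicator observed | T = t)."""
          diff = (t_grid[:, None] - times[None, :]) / bandwidth
          K = np.exp(-0.5 * diff ** 2)                       # Gaussian kernel
          return (K * xi).sum(axis=1) / np.clip(K.sum(axis=1), 1e-12, None)

      def ipw_survival(times, delta, xi, bandwidth=1.0):
          """Inverse-probability-of-non-missingness weighted survival estimate.
          times: observed times; delta: event indicator (used only where xi == 1);
          xi: 1 if the censoring indicator was observed, 0 if it is missing."""
          order = np.argsort(times)
          times, delta, xi = times[order], delta[order], xi[order]
          pi_hat = nw_prob(times, times, xi, bandwidth)
          at_risk = len(times) - np.arange(len(times))       # risk-set size at each ordered time
          increments = (xi * delta) / (pi_hat * at_risk)     # weighted hazard increments
          return times, np.exp(-np.cumsum(increments))       # survival via the cumulative hazard

      rng = np.random.default_rng(2)
      t = rng.exponential(5.0, 100)                          # hypothetical observed times
      d = rng.integers(0, 2, 100).astype(float)              # hypothetical censoring indicators
      x = (rng.uniform(size=100) < 0.8).astype(float)        # roughly 20% of indicators missing
      times, surv = ipw_survival(t, d, x)
      print(surv[:5])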

  5. Ancestry Estimation in Forensic Anthropology: Geometric Morphometric versus Standard and Nonstandard Interlandmark Distances.

    PubMed

    Katherine Spradley, M; Jantz, Richard L

    2016-07-01

    Standard cranial measurements are commonly used for ancestry estimation; however, 3D digitizers have made cranial landmark data collection and geometric morphometric (GM) analyses more popular within forensic anthropology. Yet there has been little focus on which data type works best. The goal of the present research is to test the discrimination ability of standard and nonstandard craniometric measurements and data derived from GM analysis. A total of 31 cranial landmarks were used to generate 465 interlandmark distances, including a subset of 20 commonly used measurements, and to generate principal component scores from procrustes coordinates. All were subjected to discriminant function analysis to ascertain which type of data performed best for ancestry estimation of American Black and White and Hispanic males and females. The nonstandard interlandmark distances generated the highest classification rates for females (90.5%) and males (88.2%). Using nonstandard interlandmark distances over more commonly used measurements leads to better ancestry estimates for our current population structure. © 2016 American Academy of Forensic Sciences.

  6. The Yale-Brown Obsessive Compulsive Scale: A Reliability Generalization Meta-Analysis.

    PubMed

    López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Maria; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa

    2015-10-01

    The Yale-Brown Obsessive Compulsive Scale (Y-BOCS) is the most frequently applied test to assess obsessive compulsive symptoms. We conducted a reliability generalization meta-analysis on the Y-BOCS to estimate the average reliability, examine the variability among the reliability estimates, search for moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the Y-BOCS. We included studies where the Y-BOCS was applied to a sample of adults and a reliability estimate was reported. Out of the 11,490 references located, 144 studies met the selection criteria. For the total scale, the mean reliability was 0.866 for coefficients alpha, 0.848 for test-retest correlations, and 0.922 for intraclass correlations. The moderator analyses led to a predictive model where the standard deviation of the total test and the target population (clinical vs. nonclinical) explained 38.6% of the total variability among coefficients alpha. Finally, clinical implications of the results are discussed. © The Author(s) 2014.

  7. Classification and area estimation of land covers in Kansas using ground-gathered and LANDSAT digital data

    NASA Technical Reports Server (NTRS)

    May, G. A.; Holko, M. L.; Anderson, J. E.

    1983-01-01

    Ground-gathered data and LANDSAT multispectral scanner (MSS) digital data from 1981 were analyzed to produce a classification of Kansas land areas into specific types called land covers. The land covers included rangeland, forest, residential, commercial/industrial, and various types of water. The analysis produced two outputs: acreage estimates with measures of precision, and map-type or photo products of the classification which can be overlaid on maps at specific scales. State-level acreage estimates were obtained and substate-level land cover classification overlays and estimates were generated for selected geographical areas. These products were found to be of potential use in managing land and water resources.

  8. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
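
    For orientation, the classical time-averaged Gaussian plume expression for an elevated point source (the textbook form with ground reflection, not necessarily the exact formulation extended in the paper) can be sketched as follows; the emission rate, wind speed, stack height, and dispersion coefficients are illustrative values only.

      # Hedged sketch of a time-averaged Gaussian plume from an elevated point source.
      import numpy as np

      def gaussian_plume(x_m, y_m, z_m, Q_g_s=10.0, u_m_s=5.0, H_m=50.0):
          """Concentration (g/m^3) at (x, y, z) downwind of the source (toy values)."""
          # Placeholder neutral-stability dispersion coefficients (Briggs-like power laws).
          sigma_y = 0.08 * x_m / np.sqrt(1.0 + 0.0001 * x_m)
          sigma_z = 0.06 * x_m / np.sqrt(1.0 + 0.0015 * x_m)
          lateral = np.exp(-y_m**2 / (2.0 * sigma_y**2))
          vertical = (np.exp(-(z_m - H_m)**2 / (2.0 * sigma_z**2)) +
                      np.exp(-(z_m + H_m)**2 / (2.0 * sigma_z**2)))   # ground reflection
          return Q_g_s / (2.0 * np.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

      print(gaussian_plume(x_m=1000.0, y_m=0.0, z_m=2.0))    # g/m^3 near ground, 1 km downwind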

  9. Challenges Associated with Estimating Utility in Wet Age-Related Macular Degeneration: A Novel Regression Analysis to Capture the Bilateral Nature of the Disease.

    PubMed

    Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth

    2017-10-01

    The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of wet AMD by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate visual deterioration in wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant (p < 0.05) when regressed separately. When included without an interaction term, only the coefficient for VA in the BSE was significant (p = 0.04), but when an interaction term between VA in the BSE and WSE was included, only the constant term (mean TTO utility value) was significant, potentially a result of the collinearity between the VA of the two eyes. The lack of both formal model fit statistics from the GEE approach and theoretical knowledge to support the superiority of one model over another makes it difficult to select the best model. Limitations of this analysis arise from the potential influence of collinearity between the VA of both eyes, and the use of contact lenses to reflect VA states to obtain the original dataset. Whilst further research is required to elicit more accurate utility values for wet AMD, this novel regression analysis provides a possible source of utility values to allow future economic models to capture the quality of life impact of changes in VA in both eyes. Novartis Pharmaceuticals UK Limited.
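
    The type of regression described here can be sketched with a generalized estimating equations fit under an exchangeable working correlation; the data frame and column names (subject, va_bse, va_wse, utility) below are synthetic placeholders, not the TTO dataset.

      # Hedged sketch: GEE regression of utility on visual acuity in both eyes (synthetic data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_subj, n_states = 50, 4
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_states),
          "va_bse":  rng.uniform(0.0, 1.0, n_subj * n_states),   # logMAR-like, illustrative
          "va_wse":  rng.uniform(0.0, 1.3, n_subj * n_states),
      })
      df["utility"] = (0.85 - 0.20 * df["va_bse"] - 0.05 * df["va_wse"]
                       + rng.normal(0.0, 0.05, len(df)))

      model = smf.gee("utility ~ va_bse + va_wse", groups="subject", data=df,
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())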

  10. Risk of colorectal cancer in Asian patients with ulcerative colitis: a systematic review and meta-analysis.

    PubMed

    Bopanna, Sawan; Ananthakrishnan, Ashwin N; Kedia, Saurabh; Yajnik, Vijay; Ahuja, Vineet

    2017-04-01

    The increased risk of colorectal cancer in ulcerative colitis is well known. The risk of sporadic colorectal cancer in Asian populations is considered low and risk estimates of colorectal cancer related to ulcerative colitis from Asia vary. This meta-analysis is an Asian perspective on the risk of colorectal cancer related to ulcerative colitis. We searched PubMed and Embase for terms related to colorectal cancer in ulcerative colitis from inception to July 1, 2016. The search for published articles was done by country for all countries in Asia. We included studies with information on the prevalence and cumulative risk of colorectal cancer at various timepoints. A random-effects meta-analysis was done to calculate the pooled prevalence as well as a cumulative risk at 10 years, 20 years, and 30 years of disease. Our search identified 2575 articles, of which 44 were eligible for inclusion. Our analysis included a total of 31 287 patients with ulcerative colitis with a total of 293 reported colorectal cancers. Using pooled prevalence estimates from various studies, the overall prevalence was 0·85% (95% CI 0·65-1·04). The risks for colorectal cancer were 0·02% (95% CI 0·00-0·04) at 10 years, 4·81% (3·26-6·36) at 20 years, and 13·91% (7·09-20·72) at 30 years. Subgroup analysis by stratifying the studies according to region or period of the study did not reveal any significant differences. We found the risk of colorectal cancer in Asian patients with ulcerative colitis was similar to recent estimates in Europe and North America. Adherence to screening is therefore necessary. Larger population-based, prospective studies are required for better estimates of the risk. Indo-US Science and Technology Forum. Copyright © 2017 Elsevier Ltd. All rights reserved.
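
    A minimal sketch of the pooling step behind such estimates, assuming made-up study counts rather than the included studies, is a DerSimonian-Laird random-effects combination of logit-transformed prevalences:

      # Hedged sketch: random-effects pooled prevalence from invented study counts.
      import numpy as np

      events = np.array([3, 12, 7, 25, 5])            # cancers per study (invented)
      n      = np.array([400, 1500, 900, 2600, 700])  # patients per study (invented)

      p = events / n
      yi = np.log(p / (1 - p))                        # logit prevalence per study
      vi = 1.0 / events + 1.0 / (n - events)          # approximate variance of the logit

      w_fixed = 1.0 / vi                              # DerSimonian-Laird between-study variance
      y_fixed = np.sum(w_fixed * yi) / np.sum(w_fixed)
      Q = np.sum(w_fixed * (yi - y_fixed) ** 2)
      C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
      tau2 = max(0.0, (Q - (len(yi) - 1)) / C)

      w = 1.0 / (vi + tau2)                           # random-effects weights
      mu = np.sum(w * yi) / np.sum(w)
      se = np.sqrt(1.0 / np.sum(w))
      expit = lambda x: 1.0 / (1.0 + np.exp(-x))
      print(f"pooled prevalence {expit(mu):.3%} "
            f"(95% CI {expit(mu - 1.96 * se):.3%} to {expit(mu + 1.96 * se):.3%})")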

  11. Risk of colorectal cancer in Asian patients with ulcerative colitis: a systematic review and meta-analysis

    PubMed Central

    Bopanna, Sawan; Ananthakrishnan, Ashwin N; Kedia, Saurabh; Yajnik, Vijay; Ahuja, Vineet

    2017-01-01

    Background: The increased risk of colorectal cancer in ulcerative colitis is well known. The risk of sporadic colorectal cancer in Asian populations is considered low and risk estimates of colorectal cancer related to ulcerative colitis from Asia vary. This meta-analysis is an Asian perspective on the risk of colorectal cancer related to ulcerative colitis. Methods: We searched PubMed and Embase for terms related to colorectal cancer in ulcerative colitis from inception to July 1, 2016. The search for published articles was done by country for all countries in Asia. We included studies with information on the prevalence and cumulative risk of colorectal cancer at various timepoints. A random-effects meta-analysis was done to calculate the pooled prevalence as well as a cumulative risk at 10 years, 20 years, and 30 years of disease. Findings: Our search identified 2575 articles, of which 44 were eligible for inclusion. Our analysis included a total of 31 287 patients with ulcerative colitis with a total of 293 reported colorectal cancers. Using pooled prevalence estimates from various studies, the overall prevalence was 0·85% (95% CI 0·65–1·04). The risks for colorectal cancer were 0·02% (95% CI 0·00–0·04) at 10 years, 4·81% (3·26–6·36) at 20 years, and 13·91% (7·09–20·72) at 30 years. Subgroup analysis by stratifying the studies according to region or period of the study did not reveal any significant differences. Interpretation: We found the risk of colorectal cancer in Asian patients with ulcerative colitis was similar to recent estimates in Europe and North America. Adherence to screening is therefore necessary. Larger population-based, prospective studies are required for better estimates of the risk. PMID:28404156

  12. Use of Markov Chain Monte Carlo analysis with a physiologically-based pharmacokinetic model of methylmercury to estimate exposures in US women of childbearing age.

    PubMed

    Allen, Bruce C; Hack, C Eric; Clewell, Harvey J

    2007-08-01

    A Bayesian approach, implemented using Markov Chain Monte Carlo (MCMC) analysis, was applied with a physiologically-based pharmacokinetic (PBPK) model of methylmercury (MeHg) to evaluate the variability of MeHg exposure in women of childbearing age in the U.S. population. The analysis made use of the newly available National Health and Nutrition Survey (NHANES) blood and hair mercury concentration data for women of age 16-49 years (sample size, 1,582). Bayesian analysis was performed to estimate the population variability in MeHg exposure (daily ingestion rate) implied by the variation in blood and hair concentrations of mercury in the NHANES database. The measured variability in the NHANES blood and hair data represents the result of a process that includes interindividual variation in exposure to MeHg and interindividual variation in the pharmacokinetics (distribution, clearance) of MeHg. The PBPK model includes a number of pharmacokinetic parameters (e.g., tissue volumes, partition coefficients, rate constants for metabolism and elimination) that can vary from individual to individual within the subpopulation of interest. Using MCMC analysis, it was possible to combine prior distributions of the PBPK model parameters with the NHANES blood and hair data, as well as with kinetic data from controlled human exposures to MeHg, to derive posterior distributions that refine the estimates of both the population exposure distribution and the pharmacokinetic parameters. In general, based on the populations surveyed by NHANES, the results of the MCMC analysis indicate that a small fraction, less than 1%, of the U.S. population of women of childbearing age may have mercury exposures greater than the EPA RfD for MeHg of 0.1 microg/kg/day, and that there are few, if any, exposures greater than the ATSDR MRL of 0.3 microg/kg/day. The analysis also indicates that typical exposures may be greater than previously estimated from food consumption surveys, but that the variability in exposure within the population of U.S. women of childbearing age may be less than previously assumed.
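
    The Bayesian machinery involved can be illustrated, in a deliberately oversimplified form, by inferring a population intake distribution from blood concentrations with a hand-rolled Metropolis-Hastings sampler; the one-parameter steady-state relation, the priors, and the "observations" below are all invented stand-ins for the PBPK model and NHANES data.

      # Heavily simplified sketch: population intake inferred from blood levels via MCMC.
      import numpy as np

      rng = np.random.default_rng(2)
      k = 5.0                                   # assumed blood level per unit intake (toy value)
      obs_blood = rng.lognormal(mean=np.log(1.0), sigma=0.6, size=200)   # synthetic data

      def log_post(mu, sigma):
          if sigma <= 0:
              return -np.inf
          # Lognormal population intake => lognormal blood = k * intake.
          pred_mu = mu + np.log(k)
          loglik = np.sum(-np.log(obs_blood * sigma * np.sqrt(2 * np.pi))
                          - (np.log(obs_blood) - pred_mu) ** 2 / (2 * sigma ** 2))
          logprior = -0.5 * (mu / 2.0) ** 2 - 0.5 * ((sigma - 0.5) / 0.5) ** 2   # weak priors
          return loglik + logprior

      samples, cur = [], np.array([0.0, 0.5])
      cur_lp = log_post(*cur)
      for _ in range(20000):                    # random-walk Metropolis-Hastings
          prop = cur + rng.normal(0.0, 0.05, size=2)
          prop_lp = log_post(*prop)
          if np.log(rng.uniform()) < prop_lp - cur_lp:
              cur, cur_lp = prop, prop_lp
          samples.append(cur.copy())

      mu_s, sigma_s = np.array(samples[5000:]).T
      print("posterior mean of log-intake mean:", mu_s.mean())
      print("posterior mean of log-intake SD:  ", sigma_s.mean())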

  13. Estimated hospitalizations attributed to norovirus and rotavirus infection in Canada, 2006-2010.

    PubMed

    Morton, V K; Thomas, M K; McEwen, S A

    2015-12-01

    Enteric viruses including norovirus and rotavirus are leading causes of gastroenteritis in Canada. However, only a small number of clinical cases are actually tested for these pathogens leading to systematic underestimation of attributed hospitalizations in administrative databases. The objective of this analysis was to estimate the number of hospitalizations due to norovirus and rotavirus in Canada. Hospitalization records for acute gastroenteritis-associated discharges at all acute-care hospitals in Canada between 2006 and 2011 were analysed. Cause-unspecified gastroenteritis hospitalizations were modelled using age-specific negative binomial models with cause-specified gastroenteritis admissions as predictors. The coefficients from the models were used to estimate the number of norovirus and rotavirus admissions. The total annual hospitalizations for rotavirus were estimated to be between 4500 and 10 000. Total annual hospitalizations for norovirus were estimated to be between 4000 and 11 000. The mean total annual cost associated with these hospitalizations was estimated to be at least $16 million for rotavirus and $21 million for norovirus (all figures in Canadian dollars). This study is the first comprehensive analysis of norovirus and rotavirus hospitalizations in Canada. These estimates provide a more complete assessment of the burden and economic costs of these pathogens to the Canadian healthcare system.
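
    The attribution idea can be sketched with a negative binomial regression of cause-unspecified admissions on cause-specified norovirus and rotavirus admissions; the weekly counts, the dispersion value, and the column names below are synthetic, not the Canadian hospital records.

      # Hedged sketch: negative binomial GLM for attributing unspecified admissions (synthetic data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      weeks = 260
      df = pd.DataFrame({
          "noro_coded": rng.poisson(8, weeks),
          "rota_coded": rng.poisson(5, weeks),
      })
      lam = np.exp(1.5 + 0.08 * df["noro_coded"] + 0.10 * df["rota_coded"])
      df["unspecified"] = rng.negative_binomial(n=10, p=(10 / (10 + lam)).to_numpy())

      X = sm.add_constant(df[["noro_coded", "rota_coded"]])
      model = sm.GLM(df["unspecified"], X, family=sm.families.NegativeBinomial(alpha=0.1))
      print(model.fit().summary())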

  14. A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.

    PubMed

    Bord, Séverine; Bioche, Christèle; Druilhet, Pierre

    2018-05-01

    We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Finally, we apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
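
    The prior sensitivity being warned about can be seen even in a minimal grid-based version of the removal-sampling posterior; the removal counts, the grid, and the two priors compared below are illustrative only.

      # Hedged sketch: posterior for population size N under removal sampling, two priors.
      import numpy as np
      from scipy.stats import binom

      counts = np.array([55, 30, 16])                  # removals on successive passes (invented)
      N_grid = np.arange(counts.sum(), 1001)           # truncated support for N
      p_grid = np.linspace(0.01, 0.99, 99)             # sampling-rate grid, flat prior on p

      def log_lik(N, p):
          remaining, ll = N, 0.0
          for c in counts:
              ll += binom.logpmf(c, remaining, p)
              remaining -= c
          return ll

      LL = np.array([[log_lik(N, p) for p in p_grid] for N in N_grid])

      for name, log_prior_N in [("flat prior on N", np.zeros(len(N_grid))),
                                ("1/N prior", -np.log(N_grid.astype(float)))]:
          log_post = LL + log_prior_N[:, None]
          post = np.exp(log_post - log_post.max())
          post_N = post.sum(axis=1)
          post_N /= post_N.sum()
          print(name, "-> posterior mean of N:", round(float(np.sum(N_grid * post_N)), 1))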

  15. Population pharmacokinetic characterization of BAY 81-8973, a full-length recombinant factor VIII: lessons learned - importance of including samples with factor VIII levels below the quantitation limit.

    PubMed

    Garmann, D; McLeay, S; Shah, A; Vis, P; Maas Enriquez, M; Ploeger, B A

    2017-07-01

    The pharmacokinetics (PK), safety and efficacy of BAY 81-8973, a full-length, unmodified, recombinant human factor VIII (FVIII), were evaluated in the LEOPOLD trials. The aim of this study was to develop a population PK model based on pooled data from the LEOPOLD trials and to investigate the importance of including samples with FVIII levels below the limit of quantitation (BLQ) to estimate half-life. The analysis included 1535 PK observations (measured by the chromogenic assay) from 183 male patients with haemophilia A aged 1-61 years from the 3 LEOPOLD trials. The limit of quantitation was 1.5 IU/dL for the majority of samples. Population PK models that included or excluded BLQ samples were used for FVIII half-life estimations, and simulations were performed using both estimates to explore the influence on the time below a determined FVIII threshold. In the data set used, approximately 16.5% of samples were BLQ, which is not uncommon for FVIII PK data sets. The structural model to describe the PK of BAY 81-8973 was a two-compartment model similar to that seen for other FVIII products. If BLQ samples were excluded from the model, FVIII half-life estimations were longer compared with a model that included BLQ samples. It is essential to assess the importance of BLQ samples when performing population PK estimates of half-life for any FVIII product. Exclusion of BLQ data from half-life estimations based on population PK models may result in an overestimation of half-life and underestimation of time under a predetermined FVIII threshold, resulting in potential underdosing of patients. © 2017 Bayer AG. Haemophilia Published by John Wiley & Sons Ltd.
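
    The consequence of discarding BLQ samples can be illustrated with a toy mono-exponential decay fitted by maximum likelihood, treating BLQ observations as left-censored versus dropping them; the sampling times, parameter values, and error model below are invented and are not the study's two-compartment population PK model.

      # Hedged sketch: censored (BLQ-aware) versus BLQ-excluded half-life estimation (toy model).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      lloq = 1.5                                           # IU/dL, as in the abstract
      t = np.tile(np.array([1, 4, 8, 16, 24, 48, 72]), 30).astype(float)   # hours (invented)
      true_c0, true_thalf, sigma = 60.0, 12.0, 0.20        # illustrative values
      c = true_c0 * np.exp(-np.log(2) / true_thalf * t) * np.exp(rng.normal(0, sigma, t.size))
      blq = c < lloq

      def negloglik(theta, use_censoring):
          log_c0, log_thalf, log_sigma = theta
          pred = np.exp(log_c0) * np.exp(-np.log(2) / np.exp(log_thalf) * t)
          resid = np.log(np.maximum(c, lloq)) - np.log(pred)
          ll_obs = norm.logpdf(resid, scale=np.exp(log_sigma))
          ll_blq = norm.logcdf((np.log(lloq) - np.log(pred)) / np.exp(log_sigma))
          if use_censoring:                                # BLQ contributes P(obs < LLOQ)
              return -(np.sum(ll_obs[~blq]) + np.sum(ll_blq[blq]))
          return -np.sum(ll_obs[~blq])                     # BLQ samples simply dropped

      for use_censoring, label in [(True, "BLQ included (censored)"), (False, "BLQ excluded")]:
          fit = minimize(negloglik, x0=np.log([50.0, 10.0, 0.3]), args=(use_censoring,))
          print(label, "-> estimated half-life (h):", round(float(np.exp(fit.x[1])), 2))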

  16. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios including a binary time-dependent variable, a continuous time-dependent variable, and the case including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Predicting the Magnetic Properties of ICMEs: A Pragmatic View

    NASA Astrophysics Data System (ADS)

    Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.

    2017-12-01

    The southward component of the interplanetary magnetic field plays a crucial role in being able to successfully predict space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.

  18. Exposure to traffic-related air pollution and risk of development of childhood asthma: A systematic review and meta-analysis.

    PubMed

    Khreis, Haneen; Kelly, Charlotte; Tate, James; Parslow, Roger; Lucas, Karen; Nieuwenhuijsen, Mark

    2017-03-01

    The question of whether children's exposure to traffic-related air pollution (TRAP) contributes to their development of asthma is unresolved. We conducted a systematic review and performed meta-analyses to analyze the association between TRAP and asthma development in childhood. We systematically reviewed epidemiological studies published until 8 September 2016 and available in the Embase, Ovid MEDLINE (R), and Transport databases. We included studies that examined the association between children's exposure to TRAP metrics and their risk of 'asthma' incidence or lifetime prevalence, from birth to age 18 years old. We extracted key characteristics of each included study using a predefined data items template and these were tabulated. We used the Critical Appraisal Skills Programme checklists to assess the validity of each included study. Where four or more independent risk estimates were available for a continuous pollutant exposure, we conducted overall and age-specific meta-analyses, and four sensitivity analyses for each summary meta-analytic exposure-outcome association. Forty-one studies met our eligibility criteria. There was notable variability in asthma definitions, TRAP exposure assessment methods and confounder adjustment. The overall random-effects risk estimates (95% CI) were 1.08 (1.03, 1.14) per 0.5×10⁻⁵ m⁻¹ black carbon (BC), 1.05 (1.02, 1.07) per 4 μg/m³ nitrogen dioxide (NO₂), 1.48 (0.89, 2.45) per 30 μg/m³ nitrogen oxides (NOₓ), 1.03 (1.01, 1.05) per 1 μg/m³ particulate matter <2.5 μm in diameter (PM₂.₅), and 1.05 (1.02, 1.08) per 2 μg/m³ particulate matter <10 μm in diameter (PM₁₀). Sensitivity analyses supported these findings. Across the main analysis and age-specific analysis, the least heterogeneity was seen for the BC estimates, some heterogeneity for the PM₂.₅ and PM₁₀ estimates and the most heterogeneity for the NO₂ and NOₓ estimates. The overall risk estimates from the meta-analyses showed statistically significant associations for BC, NO₂, PM₂.₅, and PM₁₀ exposures and risk of asthma development. Our findings support the hypothesis that children's exposure to TRAP contributes to their development of asthma. Future meta-analyses would benefit from greater standardization of study methods including exposure assessment harmonization, outcome harmonization, confounders' harmonization and the inclusion of all important confounders in individual studies. PROSPERO 2014: CRD42014015448. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle, Illinois, resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers with payments being based on cancer type, estimated TCE concentration in the well and the duration of exposure to TCE. The estimation of early arrival times and hence low likelihood events is critical in the determination of the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE impacted areas. Monte Carlo sampling was found to be inadequate for uncertainty analysis of this case study because of its inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
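
    The role of a robust (M-estimator) objective can be illustrated on a toy calibration problem with outlier-contaminated observations; the breakthrough-curve model, noise, and outliers below are invented and are unrelated to the three-dimensional TCE solution used in the study.

      # Hedged sketch: ordinary least squares versus a Cauchy-type robust loss (toy model).
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(5)
      t_obs = np.linspace(1.0, 30.0, 40)
      plateau_true, rate_true = 10.0, 0.15                  # illustrative values
      conc = plateau_true * (1.0 - np.exp(-rate_true * t_obs)) + rng.normal(0, 0.3, t_obs.size)
      conc[::9] += 5.0                                      # inject a few gross outliers

      def residuals(theta):
          plateau, rate = theta
          return plateau * (1.0 - np.exp(-rate * t_obs)) - conc

      fit_l2     = least_squares(residuals, x0=[5.0, 0.05])                            # L2
      fit_robust = least_squares(residuals, x0=[5.0, 0.05], loss="cauchy", f_scale=0.5)
      print("L2 estimate:    ", fit_l2.x)
      print("robust estimate:", fit_robust.x)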

  20. Medical image segmentation to estimate HER2 gene status in breast cancer

    NASA Astrophysics Data System (ADS)

    Palacios-Navarro, Guillermo; Acirón-Pomar, José Manuel; Vilchez-Sorribas, Enrique; Zambrano, Eddie Galarza

    2016-02-01

    This work deals with the estimation of HER2 gene status in breast tumour images treated with in situ hybridization techniques (ISH). We propose a simple algorithm to obtain the amplification factor of the HER2 gene. The results are very close to those obtained manually by specialists. The algorithm is based on colour image segmentation and has been included in a software application tool for breast tumour analysis. The developed tool focuses on the estimation of the seriousness of tumours, facilitating the work of pathologists and contributing to a better diagnosis.

  1. Area estimation using multiyear designs and partial crop identification

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr.

    1984-01-01

    Statistical procedures were developed for large area assessments using both satellite and conventional data. Crop acreages, other ground cover indices, and measures of change were the principal characteristics of interest. These characteristics can be estimated from samples collected, possibly from several sources, at varying times and with different levels of identification. Multiyear analysis techniques were extended to include partially identified samples; the best current year sampling design corresponding to a given sampling history was determined; weights reflecting the precision or confidence in each observation were identified and utilized; and the variation in estimates incorporating partially identified samples was quantified.

  2. Systematic review and meta-analysis estimating association of cysticercosis and neurocysticercosis with epilepsy.

    PubMed

    Debacq, Gabrielle; Moyano, Luz M; Garcia, Héctor H; Boumediene, Farid; Marin, Benoit; Ngoungou, Edgard B; Preux, Pierre-Marie

    2017-03-01

    We reviewed studies that analyzed cysticercosis (CC), neurocysticercosis (NCC) and epilepsy across Latin America, Asia and Sub-Saharan Africa, to estimate the odds ratio and etiologic fraction of epilepsy due to CC in tropical regions. We conducted a systematic review of the literature on cysticercosis and epilepsy in the tropics, collecting data from case-control and cross-sectional studies. Exposure criteria for CC included one or more of the following: serum ELISA or EITB positivity, presence of subcutaneous cysts (whether or not verified by histology), histology consistent with calcified cysts, and brain CT scan consistent with NCC. A common odds ratio was then estimated using meta-analysis. Thirty-seven studies from 23 countries were included (n = 24,646 subjects, 14,934 with epilepsy and 9,712 without epilepsy). Of these, 29 were case-control (14 matched). The association between CC and epilepsy was significant in 19 scientific articles. Odds ratios ranged from 0.2 to 25.4 (a posteriori power 4.5-100%) and the common odds ratio was 2.7 (95% CI 2.1-3.6, p < 0.001). Three subgroup analyses gave odds ratios of 2.2 (EITB-based studies), 3.2 (CT-based studies), and 1.9 (neurologist-confirmed epilepsy; door-to-door survey and at least one matched control per case). The etiologic fraction among the exposed group was estimated to be 63%. Despite differences in findings, this meta-analysis suggests that cysticercosis is a significant contributor to late-onset epilepsy in tropical regions around the world, and its impact may vary depending on transmission intensity.

  3. A systematic review and meta-analysis to determine the effect of sperm DNA damage on in vitro fertilization and intracytoplasmic sperm injection outcome

    PubMed Central

    Simon, Luke; Zini, Armand; Dyachenko, Alina; Ciampi, Antonio; Carrell, Douglas T

    2017-01-01

    Sperm DNA damage is prevalent among infertile men and is known to influence natural reproduction. However, the impact of sperm DNA damage on assisted reproduction outcomes remains controversial. Here, we conducted a meta-analysis of studies on sperm DNA damage (assessed by SCSA, TUNEL, SCD, or Comet assay) and clinical pregnancy after IVF and/or ICSI treatment from MEDLINE, EMBASE, and PUBMED database searches for this analysis. We identified 41 articles (with a total of 56 studies) including 16 IVF studies, 24 ICSI studies, and 16 mixed (IVF + ICSI) studies. These studies measured DNA damage (by one of four assays: 23 SCSA, 18 TUNEL, 8 SCD, and 7 Comet) and included a total of 8068 treatment cycles (3734 IVF, 2282 ICSI, and 2052 mixed IVF + ICSI). The combined OR of 1.68 (95% CI: 1.49–1.89; P < 0.0001) indicates that sperm DNA damage affects clinical pregnancy following IVF and/or ICSI treatment. In addition, the combined OR estimates of IVF (16 estimates, OR = 1.65; 95% CI: 1.34–2.04; P < 0.0001), ICSI (24 estimates, OR = 1.31; 95% CI: 1.08–1.59; P = 0.0068), and mixed IVF + ICSI studies (16 estimates, OR = 2.37; 95% CI: 1.89–2.97; P < 0.0001) were also statistically significant. There is sufficient evidence in the existing literature suggesting that sperm DNA damage has a negative effect on clinical pregnancy following IVF and/or ICSI treatment. PMID:27345006

  4. Cost-effectiveness simulation analysis of schizophrenia at the Instituto Mexicano del Seguro Social: Assessment of typical and atypical antipsychotics.

    PubMed

    Mould-Quevedo, Joaquín; Contreras-Hernández, Iris; Verduzco, Wáscar; Mejía-Aranguré, Juan Manuel; Garduño-Espinosa, Juan

    2009-07-01

    Estimation of the economic costs of schizophrenia is a fundamental tool for a better understanding of the magnitude of this health problem. The aim of this study was to estimate the costs and effectiveness of five antipsychotic treatments (ziprasidone, olanzapine, risperidone, haloperidol and clozapine), which are included in the national formulary at the Instituto Mexicano del Seguro Social, through a simulation model. The study was a complete cost-effectiveness economic evaluation based on direct medical costs over a 1-year time horizon, with effectiveness measured as the number of months free of psychotic symptoms. To estimate cost-effectiveness, a Markov model was constructed and a Monte Carlo simulation was carried out. The results of the Markov model showed that the antipsychotic with the highest number of months free of psychotic symptoms was ziprasidone (mean 9.2 months). The median annual cost for patients using ziprasidone included in the hypothetical cohort was 194,766.6 Mexican pesos (MXP) (95% CI, 26,515.6-363,017.6 MXP), with an exchange rate of 1 € = 17.36 MXP. The highest costs in the probabilistic analysis were estimated for clozapine treatment (260,236.9 MXP). Through a probabilistic analysis, ziprasidone showed the lowest costs and the highest number of months free of psychotic symptoms and was also the most cost-effective antipsychotic according to acceptability curves and net monetary benefits. Copyright © 2009 Sociedad Española de Psiquiatría and Sociedad Española de Psiquiatría Biológica. Published by Elsevier España. All rights reserved.
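
    A minimal probabilistic Markov cohort sketch of this kind of simulation, using two health states (symptom-free, symptomatic), monthly cycles over one year, and invented transition-probability and cost distributions (not the study's inputs for any antipsychotic), could look as follows.

      # Hedged sketch: probabilistic Markov cohort model, months free of symptoms and annual cost.
      import numpy as np

      rng = np.random.default_rng(6)
      n_sims, n_cycles, cohort = 5000, 12, 1000
      months_free, annual_cost = np.zeros(n_sims), np.zeros(n_sims)

      for s in range(n_sims):
          p_relapse   = rng.beta(2, 18)                  # symptom-free -> symptomatic, per month
          p_remission = rng.beta(6, 6)                   # symptomatic -> symptom-free, per month
          cost_free, cost_symptomatic = rng.gamma(4, 2000), rng.gamma(4, 8000)   # MXP/month (invented)
          state = np.zeros(cohort, dtype=bool)           # False = symptom-free
          for _ in range(n_cycles):
              u = rng.uniform(size=cohort)
              state = np.where(state, u > p_remission, u < p_relapse)
              months_free[s] += np.mean(~state)
              annual_cost[s] += np.mean(np.where(state, cost_symptomatic, cost_free))

      print("mean months free of symptoms:", round(months_free.mean(), 2))
      print("mean annual cost (MXP):      ", round(annual_cost.mean(), 1))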

  5. Traumatic Spinal Injury: Global Epidemiology and Worldwide Volume.

    PubMed

    Kumar, Ramesh; Lim, Jaims; Mekary, Rania A; Rattani, Abbas; Dewan, Michael C; Sharif, Salman Y; Osorio-Fonseca, Enrique; Park, Kee B

    2018-05-01

    Traumatic spinal injury (TSI) results from injury to bony, ligamentous, and/or neurologic structures of the spinal column and can cause significant morbidity and mortality. The global burden of TSI is poorly understood, so we performed a systematic review and meta-analysis to estimate the global volume of TSI. We performed a systematic review through PubMed, Embase, and Cochrane Databases on TSI studies reported from 2000 to 2016. Collected data were used to perform a meta-analysis to estimate the annual incidence of TSI across World Health Organization regions and World Bank income groups using random-effect models. Incorporating global population figures, the annual worldwide volume of TSI was estimated. A total of 102 studies were included in the systematic review and 19 studies in the meta-analysis. The overall global incidence of TSI was 10.5 cases per 100,000 persons, resulting in an estimated 768,473 [95% confidence interval, 597,213-939,732] new cases of TSI annually worldwide. The estimated incidence of TSI was 8.72 per 100,000 persons in low- and middle-income countries and 13.69 per 100,000 persons in high-income countries. Road traffic accidents, followed by falls, were the most common mechanisms of TSI worldwide. Overall, 48.8% of patients with TSI required surgery. TSI is a major source of morbidity and mortality throughout the world. Largely preventable mechanisms, including road traffic accidents and falls, are the main causes of TSI globally. Further investigation is needed to delineate local and regional TSI incidences and causes, especially in low- and middle-income countries. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Sex estimation in a modern American osteological sample using a discriminant function analysis from the calcaneus.

    PubMed

    DiMichele, Daniel L; Spradley, M Katherine

    2012-09-10

    Reliable methods for sex estimation during the development of a biological profile are important to the forensic community in instances when the common skeletal elements used to assess sex are absent or damaged. Sex estimation from the calcaneus has potentially significant importance for the forensic community. Specifically, measurements of the calcaneus provide an additional reliable method for sex estimation via discriminant function analysis based on a North American forensic population. Research on a modern American sample was chosen in order to develop up-to-date population-specific discriminant functions for sex estimation. The current study addresses this matter, building upon previous research, and introduces a new measurement, posterior circumference, that promises to improve the accuracy of sex estimation from this single, highly resistant bone in future cases involving partial skeletal remains. Data were collected from The William Bass Skeletal Collection, housed at The University of Tennessee. The sample includes 320 adult individuals born between the years 1900 and 1985, comprising 136 females and 184 males. Skeletons used for measurements were confined to those with fused diaphyses showing no signs of pathology or damage that may have altered measurements, and that also had accompanying records that included information on ancestry, age, and sex. Measurements collected and analyzed include maximum length, load-arm length, load-arm width, and posterior circumference. A discriminant function based on all four variables was computed in SAS 9.1.3. The discriminant function obtained an overall cross-validated classification rate of 86.69%. Females were classified correctly in 88.64% of the cases and males were correctly classified in 84.75% of the cases. Due to the increasing heterogeneity of current populations, further discussion on this topic will include the importance that the re-evaluation of past studies has on modern forensic populations. Due to secular and microevolutionary changes among populations, the near future must include additional methods being updated, and new methods being examined, both of which should cover a wide population spectrum. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. The study on biomass fraction estimate methodology of municipal solid waste incinerator in Korea.

    PubMed

    Kang, Seongmin; Kim, Seungjin; Lee, Jeongwoo; Yun, Hyunki; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

    In Korea, the amount of greenhouse gases released due to waste materials was 14,800,000 t CO2eq in 2012, up from 5,000,000 t CO2eq in 2010. This included the amount released due to incineration, which has gradually increased since 2010. Incineration was found to be the biggest contributor to greenhouse gases, with 7,400,000 t CO2eq released in 2012. Therefore, with regard to the greenhouse gas emissions trading initiated in 2015 and the writing of the national inventory report, it is important to increase the reliability of the measurements related to the incineration of waste materials. This research explored methods for estimating the biomass fraction at Korean MSW incinerator facilities and compared the biomass fractions obtained with the different biomass fraction estimation methods. The biomass fraction was estimated by the method using default values of fossil carbon fraction suggested by IPCC, the method using the solid waste composition, and the method using incinerator flue gas. The highest biomass fractions in Korean municipal solid waste incinerator facilities were estimated by the IPCC Default method, followed by the MSW analysis method and the Flue gas analysis method. Therefore, the difference in the biomass fraction estimate was the greatest between the IPCC Default and the Flue gas analysis methods. The difference between the MSW analysis and the flue gas analysis methods was smaller than the difference with the IPCC Default method. This suggested that the use of the IPCC default method cannot reflect the characteristics of Korean waste incinerator facilities and Korean MSW. Incineration is one of the most effective methods for disposal of municipal solid waste (MSW). This paper investigates the applicability of using biomass content to estimate the amount of CO2 released, and compares the biomass contents determined by different methods in order to establish a method for estimating biomass in the MSW incinerator facilities of Korea. After analyzing the biomass contents of the collected solid waste samples and the flue gas samples, the results were compared with the Intergovernmental Panel on Climate Change (IPCC) method, and it appears better to use the flue gas analysis method than the IPCC method to calculate the biomass fraction. These results are valuable for the design and operation of new incineration power plants, especially for the estimation of greenhouse gas emissions.

  8. Age estimation by pulp-to-tooth area ratio using cone-beam computed tomography: A preliminary analysis

    PubMed Central

    Rai, Arpita; Acharya, Ashith B.; Naikmasur, Venkatesh G.

    2016-01-01

    Background: Age estimation of living or deceased individuals is an important aspect of forensic sciences. Conventionally, pulp-to-tooth area ratio (PTR) measured from periapical radiographs have been utilized as a nondestructive method of age estimation. Cone-beam computed tomography (CBCT) is a new method to acquire three-dimensional images of the teeth in living individuals. Aims: The present study investigated age estimation based on PTR of the maxillary canines measured in three planes obtained from CBCT image data. Settings and Design: Sixty subjects aged 20–85 years were included in the study. Materials and Methods: For each tooth, mid-sagittal, mid-coronal, and three axial sections—cementoenamel junction (CEJ), one-fourth root level from CEJ, and mid-root—were assessed. PTR was calculated using AutoCAD software after outlining the pulp and tooth. Statistical Analysis Used: All statistical analyses were performed using an SPSS 17.0 software program. Results and Conclusions: Linear regression analysis showed that only PTR in axial plane at CEJ had significant age correlation (r = 0.32; P < 0.05). This is probably because of clearer demarcation of pulp and tooth outline at this level. PMID:28123269

  9. Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model

    PubMed Central

    Hopkins, John B.; Ferguson, Jake M.

    2012-01-01

    Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals—each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs. PMID:22235246

  10. Validity of Bioelectrical Impedance Analysis to Estimation Fat-Free Mass in the Army Cadets.

    PubMed

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published BIA predictive equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets aged 17-24 years were included. The study used eight published predictive BIA equations and a specific equation for FFM estimation, with dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. Predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman. Predictive BIA equations explained 68% to 88% of FFM variance. Specific BIA equations showed no significant differences in FFM compared to DXA values. Published BIA predictive equations showed poor accuracy in this sample. The specific BIA equations developed in this study demonstrated validity for this sample, although they should be used with caution in samples with a large range of FFM.
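
    The agreement analysis referred to above can be sketched as a paired comparison with Bland-Altman limits of agreement; the FFM values below are simulated placeholders, not the cadet data.

      # Hedged sketch: paired t-test and Bland-Altman limits of agreement on simulated FFM values.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      ffm_dxa = rng.normal(60.0, 7.0, 396)                     # kg, illustrative
      ffm_bia = ffm_dxa + rng.normal(1.5, 2.5, 396)            # a BIA equation with a small bias

      diff = ffm_bia - ffm_dxa
      bias, sd = diff.mean(), diff.std(ddof=1)
      loa = (bias - 1.96 * sd, bias + 1.96 * sd)
      t_stat, p_value = stats.ttest_rel(ffm_bia, ffm_dxa)

      print(f"mean bias {bias:.2f} kg, 95% limits of agreement {loa[0]:.2f} to {loa[1]:.2f} kg")
      print(f"paired t-test p-value: {p_value:.3g}")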

  11. The accuracy of less: Natural bounds explain why quantity decreases are estimated more accurately than quantity increases.

    PubMed

    Chandon, Pierre; Ordabayeva, Nailya

    2017-02-01

    Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    PubMed

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.

  13. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km²) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km² (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km² for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.

  14. NASA's Carbon Monitoring System Flux-Pilot Project: A Multi-Component Analysis System for Carbon-Cycle Research and Monitoring

    NASA Technical Reports Server (NTRS)

    Pawson, S.; Gunson, M.; Potter, C.; Jucks, K.

    2012-01-01

    The importance of greenhouse gas increases for climate motivates NASA's observing strategy for CO2 from space, including the forthcoming Orbiting Carbon Observatory (OCO-2) mission. Carbon cycle monitoring, including attribution of atmospheric concentrations to regional emissions and uptake, requires a robust modeling and analysis infrastructure to optimally extract information from the observations. NASA's Carbon-Monitoring System Flux-Pilot Project (FPP) is a prototype for such analysis, combining a set of unique tools to facilitate analysis of atmospheric CO2 along with fluxes between the atmosphere and the terrestrial biosphere or ocean. NASA's analysis system is unique, in that it combines information and expertise from the land, oceanic, and atmospheric branches of the carbon cycle and includes some estimates of uncertainty. Numerous existing space-based missions provide information of relevance to the carbon cycle. This study describes the components of the FPP framework, assessing the realism of computed fluxes, thus providing the basis for research and monitoring applications. Fluxes are computed using data-constrained terrestrial biosphere models and physical ocean models, driven by atmospheric observations and assimilating ocean-color information. Use of two estimates provides a measure of uncertainty in the fluxes. Along with inventories of other emissions, these data-derived fluxes are used in transport models to assess their consistency with atmospheric CO2 observations. Closure is achieved by using a four-dimensional data assimilation (inverse) approach that adjusts the terrestrial biosphere fluxes to make them consistent with the atmospheric CO2 observations. Results will be shown, illustrating the year-to-year variations in land biospheric and oceanic fluxes computed in the FPP. The signals of these surface-flux variations on atmospheric CO2 will be isolated using forward modeling tools, which also incorporate estimates of transport error. The results will be discussed in the context of interannual variability of observed atmospheric CO2 distributions.

  15. Galaxy two-point covariance matrix estimation for next generation surveys

    NASA Astrophysics Data System (ADS)

    Howlett, Cullan; Percival, Will J.

    2017-12-01

    We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.

  16. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  17. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  18. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  19. Network meta-analysis, electrical networks and graph theory.

    PubMed

    Rücker, Gerta

    2012-12-01

    Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
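
    The central computation can be sketched directly: observed pairwise effects on the edges of a small network are projected onto consistent effects via the Moore-Penrose pseudoinverse of the weighted graph Laplacian, and comparison variances are read off as effective resistances. The three-treatment effects and variances below are invented for illustration.

      # Hedged sketch: consistent network estimates via the Laplacian pseudoinverse (toy data).
      import numpy as np

      treatments = ["A", "B", "C"]
      edges = [(0, 1), (0, 2), (1, 2)]                  # A-B, A-C, B-C comparisons
      y = np.array([0.50, 0.80, 0.20])                  # observed effects (e.g. log odds ratios)
      v = np.array([0.04, 0.06, 0.05])                  # their variances

      B = np.zeros((len(edges), len(treatments)))       # edge-vertex incidence matrix
      for e, (i, j) in enumerate(edges):
          B[e, i], B[e, j] = 1.0, -1.0
      W = np.diag(1.0 / v)                              # weights = inverse variances

      L = B.T @ W @ B                                   # weighted graph Laplacian
      L_pinv = np.linalg.pinv(L)                        # Moore-Penrose pseudoinverse

      y_consistent = B @ L_pinv @ B.T @ W @ y           # consistent effects on the edges
      print("consistent network estimates per edge:", np.round(y_consistent, 3))

      i, j = 0, 1                                       # variance of A vs B = effective resistance
      print("variance of the A vs B estimate:",
            round(L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j], 4))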

  20. ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning

    NASA Astrophysics Data System (ADS)

    Sadeh, I.; Abdalla, F. B.; Lahav, O.

    2016-10-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.
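
    ANNz2 itself is available at the URL above; purely to illustrate the boosted decision/regression tree ingredient, the hypothetical sketch below trains a scikit-learn gradient-boosted regressor on synthetic magnitudes (the data, feature construction and settings are assumptions, not ANNz2's API or the survey samples).

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    z = rng.uniform(0.0, 1.5, n)                                  # "true" redshifts
    mags = np.column_stack([22 + 2 * z + rng.normal(0, 0.3, n) for _ in range(5)])

    X_train, X_test, z_train, z_test = train_test_split(mags, z, random_state=0)
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
    model.fit(X_train, z_train)
    rms = np.sqrt(np.mean((model.predict(X_test) - z_test) ** 2))
    print(f"rms photo-z error: {rms:.3f}")
    ```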

  1. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
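
    For readers unfamiliar with discrete hazard models, the sketch below shows the standard person-period construction only: each subject contributes one record per assessment interval at risk and the discrete hazard is fit by logistic regression. The simulated data and treatment effect are assumptions, and this is not the robust marginal estimator proposed in the manuscript.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, K = 400, 4                                 # subjects, assessment visits
    trt = rng.integers(0, 2, n)
    logit = lambda p: np.log(p / (1 - p))
    expit = lambda x: 1 / (1 + np.exp(-x))
    base = np.array([0.10, 0.15, 0.20, 0.25])     # assumed baseline discrete hazards

    rows = []
    for i in range(n):
        for t in range(K):                        # person-period expansion
            h = expit(logit(base[t]) - 0.7 * trt[i])   # assumed treatment effect
            y = rng.random() < h
            rows.append({"interval": t + 1, "trt": trt[i], "y": int(y)})
            if y:
                break                             # failure observed; no longer at risk
    pp = pd.DataFrame(rows)

    X = pd.get_dummies(pp["interval"], prefix="int", drop_first=True).astype(float)
    X["trt"] = pp["trt"].astype(float)
    X = sm.add_constant(X)
    fit = sm.GLM(pp["y"], X, family=sm.families.Binomial()).fit()
    print(fit.params)   # 'trt' recovers about -0.7 on the log-odds (discrete hazard) scale
    ```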

  2. Evaluation of the information content of long-term wastewater characteristics data in relation to activated sludge model parameters.

    PubMed

    Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash

    2017-03-01

    A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.

  3. A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes

    PubMed Central

    2013-01-01

    Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680

  4. Applications of stable isotope analysis in mammalian ecology.

    PubMed

    Walter, W David; Kurle, Carolyn M; Hopkins, John B

    2014-01-01

    In this editorial, we provide a brief introduction and summarize the 10 research articles included in this Special Issue on Applications of stable isotope analysis in mammalian ecology. The first three articles report correction and discrimination factors that can be used to more accurately estimate the diets of extinct and extant mammals using stable isotope analysis. The remaining seven applied research articles use stable isotope analysis to address a variety of wildlife conservation and management questions from the oceans to the mountains.

  5. Environmental Survey of the B-3 and Ford’s Farm Ranges,

    DTIC Science & Technology

    1983-08-01

    reported have an estimated analytical error of ±35% unless noted otherwise. Isotopic Analysis: The isotopic uranium analysis procedure used by UST... sulfate buffer and electrodeposited on a stainless steel disc, and isotopes of uranium (234U, 235U, and 238U) were determined by pulse height analysis... measurements and some environmental sampling. Several special studies were also conducted, including analyses of the isotopic composition of uranium in

  6. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  7. Comparison of yellow poplar growth models on the basis of derived growth analysis variables

    Treesearch

    Keith F. Jensen; Daniel A. Yaussy

    1986-01-01

    Quadratic and cubic polynomials, and Gompertz and Richards asymptotic models were fitted to yellow poplar growth data. These data included height, leaf area, leaf weight and new shoot height for 23 weeks. Seven growth analysis variables were estimated from each function. The Gompertz and Richards models fitted the data best and provided the most accurate derived...
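
    As an illustration of fitting one of the asymptotic models named above, the sketch below fits a Gompertz curve to hypothetical weekly height data with scipy and derives one growth analysis variable (the maximum absolute growth rate, A*k/e at the inflection point); the parameter values and noise level are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, k, t0):
        """Gompertz growth curve: asymptote A, rate k, inflection time t0."""
        return A * np.exp(-np.exp(-k * (t - t0)))

    weeks = np.arange(1, 24)                               # 23 weeks of observations
    rng = np.random.default_rng(0)
    height = gompertz(weeks, 120.0, 0.35, 8.0) + rng.normal(0, 2.0, weeks.size)

    (A, k, t0), _ = curve_fit(gompertz, weeks, height, p0=[100.0, 0.3, 7.0])
    print(f"asymptote = {A:.1f}, max growth rate = {A * k / np.e:.2f} per week")
    ```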

  8. Visual Perception and Visual-Motor Integration in Very Preterm and/or Very Low Birth Weight Children: A Meta-Analysis

    ERIC Educational Resources Information Center

    Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.

    2012-01-01

    A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…

  9. Application of cause-and-effect analysis to potentiometric titration.

    PubMed

    Kufelnicki, A; Lis, S; Meinrath, G

    2005-08-01

    A first attempt has been made to interpret physicochemical data from potentiometric titration analysis in accordance with the complete measurement-uncertainty budget approach (bottom-up) of ISO and Eurachem. A cause-and-effect diagram is established and discussed. Titration data for arsenazo III are used as a basis for this discussion. The commercial software Superquad is used and applied within a computer-intensive resampling framework. The cause-and-effect diagram is applied to evaluation of seven protonation constants of arsenazo III in the pH range 2-10.7. The data interpretation is based on empirical probability distributions and their analysis by second-order correct confidence estimates. The evaluated data are applied in the calculation of a speciation diagram including uncertainty estimates using the probabilistic speciation software Ljungskile.

  10. The emerging trend of non-operative treatment in paediatric type I open forearm fractures.

    PubMed

    Zhang, H; Fanelli, M; Adams, C; Graham, J; Seeley, M

    2017-08-01

    Open fractures are considered an orthopaedic emergency and are generally an indication for operative debridement. Recent studies have questioned this approach for the management of Gustilo-Anderson Type I open fractures in the paediatric population. This meta-analysis studies the non-operative management of Type I open paediatric forearm fractures. An Ovid MEDLINE and PubMed database literature search was performed for studies that involved a quantified number of Gustilo-Anderson Type I open forearm fractures in the paediatric population, which were treated without operative intervention. A fixed-effect meta-analysis, weighting each study by the number of patients, was performed to obtain a pooled estimate of infection risk with a 95% confidence interval (CI). The search results yielded five studies that were eligible for inclusion. No included patients had operative debridement and all were treated with antibiotics. The number of patients in each study ranged from 3 to 45, with a total of 127 paediatric patients in the meta-analysis. The infection rate was 0% for all patients included. The meta-analysis estimated a pooled infection risk of 0% (95% CI 0 to 2.9). The five included studies had a total of 127 patients with no cases of infection after non-operative management of Type I open paediatric forearm fractures. The infection rate of Type I fractures among operatively managed patients is 1.9%. The trend in the literature towards non-operative treatment of paediatric Type I open fractures holds true in this meta-analysis.
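
    The reported 0 to 2.9% interval for 0 events in 127 patients is consistent with an exact (Clopper-Pearson) binomial interval; the short sketch below reproduces that calculation, with the caveat that the exact method is an assumption about how the published interval was obtained.

    ```python
    from scipy.stats import beta

    events, n = 0, 127
    lower = 0.0 if events == 0 else beta.ppf(0.025, events, n - events + 1)
    upper = beta.ppf(0.975, events + 1, n - events)      # about 0.029 for 0/127
    print(f"pooled infection risk 0.0%, 95% CI [{lower:.1%}, {upper:.1%}]")
    ```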

  11. Quotation accuracy in medical journal articles-a systematic review and meta-analysis.

    PubMed

    Jergas, Hannah; Baethge, Christopher

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose-quotation errors-may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.

  12. Quotation accuracy in medical journal articles—a systematic review and meta-analysis

    PubMed Central

    Jergas, Hannah

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420

  13. Crossover effect of spouse weekly working hours on estimated 10-year risk of cardiovascular disease.

    PubMed

    Kang, Mo-Yeol; Hong, Yun-Chul

    2017-01-01

    To investigate the association between spouse weekly working hours (SWWH) and the estimated 10-year risk of cardiovascular disease (CVD). This cross-sectional study was based on the data obtained from the Korean National Health and Nutrition Examination Survey 2007-2012. Data of 16,917 participants (8,330 husbands, 8,587 wives) were used for this analysis. The participants' clinical data were collected to estimate the 10-year risk of CVD, as well as weekly working hours. Multiple logistic regression was conducted to investigate the association between SWWH and the estimated 10-year risk of CVD. We also performed a stratified analysis according to each participant's and their spouse's employment status. Compared to those whose spouses worked ≤30 hours per week, the estimated 10-year risk of CVD was significantly higher as SWWH increased among those whose spouses worked >30 hours per week. After adjusting for covariates, the odds ratio for high CVD risk was found to increase as SWWH increased, up to 2.52 among husbands and 2.43 among wives. We also found that the association between SWWH and the estimated 10-year risk of CVD varied according to the employment status. Analysis of each component included in the CVD appraisal model showed that SWWH had a close relationship with diabetes in men and smoking habits in women. A spouse's long working hours are associated with an individual's future risk of CVD, especially among husbands.

  14. Estimation of the water retention curve from the soil hydraulic conductivity and sorptivity in an upward infiltration process

    NASA Astrophysics Data System (ADS)

    Moret-Fernández, David; Angulo, Marta; Latorre, Borja; González-Cebollada, César; López, María Victoria

    2017-04-01

    Determination of the saturated hydraulic conductivity, Ks, and the α and n parameters of the van Genuchten (1980) water retention curve, θ(h), is fundamental to fully understand and predict soil water distribution. This work presents a new procedure to estimate the soil hydraulic properties from the inverse analysis of a single cumulative upward infiltration curve followed by an overpressure step at the end of the wetting process. Firstly, Ks is calculated by Darcy's law from the overpressure step. The soil sorptivity (S) is then estimated using the Haverkamp et al. (1994) equation. Next, a relationship between α and n, f(α,n), is calculated from the estimated S and Ks. The α and n values are finally obtained by the inverse analysis of the experimental data after applying the f(α,n) relationship to the HYDRUS-1D model. The method was validated on theoretical synthetic curves for three different soils (sand, loam and clay), and subsequently tested on experimental sieved soils (sand, loam, clay loam and clay) of known hydraulic properties. A robust relationship was observed between the theoretical α and n values (R2 > 0.99) of the different synthetic soils and those estimated from inverse analysis of the upward infiltration curve. Consistent results were also obtained for the experimental soils (R2 > 0.85). These results demonstrated that this technique allowed accurate estimates of the soil hydraulic properties for a wide range of textures, including clay soils.
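
    A minimal sketch of the first step only (Ks from Darcy's law applied to the steady overpressure stage); the column geometry, head difference and flow reading below are hypothetical, and the sorptivity and HYDRUS-1D inverse steps are not reproduced.

    ```python
    import numpy as np

    area = np.pi * (0.05 / 2) ** 2    # sample cross-section (m^2), assumed 5 cm column
    length = 0.05                     # sample length (m)
    dH = 0.10                         # imposed head difference during the overpressure step (m)
    Q = 2.0e-8                        # measured steady inflow (m^3/s)

    Ks = Q * length / (area * dH)     # Darcy's law: Q = Ks * A * dH / L
    print(f"Ks = {Ks:.2e} m/s")
    ```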

  15. Genetic Parameters and the Impact of Off-Types for Theobroma cacao L. in a Breeding Program in Brazil

    PubMed Central

    DuVal, Ashley; Gezan, Salvador A.; Mustiga, Guiliana; Stack, Conrad; Marelli, Jean-Philippe; Chaparro, José; Livingstone, Donald; Royaert, Stefan; Motamayor, Juan C.

    2017-01-01

    Breeding programs of cacao (Theobroma cacao L.) trees share the many challenges of breeding long-living perennial crops, and genetic progress is further constrained by both the limited understanding of the inheritance of complex traits and the prevalence of technical issues, such as mislabeled individuals (off-types). To better understand the genetic architecture of cacao, in this study, 13 years of phenotypic data collected from four progeny trials in Bahia, Brazil were analyzed jointly in a multisite analysis. Three separate analyses (multisite, single site with and without off-types) were performed to estimate genetic parameters from statistical models fitted on nine important agronomic traits (yield, seed index, pod index, % healthy pods, % pods infected with witches broom, % of pods other loss, vegetative brooms, diameter, and tree height). Genetic parameters were estimated along with variance components and heritabilities from the multisite analysis, and a trial was fingerprinted with low-density SNP markers to determine the impact of off-types on estimations. Heritabilities ranged from 0.37 to 0.64 for yield and its components and from 0.03 to 0.16 for disease resistance traits. A weighted index was used to make selections for clonal evaluation, and breeding values estimated for the parental selection and estimation of genetic gain. The impact of off-types to breeding progress in cacao was assessed for the first time. Even when present at <5% of the total population, off-types altered selections by 48%, and impacted heritability estimations for all nine of the traits analyzed, including a 41% difference in estimated heritability for yield. These results show that in a mixed model analysis, even a low level of pedigree error can significantly alter estimations of genetic parameters and selections in a breeding program. PMID:29250097

  16. SleepMinder: an innovative contact-free device for the estimation of the apnoea-hypopnoea index.

    PubMed

    Zaffaroni, Alberto; de Chazal, Philip; Heneghan, Conor; Boyle, Patricia; Ronayne, Patricia; McNicholas, Walter T

    2009-01-01

    We describe an innovative sensor technology (SleepMinder) for contact-less and convenient measurement of sleep and breathing in the home. The system is based on a novel non-contact biomotion sensor and proprietary automated analysis software. The biomotion sensor uses an ultra low-power radio-frequency transceiver to sense the movement and respiration of a subject. Proprietary software performs a variety of signal analysis tasks including respiration analysis, sleep quality measurement and sleep apnea assessment. This paper measures the performance of SleepMinder as a device for the monitoring of sleep-disordered breathing (SDB) and the provision of an estimate of the apnoea-hypopnoea index (AHI). The SleepMinder was tested against expert manually scored PSG data of patients gathered in an accredited sleep laboratory. The comparison of SleepMinder to this gold standard was performed across overnight recordings of 129 subjects with suspected SDB. The dataset had a wide demographic profile with the age ranging between 20 and 81 years. Body weight included subjects with normal weight through to the very obese (Body Mass Index: 21-44 kg/m²). SDB severity ranged from subjects free of SDB to those with severe SDB (AHI: 0.8-96 events/hour). SleepMinder's AHI estimation has a correlation of 91% and can detect clinically significant SDB (AHI>15) with a sensitivity of 89% and a specificity of 92%.
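
    For reference, the validation metrics quoted above (correlation of the AHI estimate and sensitivity/specificity at an AHI threshold of 15) can be computed from paired device and PSG values as in the sketch below; the simulated AHI values are placeholders, not the study data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ahi_psg = rng.gamma(shape=1.5, scale=12.0, size=129)       # reference (PSG) AHI
    ahi_dev = ahi_psg + rng.normal(0, 5.0, ahi_psg.size)       # device estimate

    r = np.corrcoef(ahi_psg, ahi_dev)[0, 1]
    truth, pred = ahi_psg > 15, ahi_dev > 15
    sensitivity = np.mean(pred[truth])       # P(device positive | PSG positive)
    specificity = np.mean(~pred[~truth])     # P(device negative | PSG negative)
    print(f"r = {r:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
    ```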

  17. Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.

    The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. Additionally, PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS, and also as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort (Elliott et al. 2004b). The areas most changed since the publication of that previous document are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.

  18. NASA/BLM APT, phase 2. Volume 2: Technology demonstration. [Arizona

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Techniques described include: (1) steps in the preprocessing of LANDSAT data; (2) the training of a classifier; (3) maximum likelihood classification and precision; (4) geometric correction; (5) class description; (6) digitizing; (7) digital terrain data; (8) an overview of sample design; (9) allocation and selection of primary sample units; (10) interpretation of secondary sample units; (11) data collection ground plots; (12) data reductions; (13) analysis for productivity estimation and map verification; (14) cost analysis; and (15) LANDSAT digital products. The evaluation of the pre-inventory planning for P.J. is included.
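
    As a toy version of step (3), maximum likelihood classification, the sketch below assigns a multispectral pixel to the class with the highest Gaussian log-likelihood computed from class training statistics; the band values and class names are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    # training pixels for two hypothetical classes in 4 spectral bands
    train = {
        "forest": rng.normal([40, 60, 30, 90], 5, size=(200, 4)),
        "range":  rng.normal([70, 80, 60, 50], 5, size=(200, 4)),
    }
    stats = {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in train.items()}

    def classify(pixel):
        # assign the class with the highest Gaussian log-likelihood
        return max(stats, key=lambda c: multivariate_normal.logpdf(pixel, *stats[c]))

    print(classify(np.array([42, 58, 31, 88])))   # -> "forest"
    ```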

  19. SPS market analysis

    NASA Astrophysics Data System (ADS)

    Goff, H. C.

    1980-05-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  20. Estimating the Incidence of Symptomatic Rotavirus Infections: A Systematic Review and Meta-Analysis

    PubMed Central

    Bilcke, Joke; Van Damme, Pierre; Van Ranst, Marc; Hens, Niel; Aerts, Marc; Beutels, Philippe

    2009-01-01

    Background We conducted for the first time a systematic review, including a meta-analysis, of the incidence of symptomatic rotavirus (RV) infections, because (1) it was shown to be an influential factor in estimating the cost-effectiveness of RV vaccination, (2) multiple community-based studies assessed it prospectively, (3) previous studies indicated, inconclusively, it might be similar around the world. Methodology Pubmed (which includes Medline) was searched for surveys prospectively assessing symptomatic (diarrheal) episodes in a general population and situation, which also reported on the number of episodes tested RV+ and on the persons and the time period observed. A bias assessment tool was developed and used according to Cochrane guidelines by 4 researchers with different backgrounds. Heterogeneity was explored graphically and by comparing fits of study-homogeneous ‘fixed effects’ and study-heterogeneous ‘random effects’ models. Data were synthesized using these models. Sensitivity analysis for uncertainty regarding data abstraction, bias assessment and included studies was performed. Principal Findings Variability between the incidences obtained from 20 studies is unlikely to be due to study groups living in different environments (tropical versus temperate climate, slums versus middle-class suburban populations), nor due to the year the study was conducted (from 1967 to 2003). A random effects model was used to incorporate unexplained heterogeneity and resulted in a global incidence estimate of 0.31 [0.19; 0.50] symptomatic RV infections per person-year of observation for children below 2 years of age, and of 0.24 [0.17; 0.34] when excluding the extremely high value of 0.84 reported for Mayan Indians in Guatemala. Apart from the inclusion/exclusion of the latter study, results were robust. Conclusions/Significance Rather than assumptions based on an ad-hoc selection of one or two studies, these pooled estimates (together with the measure for variability between populations) should be used as an input in future cost-effectiveness analyses of RV vaccination. PMID:19557133
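
    A sketch of random-effects pooling of incidence rates in the spirit described above, using the DerSimonian-Laird estimator on the log scale; the study counts and person-years are illustrative, not the 20 reviewed studies, and DerSimonian-Laird is only one common choice of random-effects method.

    ```python
    import numpy as np

    cases = np.array([30, 12, 55, 8, 21])            # symptomatic RV episodes per study
    person_years = np.array([90, 50, 160, 20, 70])   # observation time (children < 2 y)

    y = np.log(cases / person_years)                 # log incidence rate per study
    v = 1.0 / cases                                  # approximate variance of a log rate
    w = 1.0 / v

    mu_fe = np.sum(w * y) / np.sum(w)                # fixed-effect (inverse-variance) pool
    Q = np.sum(w * (y - mu_fe) ** 2)                 # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL tau^2

    w_re = 1.0 / (v + tau2)                          # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = np.exp(mu_re - 1.96 * se), np.exp(mu_re + 1.96 * se)
    print(f"pooled incidence {np.exp(mu_re):.2f} [{lo:.2f}, {hi:.2f}] per person-year")
    ```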
