Sample records for estimate assumptions document

  1. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  2. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  3. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  4. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  6. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  7. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  9. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  10. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  11. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  12. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  13. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  14. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  15. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook 2016 (AEO2016). The report catalogues and describes the module assumptions, computational methodology, parameter estimation techniques, and mainframe source code.

  16. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
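
    As a hedged illustration of the kind of sub-model chain such a calculator documents, the sketch below strings together an irradiance scaling, a temperature correction, and an aggregate loss factor; the parameter names and default values are assumptions for illustration, not the documented PVWatts sub-models or defaults.

```python
# Minimal sketch of a PVWatts-style hourly AC energy estimate (illustrative only;
# parameter names and defaults are assumptions, not PVWatts' documented values).

def pv_hourly_energy_kwh(poa_irradiance_w_m2, cell_temp_c,
                         dc_rating_kw=4.0, temp_coeff_per_c=-0.005,
                         ref_cell_temp_c=25.0, system_derate=0.86,
                         inverter_efficiency=0.96):
    """Return one hour of AC energy (kWh) from plane-of-array irradiance."""
    # DC output scales with irradiance relative to 1000 W/m2 reference conditions.
    dc_kw = dc_rating_kw * (poa_irradiance_w_m2 / 1000.0)
    # Temperature correction relative to the reference cell temperature.
    dc_kw *= 1.0 + temp_coeff_per_c * (cell_temp_c - ref_cell_temp_c)
    # Aggregate system losses and inverter conversion.
    ac_kw = dc_kw * system_derate * inverter_efficiency
    return max(ac_kw, 0.0)  # one hour at ac_kw kilowatts -> kWh

# Example: a bright, warm hour.
print(round(pv_hourly_energy_kwh(850.0, 45.0), 3))
```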

  17. Black-White Summer Learning Gaps: Interpreting the Variability of Estimates across Representations

    ERIC Educational Resources Information Center

    Quinn, David M.

    2015-01-01

    The estimation of racial test score gap trends plays an important role in monitoring educational equality. Documenting gap trends is complex, however, and estimates can differ depending on the metric, modeling strategy, and psychometric assumptions. The sensitivity of summer learning gap estimates to these factors has been under-examined. Using…

  18. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  19. Methodology for estimating helicopter performance and weights using limited data

    NASA Technical Reports Server (NTRS)

    Baserga, Claudio; Ingalls, Charles; Lee, Henry; Peyran, Richard

    1990-01-01

    Methodology is developed and described for estimating the flight performance and weights of a helicopter for which limited data are available. The methodology is based on assumptions which couple knowledge of the technology of the helicopter under study with detailed data from well documented helicopters thought to be of similar technology. The approach, analysis assumptions, technology modeling, and the use of reference helicopter data are discussed. Application of the methodology is illustrated with an investigation of the Agusta A129 Mangusta.

  20. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  1. Coal Market Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 2014 (AEO2014). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS).

  2. Freight Transportation Energy Use : Volume 3. Freight Network and Operations Database.

    DOT National Transportation Integrated Search

    1979-07-01

    The data sources, procedures, and assumptions used to generate the TSC national freight network and operations database are documented. National rail, highway, waterway, and pipeline networks are presented, and estimates of facility capacity, travel ...

  3. Model documentation Renewable Fuels Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-01-01

    This report documents the objectives, analytical approach and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1996 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described.

  4. Older driver highway design handbook : recommendations and guidelines

    DOT National Transportation Integrated Search

    1996-06-01

  5. Composition of a dewarped and enhanced document image from two view images.

    PubMed

    Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik

    2009-07-01

    In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike the conventional works that require special equipment or assumptions on the contents of books or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus-based curve fitting with the cylindrical surface model. Because we do not need any assumption on the contents of books, the proposed method can be applied not only to optical character recognition (OCR), but also to the high-quality digitization of pictures in documents. In addition to the dewarping for a structurally better image, image mosaic is also performed for further improving the visual quality. By finding better parts of images (with less out of focus blur and/or without specular reflections) from either of views, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book or document images show that the proposed algorithm robustly works and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.

  6. Standard cost elements for technology programs

    NASA Technical Reports Server (NTRS)

    Christensen, Carisa B.; Wagenfuehrer, Carl

    1992-01-01

    The suitable structure for an effective and accurate cost estimate for general purposes is discussed in the context of a NASA technology program. Cost elements are defined for research, management, and facility-construction portions of technology programs. Attention is given to the mechanisms for ensuring the viability of spending programs, and the need for program managers to effect timely fund disbursement is established. Formal, structured, and intuitive techniques are discussed for cost-estimate development, and cost-estimate defensibility can be improved with increased documentation. NASA policies for cash management are examined to demonstrate the importance of the ability to obligate funds and the ability to cost contracted funds. The NASA approach to consistent cost justification is set forth with a list of standard cost-element definitions. The cost elements reflect the three primary concerns of cost estimates: the identification of major assumptions, the specification of secondary analytic assumptions, and the status of program factors.

  7. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pb(ex) measurements.

    PubMed

    Porto, Paolo; Walling, Des E

    2012-10-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.
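
    For orientation, the simplest conversion commonly quoted alongside the mass balance models is the proportional model, which assumes the soil loss rate is proportional to the fractional reduction in (137)Cs inventory relative to a reference site. The sketch below implements that baseline with illustrative parameter values; it is not one of the models validated in the study.

```python
# Proportional model: soil loss proportional to the fractional reduction in
# 137Cs inventory relative to an undisturbed reference site (illustrative
# baseline only; parameter values below are assumptions).

def proportional_soil_loss_t_ha_yr(inventory_bq_m2, reference_bq_m2,
                                   plough_depth_m=0.25,
                                   bulk_density_kg_m3=1300.0,
                                   years_since_fallout=50.0):
    """Return an approximate soil loss rate in t ha^-1 yr^-1."""
    reduction = (reference_bq_m2 - inventory_bq_m2) / reference_bq_m2
    plough_layer_mass_kg_m2 = plough_depth_m * bulk_density_kg_m3
    return 10.0 * plough_layer_mass_kg_m2 * reduction / years_since_fallout  # 1 kg/m2 = 10 t/ha

# Example: a cultivated point holding 75% of the reference inventory.
print(round(proportional_soil_loss_t_ha_yr(1800.0, 2400.0), 1))
```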

  8. Model documentation report: Residential sector demand module of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.

  9. Intelligence/Electronic Warfare (IEW) direction-finding and fix estimation analysis report. Volume 2: Trailblazer

    NASA Technical Reports Server (NTRS)

    Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce

    1985-01-01

    An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.

  10. Annual Vehicle Miles of Travel and Related Data : Procedures Used to Derive the Data Elements of the 1994 Table VM-1

    DOT National Transportation Integrated Search

    1996-06-01

    The purpose of this report is to document the preparation of the 1994 Table VM-1, including data sources, assumptions, and estimating procedures. Table VM-1 describes vehicle distance traveled in miles, by highway category and vehicle type. VM-1 depi...

  11. Enhancing Retrieval with Hyperlinks: A General Model Based on Propositional Argumentation Systems.

    ERIC Educational Resources Information Center

    Picard, Justin; Savoy, Jacques

    2003-01-01

    Discusses the use of hyperlinks for improving information retrieval on the World Wide Web and proposes a general model for using hyperlinks based on Probabilistic Argumentation Systems. Topics include propositional logic, knowledge, and uncertainty; assumptions; using hyperlinks to modify document score and rank; and estimating the popularity of a…

  12. An estimating equation approach to dimension reduction for longitudinal data

    PubMed Central

    Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li

    2016-01-01

    Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956

  13. Producing good font attribute determination using error-prone information

    NASA Astrophysics Data System (ADS)

    Cooperman, Robert

    1997-04-01

    A method is presented to provide estimates of font attributes in an OCR system, using detectors of individual attributes that are error-prone. For an OCR system to preserve the appearance of a scanned document, it needs accurate detection of font attributes. However, OCR environments have noise and other sources of errors, tending to make font attribute detection unreliable. Certain assumptions about font use can greatly enhance accuracy. Attributes such as boldness and italics are more likely to change between neighboring words, while attributes such as serifness are less likely to change within the same paragraph. Furthermore, the document as a whole tends to have a limited number of sets of font attributes. These assumptions allow a better use of context than the raw data, or what would be achieved by simpler methods that would oversmooth the data.

  14. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  15. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  16. Evaluation of thyroid radioactivity measurement data from Hanford workers, 1944--1946

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikenberry, T.A.

    1991-05-01

    This report describes the preliminary results of an evaluation conducted in support of the Hanford Environmental Dose Reconstruction (HEDR) Project. The primary objective of the HEDR Project is to estimate the radiation doses that populations could have received from nuclear operations at the Hanford Site since 1944. A secondary objective is to make the information that HEDR staff members used in estimating radiation doses available to the public. The objectives of this report are to make available thyroid measurement data from Hanford workers for the years 1944 through 1946, and to investigate the suitability of those data for use in the HEDR dose estimation process. An important part of this investigation was to provide a description of the uncertainty associated with the data. Lack of documentation on thyroid measurements from this period required that assumptions be made to perform data evaluations. These assumptions introduce uncertainty into the evaluations that could be significant. It is important to recognize the nature of these assumptions, the inherent uncertainty, and the propagation of this uncertainty through data evaluations to any conclusions that can be made by using the data. 15 refs., 1 fig., 5 tabs.

  17. Independent Review of Simulation of Net Infiltration for Present-Day and Potential Future Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Review Panel: Soroosh Sorooshian, Ph.D., Panel Chairperson, University of California, Irvine; Jan M. H. Hendrickx, Ph.D., New Mexico Institute of Mining and Technology; Binayak P. Mohanty, Ph.D., Texas A&M University

    The DOE Office of Civilian Radioactive Waste Management (OCRWM) tasked Oak Ridge Institute for Science and Education (ORISE) with providing an independent expert review of the documented model and prediction results for net infiltration of water into the unsaturated zone at Yucca Mountain. The specific purpose of the model, as documented in the report MDL-NBS-HS-000023, Rev. 01, is “to provide a spatial representation, including epistemic and aleatory uncertainty, of the predicted mean annual net infiltration at the Yucca Mountain site ...” (p. 1-1) The expert review panel assembled by ORISE concluded that the model report does not provide a technically credible spatial representation of net infiltration at Yucca Mountain. Specifically, the ORISE Review Panel found that: • A critical lack of site-specific meteorological, surface, and subsurface information prevents verification of (i) the net infiltration estimates, (ii) the uncertainty estimates of parameters caused by their spatial variability, and (iii) the assumptions used by the modelers (ranges and distributions) for the characterization of parameters. The paucity of site-specific data used by the modeling team for model implementation and validation is a major deficiency in this effort. • The model does not incorporate at least one potentially important hydrologic process. Subsurface lateral flow is not accounted for by the model, and the assumption that the effect of subsurface lateral flow is negligible is not adequately justified. This issue is especially critical for the wetter climate periods. This omission may be one reason the model results appear to underestimate net infiltration beneath wash environments and therefore imprecisely represent the spatial variability of net infiltration. • While the model uses assumptions consistently, such as uniform soil depths and a constant vegetation rooting depth, such assumptions may not be appropriate for this net infiltration simulation because they oversimplify a complex landscape and associated hydrologic processes, especially since the model assumptions have not been adequately corroborated by field and laboratory observations at Yucca Mountain.

  18. Monitored Geologic Repository Project Description Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. M. Curry

    2001-01-30

    The primary objective of the Monitored Geologic Repository Project Description Document (PDD) is to allocate the functions, requirements, and assumptions to the systems at Level 5 of the Civilian Radioactive Waste Management System (CRWMS) architecture identified in Section 4. It provides traceability of the requirements to those contained in Section 3 of the "Monitored Geologic Repository Requirements Document" (MGR RD) (YMP 2000a) and other higher-level requirements documents. In addition, the PDD allocates design related assumptions to work products of non-design organizations. The document provides Monitored Geologic Repository (MGR) technical requirements in support of design and performance assessment in preparing for the Site Recommendation (SR) and License Application (LA) milestones. The technical requirements documented in the PDD are to be captured in the System Description Documents (SDDs) which address each of the systems at Level 5 of the CRWMS architecture. The design engineers obtain the technical requirements from the SDDs and by reference from the SDDs to the PDD. The design organizations and other organizations will obtain design related assumptions directly from the PDD. These organizations may establish additional assumptions for their individual activities, but such assumptions are not to conflict with the assumptions in the PDD. The PDD will serve as the primary link between the technical requirements captured in the SDDs and the design requirements captured in US Department of Energy (DOE) documents. The approved PDD is placed under Level 3 baseline control by the CRWMS Management and Operating Contractor (M and O) and the following portions of the PDD constitute the Technical Design Baseline for the MGR: the design characteristics listed in Table 1-1, the MGR Architecture (Section 4.1), the Technical Requirements (Section 5), and the Controlled Project Assumptions (Section 6).

  19. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  20. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  1. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.

  2. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  3. OpCost: an open-source system for estimating costs of stand-level forest operations

    Treesearch

    Conor K. Bell; Robert F. Keefe; Jeremy S. Fried

    2017-01-01

    This report describes and documents the OpCost forest operations cost model, a key component of the BioSum analysis framework. OpCost is available in two editions: as a callable module for use with BioSum, and in a stand-alone edition that can be run directly from R. OpCost model logic and assumptions for this open-source tool are explained, references to the...

  4. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.

  5. An economic analysis comparison of stationary and dual-axis tracking grid-connected photovoltaic systems in the US Upper Midwest

    NASA Astrophysics Data System (ADS)

    Choi, Wongyu; Pate, Michael B.; Warren, Ryan D.; Nelson, Ron M.

    2018-05-01

    This paper presents an economic analysis of stationary and dual-axis tracking photovoltaic (PV) systems installed in the US Upper Midwest in terms of life-cycle costs, payback period, internal rate of return, and the incremental cost of solar energy. The first-year performance and energy savings were experimentally found along with documented initial cost. Future PV performance, savings, and operating and maintenance costs were estimated over an assumed 25-year life. Under the given assumptions and discount rates, the life-cycle savings were found to be negative. Neither system was found to have payback periods less than the assumed system life. The lifetime average incremental costs of energy generated by the stationary and dual-axis tracking systems were estimated to be $0.31 and $0.37 per kWh generated, respectively. Economic analyses of different scenarios, each having a unique set of assumptions for costs and metering, showed a potential for economic feasibility under certain conditions when compared to alternative investments with assumed yields.
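
    A minimal sketch of the kind of payback and average-cost arithmetic described (discounting, degradation, and incentives omitted; every input value is invented for illustration):

```python
# Simple payback and lifetime-average cost of PV energy (all inputs invented).

initial_cost_usd = 20000.0         # installed system cost
annual_energy_kwh = 5500.0         # first-year generation
grid_price_usd_per_kwh = 0.12      # value of displaced grid electricity
annual_om_usd = 150.0              # operating and maintenance
system_life_yr = 25

annual_savings = annual_energy_kwh * grid_price_usd_per_kwh - annual_om_usd
simple_payback_yr = initial_cost_usd / annual_savings

lifetime_cost = initial_cost_usd + annual_om_usd * system_life_yr
lifetime_energy = annual_energy_kwh * system_life_yr        # degradation ignored
avg_cost_usd_per_kwh = lifetime_cost / lifetime_energy

print(f"simple payback: {simple_payback_yr:.1f} yr")
print(f"lifetime-average cost: ${avg_cost_usd_per_kwh:.2f}/kWh")
```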

  6. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
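
    As background for the generalization described, the classical two-class change-in-ratio estimator, which uses only the subclass proportions observed before and after a known removal, can be written in a few lines; the notation and example counts below are illustrative assumptions rather than the paper's generalized, effort-based model.

```python
# Classical two-class change-in-ratio (CIR) estimator of pre-removal population size.
# p1, p2: proportion of x-type animals before and after a known removal;
# removals_x, removals_total: removals of x-type animals and of all animals.
# (Baseline for illustration; not the paper's generalized, effort-based model.)

def cir_initial_population(p1, p2, removals_x, removals_total):
    if p1 == p2:
        raise ValueError("CIR needs the subclass proportion to change")
    return (removals_x - removals_total * p2) / (p1 - p2)

# Example: the male proportion drops from 0.40 to 0.25 after removing
# 300 animals, 250 of them males.
print(round(cir_initial_population(0.40, 0.25, removals_x=250, removals_total=300)))
```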

  7. Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash

    A methodology has been developed to determine the impacts of the ISO 50001 Energy Management System (EnMS) at a region or country level. The impacts of ISO 50001 EnMS include energy, CO2 emissions, and cost savings. This internationally recognized and transparent methodology has been embodied in a user-friendly Microsoft Excel®-based tool called the ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical in order to get accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of ISO 50001 EnMS.
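
    Purely to illustrate the kind of roll-up such a tool performs, a sketch follows; the structure and every number in it are assumptions, not the inputs or internals of IET 50001.

```python
# Illustrative roll-up of energy, CO2, and cost impacts from an assumed level
# of ISO 50001 adoption (not the IET 50001 tool's method or data).

sector_energy_tj = 500000.0            # annual energy use in scope
adoption_share = 0.10                  # share covered by certified EnMS
annual_savings_rate = 0.02             # assumed savings per certified facility
emission_factor_t_co2_per_tj = 56.0    # assumed average fuel-mix factor
energy_price_usd_per_gj = 8.0          # assumed average energy price

energy_saved_tj = sector_energy_tj * adoption_share * annual_savings_rate
co2_avoided_t = energy_saved_tj * emission_factor_t_co2_per_tj
cost_saved_usd = energy_saved_tj * 1000.0 * energy_price_usd_per_gj  # TJ -> GJ

print(f"{energy_saved_tj:.0f} TJ, {co2_avoided_t:.0f} t CO2, ${cost_saved_usd/1e6:.1f} M saved per year")
```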

  8. Estimating population trends with a linear model: Technical comments

    USGS Publications Warehouse

    Sauer, John R.; Link, William A.; Royle, J. Andrew

    2004-01-01

    Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit to their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating-equations alternative they consider.
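
    To make the observer-effect point concrete, the toy sketch below contrasts a naive linear trend in log counts with one that adds an observer indicator to the design matrix; the data, design, and fitting choice are simplified assumptions, not the BBS estimating-equations analysis.

```python
# Toy contrast of a naive linear trend with an observer-adjusted trend
# (invented data; not the BBS analysis or the estimating-equations method).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(20.0)
observer = (years >= 10).astype(float)          # a more skilled observer takes over mid-series
log_mu = 3.0 - 0.02 * years + 0.6 * observer    # true 2%/yr decline plus an observer effect
counts = rng.poisson(np.exp(log_mu))
y = np.log(counts + 0.5)

# Naive: regress log count on year only.
X_naive = np.column_stack([np.ones_like(years), years])
slope_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0][1]

# Adjusted: add the observer indicator to the design matrix.
X_adj = np.column_stack([np.ones_like(years), years, observer])
slope_adj = np.linalg.lstsq(X_adj, y, rcond=None)[0][1]

print(f"naive trend:             {slope_naive:+.3f} per year")
print(f"observer-adjusted trend: {slope_adj:+.3f} per year")
```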

  9. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126

  10. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  11. Aircraft ground damage and the use of predictive models to estimate costs

    NASA Astrophysics Data System (ADS)

    Kromphardt, Benjamin D.

    Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as part of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers or regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors were explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine if the models could match the FSF's estimate. Another objective was to better understand cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used if data was not available. However, the models were determined to sufficiently estimate the costs of ground incidents.
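
    A back-of-envelope version of such a cost projection is sketched below; the structure and all rates and costs are illustrative assumptions, not the thesis model or the FSF figures.

```python
# Back-of-envelope annual ground damage cost for one operator
# (every rate and cost below is an illustrative assumption).

departures_per_year = 250000
incidents_per_1000_departures = 0.08   # assumed ground damage rate
direct_repair_cost_usd = 120000        # average repair bill per incident
indirect_multiplier = 3.0              # delays, cancellations, spares, lost revenue

incidents = departures_per_year * incidents_per_1000_departures / 1000.0
annual_cost_usd = incidents * direct_repair_cost_usd * (1.0 + indirect_multiplier)
print(f"{incidents:.0f} incidents/yr, about ${annual_cost_usd/1e6:.1f} M/yr")
```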

  12. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
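
    For readers who want the flavor of the second approach, the sketch below fits a single-level moderation model by iteratively reweighted least squares with Huber-type weights; it is a simplified stand-in under stated assumptions, not the authors' two-level estimator.

```python
# Huber-weighted IRLS for a moderation model y = b0 + b1*x + b2*m + b3*x*m + e.
# Simplified single-level sketch, not the paper's two-level robust estimator.
import numpy as np

def huber_irls(X, y, c=1.345, tol=1e-8, max_iter=100):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]              # OLS start
    for _ in range(max_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / scale
        w = np.where(u <= c, 1.0, c / u)                     # Huber-type weights
        beta_new = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(1)
n = 200
x, m = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.3 * m + 0.4 * x * m + rng.standard_t(df=3, size=n)  # heavy-tailed errors
X = np.column_stack([np.ones(n), x, m, x * m])
print(np.round(huber_irls(X, y), 2))   # estimates of b0, b1, b2, b3 (interaction last)
```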

  13. The Cost of Penicillin Allergy Evaluation.

    PubMed

    Blumenthal, Kimberly G; Li, Yu; Banerji, Aleena; Yun, Brian J; Long, Aidan A; Walensky, Rochelle P

    2017-09-22

    Unverified penicillin allergy leads to adverse downstream clinical and economic sequelae. Penicillin allergy evaluation can be used to identify true, IgE-mediated allergy. To estimate the cost of penicillin allergy evaluation using time-driven activity-based costing (TDABC). We implemented TDABC throughout the care pathway for 30 outpatients presenting for penicillin allergy evaluation. The base-case evaluation included penicillin skin testing and a 1-step amoxicillin drug challenge, performed by an allergist. We varied assumptions about the provider type, clinical setting, procedure type, and personnel timing. The base-case penicillin allergy evaluation costs $220 in 2016 US dollars: $98 for personnel, $119 for consumables, and $3 for space. In sensitivity analyses, lower cost estimates were achieved when only a drug challenge was performed (ie, no skin test, $84) and a nurse practitioner provider was used ($170). Adjusting for the probability of anaphylaxis did not result in a changed estimate ($220); although other analyses led to modest changes in the TDABC estimate ($214-$246), higher estimates were identified with changing to a low-demand practice setting ($268), a 50% increase in personnel times ($269), and including clinician documentation time ($288). In a least/most costly scenario analyses, the lowest TDABC estimate was $40 and the highest was $537. Using TDABC, penicillin allergy evaluation costs $220; even with varied assumptions adjusting for operational challenges, clinical setting, and expanded testing, penicillin allergy evaluation still costs only about $540. This modest investment may be offset for patients treated with costly alternative antibiotics that also may result in adverse consequences. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
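
    The arithmetic behind a TDABC estimate is easy to reproduce. The sketch below rebuilds the quoted $98 personnel, $119 consumables, and $3 space components into the $220 base case; the personnel times and per-minute capacity cost rates shown are placeholders chosen only to reproduce that split, not the study's measured inputs.

```python
# TDABC build-up: personnel time x per-minute capacity cost rate, plus
# consumables and space. Times and rates are placeholders chosen only to
# reproduce the quoted $98/$119/$3 split.

personnel = [
    ("allergist",        35, 1.60),   # (role, minutes, $ per minute of capacity)
    ("registered nurse", 60, 0.70),
]
consumables_usd = 119.0               # skin-test materials, amoxicillin, supplies
space_usd = 3.0

personnel_usd = sum(minutes * rate for _, minutes, rate in personnel)
total_usd = personnel_usd + consumables_usd + space_usd
print(f"personnel ${personnel_usd:.0f}, total ${total_usd:.0f}")   # ~$98 and ~$220
```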

  14. Worldwide estimates and bibliography of net primary productivity derived from pre-1982 publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esser, G.; Lieth, H.F.H.; Scurlock, J.M.O.

    An extensive compilation of more than 700 field estimates of net primary productivity of natural and agricultural ecosystems worldwide was synthesized in Germany in the 1970s and early 1980s. Although the Osnabrueck data set has not been updated since the 1980s, it represents a wealth of information for use in model development and validation. This report documents the development of this data set, its contents, and its recent availability on the Internet from the Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics. Caution is advised in using these data, which necessarily include assumptions and conversions that may not be universally applicable to all sites.

  15. Where Are We Going? Planning Assumptions for Community Colleges.

    ERIC Educational Resources Information Center

    Maas, Rao, Taylor and Associates, Riverside, CA.

    Designed to provide community college planners with a series of reference assumptions to consider in the planning process, this document sets forth assumptions related to finance (i.e., operational funds, capital funds, alternate funding sources, and campus financial operations); California state priorities; occupational trends; population (i.e.,…

  16. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    PubMed Central

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398
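
    One family of rapid methods commonly included in such comparisons scales a known plant cost by a capacity ratio raised to an exponent (the "six-tenths rule"); the sketch below shows that heuristic with invented numbers and is not a claim about which six methods the authors evaluated.

```python
# Capacity-exponent ("six-tenths rule") scaling for rapid capital cost estimation.
# Reference cost, capacities, and exponent are illustrative assumptions.

def scaled_capital_cost(ref_cost, ref_capacity, new_capacity, exponent=0.6):
    return ref_cost * (new_capacity / ref_capacity) ** exponent

# A 50 kt/yr reference plant cost 80 M$; estimate a 200 kt/yr plant.
print(f"{scaled_capital_cost(80e6, 50.0, 200.0) / 1e6:.0f} M$")   # before location/time adjustments
```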

  17. A latent variable approach to study gene-environment interactions in the presence of multiple correlated exposures.

    PubMed

    Sánchez, Brisa N; Kang, Shan; Mukherjee, Bhramar

    2012-06-01

    Many existing cohort studies initially designed to investigate disease risk as a function of environmental exposures have collected genomic data in recent years with the objective of testing for gene-environment interaction (G × E) effects. In environmental epidemiology, interest in G × E arises primarily after a significant effect of the environmental exposure has been documented. Cohort studies often collect rich exposure data; as a result, assessing G × E effects in the presence of multiple exposure markers further increases the burden of multiple testing, an issue already present in both genetic and environment health studies. Latent variable (LV) models have been used in environmental epidemiology to reduce dimensionality of the exposure data, gain power by reducing multiplicity issues via condensing exposure data, and avoid collinearity problems due to presence of multiple correlated exposures. We extend the LV framework to characterize gene-environment interaction in presence of multiple correlated exposures and genotype categories. Further, similar to what has been done in case-control G × E studies, we use the assumption of gene-environment (G-E) independence to boost the power of tests for interaction. The consequences of making this assumption, or the issue of how to explicitly model G-E association has not been previously investigated in LV models. We postulate a hierarchy of assumptions about the LV model regarding the different forms of G-E dependence and show that making such assumptions may influence inferential results on the G, E, and G × E parameters. We implement a class of shrinkage estimators to data adaptively trade-off between the most restrictive to most flexible form of G-E dependence assumption and note that such class of compromise estimators can serve as a benchmark of model adequacy in LV models. We demonstrate the methods with an example from the Early Life Exposures in Mexico City to Neuro-Toxicants Study of lead exposure, iron metabolism genes, and birth weight. © 2011, The International Biometric Society.

  18. Timing of paleoearthquakes on the northern Hayward Fault: preliminary evidence in El Cerrito, California

    USGS Publications Warehouse

    Lienkaemper, J.J.; Schwartz, D.P.; Kelson, K.I.; Lettis, W.R.; Simpson, Gary D.; Southon, J.R.; Wanket, J.A.; Williams, P.L.

    1999-01-01

    The Working Group on California Earthquake Probabilities estimated that the northern Hayward fault had the highest probability (0.28) of producing a M7 Bay Area earthquake in 30 years (WGCEP, 1990). This probability was based, in part, on the assumption that the last large earthquake occurred on this segment in 1836. However, a recent study of historical documents concludes that the 1836 earthquake did not occur on the northern Hayward fault, thereby extending the elapsed time to at least 220 yr ago, the beginning of the written record. The average recurrence interval for a M7 on the northern Hayward is unknown. WGCEP (1990) assumed an interval of 167 years. The 1996 Working Group on Northern California Earthquake Potential estimated ~210 yr, based on extrapolations from southern Hayward paleoseismological studies and a revised estimate of 1868 slip on the southern Hayward fault. To help constrain the timing of paleoearthquakes on the northern Hayward fault for the 1999 Bay Area probability update, we excavated two trenches that cross the fault and a sag pond on the Mira Vista golf course. As the site is on the second fairway, we were limited to less than ten days to document these trenches. Analysis was aided by rapid C-14 dating of more than 90 samples which gave near real-time results with the trenches still open. A combination of upward fault terminations, disrupted strata, and discordant angular relations indicates at least four, and possibly seven or more, surface faulting earthquakes occurred during a 1630-2130 yr interval. Hence, average recurrence time could be <270 yr, but is no more than 710 yr. The most recent earthquake (MRE) occurred after AD 1640. Preliminary analysis of calibrated dates supports the assumption that no large historical (post-1776) earthquakes have ruptured the surface here, but the youngest dates need more corroboration. Analyses of pollen for presence of non-native species help to constrain the time of the MRE. The earthquake recurrence estimates described in this report are preliminary and should not be used as a basis for hazard estimates. Additional trenching is planned for this location to answer questions raised during the initial phase of trenching.
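
    A hedged check on the recurrence figures quoted above: approximating the mean recurrence as the bracketing interval divided by the number of inter-event gaps, T/(n - 1), gives 2130 yr / (4 - 1) = 710 yr for the minimum of four events and 1630 yr / (7 - 1) ≈ 270 yr if seven events span the shorter interval, consistent with the stated bounds.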

  19. Capture-Recapture Estimators in Epidemiology with Applications to Pertussis and Pneumococcal Invasive Disease Surveillance

    PubMed Central

    Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel

    2016-01-01

    Introduction Surveillance networks are often not exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and their robustness with respect to the homogeneity and independence assumptions are however not well documented. Methods We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case-information. The scenarios increasingly violated assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results No estimator was unbiased in all scenarios. Performance of the parametric estimators depended on how much of the dependency and heterogeneity were correctly modelled. Model building was limited by parameter estimability, availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20–30% error-range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples; overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates for the Belgian incidence for cases aged 50 years and older ranged from 44 to 58/100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8/100,000. Conclusion We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
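
    For orientation, the simplest two-source capture-recapture estimator (Chapman's bias-corrected Lincoln-Petersen form) is shown below with invented counts; it assumes independent sources and homogeneous detection, exactly the assumptions the estimators compared in the study are designed to relax.

```python
# Chapman's bias-corrected two-source capture-recapture estimator.
# n1, n2: cases detected by each surveillance source; m: cases detected by both.
# (Counts are invented; the estimator assumes independent, homogeneous sources.)

def chapman_estimate(n1, n2, m):
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

n1, n2, m = 320, 410, 150
print(round(chapman_estimate(n1, n2, m)))   # estimated total number of cases
```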

  20. Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.

    ERIC Educational Resources Information Center

    Ferrante, Karlene

    Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…

  1. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable, or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
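
    A minimal sketch of the propagation step (item c above): sample the parameter pdfs, push the samples through a model, and summarize the prediction as an empirical complementary CDF. The lognormal/normal pdfs and the toy flow relation are assumptions for illustration only, not the Hanford model.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000

      # Hypothetical parameter pdfs (e.g., hydraulic conductivity, porosity)
      conductivity = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)   # m/day
      porosity = rng.normal(loc=0.25, scale=0.03, size=n)

      def toy_model(k, phi, gradient=0.002):
          """Placeholder prediction: advective pore velocity (m/day)."""
          return k * gradient / phi

      velocity = np.sort(toy_model(conductivity, porosity))
      ccdf = 1.0 - np.arange(1, n + 1) / n          # P(prediction > x) at x = velocity[i]
      print("velocity exceeded with 5% probability:", velocity[ccdf <= 0.05][0])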

  2. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. of errors = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. of errors = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations for birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
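
    A minimal sketch, with made-up paired distances, of how the bias (mean error) and precision (standard deviation of errors) statistics reported above can be computed from field-trial data:

      import numpy as np

      # Hypothetical field trial: measured (true) and surveyor-estimated distances (m)
      true_m = np.array([25, 60, 110, 180, 240, 90, 150])
      estimated_m = np.array([40, 55, 130, 160, 300, 120, 140])

      error = estimated_m - true_m
      print("bias (mean error):", error.mean())
      print("precision (s.d. of errors):", error.std(ddof=1))
      print("relative error (%):", np.round(100 * error / true_m, 1))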

  3. Model documentation: Renewable Fuels Module of the National Energy Modeling System

    NASA Astrophysics Data System (ADS)

    1994-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it related to the production of the 1994 Annual Energy Outlook (AEO94) forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves two purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. Of these six, four are documented in the following chapters: municipal solid waste, wind, solar and biofuels. Geothermal and wood are not currently working components of NEMS. The purpose of the RFM is to define the technological and cost characteristics of renewable energy technologies, and to pass these characteristics to other NEMS modules for the determination of mid-term forecasted renewable energy demand.

  4. Dying for work: The magnitude of US mortality from selected causes of death associated with occupation.

    PubMed

    Steenland, Kyle; Burnett, Carol; Lalich, Nina; Ward, Elizabeth; Hurrell, Joseph

    2003-05-01

    Deaths due to occupational disease and injury place a heavy burden on society in terms of economic costs and human suffering. We estimate the annual deaths due to selected diseases for which an occupational association is reasonably well established and quantifiable, by calculation of attributable fractions (AFs), with full documentation; the deaths due to occupational injury are then added to derive an estimated number of annual deaths due to occupation. Using 1997 US mortality data, the estimated annual burden of occupational disease mortality resulting from selected respiratory diseases, cancers, cardiovascular disease, chronic renal failure, and hepatitis is 49,000, with a range from 26,000 to 72,000. The Bureau of Labor Statistics estimates there are about 6,200 work-related injury deaths annually. Adding disease and injury data, we estimate that there are a total of 55,200 US deaths annually resulting from occupational disease or injury (range 32,200-78,200). Our estimate is in the range reported by previous investigators, although we have restricted ourselves more than others to only those diseases with well-established occupational etiology, biasing our estimates conservatively. The underlying assumptions and data used to generate the estimates are well documented, so our estimates may be updated as new data emerges on occupational risks and exposed populations, providing an advantage over previous studies. We estimate that occupational deaths are the 8th leading cause of death in the US, after diabetes (64,751) but ahead of suicide (30,575), and greater than the annual number of motor vehicle deaths per year (43,501). Copyright 2003 Wiley-Liss, Inc.
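
    The core arithmetic is attributable deaths = attributable fraction (AF) × cause-specific deaths, summed over causes, plus injury deaths. A minimal sketch with hypothetical AFs and death counts (not the study's inputs):

      # Hypothetical attributable fractions and total US deaths by cause
      causes = {
          # cause: (attributable_fraction, total_deaths)
          "COPD": (0.10, 100_000),
          "lung cancer": (0.08, 150_000),
          "cardiovascular disease": (0.01, 700_000),
      }

      disease_deaths = sum(af * deaths for af, deaths in causes.values())
      injury_deaths = 6_200           # Bureau of Labor Statistics figure cited above
      print(round(disease_deaths), round(disease_deaths + injury_deaths))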

  5. Financial impact of emergency department ultrasound.

    PubMed

    Soremekun, Olanrewaju A; Noble, Vicki E; Liteplo, Andrew S; Brown, David F M; Zane, Richard D

    2009-07-01

    There is limited information on the financial implications of an emergency department ultrasound (ED US) program. The authors sought to perform a fiscal analysis of an integrated ED US program. A retrospective review of billing data was performed for fiscal year (FY) 2007 for an urban academic ED with an ED US program. The ED had an annual census of 80,000 visits and 1,101 ED trauma activations. The ED is a core teaching site for a 4-year emergency medicine (EM) residency, has 35 faculty members, and has 24-hour availability of all radiology services including formal US. ED US is utilized as part of the evaluation of all trauma activations and for ED procedures. As actual billing charges and reimbursement rates are institution-specific and proprietary information, relative value units (RVUs) and reimbursement based on the Centers for Medicare & Medicaid Services (CMS) 2007 fee schedule (adjusted for fixed diagnosis-related group [DRG] payments and bad debt) were used to determine revenue generated from ED US. To estimate potential volume, assumptions were made on improvement in the documentation rate for diagnostic scans (current documentation rates based on billed volume versus diagnostic studies in the diagnostic image database), with no improvements assumed for procedural ED US. Expenses consist of three components (capital costs, training costs, and ongoing operational costs) and were determined by institutional experience. Training costs were considered sunk expenses by this institution and were thus not included in the original return on investment (ROI) calculation, although for this article a second ROI calculation was done with training cost estimates included. For the purposes of analysis, certain key assumptions were made. We utilized a collection rate of 45% and hospitalization rates (used to adjust for fixed DRG payments) of 33% for all diagnostic scans, 100% for vascular access, and 10% for needle placement. An optimal documentation rate of 95% was used to estimate potential revenue. In FY 2007, 486 limited echo exams of the abdomen (current procedural terminology [CPT] 76705) and 480 limited echo cardiac exams (CPT 93308) were performed, while there were 78 exams for US-guided vascular access (CPT 76937) and 36 US-guided needle placements when performing paracentesis, thoracentesis, or location of abscess for drainage (CPT 76492). Applying the 2007 CMS fee schedule and above assumptions, the revenue generated was 578 RVUs and $35,541 ($12,934 in professional physician fees and $22,607 in facility fees). Assuming optimal documentation rates for diagnostic ED US scans, ED US could have generated 1,487 RVUs and $94,593 ($33,953 in professional physician fees and $60,640 in facility fees). Program expenses include an initial capital expense (estimated at $120,000 for two US machines) and ongoing operational costs ($68,640 per year to cover image quality assurance review, continuing education, and program maintenance). Based on current revenue, there would be an annual operating loss, and thus an ROI cannot be calculated. However, if potential revenue is achieved, the annual operating income will be $22,846 per year with an ROI of 4.9 years to break even with the initial investment. Determining an ROI is a required procedure for any business plan for establishing an ED US program. Our analysis demonstrates that an ED US program that captures charges for trauma and procedural US and achieves the potential billing volume breaks even in less than 5 years, at which point it would generate a positive margin.
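
    A minimal break-even sketch following the same logic (capital cost recovered from annual operating income); the dollar figures below are hypothetical placeholders, not the study's data.

      def years_to_break_even(capital_cost, annual_revenue, annual_operating_cost):
          operating_income = annual_revenue - annual_operating_cost
          if operating_income <= 0:
              return float("inf")        # the program never recovers the investment
          return capital_cost / operating_income

      # Hypothetical program: two machines, optimistic documentation rates
      print(years_to_break_even(capital_cost=100_000,
                                annual_revenue=90_000,
                                annual_operating_cost=65_000))   # 4.0 years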

  6. Assumptions made when preparing drug exposure data for analysis have an impact on results: An unreported step in pharmacoepidemiology studies.

    PubMed

    Pye, Stephen R; Sheppard, Thérèse; Joseph, Rebecca M; Lunt, Mark; Girard, Nadyne; Haas, Jennifer S; Bates, David W; Buckeridge, David L; van Staa, Tjeerd P; Tamblyn, Robyn; Dixon, William G

    2018-04-17

    Real-world data for observational research commonly require formatting and cleaning prior to analysis. Data preparation steps are rarely reported adequately and are likely to vary between research groups. Variation in methodology could potentially affect study outcomes. This study aimed to develop a framework to define and document drug data preparation and to examine the impact of different assumptions on results. An algorithm for processing prescription data was developed and tested using data from the Clinical Practice Research Datalink (CPRD). The impact of varying assumptions was examined by estimating the association between 2 exemplar medications (oral hypoglycaemic drugs and glucocorticoids) and cardiovascular events after preparing multiple datasets derived from the same source prescription data. Each dataset was analysed using Cox proportional hazards modelling. The algorithm included 10 decision nodes and 54 possible unique assumptions. Over 11 000 possible pathways through the algorithm were identified. In both exemplar studies, similar hazard ratios and standard errors were found for the majority of pathways; however, certain assumptions had a greater influence on results. For example, in the hypoglycaemic analysis, choosing a different variable to define prescription end date altered the hazard ratios (95% confidence intervals) from 1.77 (1.56-2.00) to 2.83 (1.59-5.04). The framework offers a transparent and efficient way to perform and report drug data preparation steps. Assumptions made during data preparation can impact the results of analyses. Improving transparency regarding drug data preparation would increase the repeatability, reproducibility, and comparability of published results. © 2018 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
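
    A minimal sketch of one such decision node: deriving a prescription end date under two different assumptions (recorded days' supply versus a default duration when the field is missing). The column names and values are hypothetical, not CPRD fields.

      import pandas as pd

      rx = pd.DataFrame({
          "patient_id": [1, 1, 2],
          "start_date": pd.to_datetime(["2015-01-01", "2015-02-15", "2015-03-01"]),
          "days_supply": [30, None, 60],       # missing for one prescription
      })

      # Assumption A: end date = start date + recorded days' supply (missing stays missing)
      rx["end_a"] = rx["start_date"] + pd.to_timedelta(rx["days_supply"], unit="D")

      # Assumption B: impute a default 28-day duration when days' supply is missing
      rx["end_b"] = rx["start_date"] + pd.to_timedelta(rx["days_supply"].fillna(28), unit="D")

      print(rx[["patient_id", "end_a", "end_b"]])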

  7. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
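
    A minimal sketch of estimating a space-varying noise level (not the paper's dictionary-learning model): split the image into blocks and estimate each block's noise standard deviation robustly from a crude high-pass residual.

      import numpy as np

      def blockwise_noise_std(image, block=32):
          """Rough per-block noise estimate via the median absolute deviation."""
          h, w = image.shape
          out = np.zeros((h // block, w // block))
          for i in range(0, h - block + 1, block):
              for j in range(0, w - block + 1, block):
                  patch = image[i:i + block, j:j + block]
                  residual = patch - patch.mean()                  # crude high-pass
                  mad = np.median(np.abs(residual - np.median(residual)))
                  out[i // block, j // block] = 1.4826 * mad       # MAD -> sigma for Gaussian noise
          return out

      rng = np.random.default_rng(1)
      noise_level = np.linspace(1, 10, 128)[None, :]               # noise grows left to right
      noisy = rng.normal(size=(128, 128)) * noise_level
      print(blockwise_noise_std(noisy).round(1))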

  8. Supporting calculations and assumptions for use in WESF safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hey, B.E.

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  9. FOSSIL2 energy policy model documentation: FOSSIL2 documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1980-10-01

    This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model - supply/demand interactions, industry financing, and production - has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large, as is appropriate for a system dynamics simulation model. When possible, all data were obtained from sources well known to experts in the energy field. Cost and resource estimates are based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. Volumes II and III of this report list the equations that comprise the FOSSIL2 model, along with variable definitions and a cross-reference list of the model variables. Volume III lists the model equations and a one line definition for equations, in a short, readable format.

  10. Early-Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques.

    PubMed

    Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc

    2016-09-08

    Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
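
    One of the simplest rapid, order-of-magnitude techniques of the kind reviewed above is the capacity-exponent ("six-tenths") rule; whether it is among the six methods compared is not stated here, so the sketch below is only a generic illustration with hypothetical figures.

      def scaled_capital_cost(known_cost, known_capacity, new_capacity, exponent=0.6):
          """Capacity-exponent rule: cost2 = cost1 * (capacity2 / capacity1) ** n."""
          return known_cost * (new_capacity / known_capacity) ** exponent

      # Hypothetical reference plant: 50 kt/yr costing 80 MUSD, scaled to 150 kt/yr
      print(round(scaled_capital_cost(80.0, 50.0, 150.0), 1), "MUSD")   # ~154.6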

  11. Cost Benefit Analysis Modeling Tool for Electric vs. ICE Airport Ground Support Equipment – Development and Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James Francfort; Kevin Morrow; Dimitri Hochard

    2007-02-01

    This report documents efforts to develop a computer tool for modeling the economic payback for comparative airport ground support equipment (GSE) that are propelled by either electric motors or gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains modeling assumptions, methodology, a user’s manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.

  12. Relevance popularity: A term event model based feature selection scheme for text classification.

    PubMed

    Feng, Guozhong; An, Baiguo; Yang, Fengqin; Wang, Han; Zhang, Libiao

    2017-01-01

    Feature selection is a practical approach for improving the performance of text classification methods by optimizing the feature subsets input to classifiers. In traditional feature selection methods such as information gain and chi-square, the number of documents that contain a particular term (i.e. the document frequency) is often used. However, the frequency of a given term appearing in each document has not been fully investigated, even though it is a promising feature to produce accurate classifications. In this paper, we propose a new feature selection scheme based on a term event Multinomial naive Bayes probabilistic model. According to the model assumptions, the matching score function, which is based on the prediction probability ratio, can be factorized. Finally, we derive a feature selection measurement for each term after replacing inner parameters by their estimators. On a benchmark English text dataset (20 Newsgroups) and a Chinese text dataset (MPH-20), our numerical experiment results obtained from using two widely used text classifiers (naive Bayes and support vector machine) demonstrate that our method outperformed the representative feature selection methods.
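
    A minimal sketch in the same spirit (not the authors' exact matching-score factorization): fit a multinomial naive Bayes model and rank terms by the contrast of their class-conditional log probabilities. The toy corpus is made up.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      docs = ["cheap pills online", "meeting agenda attached",
              "win money now", "project review meeting"]
      labels = [1, 0, 1, 0]                      # 1 = spam, 0 = ham (toy data)

      vec = CountVectorizer()
      X = vec.fit_transform(docs)
      nb = MultinomialNB().fit(X, labels)

      # Term score: difference of class-conditional log probabilities
      score = nb.feature_log_prob_[1] - nb.feature_log_prob_[0]
      ranked = sorted(zip(vec.get_feature_names_out(), score), key=lambda ts: -ts[1])
      for term, s in ranked[:5]:
          print(term, round(s, 2))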

  13. On the Use of Rank Tests and Estimates in the Linear Model.

    DTIC Science & Technology

    1982-06-01

    assumption A5, McKean and Hettmansperger (1976) show that τ̂ = (W(N-c) - W(c+1)) / (2 Z(α/2)) (14), where 2 Z(α/2) is the 1-α interpercentile range of the standard ... r(.75n) - r(.25n)) (13). The window width h incorporates a resistant estimate of scale, the interquartile range of the residuals, and a normalizing ... an alternative estimate of τ is available with the additional assumption of symmetry of the error distribution. ASSUMPTION A5: Suppose the underlying error

  14. Assumption-versus data-based approaches to summarizing species' ranges.

    PubMed

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  15. Assumable Waters Subcommittee Meeting Documents: October 6-7, 2015

    EPA Pesticide Factsheets

    Documents covering state and tribal assumption of the Clean Water Act section 404 program, legislative history of section 404, Federal Advisory Committee Act (FACA) legal requirements, Michigan and New Jersey CWA section 404 programs.

  16. Influence of critical closing pressure on systemic vascular resistance and total arterial compliance: A clinical invasive study.

    PubMed

    Chemla, Denis; Lau, Edmund M T; Hervé, Philippe; Millasseau, Sandrine; Brahimi, Mabrouk; Zhu, Kaixian; Sattler, Caroline; Garcia, Gilles; Attal, Pierre; Nitenberg, Alain

    2017-12-01

    Systemic vascular resistance (SVR) and total arterial compliance (TAC) modulate systemic arterial load, and their product is the time constant (Tau) of the Windkessel. Previous studies have assumed that aortic pressure decays towards a pressure asymptote (P∞) close to 0 mmHg, as right atrial pressure is considered the outflow pressure. Using these assumptions, aortic Tau values of ∼1.5 seconds have been documented. However, a zero P∞ may not be physiological because of the high critical closing pressure previously documented in vivo. Our aims were to calculate precisely the Tau and P∞ of the Windkessel, and to determine the implications for the indices of systemic arterial load. Aortic pressure decay was analysed using high-fidelity recordings in 16 subjects. Tau was calculated assuming P∞ = 0 mmHg, and by two methods that make no assumptions regarding P∞ (the derivative and best-fit methods). Assuming P∞ = 0 mmHg, we documented a Tau value of 1372 ± 308 ms, with only 29% of Windkessel function manifested by end-diastole. In contrast, Tau values of 306 ± 109 and 353 ± 106 ms were found from the derivative and best-fit methods, with P∞ values of 75 ± 12 and 71 ± 12 mmHg, and with ∼80% completion of Windkessel function. The "effective" resistance and compliance were ∼70% and ∼40% less than SVR and TAC (area method), respectively. We did not challenge the Windkessel model, but rather the estimation technique of model variables (Tau, SVR, TAC) that assumes P∞ = 0. The study favoured a shorter Tau of the Windkessel and a higher P∞ compared with previous studies. This calls for a reappraisal of the quantification of systemic arterial load. Crown Copyright © 2017. Published by Elsevier Masson SAS. All rights reserved.
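
    A minimal sketch of the "best-fit" idea described above: fit the diastolic decay P(t) = P∞ + (P0 - P∞)·exp(-t/Tau) to pressure samples, and compare the fitted Tau with the value obtained when P∞ is forced to zero. The pressure samples below are synthetic, not the study's recordings.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, p_inf, p_start, tau):
          return p_inf + (p_start - p_inf) * np.exp(-t / tau)

      # Synthetic diastolic decay: tau = 0.35 s, asymptote 70 mmHg, start 95 mmHg
      t = np.linspace(0.0, 0.6, 60)
      p = decay(t, 70.0, 95.0, 0.35) + np.random.default_rng(2).normal(0, 0.3, t.size)

      popt, _ = curve_fit(decay, t, p, p0=[50.0, 90.0, 0.5])
      print(round(popt[0], 1), "mmHg asymptote,", round(popt[2] * 1000), "ms")

      popt0, _ = curve_fit(lambda t, tau: decay(t, 0.0, p[0], tau), t, p, p0=[1.0])
      print(round(popt0[0] * 1000), "ms when P_inf is forced to 0")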

  17. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  18. Deep Borehole Field Test Requirements and Controlled Assumptions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  19. Documentation of Helicopter Aeroelastic Stability Analysis Computer Program (HASTA)

    DTIC Science & Technology

    1977-12-01

    of the blade phasing assumption, for which all blades of the rotor are identical and equally spaced azimuthally, allows the size of the T matrices ... to be significantly reduced by the removal of the submatrices associated with blades other than the first blade. With the use of this assumption ... different program representational options such as the type of rotor system, the type of blades, and the use of the blade phasing assumption, the

  20. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that economic depreciation rates of decline may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
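
    A minimal sketch of what a constant (geometric) rate of economic depreciation implies for asset value over time; the rate and purchase price are hypothetical.

      def depreciated_value(initial_value, annual_rate, years):
          """Constant geometric depreciation: value_t = value_0 * (1 - rate) ** t."""
          return initial_value * (1.0 - annual_rate) ** years

      # Hypothetical machine: $100,000 new, 15% per year constant rate of decline
      for t in (1, 5, 10):
          print(t, round(depreciated_value(100_000, 0.15, t)))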

  1. Problematising Mathematics Education

    ERIC Educational Resources Information Center

    Begg, Andy

    2015-01-01

    We assume many things when considering our practice, but our assumptions limit what we do. In this theoretical/philosophical paper I consider some assumptions that relate to our work. My purpose is to stimulate a debate, a search for alternatives, and to help us improve mathematics education by influencing our future curriculum documents and…

  2. Estimation of the Prevalence of Autism Spectrum Disorder in South Korea, Revisited

    ERIC Educational Resources Information Center

    Pantelis, Peter C.; Kennedy, Daniel P.

    2016-01-01

    Two-phase designs in epidemiological studies of autism prevalence introduce methodological complications that can severely limit the precision of resulting estimates. If the assumptions used to derive the prevalence estimate are invalid or if the uncertainty surrounding these assumptions is not properly accounted for in the statistical inference…

  3. Simulation-Extrapolation for Estimating Means and Causal Effects with Mismeasured Covariates

    ERIC Educational Resources Information Center

    Lockwood, J. R.; McCaffrey, Daniel F.

    2015-01-01

    Regression, weighting and related approaches to estimating a population mean from a sample with nonrandom missing data often rely on the assumption that conditional on covariates, observed samples can be treated as random. Standard methods using this assumption generally will fail to yield consistent estimators when covariates are measured with…

  4. On the validity of time-dependent AUC estimators.

    PubMed

    Schmid, Matthias; Kestler, Hans A; Potapov, Sergej

    2015-01-01

    Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  5. Sensitivity analysis of pars-tensa young's modulus estimation using inverse finite-element modeling

    NASA Astrophysics Data System (ADS)

    Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.

    2018-05-01

    Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique by optimizing a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing some modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, hence affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the most influential modeling assumption on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A base-line FE model of the unpressurized middle ear was created. The EPT was estimated using the golden section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated. This included changes in PT thickness, pars flaccida Young's modulus and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness and the least influential parameter was pars flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
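
    A minimal sketch of the golden-section step in such an inverse procedure: minimize a scalar shape-mismatch cost over candidate Young's modulus values. The quadratic cost below is a placeholder for the FE-versus-FTP shape comparison, and the modulus values are hypothetical.

      from scipy.optimize import minimize_scalar

      def shape_mismatch(young_modulus_mpa, true_modulus_mpa=32.0):
          """Placeholder cost standing in for the FE-predicted vs. measured shape error."""
          return (young_modulus_mpa - true_modulus_mpa) ** 2

      result = minimize_scalar(shape_mismatch, bracket=(1.0, 100.0), method="golden")
      print(round(result.x, 2), "MPa")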

  6. An assessment of the impact of FIA's default assumptions on the estimates of coarse woody debris volume and biomass

    Treesearch

    Vicente J. Monleon

    2009-01-01

    Currently, Forest Inventory and Analysis estimation procedures use Smalian's formula to compute coarse woody debris (CWD) volume and assume that logs lie horizontally on the ground. In this paper, the impact of those assumptions on volume and biomass estimates is assessed using 7 years of Oregon's Phase 2 data. Estimates of log volume computed using Smalian...
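
    The Smalian assumption referenced above computes log volume from the two end cross-sectional areas and the log length. A minimal sketch with hypothetical log dimensions:

      import math

      def smalian_volume_m3(d_large_cm, d_small_cm, length_m):
          """Smalian's formula: V = L * (A_large + A_small) / 2, with A = pi * d^2 / 4."""
          a_large = math.pi * (d_large_cm / 100.0) ** 2 / 4.0    # m^2
          a_small = math.pi * (d_small_cm / 100.0) ** 2 / 4.0
          return length_m * (a_large + a_small) / 2.0

      # Hypothetical piece of coarse woody debris: 40 cm and 25 cm end diameters, 6 m long
      print(round(smalian_volume_m3(40, 25, 6.0), 3), "m^3")     # ~0.524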

  7. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
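
    A minimal sketch of the two criteria for a simple Gaussian (constant-coefficient) case: minimize either the negative log-likelihood or the closed-form CRPS of a normal distribution. A real non-homogeneous regression would make the mean and standard deviation linear functions of ensemble statistics; the observations below are synthetic.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      y = rng.normal(2.0, 1.5, size=500)          # synthetic observations

      def neg_log_lik(params):
          mu, log_sigma = params
          return -norm.logpdf(y, mu, np.exp(log_sigma)).sum()

      def mean_crps(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)
          z = (y - mu) / sigma
          crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
          return crps.mean()

      for score in (neg_log_lik, mean_crps):
          fit = minimize(score, x0=[0.0, 0.0])
          print(score.__name__, fit.x[0].round(3), np.exp(fit.x[1]).round(3))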

  8. Regression Analysis of a Disease Onset Distribution Using Diagnosis Data

    PubMed Central

    Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.

    2008-01-01

    Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832

  9. Operational estimates of areal evapotranspiration and their significance to the science and practice of hydrology

    NASA Astrophysics Data System (ADS)

    Morton, F. I.

    1983-10-01

    Reliable estimates of areal evapotranspiration are essential to significant improvements in the science and practice of hydrology. Direct measurements, such as those provided by lysimeters, eddy flux instrumentation or Bowen-ratio instrumentation, give point values, require constant attendance by skilled personnel and are based on unverified assumptions. A critical review of the methods used for estimating areal evapotranspiration indicates that the conventional conceptual techniques, such as those used in current watershed models, are based on assumptions that are completely divorced from reality; and that causal techniques based on processes and interactions in the soil-plant-atmosphere system are not likely to prove useful for another generation. However, the complementary relationship can do much to fill the gap until such time as causal techniques become practicable because it provides the basis for models that permit areal evapotranspiration to be estimated from its effects on the routine climatological observations needed to estimate potential evapotranspiration. Such models have a realistic conceptual and empirical basis, by-pass the complexity of the soil-plant system and require no local calibration of coefficients. Therefore, they are falsifiable (i.e. can be tested rigorously) so that errors in the associated assumptions and relationships can be detected and corrected by progressive testing over an ever-widening range of environments. Such a methodology uses the entire world as a laboratory and requires that a correction made to obtain agreement between model and river-basin water budget estimates in one environment be applicable without modification in all other environments. The most recent version of the complementary relationship areal evapotranspiration (CRAE) models is formulated and documented. The reliability of the independent operational estimates of areal evapotranspiration is tested with comparable long-term water-budget estimates for 143 river basins in North America, Africa, Ireland, Australia and New Zealand. The practicality and potential impact of such estimates are demonstrated with examples which show how the availability of such estimates can revitalize the science and practice of hydrology by providing a reliable basis for detailed water-balance studies; for further research on the development of causal models; for hydrological, agricultural and fire hazard forecasts; for detecting the development of errors in hydrometeorological records; for detecting and monitoring the effects of land-use changes; for explaining hydrologic anomalies; and for other better known applications. It is suggested that the collection of the required climatological data by hydrometric agencies could be justified on the grounds that the agencies would gain a technique for quality control and the users would gain by a significant expansion in the information content of the hydrometric data, all at minimal additional expense.
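
    A minimal sketch of the complementary-relationship idea underlying CRAE-type models (Bouchet's relation, areal ET = 2 × wet-environment ET − potential ET); the monthly values are hypothetical and the full CRAE formulation involves considerably more detail.

      def complementary_areal_et(wet_environment_et_mm, potential_et_mm):
          """Bouchet's complementary relationship: ET_areal = 2 * ET_wet - ET_potential."""
          return 2.0 * wet_environment_et_mm - potential_et_mm

      # Hypothetical dry month: potential ET exceeds wet-environment ET
      print(complementary_areal_et(wet_environment_et_mm=110.0, potential_et_mm=160.0))  # 60.0 mm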

  10. Canister Storage Building (CSB) Design Basis Accident Analysis Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CROWE, R.D.; PIEPHO, M.G.

    2000-03-23

    This document provided the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.

  11. Do group-specific equations provide the best estimates of stature?

    PubMed

    Albanese, John; Osley, Stephanie E; Tuck, Andrew

    2016-04-01

    An estimate of stature can be used by a forensic anthropologist with the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: Do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and humerus. These results suggest that group-specific equations do not provide better results for estimating stature, while at the same time being more difficult to apply because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
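
    A minimal sketch of a bone-length stature regression of the general form such methods use; the slope, intercept and prediction interval below are hypothetical placeholders, not Fordisc's group-specific equations.

      def stature_estimate_cm(femur_max_length_mm, slope=0.24, intercept=60.0, pi_cm=7.0):
          """Generic linear stature equation with a +/- prediction interval.

          slope, intercept and pi_cm are illustrative placeholders only.
          """
          point = slope * femur_max_length_mm + intercept
          return point - pi_cm, point, point + pi_cm

      print(stature_estimate_cm(450))   # (161.0, 168.0, 175.0) cm for a 450 mm femur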

  12. Statewide Intelligent Transportation Systems As-Is Agency Reports For Minnesota, Volume 6, City Of St. Paul

    DOT National Transportation Integrated Search

    1996-08-01

    KEYWORDS: TRAFFIC SIGNAL CONTROL/REAL-TIME ADAPTIVE CONTROL, ADVANCED TRAFFIC MANAGEMENT SYSTEMS OR ATMS: THIS DOCUMENT PRESENTS THE METHODS, ASSUMPTIONS AND PROCEDURES USED TO COLLECT THE BASELINE INFORMATION. THE DOCUMENTATION OF SYSTEMS ...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrack, A.G.

    The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).

  14. Height and the normal distribution: evidence from Italian military data.

    PubMed

    A'Hearn, Brian; Peracchi, Franco; Vecchi, Giovanni

    2009-02-01

    Researchers modeling historical heights have typically relied on the restrictive assumption of a normal distribution, only the mean of which is affected by age, income, nutrition, disease, and similar influences. To avoid these restrictive assumptions, we develop a new semiparametric approach in which covariates are allowed to affect the entire distribution without imposing any parametric shape. We apply our method to a new database of height distributions for Italian provinces, drawn from conscription records, of unprecedented length and geographical disaggregation. Our method allows us to standardize distributions to a single age and calculate moments of the distribution that are comparable through time. Our method also allows us to generate counterfactual distributions for a range of ages, from which we derive age-height profiles. These profiles reveal how the adolescent growth spurt (AGS) distorts the distribution of stature, and they document the earlier and earlier onset of the AGS as living conditions improved over the second half of the nineteenth century. Our new estimates of provincial mean height also reveal a previously unnoticed "regime switch" from regional convergence to divergence in this period.

  15. The Mediating Effect of World Assumptions on the Relationship between Trauma Exposure and Depression

    ERIC Educational Resources Information Center

    Lilly, Michelle M.; Valdez, Christine E.; Graham-Bermann, Sandra A.

    2011-01-01

    The association between trauma exposure and mental health-related challenges such as depression are well documented in the research literature. The assumptive world theory was used to explore this relationship in 97 female survivors of intimate partner violence (IPV). Participants completed self-report questionnaires that assessed trauma history,…

  16. 77 FR 27080 - Solicitation of Comments on Request for United States Assumption of Concurrent Federal Criminal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-08

    ... DEPARTMENT OF JUSTICE [Docket No. OTJ 100] Solicitation of Comments on Request for United States Assumption of Concurrent Federal Criminal Jurisdiction; Hoopa Valley Tribe Correction In notice document 2012-09731, beginning on page 24517 in the issue of Tuesday, April 24, 2012, make the following correction: On...

  17. Theory X and Theory Y in the Organizational Structure.

    ERIC Educational Resources Information Center

    Barry, Thomas J.

    This document defines contrasting assumptions about the labor force--theory X and theory Y--and shows how they apply to the pyramid organizational structure, examines the assumptions of the two theories, and finally, based on a survey and individual interviews, proposes a merger of theories X and Y to produce theory Z. Organizational structures…

  18. FOSSIL2 energy policy model documentation: FOSSIL2 documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1980-10-01

    This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model - supply/demand interactions, industry financing, and production - has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large, as is appropriate for a system dynamics simulation model. When possible, all data were obtained from sources well known to experts in the energy field. Cost and resource estimates are based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. Volumes II and III of this report list the equations that comprise the FOSSIL2 model, along with variable definitions and a cross-reference list of the model variables. Volume II provides the model equations with each of their variables defined, while Volume III lists the equations, and a one line definition for equations, in a shorter, more readable format.

  19. Model documentation, Coal Market Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 1998 (AEO98). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS). CMM provides annual forecasts of prices, production, and consumption of coal for NEMS. In general, the CDS integrates the supply inputs from the CPS to satisfy demands for coal from exogenous demand models. The international area of the CDS forecasts annual world coal trade flows from major supply to major demand regions and provides annual forecasts of US coal exports for input to NEMS. Specifically, the CDS receives minemouth prices produced by the CPS, demand and other exogenous inputs from other NEMS components, and provides delivered coal prices and quantities to the NEMS economic sectors and regions.

  20. Evaluation of assumptions in soil moisture triple collocation analysis

    USDA-ARS?s Scientific Manuscript database

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
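
    A minimal sketch of the classical covariance-based triple collocation estimator, which assumes zero-mean, mutually independent errors and a common underlying truth; the three "products" below are synthetic.

      import numpy as np

      rng = np.random.default_rng(4)
      truth = rng.normal(0.25, 0.05, size=5000)              # "true" soil moisture
      x = truth + rng.normal(0, 0.02, truth.size)            # product 1
      y = 0.8 * truth + rng.normal(0, 0.03, truth.size)      # product 2 (different scaling)
      z = 1.2 * truth + rng.normal(0, 0.04, truth.size)      # product 3

      c = np.cov(np.vstack([x, y, z]))
      err_var = [
          c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2],             # product 1 error variance
          c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2],             # product 2
          c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1],             # product 3
      ]
      print(np.sqrt(err_var).round(3))                       # ~[0.02, 0.03, 0.04]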

  1. Estimating survival of radio-tagged birds

    USGS Publications Warehouse

    Bunck, C.M.; Pollock, K.H.; Lebreton, J.-D.; North, P.M.

    1993-01-01

    Parametric and nonparametric methods for estimating survival of radio-tagged birds are described. The general assumptions of these methods are reviewed. An estimate based on the assumption of constant survival throughout the period is emphasized in the overview of parametric methods. Two nonparametric methods, the Kaplan-Meier estimate of the survival function and the log rank test, are explained in detail. The link between these nonparametric methods and traditional capture-recapture models is discussed along with considerations in designing studies that use telemetry techniques to estimate survival.
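
    A minimal sketch of the Kaplan-Meier estimate applied to hypothetical radio-tracking data with right-censoring (birds whose transmitters fail are censored rather than counted as deaths):

      import numpy as np

      # Hypothetical data: days until death (event=1) or loss of signal (event=0)
      time = np.array([5, 12, 12, 20, 27, 27, 35, 41])
      event = np.array([1, 1, 0, 1, 0, 1, 0, 1])

      survival = 1.0
      for t in np.unique(time[event == 1]):
          at_risk = np.sum(time >= t)                 # birds still monitored at time t
          deaths = np.sum((time == t) & (event == 1))
          survival *= 1.0 - deaths / at_risk
          print(f"day {t:2d}: S(t) = {survival:.3f}")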

  2. Identification of differences in health impact modelling of salt reduction

    PubMed Central

    Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.

    2017-01-01

    We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how it influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models is crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636

  3. Transportation Sector Model of the National Energy Modeling System. Volume 2 -- Appendices: Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The attachments contained within this appendix provide additional details about the model development and estimation process which do not easily lend themselves to incorporation in the main body of the model documentation report. The information provided in these attachments is not integral to the understanding of the model's operation, but provides the reader with the opportunity to gain a deeper understanding of some of the model's underlying assumptions. There will be a slight degree of replication of materials found elsewhere in the documentation, made unavoidable by the dictates of internal consistency. Each attachment is associated with a specific component of the transportation model; the presentation follows the same sequence of modules employed in Volume 1. The following attachments are contained in Appendix F: Fuel Economy Model (FEM)--provides a discussion of the FEM vehicle demand and performance by size class models; Alternative Fuel Vehicle (AFV) Model--describes data input sources and extrapolation methodologies; Light-Duty Vehicle (LDV) Stock Model--discusses the fuel economy gap estimation methodology; Light Duty Vehicle Fleet Model--presents the data development for business, utility, and government fleet vehicles; Light Commercial Truck Model--describes the stratification methodology and data sources employed in estimating the stock and performance of LCTs; Air Travel Demand Model--presents the derivation of the demographic index, used to modify estimates of personal travel demand; and Airborne Emissions Model--describes the derivation of emissions factors used to associate transportation measures to levels of airborne emissions of several pollutants.

  4. Problems in the Definition, Interpretation, and Evaluation of Genetic Heterogeneity

    PubMed Central

    Whittemore, Alice S.; Halpern, Jerry

    2001-01-01

    Suppose that we wish to classify families with multiple cases of disease into one of three categories: those that segregate mutations of a gene of interest, those which segregate mutations of other genes, and those whose disease is due to nonhereditary factors or chance. Among families in the first two categories (the hereditary families), we wish to estimate the proportion, p, of families that segregate mutations of the gene of interest. Although this proportion is a commonly accepted concept, it is well defined only with an unambiguous definition of “family.” Even then, extraneous factors such as family sizes and structures can cause p to vary across different populations and, within a population, to be estimated differently by different studies. Restrictive assumptions about the disease are needed, in order to avoid this undesirable variation. The assumptions require that mutations of all disease-causing genes (i) have no effect on family size, (ii) have very low frequencies, and (iii) have penetrances that satisfy certain constraints. Despite the unverifiability of these assumptions, linkage studies often invoke them to estimate p, using the admixture likelihood introduced by Smith and discussed by Ott. We argue against this common practice, because (1) it also requires the stronger assumption of equal penetrances for all etiologically relevant genes; (2) even if all assumptions are met, estimates of p are sensitive to misspecification of the unknown phenocopy rate; (3) even if all the necessary assumptions are met and the phenocopy rate is correctly specified, estimates of p that are obtained by linkage programs such as HOMOG and GENEHUNTER are based on the wrong likelihood and therefore are biased in the presence of phenocopies. We show how to correct these estimates; but, nevertheless, we do not recommend the use of parametric heterogeneity models in linkage analysis, even merely as a tool for increasing the statistical power to detect linkage. This is because the assumptions required by these models cannot be verified, and their violation could actually decrease power. Instead, we suggest that estimation of p be postponed until the relevant genes have been identified. Then their frequencies and penetrances can be estimated on the basis of population-based samples and can be used to obtain more-robust estimates of p for specific populations. PMID:11170893

  5. Energy Modeling for the Artisan Food Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goel, Supriya

    2013-05-01

    The Artisan Food Center is a 6912 sq.ft food processing plant located in Dayton, Washington. PNNL was contacted by Strecker Engineering to assist with the building’s energy analysis as a part of the project’s U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) submittal requirements. The project is aiming for LEED Silver certification, one of the prerequisites to which is a whole building energy model to demonstrate compliance with American Society of Heating Refrigeration and Air Conditioning Engineers (ASHRAE) 90.1 2007 Appendix G, Performance Rating Method. The building incorporates a number of energy efficiency measures as part of its design, and the energy analysis aimed at providing Strecker Engineering with the know-how of developing an energy model for the project as well as an estimate of energy savings of the proposed design over the baseline design, which could be used to document points in the LEED documentation. This report documents the ASHRAE 90.1 2007 baseline model design, the proposed model design, the modeling assumptions and procedures as well as the energy savings results in order to inform the Strecker Engineering team on a possible whole building energy model.

  6. Robust estimators for speech enhancement in real environments

    NASA Astrophysics Data System (ADS)

    Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly

    2015-09-01

    Common statistical estimators for speech enhancement rely on several assumptions about the stationarity of speech signals and noise. These assumptions may not always be valid in real life owing to the nonstationary characteristics of speech and noise processes. We propose new estimators, based on existing estimators, that incorporate rank-order statistics. The proposed estimators are better adapted to the nonstationary characteristics of speech signals and noise processes. Through computer simulations we show that the proposed estimators yield better performance in terms of objective metrics than known estimators when speech signals are contaminated with airport, babble, restaurant, and train-station noise.

  7. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g., the thickness of a brake pad and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g., indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations through MATLAB. The results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model provides a better fit than the state space model with linear and Gaussian assumptions.
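
    The abstract does not give the model equations, so the sketch below shows one generic Monte Carlo (bootstrap particle filter) treatment of a nonlinear, non-Gaussian, irreversible degradation state-space model in Python; the drift, noise terms, and parameter values are hypothetical stand-ins, not the authors' formulation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical monotone degradation model (not the paper's equations):
        #   state:       x_k = x_{k-1} + |gamma*dt + process noise|   (irreversible growth)
        #   observation: y_k = x_k + measurement noise                 (indirect indicator)
        def particle_filter(y, dt, n_particles=2000, gamma=0.05, sig_w=0.02, sig_v=0.1):
            particles = np.zeros(n_particles)          # initial crack depth ~ 0
            estimates = []
            for obs in y:
                # propagate: non-negative increments keep the degradation irreversible
                particles = particles + np.abs(gamma * dt + sig_w * rng.normal(size=n_particles))
                # weight by the observation likelihood (Gaussian measurement error assumed)
                w = np.exp(-0.5 * ((obs - particles) / sig_v) ** 2) + 1e-300  # underflow guard
                w /= w.sum()
                # multinomial resampling, then store the posterior mean as the estimate
                particles = rng.choice(particles, size=n_particles, p=w)
                estimates.append(particles.mean())
            return estimates

        # Simulated indirect measurements of a growing crack.
        true_x = np.cumsum(np.abs(0.05 + 0.02 * rng.normal(size=30)))
        y = true_x + 0.1 * rng.normal(size=30)
        print(particle_filter(y, dt=1.0)[-1], true_x[-1])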

  8. The "Power" of Value-Added Thinking: Exploring the Implementation of High-Stakes Teacher Accountability Policies in Rio de Janeiro

    ERIC Educational Resources Information Center

    Straubhaar, Rolf

    2017-01-01

    The purpose of this article is to ethnographically document the market-based ideological assumptions of Rio de Janeiro's educational policymakers, and the ways in which those assumptions have informed these policymakers' decision to implement value-added modeling-based teacher evaluation policies. Drawing on the anthropological literature on…

  9. Sensitivity of C-Band Polarimetric Radar-Based Drop Size Distribution Measurements to Maximum Diameter Assumptions

    NASA Technical Reports Server (NTRS)

    Carey, Lawrence D.; Petersen, Walter A.

    2011-01-01

    The estimation of rain drop size distribution (DSD) parameters from polarimetric radar observations is accomplished by first establishing a relationship between differential reflectivity (Z(sub dr)) and the central tendency of the rain DSD such as the median volume diameter (D0). Since Z(sub dr) does not provide a direct measurement of DSD central tendency, the relationship is typically derived empirically from rain drop and radar scattering models (e.g., D0 = F[Z (sub dr)] ). Past studies have explored the general sensitivity of these models to temperature, radar wavelength, the drop shape vs. size relation, and DSD variability. Much progress has been made in recent years in measuring the drop shape and DSD variability using surface-based disdrometers, such as the 2D Video disdrometer (2DVD), and documenting their impact on polarimetric radar techniques. In addition to measuring drop shape, another advantage of the 2DVD over earlier impact type disdrometers is its ability to resolve drop diameters in excess of 5 mm. Despite this improvement, the sampling limitations of a disdrometer, including the 2DVD, make it very difficult to adequately measure the maximum drop diameter (D(sub max)) present in a typical radar resolution volume. As a result, D(sub max) must still be assumed in the drop and radar models from which D0 = F[Z(sub dr)] is derived. Since scattering resonance at C-band wavelengths begins to occur in drop diameters larger than about 5 mm, modeled C-band radar parameters, particularly Z(sub dr), can be sensitive to D(sub max) assumptions. In past C-band radar studies, a variety of D(sub max) assumptions have been made, including the actual disdrometer estimate of D(sub max) during a typical sampling period (e.g., 1-3 minutes), D(sub max) = C (where C is constant at values from 5 to 8 mm), and D(sub max) = M*D0 (where the constant multiple, M, is fixed at values ranging from 2.5 to 3.5). The overall objective of this NASA Global Precipitation Measurement Mission (GPM/PMM Science Team)-funded study is to document the sensitivity of DSD measurements, including estimates of D0, from C-band Z(sub dr) and reflectivity to this range of D(sub max) assumptions. For this study, GPM Ground Validation 2DVD's were operated under the scanning domain of the UAHuntsville ARMOR C-band dual-polarimetric radar. Approximately 7500 minutes of DSD data were collected and processed to create gamma size distribution parameters using a truncated method of moments approach. After creating the gamma parameter datasets the DSD's were then used as input to a T-matrix model for computation of polarimetric radar moments at C-band. All necessary model parameterizations, such as temperature, drop shape, and drop fall mode, were fixed at typically accepted values while the D(sub max) assumption was allowed to vary in sensitivity tests. By hypothesizing a DSD model with D(sub max) (fit) from which the empirical fit to D0 = F[Z(sub dr)] was derived via non-linear least squares regression and a separate reference DSD model with D(sub max) (truth), bias and standard error in D0 retrievals were estimated in the presence of Z(sub dr) measurement error and hypothesized mismatch in D(sub max) assumptions. 
Although the normalized standard error for D0 = F[Z(sub dr)] can increase slightly (as much as from 11% to 16% for all 7500 DSDs) when the D(sub max) (fit) does not match D(sub max) (truth), the primary impact of uncertainty in D(sub max) is a potential increase in normalized bias error in D0 (from 0% to as much as 10% over all 7500 DSDs, depending on the extent of the mismatch between D(sub max) (fit) and D(sub max) (truth)). For DSDs characterized by large Z(sub dr) (Z(sub dr) > 1.5 to 2.0 dB), the normalized bias error for D0 estimation at C-band is sometimes unacceptably large (> 10%), again depending on the extent of the hypothesized D(sub max) mismatch. Modeled errors in D0 retrievals from Z(sub dr) at C-band are demonstrated in detail and compared to similar modeled retrieval errors at S-band and X-band where the sensitivity to D(sub max) is expected to be less. The impact of D(sub max) assumptions on the retrieval of other DSD parameters, such as Nw, the liquid water content normalized intercept parameter, is also explored. Likely implications for DSD retrievals using C-band polarimetric radar for GPM are assessed by considering current community knowledge regarding D(sub max) and quantifying the statistical distribution of Z(sub dr) from ARMOR over a large variety of meteorological conditions. Based on these results and the prevalence of C-band polarimetric radars worldwide, a call for more emphasis on constraining our observational estimate of D(sub max) within a typical radar resolution volume is made.
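
    To make the D(sub max) sensitivity concrete, the sketch below computes the median volume diameter D0 of a truncated gamma DSD for several assumed D(sub max) caps; it omits the T-matrix scattering step entirely, and the gamma parameters are illustrative rather than values fitted from the 2DVD data.

        import numpy as np

        def median_volume_diameter(n0, mu, lam, d_max, nbins=4000):
            # D0 of a truncated gamma DSD N(D) = n0 * D**mu * exp(-lam*D), 0 < D <= d_max (mm).
            d = np.linspace(1e-4, d_max, nbins)
            nd = n0 * d**mu * np.exp(-lam * d)
            mass = np.cumsum(d**3 * nd)               # proportional to liquid water below D
            return d[np.searchsorted(mass, 0.5 * mass[-1])]

        # Illustrative gamma parameters only (mu = 2, lam = 2 mm^-1).
        for d_max in (5.0, 6.0, 7.0, 8.0):
            print(d_max, median_volume_diameter(n0=8000.0, mu=2.0, lam=2.0, d_max=d_max))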

  10. 10 CFR 434.520 - Speculative buildings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... assumed lighting power allowance. 520.5 HVAC Systems and Equipment. If the HVAC system is not completely... construction of future HVAC systems and equipment. These assumptions shall be documented so that future HVAC... calculate the Design Energy Consumption must be documented so that the future installed lighting systems may...

  11. 10 CFR 434.520 - Speculative buildings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... assumed lighting power allowance. 520.5 HVAC Systems and Equipment. If the HVAC system is not completely... construction of future HVAC systems and equipment. These assumptions shall be documented so that future HVAC... calculate the Design Energy Consumption must be documented so that the future installed lighting systems may...

  12. 10 CFR 434.520 - Speculative buildings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... assumed lighting power allowance. 520.5 HVAC Systems and Equipment. If the HVAC system is not completely... construction of future HVAC systems and equipment. These assumptions shall be documented so that future HVAC... calculate the Design Energy Consumption must be documented so that the future installed lighting systems may...

  13. 10 CFR 434.520 - Speculative buildings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... assumed lighting power allowance. 520.5 HVAC Systems and Equipment. If the HVAC system is not completely... construction of future HVAC systems and equipment. These assumptions shall be documented so that future HVAC... calculate the Design Energy Consumption must be documented so that the future installed lighting systems may...

  14. 10 CFR 434.520 - Speculative buildings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... assumed lighting power allowance. 520.5 HVAC Systems and Equipment. If the HVAC system is not completely... construction of future HVAC systems and equipment. These assumptions shall be documented so that future HVAC... calculate the Design Energy Consumption must be documented so that the future installed lighting systems may...

  15. The Cost of CAI: A Matter of Assumptions.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…

  16. The MERMAID project

    NASA Technical Reports Server (NTRS)

    Cowderoy, A. J. C.; Jenkins, John O.; Poulymenakou, A

    1992-01-01

    The tendency for software development projects to be completed over schedule and over budget has been documented extensively. Additionally, many projects are completed within budgetary and schedule targets only as a result of the customer agreeing to accept reduced functionality. In his classic book, The Mythical Man-Month, Fred Brooks exposes the fallacy that effort and schedule are freely interchangeable. All current cost models are produced on the assumption that there is very limited scope for schedule compression unless there is a corresponding reduction in delivered functionality. The Metrication and Resources Modeling Aid (MERMAID) project, partially financed by the Commission of the European Communities (CEC) as Project 2046, began in Oct. 1988, and its goals were as follows: (1) to improve understanding of the relationships between software development productivity and product and process metrics; (2) to facilitate widespread technology transfer from the Consortium to the European software industry; and (3) to facilitate the widespread uptake of cost estimation techniques by the provision of prototype cost estimation tools. MERMAID developed a family of methods for cost estimation, many of which have had tools implemented as prototypes. These prototypes are best considered as toolkits or workbenches.

  17. Worldwide Historical Estimates of Leaf Area Index, 1932-2000

    NASA Technical Reports Server (NTRS)

    Scurlock, J. M. O.; Asner, G. P.; Gower, S. T.

    2001-01-01

    Approximately 1000 published estimates of leaf area index (LAI) from nearly 400 unique field sites, covering the period 1932-2000, have been compiled into a single data set. LAI is a key parameter for global and regional models of biosphere/atmosphere exchange of carbon dioxide, water vapor, and other materials. It also plays an integral role in determining the energy balance of the land surface. This data set provides a benchmark of typical values and ranges of LAI for a variety of biomes and land cover types, in support of model development and validation of satellite-derived remote sensing estimates of LAI and other vegetation parameters. The LAI data are linked to a bibliography of over 300 original source references. This report documents the development of this data set, its contents, and its availability on the Internet from the Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics. Caution is advised in using these data, which were collected using a wide range of methodologies and assumptions that may not allow comparisons among sites.

  18. Taking the Missing Propensity Into Account When Estimating Competence Scores

    PubMed Central

    Pohl, Steffi; Carstensen, Claus H.

    2014-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844

  19. Tax Subsidies for Employer-Sponsored Health Insurance: Updated Microsimulation Estimates and Sensitivity to Alternative Incidence Assumptions

    PubMed Central

    Miller, G Edward; Selden, Thomas M

    2013-01-01

    Objective To estimate 2012 tax expenditures for employer-sponsored insurance (ESI) in the United States and to explore the sensitivity of estimates to assumptions regarding the incidence of employer premium contributions. Data Sources Nationally representative Medical Expenditure Panel Survey data from the 2005–2007 Household Component (MEPS-HC) and the 2009–2010 Insurance Component (MEPS IC). Study Design We use MEPS HC workers to construct synthetic workforces for MEPS IC establishments, applying the workers' marginal tax rates to the establishments' insurance premiums to compute the tax subsidy, in aggregate and by establishment characteristics. Simulation enables us to examine the sensitivity of ESI tax subsidy estimates to a range of scenarios for the within-firm incidence of employer premium contributions when workers have heterogeneous health risks and make heterogeneous plan choices. Principal Findings We simulate the total ESI tax subsidy for all active, civilian U.S. workers to be $257.4 billion in 2012. In the private sector, the subsidy disproportionately flows to workers in large establishments and establishments with predominantly high wage or full-time workforces. The estimates are remarkably robust to alternative incidence assumptions. Conclusions The aggregate value of the ESI tax subsidy and its distribution across firms can be reliably estimated using simplified incidence assumptions. PMID:23398400

  20. Change-in-ratio methods for estimating population size

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.

    2002-01-01

    Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940’s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
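
    A minimal sketch of the classical two-subclass change-in-ratio estimator under the equal-encounter-probability assumption discussed above; the harvest numbers are hypothetical.

        def change_in_ratio(p1, p2, removals_x, removals_total):
            # Two-subclass CIR estimate of pre-removal population size.
            # p1, p2         : proportion of subclass x observed before and after removal
            # removals_x     : known number of subclass-x individuals removed
            # removals_total : known total number removed
            # Assumes equal encounter probabilities for the two subclasses.
            n1 = (removals_x - p2 * removals_total) / (p1 - p2)
            return n1, p1 * n1   # total size and subclass-x size before removal

        # Hypothetical harvest: bucks drop from 40% to 25% after removing 300 deer, 180 of them bucks.
        print(change_in_ratio(0.40, 0.25, 180, 300))   # (700.0, 280.0)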

  1. Evaluating methodological assumptions of a catch-curve survival estimation of unmarked precocial shorebird chicks

    USGS Publications Warehouse

    McGowan, Conor P.; Gardner, Beth

    2013-01-01

    Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival from age-based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios where the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with true survival rates of the simulation. Violating the accurate aging and the independence assumptions did not result in biased daily survival estimates, whereas unequal detection for younger or older birds and violating the birth-death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth-death equilibrium are key to the successful use of this method for precocial shorebirds.
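
    A minimal sketch of the catch-curve idea evaluated above: under equal detectability across ages and an approximate birth-death equilibrium, log counts decline linearly with age, so the regression slope gives daily survival. The chick counts below are simulated for illustration only.

        import numpy as np

        def catch_curve_daily_survival(ages, counts):
            # Slope of log(count) vs. age estimates log(daily survival) when counts
            # decline geometrically with age (equal detectability, birth-death equilibrium).
            slope = np.polyfit(np.asarray(ages, float),
                               np.log(np.asarray(counts, float)), 1)[0]
            return float(np.exp(slope))

        # Hypothetical chick counts by age in days.
        ages = np.arange(0, 10)
        counts = [120, 109, 95, 88, 78, 72, 63, 58, 52, 47]
        print(catch_curve_daily_survival(ages, counts))   # roughly 0.90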

  2. Mendelian randomization with Egger pleiotropy correction and weakly informative Bayesian priors.

    PubMed

    Schmidt, A F; Dudbridge, F

    2017-12-15

    The MR-Egger (MRE) estimator has been proposed to correct for directional pleiotropic effects of genetic instruments in an instrumental variable (IV) analysis. The power of this method is considerably lower than that of conventional estimators, limiting its applicability. Here we propose a novel Bayesian implementation of the MR-Egger estimator (BMRE) and explore the utility of applying weakly informative priors on the intercept term (the pleiotropy estimate) to increase power of the IV (slope) estimate. This was a simulation study to compare the performance of different IV estimators. Scenarios differed in the presence of a causal effect, the presence of pleiotropy, the proportion of pleiotropic instruments and degree of 'Instrument Strength Independent of Direct Effect' (InSIDE) assumption violation. Based on empirical plasma urate data, we present an approach to elucidate a prior distribution for the amount of pleiotropy. A weakly informative prior on the intercept term increased power of the slope estimate while maintaining type 1 error rates close to the nominal value of 0.05. Under the InSIDE assumption, performance was unaffected by the presence or absence of pleiotropy. Violation of the InSIDE assumption biased all estimators, affecting the BMRE more than the MRE method. Depending on the prior distribution, the BMRE estimator has more power at the cost of an increased susceptibility to InSIDE assumption violations. As such the BMRE method is a compromise between the MRE and conventional IV estimators, and may be an especially useful approach to account for observed pleiotropy. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
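
    For reference, the sketch below implements the standard (frequentist) MR-Egger regression on which the Bayesian variant builds: a weighted regression of SNP-outcome on SNP-exposure associations with a free intercept as the directional-pleiotropy term. The summary statistics are simulated, and the prior on the intercept proposed in the paper is not shown.

        import numpy as np

        def mr_egger(beta_exposure, beta_outcome, se_outcome):
            # Weighted least squares of SNP-outcome on SNP-exposure associations,
            # intercept unconstrained (pleiotropy term), weights = 1/se_outcome^2.
            bx = np.asarray(beta_exposure, float)
            by = np.asarray(beta_outcome, float)
            w = 1.0 / np.asarray(se_outcome, float) ** 2
            X = np.column_stack([np.ones_like(bx), bx])
            WX = X * w[:, None]
            intercept, slope = np.linalg.solve(X.T @ WX, WX.T @ by)
            return intercept, slope    # pleiotropy estimate, causal-effect estimate

        # Hypothetical summary statistics for 20 genetic instruments.
        rng = np.random.default_rng(2)
        bx = rng.uniform(0.05, 0.3, 20)
        by = 0.4 * bx + 0.01 + rng.normal(0, 0.02, 20)   # true slope 0.4, small pleiotropy
        print(mr_egger(bx, by, np.full(20, 0.02)))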

  3. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  4. An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico's PROGRESA Program

    ERIC Educational Resources Information Center

    Diaz, Juan Jose; Handa, Sudhanshu

    2006-01-01

    Not all policy questions can be addressed by social experiments. Nonexperimental evaluation methods provide an alternative to experimental designs but their results depend on untestable assumptions. This paper presents evidence on the reliability of propensity score matching (PSM), which estimates treatment effects under the assumption of…
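
    A minimal numpy sketch of the propensity score matching idea discussed above (a logistic-regression propensity score followed by 1:1 nearest-neighbour matching); the data-generating process and function are hypothetical and are not drawn from the PROGRESA analysis.

        import numpy as np

        def propensity_score_matching(y, t, X):
            # Assumes selection on the observed covariates X (the untestable assumption).
            X = np.column_stack([np.ones(len(y)), X])
            b = np.zeros(X.shape[1])
            for _ in range(50):                           # logistic regression by Newton-Raphson
                p = 1.0 / (1.0 + np.exp(-X @ b))
                W = p * (1 - p)
                b += np.linalg.solve(X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1]),
                                     X.T @ (t - p))
            ps = 1.0 / (1.0 + np.exp(-X @ b))
            treated, controls = np.where(t == 1)[0], np.where(t == 0)[0]
            effects = []
            for i in treated:
                j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]  # nearest control
                effects.append(y[i] - y[j])
            return float(np.mean(effects))                # effect on the treated

        # Hypothetical data: one confounder drives both enrolment and the outcome.
        rng = np.random.default_rng(3)
        x = rng.normal(size=(1000, 1))
        t = (rng.uniform(size=1000) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
        y = 2.0 * t + 1.5 * x[:, 0] + rng.normal(size=1000)
        print(propensity_score_matching(y, t, x))   # close to the true effect of 2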

  5. Estimation of Species Identification Error: Implications for Raptor Migration Counts and Trend Estimation

    Treesearch

    J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull

    2010-01-01

    One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...

  6. A probabilistic method for the estimation of residual risk in donated blood.

    PubMed

    Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L

    2014-10-01

    The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence/window-period model, which relies on the following restrictive assumptions: Each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
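
    For context, the classical incidence/window-period calculation that the paper's probability model relaxes can be written in a few lines; the incidence and window-period values below are illustrative only, not taken from the paper.

        def residual_risk_incidence_window(incidence_per_100k_py, window_days):
            # Classical estimate of residual risk per donation. Assumes the screening
            # test detects every infection outside its window period and none within it.
            incidence_per_py = incidence_per_100k_py / 100_000.0
            return incidence_per_py * (window_days / 365.25)

        # Illustrative numbers: 3 incident infections per 100,000 person-years among
        # repeat donors and an 11-day window period.
        print(residual_risk_incidence_window(3.0, 11.0))   # ~9e-7, i.e. ~1 in 1.1 million donations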

  7. Models for the propensity score that contemplate the positivity assumption and their application to missing data and causality.

    PubMed

    Molina, J; Sued, M; Valdora, M

    2018-06-05

    Generalized linear models are often assumed to fit propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or the positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and highly variable. Although strict positivity is often assumed, it is not upheld when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to incorrect inference because of inaccurate estimates. In this work, we attempt to reconcile the strict positivity condition with the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. An additional parameter is added to fulfil the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
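
    A simple sketch of the instability described above and one crude remedy: an IPW (Hajek-type) mean estimator whose estimated propensity scores are clipped at an explicit floor. The hard floor is only a stand-in for the paper's additional model parameter, and the data are simulated.

        import numpy as np

        def ipw_mean(y, observed, X, ps_floor=0.05):
            # IPW estimate of a mean under missingness, with a lower bound on the
            # estimated propensity score to enforce (approximate) strict positivity.
            X1 = np.column_stack([np.ones(len(y)), X])
            b = np.zeros(X1.shape[1])
            for _ in range(50):                           # logistic regression by Newton-Raphson
                p = 1.0 / (1.0 + np.exp(-X1 @ b))
                W = p * (1 - p)
                b += np.linalg.solve(X1.T @ (X1 * W[:, None]) + 1e-8 * np.eye(X1.shape[1]),
                                     X1.T @ (observed - p))
            ps = np.clip(1.0 / (1.0 + np.exp(-X1 @ b)), ps_floor, 1.0)
            return float(np.sum(observed * y / ps) / np.sum(observed / ps))

        # Hypothetical data: an unbounded covariate pushes some propensities toward zero.
        rng = np.random.default_rng(4)
        x = rng.normal(size=2000)
        obs = (rng.uniform(size=2000) < 1 / (1 + np.exp(-2 * x))).astype(int)
        y = 5.0 + x + rng.normal(size=2000)
        print(ipw_mean(y, obs, x.reshape(-1, 1)))   # compare with the true mean of 5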

  8. Technical Support Document: 50% Energy Savings Design Technology Packages for Highway Lodging Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Wei; Gowri, Krishnan; Lane, Michael D.

    2009-09-28

    This Technical Support Document (TSD) describes the process, methodology and assumptions for development of the 50% Energy Savings Design Technology Packages for Highway Lodging Buildings, a design guidance document intended to provide recommendations for achieving 50% energy savings in highway lodging properties over the energy-efficiency levels contained in ANSI/ASHRAE/IESNA Standard 90.1-2004, Energy Standard for Buildings Except Low-Rise Residential Buildings.

  9. Quantitative microbial risk assessment model for Legionnaires' disease: assessment of human exposures for selected spa outbreaks.

    PubMed

    Armstrong, Thomas W; Haas, Charles N

    2007-08-01

    Evaluation of a quantitative microbial risk assessment (QMRA) model for Legionnaires' disease (LD) required Legionella exposure estimates for several well-documented LD outbreaks. Reports for a whirlpool spa and two natural spring spa outbreaks provided data for the exposure assessment, as well as rates of infection and mortality. Exposure estimates for the whirlpool spa outbreak employed aerosol generation, water composition, exposure duration data, and building ventilation parameters with a two-zone model. Estimates for the natural hot springs outbreaks used bacterial water-to-air partitioning coefficients and exposure duration information. The air concentration and dose calculations used input parameter distributions with Monte Carlo simulations to estimate exposures as probability distributions. The assessment considered two sets of assumptions about the transfer of Legionella from the water phase to the aerosol emitted from the whirlpool spa. The estimated air concentration near the whirlpool spa was 5 to 18 colony forming units per cubic meter (CFU/m(3)) and 50 to 180 CFU/m(3) for each of the alternate assumptions. The estimated 95th percentile ranges of Legionella dose for workers within 15 m of the whirlpool spa were 0.13-3.4 CFU and 1.3-34.5 CFU, respectively. The modeling for hot springs Spas 1 and 2 resulted in estimated arithmetic mean air concentrations of 360 and 17 CFU/m(3), respectively, and 95th percentile ranges for Legionella dose of 28 to 67 CFU and 1.1 to 3.7 CFU, respectively. The Legionella air concentration estimates fall in the range of limited reports on air concentrations of Legionella (0.33 to 190 CFU/m(3)) near showers, aerated faucets, and baths during filling with Legionella-contaminated water. These measurements may provide some indication that the estimates are of a reasonable magnitude, but they do not establish the accuracy of the exposure estimates, since they were not obtained during LD outbreaks. Further research to improve the data used for the Legionella exposure assessment would strengthen the results. Several of the primary additional data needs include improved data for bacterial water-to-air partitioning coefficients, better accounting of time-activity-distance patterns and exposure potential in outbreak reports, and data for Legionella-containing aerosol viability decay instead of loss of capability for growth in culture.
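
    The dose calculation itself reduces to a Monte Carlo product of air concentration, breathing rate, and exposure time; the sketch below illustrates that structure with made-up input distributions rather than the fitted ones used in the assessment.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000

        # Illustrative input distributions only (not the paper's fitted values):
        air_cfu_m3 = rng.lognormal(mean=np.log(50.0), sigma=0.6, size=n)   # CFU/m^3 near the spa
        breathing_m3_h = rng.normal(1.0, 0.1, size=n)                      # light-activity breathing rate
        exposure_h = rng.uniform(0.25, 2.0, size=n)                        # time spent within ~15 m

        dose_cfu = air_cfu_m3 * breathing_m3_h * exposure_h                # inhaled dose per person
        print(np.percentile(dose_cfu, [50, 95]))                           # median and 95th-percentile dose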

  10. Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-01-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.

  11. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    PubMed

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
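
    For concreteness, the second stage of a two-stage analysis is simply inverse-variance pooling of the per-study estimates produced in stage one; the sketch below shows the fixed-effect case with hypothetical trial results.

        import numpy as np

        def two_stage_fixed_effect(estimates, std_errors):
            # Inverse-variance pooling of stage-one estimates (fixed-effect model).
            est = np.asarray(estimates, float)
            w = 1.0 / np.asarray(std_errors, float) ** 2
            pooled = np.sum(w * est) / np.sum(w)
            return pooled, np.sqrt(1.0 / np.sum(w))    # pooled estimate and its standard error

        # Hypothetical stage-one results from five trials.
        print(two_stage_fixed_effect([0.42, 0.35, 0.55, 0.40, 0.47],
                                     [0.10, 0.08, 0.15, 0.12, 0.09]))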

  12. Satellite Power Systems (SPS) space transportation cost analysis and evaluation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A picture of Satellite Power System (SPS) space transportation costs at the present time is given with respect to accuracy as stated, reasonableness of the methods used, assumptions made, and the uncertainty associated with the estimates. The approach consists of examining space transportation costs from several perspectives, performing a variety of sensitivity analyses and reviews, and examining the findings in terms of internal consistency and external comparison with analogous systems. These approaches are summarized as a theoretical and historical review, including a review of stated and unstated assumptions used to derive the costs, and a performance or technical review. These reviews cover the overall transportation program as well as the individual vehicles proposed. The review of overall cost assumptions is the principal means used to estimate the cost uncertainty. The cost estimates used as the best current estimates are included.

  13. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear ( Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.

  14. Vertebral Bomb Radiocarbon Suggests Extreme Longevity in White Sharks

    PubMed Central

    Hamady, Li Ling; Natanson, Lisa J.; Skomal, Gregory B.; Thorrold, Simon R.

    2014-01-01

    Conservation and management efforts for white sharks (Carcharodon carcharias) remain hampered by a lack of basic demographic information including age and growth rates. Sharks are typically aged by counting growth bands sequentially deposited in their vertebrae, but the assumption of annual deposition of these band pairs requires testing. We compared radiocarbon (Δ14C) values in vertebrae from four female and four male white sharks from the northwestern Atlantic Ocean (NWA) with reference chronologies documenting the marine uptake of 14C produced by atmospheric testing of thermonuclear devices to generate the first radiocarbon age estimates for adult white sharks. Age estimates were up to 40 years old for the largest female (fork length [FL]: 526 cm) and 73 years old for the largest male (FL: 493 cm). Our results dramatically extend the maximum age and longevity of white sharks compared to earlier studies, hint at possible sexual dimorphism in growth rates, and raise concerns that white shark populations are considerably more sensitive to human-induced mortality than previously thought. PMID:24416189

  15. Vertebral bomb radiocarbon suggests extreme longevity in white sharks.

    PubMed

    Hamady, Li Ling; Natanson, Lisa J; Skomal, Gregory B; Thorrold, Simon R

    2014-01-01

    Conservation and management efforts for white sharks (Carcharodon carcharias) remain hampered by a lack of basic demographic information including age and growth rates. Sharks are typically aged by counting growth bands sequentially deposited in their vertebrae, but the assumption of annual deposition of these band pairs requires testing. We compared radiocarbon (Δ(14)C) values in vertebrae from four female and four male white sharks from the northwestern Atlantic Ocean (NWA) with reference chronologies documenting the marine uptake of (14)C produced by atmospheric testing of thermonuclear devices to generate the first radiocarbon age estimates for adult white sharks. Age estimates were up to 40 years old for the largest female (fork length [FL]: 526 cm) and 73 years old for the largest male (FL: 493 cm). Our results dramatically extend the maximum age and longevity of white sharks compared to earlier studies, hint at possible sexual dimorphism in growth rates, and raise concerns that white shark populations are considerably more sensitive to human-induced mortality than previously thought.

  16. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
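
    The two estimators being compared have simple closed forms; the sketch below shows a Chapman-modified Lincoln-Petersen estimate and a two-pass removal (Zippin-type) estimate with hypothetical electrofishing counts. The removal formula embodies the equal-catchability-across-passes assumption that the study found violated.

        def lincoln_petersen(n_marked, n_second_pass, n_recaptured):
            # Chapman-modified Lincoln-Petersen abundance estimate.
            return ((n_marked + 1) * (n_second_pass + 1)) / (n_recaptured + 1) - 1

        def two_pass_removal(c1, c2):
            # Two-pass removal estimate; assumes equal capture probability on both passes.
            if c2 >= c1:
                raise ValueError("removal estimator requires a declining catch")
            p_hat = (c1 - c2) / c1            # estimated per-pass capture probability
            return c1 ** 2 / (c1 - c2), p_hat

        # Hypothetical electrofishing data for one closed site.
        print(lincoln_petersen(n_marked=60, n_second_pass=55, n_recaptured=30))   # ~109 fish
        print(two_pass_removal(c1=70, c2=30))                                     # ~122 fish, p ~ 0.57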

  17. 77 FR 49054 - 60-Day Notice of Proposed Information Collection: Request for Commodity Jurisdiction...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-15

    ....regulations.gov . You can search for the document by selecting ``Notice'' under Document Type, entering the... ``Search.'' If necessary, use the ``Narrow by Agency'' option on the Results page. Email: [email protected] the burden of the proposed collection, including the validity of the methodology and assumptions used...

  18. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  19. Foundations for estimation by the method of least squares

    NASA Technical Reports Server (NTRS)

    Hauck, W. W., Jr.

    1971-01-01

    Least squares estimation is discussed from the point of view of a statistician. Much of the emphasis is on problems encountered in application and, more specifically, on questions involving assumptions: what assumptions are needed, when are they needed, what happens if they are not valid, and if they are invalid, how that fact can be detected.

  20. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  1. Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    PubMed Central

    Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J

    2008-01-01

    Background Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358

  2. FOSSIL2 energy policy model documentation: generic structures of the FOSSIL2 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1980-10-01

    This report discusses the structure, derivations, assumptions, and mathematical formulation of the FOSSIL2 model. Each major facet of the model - supply/demand interactions, industry financing, and production - has been designed to parallel closely the actual cause/effect relationships determining the behavior of the United States energy system. The data base for the FOSSIL2 program is large. When possible, all data were obtained from sources well known to experts in the energy field. Cost and resource estimates are based on DOE data whenever possible. This report presents the FOSSIL2 model at several levels. In Volume I, an overview of the basic structures, assumptions, and behavior of the FOSSIL2 model is presented so that the reader can understand the results of various policy tests. The discussion covers the three major building blocks, or generic structures, used to construct the model: supply/demand balance; finance and capital formation; and energy production. These structures reflect the components and interactions of the major processes within each energy industry that directly affect the dynamics of fuel supply, demand, and price within the energy system as a whole.

  3. Differential molar heat capacities to test ideal solubility estimations.

    PubMed

    Neau, S H; Bhandarkar, S V; Hellmuth, E W

    1997-05-01

    Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, delta Cp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that delta Cp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, delta Sf, is an estimate of delta Cp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, delta Cp was not negligible and was closer to delta Sf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
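
    The comparison in this record can be reproduced from the standard ideal-solubility expression, commonly written with an explicit delta-Cp term; the sketch below evaluates it under the two simplifying assumptions (delta Cp = 0 and delta Cp = delta Sf) and for one assumed "measured" delta Cp. The thermal properties used are for a hypothetical solute, not one of the five compounds studied.

        import numpy as np

        R = 8.314  # J mol^-1 K^-1

        def ln_ideal_solubility(dHf, Tm, T, dCp):
            # Full ideal-solubility expression with an explicit delta-Cp term:
            # ln x = -dHf/R * (Tm - T)/(Tm*T) + dCp/R * [(Tm - T)/T - ln(Tm/T)]
            return (-dHf / R * (Tm - T) / (Tm * T)
                    + dCp / R * ((Tm - T) / T - np.log(Tm / T)))

        # Hypothetical solute: dHf = 28 kJ/mol, Tm = 450 K, evaluated at 298 K.
        dHf, Tm, T = 28_000.0, 450.0, 298.0
        dSf = dHf / Tm                                   # molar entropy of fusion
        for label, dCp in [("dCp = 0", 0.0), ("dCp = dSf", dSf), ("assumed measured dCp", 90.0)]:
            print(label, np.exp(ln_ideal_solubility(dHf, Tm, T, dCp)))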

  4. Improving credibility and transparency of conservation impact evaluations through the partial identification approach.

    PubMed

    McConnachie, Matthew M; Romero, Claudia; Ferraro, Paul J; van Wilgen, Brian W

    2016-04-01

    The fundamental challenge of evaluating the impact of conservation interventions is that researchers must estimate the difference between the outcome after an intervention occurred and what the outcome would have been without it (counterfactual). Because the counterfactual is unobservable, researchers must make an untestable assumption that some units (e.g., organisms or sites) that were not exposed to the intervention can be used as a surrogate for the counterfactual (control). The conventional approach is to make a point estimate (i.e., single number along with a confidence interval) of impact, using, for example, regression. Point estimates provide powerful conclusions, but in nonexperimental contexts they depend on strong assumptions about the counterfactual that often lack transparency and credibility. An alternative approach, called partial identification (PI), is to first estimate what the counterfactual bounds would be if the weakest possible assumptions were made. Then, one narrows the bounds by using stronger but credible assumptions based on an understanding of why units were selected for the intervention and how they might respond to it. We applied this approach and compared it with conventional approaches by estimating the impact of a conservation program that removed invasive trees in part of the Cape Floristic Region. Even when we used our largest PI impact estimate, the program's control costs were 1.4 times higher than previously estimated. PI holds promise for applications in conservation science because it encourages researchers to better understand and account for treatment selection biases; can offer insights into the plausibility of conventional point-estimate approaches; could reduce the problem of advocacy in science; might be easier for stakeholders to agree on a bounded estimate than a point estimate where impacts are contentious; and requires only basic arithmetic skills. © 2015 Society for Conservation Biology.
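
    The "weakest possible assumptions" step of partial identification can be illustrated with Manski-style no-assumption bounds, which is not necessarily the exact estimator used in the paper. The sketch below computes worst-case bounds on an average treatment effect for a bounded outcome; the data are simulated placeholders.

```python
import numpy as np

def no_assumption_ate_bounds(y, d, y_min, y_max):
    """Worst-case (no-assumption) bounds on the average treatment effect.

    y : observed outcomes; d : 1 if treated, 0 if control;
    y_min, y_max : logical bounds on the outcome.
    Unobserved counterfactual means are replaced by y_min / y_max.
    """
    d = np.asarray(d, dtype=bool)
    p = d.mean()
    m1, m0 = y[d].mean(), y[~d].mean()
    ey1_lo, ey1_hi = p * m1 + (1 - p) * y_min, p * m1 + (1 - p) * y_max
    ey0_lo, ey0_hi = (1 - p) * m0 + p * y_min, (1 - p) * m0 + p * y_max
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo  # stronger assumptions would narrow these

# Hypothetical example with a binary outcome (e.g., site cleared of invasive trees):
rng = np.random.default_rng(0)
d = rng.integers(0, 2, 500)
y = rng.binomial(1, 0.3 + 0.2 * d)
print(no_assumption_ate_bounds(y, d, 0.0, 1.0))
```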

  5. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
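
    MR-Egger is, at its core, an inverse-variance-weighted regression of variant-outcome associations on variant-exposure associations with an unconstrained intercept. The sketch below implements that regression on simulated summary statistics; the data and the simple orientation rule are illustrative assumptions, and the refinements and caveats discussed in the paper are omitted.

```python
import numpy as np
import statsmodels.api as sm

def mr_egger(beta_exposure, beta_outcome, se_outcome):
    """Minimal MR-Egger regression on summarized genetic data.

    The intercept tests for directional pleiotropy; the slope is the
    causal-effect estimate under the InSIDE assumption.
    """
    # Orient all variants so the exposure association is positive.
    sign = np.sign(beta_exposure)
    bx, by = beta_exposure * sign, beta_outcome * sign
    fit = sm.WLS(by, sm.add_constant(bx), weights=1.0 / se_outcome**2).fit()
    return {"intercept": fit.params[0], "intercept_p": fit.pvalues[0],
            "slope": fit.params[1], "slope_p": fit.pvalues[1]}

# Hypothetical summary statistics for 20 variants, no pleiotropy, causal effect 0.5:
rng = np.random.default_rng(1)
bx = rng.uniform(0.05, 0.2, 20)
se_y = rng.uniform(0.01, 0.03, 20)
by = 0.5 * bx + rng.normal(0, se_y)
print(mr_egger(bx, by, se_y))
```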

  6. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

  7. Effects of turbine technology and land use on wind power resource potential

    NASA Astrophysics Data System (ADS)

    Rinne, Erkka; Holttinen, Hannele; Kiviluoma, Juha; Rissanen, Simo

    2018-06-01

    Estimates of wind power potential are relevant for decision-making in energy policy and business. Such estimates are affected by several uncertain assumptions, most significantly related to wind turbine technology and land use. Here, we calculate the technical and economic onshore wind power potentials with the aim to evaluate the impact of such assumptions using the case-study area of Finland as an example. We show that the assumptions regarding turbine technology and land use policy are highly significant for the potential estimate. Modern turbines with lower specific ratings and greater hub heights improve the wind power potential considerably, even though it was assumed that the larger rotors decrease the installation density and increase the turbine investment costs. New technology also decreases the impact of strict land use policies. Uncertainty in estimating the cost of wind power technology limits the accuracy of assessing economic wind power potential.

  8. Multilevel models for estimating incremental net benefits in multinational studies.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John

    2007-08-01

    Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA need to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.

  9. Economics in "Global Health 2035": a sensitivity analysis of the value of a life year estimates.

    PubMed

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-06-01

    In "Global health 2035: a world converging within a generation," The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach.

  10. Preliminary Thermal Modeling of HI-STORM 100 Storage Modules at Diablo Canyon Power Plant ISFSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuta, Judith M.; Adkins, Harold E.

    Thermal analysis is being undertaken at Pacific Northwest National Laboratory (PNNL) in support of inspections of selected storage modules at various locations around the United States, as part of the Used Fuel Disposition Campaign of the U.S. Department of Energy, Office of Nuclear Energy (DOE-NE) Fuel Cycle Research and Development. This report documents pre-inspection predictions of temperatures for two modules at the Diablo Canyon Power Plant ISFSI identified as candidates for inspection. These are HI-STORM 100 modules of a site-specific design for storing PWR 17x17 fuel in MPC-32 canisters. The temperature predictions reported in this document were obtained with detailed COBRA-SFS models of these storage systems, with the following boundary conditions and assumptions.
    • Storage module overpack configuration based on FSAR documentation of HI-STORM 100S-218, Version B, due to unavailability of site-specific design data for Diablo Canyon ISFSI modules.
    • Individual assembly and total decay heat loadings for each canister, based on at-loading values provided by PG&E, “aged” to time of inspection using ORIGEN modeling.
      o Special note: there is an inherent conservatism of unquantified magnitude (informally estimated as up to approximately 20%) in the utility-supplied at-loading assembly decay heat values.
    • Axial decay heat distributions based on a bounding generic profile for PWR fuel.
    • Axial location of beginning of fuel assumed the same as for WE 17x17 OFA fuel, due to unavailability of specific data for the WE 17x17 STD and WE 17x17 Vantage 5 fuel designs.
    • Ambient conditions of still air at 50°F (10°C) assumed for base-case evaluations.
      o Wind conditions at the Diablo Canyon site are unquantified, due to unavailability of site meteorological data.
      o Additional still-air evaluations were performed at 70°F (21°C), 60°F (16°C), and 40°F (4°C) to cover a range of possible conditions at the time of the inspection. (Calculations were also performed at 80°F (27°C) for comparison with design basis assumptions.)
    All calculations are for steady-state conditions, on the assumption that the surfaces of the module that are accessible for temperature measurements during the inspection will tend to follow ambient temperature changes relatively closely. Comparisons to the results of the inspections, and post-inspection evaluations of temperature measurements obtained in the specific modules, will be documented in a separate follow-on report, to be issued in a timely manner after the inspection has been performed.

  11. Digital discrimination: Political bias in Internet service provision across ethnic groups.

    PubMed

    Weidmann, Nils B; Benitez-Baleato, Suso; Hunziker, Philipp; Glatz, Eduard; Dimitropoulos, Xenofontas

    2016-09-09

    The global expansion of the Internet is frequently associated with increased government transparency, political rights, and democracy. However, this assumption depends on marginalized groups getting access in the first place. Here we document a strong and persistent political bias in the allocation of Internet coverage across ethnic groups worldwide. Using estimates of Internet penetration obtained through network measurements, we show that politically excluded groups suffer from significantly lower Internet penetration rates compared with those in power, an effect that cannot be explained by economic or geographic factors. Our findings underline one of the central impediments to "liberation technology," which is that governments still play a key role in the allocation of the Internet and can, intentionally or not, sabotage its liberating effects. Copyright © 2016, American Association for the Advancement of Science.

  12. Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094

    NASA Technical Reports Server (NTRS)

    Minter, R. T. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or he may begin with the assumption that all the data belongs to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data is assigned to cluster 1. The means and standard deviations are calculated for each cluster.
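
    A minimal sketch of the assign-and-update step described above, assuming the absolute (L1) distance stated in the abstract; the splitting and merging rules of the full ISOCLS procedure are omitted, and the data are placeholders.

```python
import numpy as np

def isocls_assign_and_update(data, means):
    """One assign-and-update pass of an ISOCLS-style clustering step.

    Each point is assigned to the nearest cluster mean in absolute (L1)
    distance; cluster means and standard deviations are then recomputed.
    """
    # L1 distance from every point to every current cluster mean
    dists = np.abs(data[:, None, :] - means[None, :, :]).sum(axis=2)
    labels = dists.argmin(axis=1)
    new_means, new_sds = [], []
    for k in range(means.shape[0]):
        members = data[labels == k]
        if len(members) == 0:                     # keep empty clusters unchanged
            new_means.append(means[k])
            new_sds.append(np.zeros(data.shape[1]))
        else:
            new_means.append(members.mean(axis=0))
            new_sds.append(members.std(axis=0))
    return labels, np.array(new_means), np.array(new_sds)

# If no initial means are supplied, start with a single cluster at the grand mean:
data = np.random.default_rng(2).normal(size=(100, 4))   # e.g., 4 spectral bands
labels, means, sds = isocls_assign_and_update(data, data.mean(axis=0, keepdims=True))
```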

  13. Direct coal liquefaction baseline design and system analysis. Quarterly report, January–March 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-04-01

    The primary objective of the study is to develop a computer model for a base line direct coal liquefaction design based on two stage direct coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a base line design based on previous DOE/PETC results from Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC Staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.

  14. Direct coal liquefaction baseline design and system analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-04-01

    The primary objective of the study is to develop a computer model for a base line direct coal liquefaction design based on two stage direct coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a base line design based on previous DOE/PETC results from Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC Staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.

  15. Intertemporal consumption with directly measured welfare functions and subjective expectations

    PubMed Central

    Kapteyn, Arie; Kleinjans, Kristin J.; van Soest, Arthur

    2010-01-01

    Euler equation estimation of intertemporal consumption models requires many, often unverifiable assumptions. These include assumptions on expectations and preferences. We aim at reducing some of these requirements by using direct subjective information on respondents’ preferences and expectations. The results suggest that individually measured welfare functions and expectations have predictive power for the variation in consumption across households. Furthermore, estimates of the intertemporal elasticity of substitution based on the estimated welfare functions are plausible and of a similar order of magnitude as other estimates found in the literature. The model favored by the data only requires cross-section data for estimation. PMID:20442798

  16. Home Energy Saver

    Science.gov Websites

    All major end uses (heating, cooling, water heating, major appliances, small appliances, and lighting) are included. HES is not a "black box": all methodologies and assumptions are extensively documented.

  17. Problems associated with estimating ground water discharge and recharge from stream-discharge records

    USGS Publications Warehouse

    Halford, K.J.; Mayer, G.C.

    2000-01-01

    Ground water discharge and recharge frequently have been estimated with hydrograph-separation techniques, but the critical assumptions of the techniques have not been investigated. The critical assumptions are that the hydraulic characteristics of the contributing aquifer (recession index) can be estimated from stream-discharge records; that periods of exclusively ground water discharge can be reliably identified; and that stream-discharge peaks approximate the magnitude and timing of recharge events. The first assumption was tested by estimating the recession index from stream-discharge hydrographs, ground water hydrographs, and hydraulic diffusivity estimates from aquifer tests in basins throughout the eastern United States and Montana. The recession index frequently could not be estimated reliably from stream-discharge records alone because many of the estimates of the recession index were greater than 1000 days. The ratio of stream discharge during baseflow periods was two to 36 times greater than the maximum expected range of ground water discharge at 12 of the 13 field sites. The identification of the ground water component of stream-discharge records was ambiguous because drainage from bank-storage, wetlands, surface water bodies, soils, and snowpacks frequently exceeded ground water discharge and also decreased exponentially during recession periods. The timing and magnitude of recharge events could not be ascertained from stream-discharge records at any of the sites investigated because recharge events were not directly correlated with stream peaks. When used alone, the recession-curve-displacement method and other hydrograph-separation techniques are poor tools for estimating ground water discharge or recharge because the major assumptions of the methods are commonly and grossly violated. Multiple, alternative methods of estimating ground water discharge and recharge should be used because of the uncertainty associated with any one technique.
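
    For readers unfamiliar with the recession index, it is conventionally expressed as the time required for ground water discharge to decline one log cycle. The sketch below estimates it by a log-linear fit to a recession limb, using simulated discharge as a placeholder; as the abstract argues, such a fit is unreliable when non-aquifer drainage dominates the record.

```python
import numpy as np

def recession_index(t_days, discharge):
    """Estimate the recession index (days per log cycle of decline)
    from discharge measured during an assumed ground-water recession.

    Fits log10(Q) = a + b*t and returns K = -1/b. If bank storage,
    wetlands, soils, or snowpack drainage are mixed into the record,
    the fitted K does not isolate the aquifer response.
    """
    b, a = np.polyfit(t_days, np.log10(discharge), 1)
    return -1.0 / b

# Hypothetical recession limb: Q declines one log cycle every ~60 days
rng = np.random.default_rng(3)
t = np.arange(0, 40)
q = 100.0 * 10 ** (-t / 60.0) * np.exp(rng.normal(0, 0.01, t.size))
print(recession_index(t, q))   # ~60 days
```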

  18. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  19. Sufficiency and Necessity Assumptions in Causal Structure Induction

    ERIC Educational Resources Information Center

    Mayrhofer, Ralf; Waldmann, Michael R.

    2016-01-01

    Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when…

  20. The economic burden of schizophrenia in Canada in 2004.

    PubMed

    Goeree, R; Farahati, F; Burke, N; Blackhouse, G; O'Reilly, D; Pyne, J; Tarride, J-E

    2005-12-01

    To estimate the financial burden of schizophrenia in Canada in 2004. A prevalence-based cost-of-illness (COI) approach was used. The primary sources of information for the study included a review of the published literature, a review of published reports and documents, secondary analysis of administrative datasets, and information collected directly from various federal and provincial government programs and services. The literature review included publications up to April 2005 reported in MedLine, EMBASE and PsychINFO. Where specific information from a province was not available, the method of mean substitution from other provinces was used. Costs incurred by various levels/departments of government were separated into healthcare and non-healthcare costs. Also included in the analysis was the value of lost productivity for premature mortality and morbidity associated with schizophrenia. Sensitivity analysis was used to test major cost assumptions used in the analysis. Where possible, all resource utilization estimates for the financial burden of schizophrenia were obtained for 2004 and are expressed in 2004 Canadian dollars (CAN dollars). The estimated number of persons with schizophrenia in Canada in 2004 was 234 305 (95% CI, 136 201-333 402). The direct healthcare and non-healthcare costs were estimated to be 2.02 billion CAN dollars in 2004. There were 374 deaths attributed to schizophrenia. This, combined with the high unemployment rate due to schizophrenia, resulted in an additional productivity morbidity and mortality loss estimate of 4.83 billion CAN dollars, for a total cost estimate in 2004 of 6.85 billion CAN dollars. By far the largest component of the total cost estimate was for productivity losses associated with morbidity in schizophrenia (70% of total costs), and the results showed that total cost estimates were most sensitive to alternative assumptions regarding the additional unemployment due to schizophrenia in Canada. Despite significant improvements in the past decade in pharmacotherapy, programs and services available for patients with schizophrenia, the economic burden of schizophrenia in Canada remains high. The most significant factor affecting the cost of schizophrenia in Canada is lost productivity due to morbidity. Programs targeted at improving patient symptoms and functioning to increase workforce participation have the potential to make a significant contribution in reducing the cost of this severe mental illness in Canada.

  1. Estimating psychiatric manpower requirements based on patients' needs.

    PubMed

    Faulkner, L R; Goldman, C R

    1997-05-01

    To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
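
    The sensitivity described above follows directly from the arithmetic of the final step. The sketch below uses entirely hypothetical inputs (not the paper's data) to show how hours of service per patient per year scale with the assumed workforce size, clinical hours, and treated prevalence.

```python
# Minimal sketch of the final arithmetic in a needs-based manpower estimate.
# All numbers below are hypothetical placeholders, not the paper's data.
psychiatrists_fte   = 40_000        # assumed available full-time equivalents
clinical_hours_year = 1_500         # assumed direct-service hours per FTE per year
patients_in_need    = 8_000_000     # assumed treated prevalence under a given scenario

hours_per_patient_year = psychiatrists_fte * clinical_hours_year / patients_in_need
print(f"{hours_per_patient_year:.1f} h per patient per year")
# Changing any single assumption rescales the result proportionally, which is
# why small differences in assumptions produce such a wide range of estimates.
```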

  2. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
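
    For context, the Gumbel assumption being tested is what yields the closed-form logit choice probabilities. The sketch below simulates choices that satisfy the assumption and recovers the coefficients by maximum likelihood; it illustrates only the standard MNL, not the semi-nonparametric nesting model or the likelihood ratio test developed in the paper, and the data are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def mnl_neg_loglik(beta, X, y):
    """Negative log-likelihood of a multinomial logit model.

    X : (n, J, k) array of alternative attributes; y : index of chosen alternative.
    The softmax probabilities follow from assuming i.i.d. standard Gumbel errors
    on the utilities, which is exactly the assumption the paper's test targets.
    """
    v = X @ beta                               # (n, J) systematic utilities
    v -= v.max(axis=1, keepdims=True)          # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].sum()

# Simulate choices that do satisfy the Gumbel assumption (hypothetical data):
rng = np.random.default_rng(4)
n, J, k = 2000, 3, 2
X = rng.normal(size=(n, J, k))
beta_true = np.array([1.0, -0.5])
u = X @ beta_true + rng.gumbel(size=(n, J))
y = u.argmax(axis=1)

fit = minimize(mnl_neg_loglik, x0=np.zeros(k), args=(X, y), method="BFGS")
print(fit.x)   # should be close to beta_true
```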

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  4. Density, distribution, and genetic structure of grizzly bears in the Cabinet-Yaak Ecosystem

    USGS Publications Warehouse

    Macleod, Amy C.; Boyd, Kristina L.; Boulanger, John; Royle, J. Andrew; Kasworm, Wayne F.; Paetkau, David; Proctor, Michael F.; Annis, Kim; Graves, Tabitha A.

    2016-01-01

    The conservation status of the 2 threatened grizzly bear (Ursus arctos) populations in the Cabinet-Yaak Ecosystem (CYE) of northern Montana and Idaho had remained unchanged since designation in 1975; however, the current demographic status of these populations was uncertain. No rigorous data on population density and distribution or analysis of recent population genetic structure were available to measure the effectiveness of conservation efforts. We used genetic detection data from hair corral, bear rub, and opportunistic sampling in traditional and spatial capture–recapture models to generate estimates of abundance and density of grizzly bears in the CYE. We calculated mean bear residency on our sampling grid from telemetry data using Huggins and Pledger models to estimate the average number of bears present and to correct our superpopulation estimates for lack of geographic closure. Estimated grizzly bear abundance (all sex and age classes) in the CYE in 2012 was 48–50 bears, approximately half the population recovery goal. Grizzly bear density in the CYE (4.3–4.5 grizzly bears/1,000 km2) was among the lowest of interior North American populations. The sizes of the Cabinet (n = 22–24) and Yaak (n = 18–22) populations were similar. Spatial models produced similar estimates of abundance and density with comparable precision without requiring radio-telemetry data to address assumptions of geographic closure. The 2 populations in the CYE were demographically and reproductively isolated from each other and the Cabinet population was highly inbred. With parentage analysis, we documented natural migrants to the Cabinet and Yaak populations by bears born to parents in the Selkirk and Northern Continental Divide populations. These events supported data from other sources suggesting that the expansion of neighboring populations may eventually help sustain the CYE populations. However, the small size, isolation, and inbreeding documented by this study demonstrate the need for comprehensive management designed to support CYE population growth and increased connectivity and gene flow with other populations.

  5. Pilot production system cost/benefit analysis: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Digital Document Storage (DDS)/Pilot Production System (PPS) will provide cost effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The DDS/PPS will result in major benefits, such as improved document reproduction quality within a shorter time frame than is currently possible. In addition, the DDS/PPS will provide an important strategic value through the construction of a digital document archive. It is highly recommended that NASA proceed with the DDS Prototype System and a rapid prototyping development methodology in order to validate recent working assumptions upon which the success of the DDS/PPS is dependent.

  6. Estimating the Global Prevalence of Inadequate Zinc Intake from National Food Balance Sheets: Effects of Methodological Assumptions

    PubMed Central

    Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.

    2012-01-01

    Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12–66%, depending on which methodological assumptions were applied. However, country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
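
    The prevalence calculation rests on comparing the per-capita absorbable zinc supply with the physiological requirement under an assumed intake distribution. The sketch below is a simplified EAR cut-point illustration with placeholder inputs and an assumed 25% coefficient of variation; it is not the paper's full model, which also involves phytate data and the Miller absorption equation.

```python
from scipy.stats import norm

def prevalence_inadequate(mean_intake, ear, cv=0.25):
    """EAR cut-point sketch: share of the population whose usual absorbable zinc
    intake falls below the estimated average requirement, assuming usual intake
    is normally distributed around the per-capita supply with coefficient of
    variation `cv`. The 25% CV and the inputs below are illustrative assumptions.
    """
    return norm.cdf(ear, loc=mean_intake, scale=cv * mean_intake)

# Hypothetical country: absorbable zinc supply 2.4 mg/d, mean requirement 2.0 mg/d
print(f"{prevalence_inadequate(2.4, 2.0):.1%}")
```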

  7. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    PubMed

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions

    USGS Publications Warehouse

    Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.

    2015-01-01

    Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K.T.; Conrado, C.L.; Robison, W.L.

    A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.

  10. Estimation of the incubation period of influenza A (H1N1-2009) among imported cases: addressing censoring using outbreak data at the origin of importation.

    PubMed

    Nishiura, Hiroshi; Inaba, Hisashi

    2011-03-07

    Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii, and the hazard rate of infection for the latter assumption was retrieved, in Hawaii, from local outbreak data. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days according to different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    PubMed

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Economics in “Global Health 2035”: a sensitivity analysis of the value of a life year estimates

    PubMed Central

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-01-01

    Background In “Global health 2035: a world converging within a generation,” The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. Methods The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. Findings We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Conclusion Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach. PMID:28400950

  13. Effects of Model Formulation on Estimates of Health in Individual Right Whales (Eubalaena glacialis).

    PubMed

    Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Thomas, Len; Harwood, John; Clark, James S

    2016-01-01

    Right whales are vulnerable to many sources of anthropogenic disturbance including ship strikes, entanglement with fishing gear, and anthropogenic noise. The effect of these factors on individual health is unclear. A statistical model using photographic evidence of health was recently built to infer the true or hidden health of individual right whales. However, two important prior assumptions about the role of missing data and unexplained variance on the estimates were not previously assessed. Here we tested these factors by varying prior assumptions and model formulation. We found sensitivity to each assumption and used the output to make guidelines on future model formulation.

  14. CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES.

    PubMed

    Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J

    2017-04-01

    Case-control studies are designed towards studying associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
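
    The baseline IPW estimator that the proposed control-function estimators improve upon reweights cases and controls back to the source population using the known disease prevalence. The sketch below shows that baseline on simulated data; the weighting scheme is the standard one, the data are placeholders, and robust standard errors are omitted.

```python
import numpy as np
import statsmodels.api as sm

def ipw_secondary_outcome(y, x, case, prevalence):
    """IPW regression of a secondary outcome in a case-control sample.

    Cases and controls are re-weighted back to the source population using
    the known disease prevalence, then a weighted regression of the secondary
    outcome y on covariates x is fitted (point estimates only; standard errors
    would need a robust/sandwich correction).
    """
    case = np.asarray(case, dtype=float)
    p_case_sample = case.mean()                    # fraction of cases in the sample
    w = np.where(case == 1,
                 prevalence / p_case_sample,
                 (1 - prevalence) / (1 - p_case_sample))
    return sm.WLS(y, sm.add_constant(x), weights=w).fit()

# Hypothetical 1:1 case-control data with known 5% disease prevalence:
rng = np.random.default_rng(5)
n = 1000
case = np.repeat([1, 0], n // 2)
x = rng.normal(size=n) + 0.3 * case                   # risk factor, related to disease
y = 2.0 + 0.5 * x + 0.8 * case + rng.normal(size=n)   # secondary outcome
print(ipw_secondary_outcome(y, x, case, prevalence=0.05).params)
```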

  15. ESTIMATING TREATMENT EFFECTS ON HEALTHCARE COSTS UNDER EXOGENEITY: IS THERE A ‘MAGIC BULLET’?

    PubMed Central

    Polsky, Daniel; Manning, Willard G.

    2011-01-01

    Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate about the best estimator for outcomes such as health care cost data, as they are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature. Propensity score estimators are proposed as alternatives to overcoming these challenges. Using simulations, we find that in moderate size samples (n = 5000), balancing on propensity scores that are estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, unlike regression models, even if a formal model for outcomes is not required, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a ‘proof by contradiction’ approach, proves that no one estimator can be considered the best under all data generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data generating processes but is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no ‘magic bullets’ when it comes to estimating treatment effects in health care costs. Care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of alternative methods in a cost dataset on breast cancer treatment. PMID:22199462
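
    One of the estimators under comparison, inverse-propensity weighting, can be sketched compactly. The example below uses a logistic propensity model and normalized weights on simulated right-skewed "cost" data; it is an illustration of the general estimator, not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(y, t, X):
    """Inverse-propensity-weighted (normalized) estimate of the average treatment effect.

    Propensity scores come from a logistic regression of treatment on covariates.
    Skewed cost outcomes and extreme weights make this estimator fragile, which
    is part of the paper's point.
    """
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    w1, w0 = t / ps, (1 - t) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# Hypothetical right-skewed cost data with confounded treatment assignment:
rng = np.random.default_rng(6)
n = 5000
X = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
t = rng.binomial(1, p)
y = np.exp(1.0 + 0.3 * X[:, 0] + 0.4 * t + rng.normal(0, 0.8, n))  # lognormal "costs"
print(ipw_ate(y, t, X))
```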

  16. Robustness of location estimators under t-distributions: a literature review

    NASA Astrophysics Data System (ADS)

    Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.

    2017-03-01

    The assumption of normality is commonly used in estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since the t-distributions have longer tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use the onion yield data which includes outliers as a case study and showed that the t model produces better fit than the normal model.
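
    A small illustration of the robustness argument, under the assumption that maximum likelihood fitting is used for both models: with two gross outliers added to otherwise normal data, the location estimate from a normal model is pulled toward the outliers while the t-model estimate stays near the bulk of the data. The data are simulated placeholders, not the onion yield data.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(7)
clean = rng.normal(loc=10.0, scale=1.0, size=50)
data = np.append(clean, [25.0, 27.0])           # two gross outliers

mu_norm, _ = norm.fit(data)                     # ML location under normality (= sample mean)
df_t, mu_t, scale_t = t.fit(data)               # ML location under a t model (heavier tails)

print(f"normal-model location : {mu_norm:.2f}")  # shifted toward the outliers
print(f"t-model location      : {mu_t:.2f}")     # close to the bulk of the data
```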

  17. Home Energy Saver

    Science.gov Websites

    Want information on the technical assumptions and methods behind the site? Documentation on appliances, heating/cooling methods, and the tariff analysis methods is all available.

  18. Validation of abundance estimates from mark-recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Treesearch

    Amanda E. Rosenberger; Jason B. Dunham

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln–Peterson mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams....
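
    For reference, the two abundance estimators being validated have simple closed forms. The sketch below implements Chapman's bias-corrected Lincoln-Petersen estimator and a two-pass removal estimator with placeholder catch numbers; both rest on the capture-probability assumptions the paper evaluates, and the paper's field protocol is not reproduced here.

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    Assumes a closed population, no mark loss, and equal catchability."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Two-pass removal estimator (Seber-Le Cren form).
    Assumes equal capture probability on both passes; undefined if c2 >= c1."""
    if c2 >= c1:
        raise ValueError("removal-model assumption violated: catch did not decline")
    return c1 * c1 / (c1 - c2)

# Hypothetical catches for one stream reach:
print(lincoln_petersen(marked_first=60, caught_second=55, recaptured=20))  # ~162 fish
print(two_pass_removal(c1=70, c2=30))                                      # ~122 fish
```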

  19. Estimation of the energy loss at the blades in rowing: common assumptions revisited.

    PubMed

    Hofmijster, Mathijs; De Koning, Jos; Van Soest, A J

    2010-08-01

    In rowing, power is inevitably lost as kinetic energy is imparted to the water during push-off with the blades. Power loss is estimated from reconstructed blade kinetics and kinematics. Traditionally, it is assumed that the oar is completely rigid and that force acts strictly perpendicular to the blade. The aim of the present study was to evaluate how reconstructed blade kinematics, kinetics, and average power loss are affected by these assumptions. A calibration experiment with instrumented oars and oarlocks was performed to establish relations between measured signals and oar deformation and blade force. Next, an on-water experiment was performed with a single female world-class rower rowing at constant racing pace in an instrumented scull. Blade kinematics, kinetics, and power loss under different assumptions (rigid versus deformable oars; absence or presence of a blade force component parallel to the oar) were reconstructed. Estimated power losses at the blades are 18% higher when parallel blade force is incorporated. Incorporating oar deformation affects reconstructed blade kinematics and instantaneous power loss, but has no effect on estimation of power losses at the blades. Assumptions on oar deformation and blade force direction have implications for the reconstructed blade kinetics and kinematics. Neglecting parallel blade forces leads to a substantial underestimation of power losses at the blades.

  20. International Organisations and the Construction of the Learning Active Citizen: An Analysis of Adult Learning Policy Documents from a Durkheimian Perspective

    ERIC Educational Resources Information Center

    Field, John; Schemmann, Michael

    2017-01-01

    The article analyses how citizenship is conceptualised in policy documents of four key international organisations. The basic assumption is that public policy has not turned away from adult learning for active citizenship, but that there are rather new ways in which international governmental organisations conceptualise and in some cases seek to…

  1. Analyses of Assumptions and Errors in the Calculation of Stomatal Conductance from Sap Flux Measurements

    Treesearch

    Brent E. Ewers; Ram Oren

    2000-01-01

    We analyzed assumptions and measurement errors in estimating canopy transpiration (EL) from sap flux (JS) measured with Granier-type sensors, and in calculating canopy stomatal conductance (GS) from EL...

  2. Evolution of product lifespan and implications for environmental assessment and management: a case study of personal computers in higher education.

    PubMed

    Babbitt, Callie W; Kahhat, Ramzy; Williams, Eric; Babbitt, Gregory A

    2009-07-01

    Product lifespan is a fundamental variable in understanding the environmental impacts associated with the life cycle of products. Existing life cycle and materials flow studies of products, almost without exception, consider lifespan to be constant over time. To determine the validity of this assumption, this study provides an empirical documentation of the long-term evolution of personal computer lifespan, using a major U.S. university as a case study. Results indicate that over the period 1985-2000, computer lifespan (purchase to "disposal") decreased steadily from a mean of 10.7 years in 1985 to 5.5 years in 2000. The distribution of lifespan also evolved, becoming narrower over time. Overall, however, lifespan distribution was broader than normally considered in life cycle assessments or materials flow forecasts of electronic waste management for policy. We argue that these results suggest that at least for computers, the assumption of constant lifespan is problematic and that it is important to work toward understanding the dynamics of use patterns. We modify an age-structured model of population dynamics from biology as a modeling approach to describe product life cycles. Lastly, the purchase share and generation of obsolete computers from the higher education sector is estimated using different scenarios for the dynamics of product lifespan.

  3. Complex Adaptive System Models and the Genetic Analysis of Plasma HDL-Cholesterol Concentration

    PubMed Central

    Rea, Thomas J.; Brown, Christine M.; Sing, Charles F.

    2006-01-01

    Despite remarkable advances in diagnosis and therapy, ischemic heart disease (IHD) remains a leading cause of morbidity and mortality in industrialized countries. Recent efforts to estimate the influence of genetic variation on IHD risk have focused on predicting individual plasma high-density lipoprotein cholesterol (HDL-C) concentration. Plasma HDL-C concentration (mg/dl), a quantitative risk factor for IHD, has a complex multifactorial etiology that involves the actions of many genes. Single gene variations may be necessary but are not individually sufficient to predict a statistically significant increase in risk of disease. The complexity of phenotype-genotype-environment relationships involved in determining plasma HDL-C concentration has challenged commonly held assumptions about genetic causation and has led to the question of which combination of variations, in which subset of genes, in which environmental strata of a particular population significantly improves our ability to predict high or low risk phenotypes. We document the limitations of inferences from genetic research based on commonly accepted biological models, consider how evidence for real-world dynamical interactions between HDL-C determinants challenges the simplifying assumptions implicit in traditional linear statistical genetic models, and conclude by considering research options for evaluating the utility of genetic information in predicting traits with complex etiologies. PMID:17146134

  4. On the Capacity of Attention: Its Estimation and Its Role in Working Memory and Cognitive Aptitudes

    PubMed Central

    Cowan, Nelson; Elliott, Emily M.; Saults, J. Scott; Morey, Candice C.; Mattox, Sam; Hismjatullina, Anna; Conway, Andrew R.A.

    2008-01-01

    Working memory (WM) is the set of mental processes holding limited information in a temporarily accessible state in service of cognition. We provide a theoretical framework to understand the relation between WM and aptitude measures. The WM measures that have yielded high correlations with aptitudes include separate storage and processing task components, on the assumption that WM involves both storage and processing. We argue that the critical aspect of successful WM measures is that rehearsal and grouping processes are prevented, allowing a clearer estimate of how many separate chunks of information the focus of attention circumscribes at once. Storage-and-processing tasks correlate with aptitudes, according to this view, largely because the processing task prevents rehearsal and grouping of items to be recalled. In a developmental study, we document that several scope-of-attention measures that do not include a separate processing component, but nevertheless prevent efficient rehearsal or grouping, also correlate well with aptitudes and with storage-and-processing measures. So does digit span in children too young to rehearse. PMID:16039935

  5. Validation of Growth Layer Group (GLG) depositional rate using daily incremental growth lines in the dentin of beluga (Delphinapterus leucas (Pallas, 1776)) teeth

    PubMed Central

    Waugh, David A.; Suydam, Robert S.; Ortiz, Joseph D.; Thewissen, J. G. M.

    2018-01-01

    Counts of Growth Layer Groups (GLGs) in the dentin of marine mammal teeth are widely used as indicators of age. In most marine mammals, observations document that GLGs are deposited yearly, but in beluga whales, some studies have supported the view that two GLGs are deposited each year. Our understanding of beluga life-history differs substantially depending on assumptions regarding the timing of GLG deposition; therefore, resolving this issue has important considerations for population assessments. In this study, we used incremental lines that represent daily pulses of dentin mineralization to test the hypothesis that GLGs in beluga dentin are deposited on a yearly basis. Our estimate of the number of daily growth lines within one GLG is remarkably close to 365 days within error, supporting the hypothesis that GLGs are deposited annually in beluga. We show that measurement of daily growth increments can be used to validate the time represented by GLGs in beluga. Furthermore, we believe this methodology may have broader applications to age estimation in other taxa. PMID:29338011

  6. Validation of Growth Layer Group (GLG) depositional rate using daily incremental growth lines in the dentin of beluga (Delphinapterus leucas (Pallas, 1776)) teeth.

    PubMed

    Waugh, David A; Suydam, Robert S; Ortiz, Joseph D; Thewissen, J G M

    2018-01-01

    Counts of Growth Layer Groups (GLGs) in the dentin of marine mammal teeth are widely used as indicators of age. In most marine mammals, observations document that GLGs are deposited yearly, but in beluga whales, some studies have supported the view that two GLGs are deposited each year. Our understanding of beluga life-history differs substantially depending on assumptions regarding the timing of GLG deposition; therefore, resolving this issue has important considerations for population assessments. In this study, we used incremental lines that represent daily pulses of dentin mineralization to test the hypothesis that GLGs in beluga dentin are deposited on a yearly basis. Our estimate of the number of daily growth lines within one GLG is remarkably close to 365 days within error, supporting the hypothesis that GLGs are deposited annually in beluga. We show that measurement of daily growth increments can be used to validate the time represented by GLGs in beluga. Furthermore, we believe this methodology may have broader applications to age estimation in other taxa.

  7. Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity

    NASA Astrophysics Data System (ADS)

    Yaesoubi, Maziar; Calhoun, Vince D.

    2017-08-01

    In this work, we discuss estimation of the dynamic dependence of a multivariate signal. Commonly used approaches are often based on a locality assumption (e.g., a sliding window), which can miss spontaneous changes because they are blurred with local but unrelated changes. We discuss recent approaches to overcome this limitation, including 1) a wavelet-space approach, which essentially adapts the window to the underlying frequency content, and 2) a sparse signal representation, which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such an assumption, as in brain analysis. Results on several large resting-state fMRI data sets highlight the potential of these approaches.
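
    A sliding-window correlation of the kind the abstract refers to can be written in a few lines; the sketch below uses simulated signals and an arbitrary window length purely to illustrate the locality assumption being discussed.

    # Simple sliding-window estimate of time-varying correlation between two
    # signals; window length and signals are arbitrary choices for this sketch.
    import numpy as np

    def sliding_window_corr(x, y, win=30):
        """Pearson correlation of x and y in overlapping windows of length win."""
        n = len(x)
        return np.array([np.corrcoef(x[t:t + win], y[t:t + win])[0, 1]
                         for t in range(n - win + 1)])

    rng = np.random.default_rng(1)
    t = np.arange(600)
    shared = np.sin(2 * np.pi * t / 100)
    x = shared + 0.5 * rng.standard_normal(600)
    y = np.where(t < 300, 1, -1) * shared + 0.5 * rng.standard_normal(600)  # sign flip at t=300

    r = sliding_window_corr(x, y, win=50)
    print(r[:5], r[-5:])  # correlation drifts from positive to negative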

  8. Estimating disease prevalence from two-phase surveys with non-response at the second phase

    PubMed Central

    Gao, Sujuan; Hui, Siu L.; Hall, Kathleen S.; Hendrie, Hugh C.

    2010-01-01

    In this paper we compare several methods for estimating population disease prevalence from data collected by two-phase sampling when there is non-response at the second phase. The traditional weighting type estimator requires the missing completely at random assumption and may yield biased estimates if the assumption does not hold. We review two approaches and propose one new approach to adjust for non-response assuming that the non-response depends on a set of covariates collected at the first phase: an adjusted weighting type estimator using estimated response probability from a response model; a modelling type estimator using predicted disease probability from a disease model; and a regression type estimator combining the adjusted weighting type estimator and the modelling type estimator. These estimators are illustrated using data from an Alzheimer’s disease study in two populations. Simulation results are presented to investigate the performances of the proposed estimators under various situations. PMID:10931514
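
    The adjusted weighting-type estimator described above can be sketched as inverse-probability weighting with a response model fitted on first-phase covariates; the example below uses simulated data and assumed variable names, not the Alzheimer's disease study data.

    # Sketch of an adjusted weighting-type estimator: second-phase
    # non-response is modelled with logistic regression on a first-phase
    # covariate, and each respondent is weighted by the inverse of
    # selection probability times estimated response probability.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 5000
    age = rng.normal(75, 6, n)                                     # first-phase covariate
    disease = rng.binomial(1, 1 / (1 + np.exp(-(age - 78) / 3)))   # true status
    sampled = rng.binomial(1, 0.4, n).astype(bool)                 # selected for phase two
    respond = rng.binomial(1, 1 / (1 + np.exp((age - 80) / 4)))    # response depends on age
    observed = sampled & (respond == 1)

    # response model fitted among phase-two selectees
    resp_model = LogisticRegression().fit(age[sampled].reshape(-1, 1), respond[sampled])
    p_resp = resp_model.predict_proba(age.reshape(-1, 1))[:, 1]

    w = 1.0 / (0.4 * p_resp[observed])          # inverse selection x response probability
    prevalence = np.sum(w * disease[observed]) / np.sum(w)
    print(f"weighted prevalence estimate: {prevalence:.3f}  (true: {disease.mean():.3f})")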

  9. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.

  10. Global cost of child survival: estimates from country-level validation

    PubMed Central

    van Ekdom, Liselore; Scherpbier, Robert W; Niessen, Louis W

    2011-01-01

    Abstract Objective To cross-validate the global cost of scaling up child survival interventions to achieve the fourth Millennium Development Goal (MDG4) as estimated by the World Health Organization (WHO) in 2007 by using the latest country-provided data and new assumptions. Methods After the main cost categories for each country were identified, validation questionnaires were sent to 32 countries with high child mortality. Publicly available estimates for disease incidence, intervention coverage, prices and resources for individual-level and programme-level activities were validated against local data. Nine updates to the 2007 WHO model were generated using revised assumptions. Finally, estimates were extrapolated to 75 countries and combined with cost estimates for immunization and malaria programmes and for programmes for the prevention of mother-to-child transmission of the human immunodeficiency virus (HIV). Findings Twenty-six countries responded. Adjustments were largest for system- and programme-level data and smallest for patient data. Country-level validation caused a 53% increase in original cost estimates (i.e. 9 billion 2004 United States dollars [US$]) for 26 countries owing to revised system and programme assumptions, especially surrounding community health worker costs. The additional effect of updated population figures was small; updated epidemiologic figures increased costs by US$ 4 billion (+15%). New unit prices in the 26 countries that provided data increased estimates by US$ 4.3 billion (+16%). Extrapolation to 75 countries increased the original price estimate by US$ 33 billion (+80%) for 2010–2015. Conclusion Country-level validation had a significant effect on the cost estimate. Price adaptations and programme-related assumptions contributed substantially. An additional 74 billion US$ 2005 (representing a 12% increase in total health expenditure) would be needed between 2010 and 2015. Given resource constraints, countries will need to prioritize health activities within their national resource envelope. PMID:21479091

  11. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
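
    The instrumental-variables estimator mentioned above reduces, in its simplest (Wald) form, to the intention-to-treat effect divided by the between-arm difference in treatment uptake; the sketch below uses illustrative numbers, not the trial's data.

    # Minimal sketch of the Wald (instrumental-variables) estimator of the
    # complier average causal effect; the input values are invented.
    def cace_wald(y1, y0, uptake1, uptake0):
        """CACE = ITT risk difference / difference in compliance between arms."""
        itt = y1 - y0
        compliance_diff = uptake1 - uptake0
        return itt / compliance_diff

    # e.g. 10% vs 22% contamination risk, 60% vs 5% uptake of the intervention
    print(round(cace_wald(y1=0.10, y0=0.22, uptake1=0.60, uptake0=0.05), 3))  # about -0.218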

  12. Discussion of examination of a cored hydraulic fracture in a deep gas well

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nolte, K.G.

    Warpinski et al. document information found from a core through a formation after a hydraulic fracture treatment. As they indicate, the core provides the first detailed evaluation of an actual propped hydraulic fracture away from the well and at a significant depth, and this evaluation leads to findings that deviate substantially from the assumptions incorporated into current fracturing models. In this discussion, a defense of current fracture design assumptions is developed. The affirmation of current assumptions, for general industry applications, is based on an assessment of the global impact of the local complexity found in the core. The assessment leads to recommendations for the evolution of fracture design practice.

  13. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  14. 2010 Cost of Wind Energy Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tegen, S.; Hand, M.; Maples, B.

    2012-04-01

    This document provides a detailed description of NREL's levelized cost of wind energy equation, assumptions and results in 2010, including historical cost trends and future projections for land-based and offshore utility-scale wind.
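
    As a hedged illustration of the kind of levelized-cost roll-up such a review documents, the sketch below implements a generic fixed-charge-rate LCOE formula; the function name and all input values are placeholders and should not be read as the report's 2010 assumptions or NREL's exact equation.

    # Hedged sketch of a fixed-charge-rate LCOE calculation of the general
    # form LCOE = (FCR * CapEx + annual OpEx) / annual energy production.
    def lcoe_usd_per_mwh(capex_usd_per_kw, fcr, opex_usd_per_kw_yr,
                         net_capacity_factor):
        annual_energy_mwh_per_kw = 8760 * net_capacity_factor / 1000.0
        annual_cost = fcr * capex_usd_per_kw + opex_usd_per_kw_yr
        return annual_cost / annual_energy_mwh_per_kw

    # placeholder inputs, not the report's values
    print(round(lcoe_usd_per_mwh(capex_usd_per_kw=2100, fcr=0.095,
                                 opex_usd_per_kw_yr=35,
                                 net_capacity_factor=0.38), 1))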

  15. 2010 Cost of Wind Energy Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tegen, S.; Hand, M.; Maples, B.

    2012-04-01

    This document provides a detailed description of NREL's levelized cost of wind energy equation, assumptions, and results in 2010, including historical cost trends and future projections for land-based and offshore utility-scale wind.

  16. Clarifying Objectives and Results of Equivalent System Mass Analyses for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Levri, Julie A.; Drysdale, Alan E.

    2003-01-01

    This paper discusses some of the analytical decisions that an investigator must make during the course of a life support system trade study. Equivalent System Mass (ESM) is often applied to evaluate trade study options in the Advanced Life Support (ALS) Program. ESM can be used to identify which of several options that meet all requirements are most likely to have the lowest cost. It can also be used to identify which of the many interacting parts of a life support system have the greatest impact and sensitivity to assumptions. This paper summarizes recommendations made in the newly developed ALS ESM Guidelines Document and expands on some of the issues relating to trade studies that involve ESM. In particular, the following three points are expounded: 1) The importance of objectives: Analysis objectives drive the approach to any trade study, including identification of assumptions, selection of characteristics to compare in the analysis, and the most appropriate techniques for reflecting those characteristics. 2) The importance of results interpretation: The accuracy desired in the results depends upon the analysis objectives, whereas the realized accuracy is determined by the data quality and degree of detail in analysis methods. 3) The importance of analysis documentation: Documentation of assumptions and data modifications is critical for effective peer evaluation of any trade study. ESM results are analysis-specific and should always be reported in context, rather than as solitary values. For this reason, results reporting should be done with adequate rigor to allow for verification by other researchers.
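
    As a hedged sketch of the kind of roll-up ESM performs, the function below converts volume, power, cooling, and crew-time terms to mass using location-specific equivalency factors; the factor values and field names are illustrative assumptions only, and the ALS ESM Guidelines Document defines the actual terms and values.

    # Hedged sketch of an equivalent-system-mass style roll-up; all
    # equivalency factors below are placeholders, not documented values.
    def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                               crewtime_hr_per_yr, duration_yr, eq):
        return (mass_kg
                + volume_m3 * eq["kg_per_m3"]
                + power_kw * eq["kg_per_kw"]
                + cooling_kw * eq["kg_per_kw_cooling"]
                + crewtime_hr_per_yr * duration_yr * eq["kg_per_crew_hr"])

    example_factors = {"kg_per_m3": 66.7, "kg_per_kw": 476,
                       "kg_per_kw_cooling": 60, "kg_per_crew_hr": 1.0}
    print(equivalent_system_mass(250, 2.0, 1.5, 1.5, 100, 2, example_factors))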

  17. The Robustness of LOGIST and BILOG IRT Estimation Programs to Violations of Local Independence.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…

  18. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Treesearch

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...

  19. Occupancy estimation and the closure assumption

    USGS Publications Warehouse

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.

  20. Full-scale system impact analysis: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Digital Document Storage Full Scale System can provide cost effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this Impact Analysis. It is highly recommended that NASA proceed with a phased, communications requirement analysis to ensure that adequate communications service can be supplied at a reasonable cost in order to validate recent working assumptions upon which the success of the DDS Full Scale System is dependent.

  1. The composite dynamic method as evidence for age-specific waterfowl mortality

    USGS Publications Warehouse

    Burnham, Kenneth P.; Anderson, David R.

    1979-01-01

    For the past 25 years estimation of mortality rates for waterfowl has been based almost entirely on the composite dynamic life table. We examined the specific assumptions for this method and derived a valid goodness of fit test. We performed this test on 45 data sets representing a cross section of banded samples for various waterfowl species, geographic areas, banding periods, and age/sex classes. We found that: (1) the composite dynamic method was rejected (P <0.001) in 37 of the 45 data sets (in fact, 29 were rejected at P <0.00001) and (2) recovery and harvest rates are year-specific (a critical violation of the necessary assumptions). We conclude that the restrictive assumptions required for the composite dynamic method to produce valid estimates of mortality rates are not met in waterfowl data. Also we demonstrate that even when the required assumptions are met, the method produces very biased estimates of age-specific mortality rates. We believe the composite dynamic method should not be used in the analysis of waterfowl banding data. Furthermore, the composite dynamic method does not provide valid evidence for age-specific mortality rates in waterfowl.

  2. Flood return level analysis of Peaks over Threshold series under changing climate

    NASA Astrophysics Data System (ADS)

    Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.

    2016-12-01

    Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis with the stationarity assumption has been challenged by changing environments. A method that takes into consideration the nonstationarity context has been extended to derive flood return levels for Peaks over Threshold (POT) series. With application to POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distribution assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson distribution assumption. Flood return levels were extrapolated in nonstationarity context for the POT series of the Weihe basin, China under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can be different with an assumption of Poisson and NB distribution, respectively. The difference is found to be related to the threshold value of POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
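
    The distribution comparison described above can be sketched by fitting both a Poisson and a negative binomial model to annual exceedance counts and comparing log-likelihoods; the counts below are simulated and overdispersed, not the Weihe basin series, and the negative binomial is fitted by the method of moments for simplicity.

    # Sketch comparing Poisson and negative binomial descriptions of annual
    # exceedance counts from a peaks-over-threshold series (simulated data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    counts = rng.negative_binomial(n=3, p=0.5, size=60)   # overdispersed yearly counts

    lam = counts.mean()
    mean, var = counts.mean(), counts.var(ddof=1)
    # moment-matched NB parameters (valid only when var > mean)
    p_hat = mean / var
    n_hat = mean * p_hat / (1 - p_hat)

    ll_pois = stats.poisson.logpmf(counts, lam).sum()
    ll_nb = stats.nbinom.logpmf(counts, n_hat, p_hat).sum()
    print(f"Poisson log-likelihood:       {ll_pois:.1f}")
    print(f"Neg. binomial log-likelihood: {ll_nb:.1f}  (higher = better fit)")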

  3. Actuarial calculation for PSAK-24 purposes post-employment benefit using market-consistent approach

    NASA Astrophysics Data System (ADS)

    Effendie, Adhitya Ronnie

    2015-12-01

    In this paper we use a market-consistent approach to calculate the present value of the obligation for a company's post-employment benefits in accordance with PSAK-24 (the Indonesian accounting standard). We set actuarial assumptions such as the Indonesian TMI 2011 mortality tables for the mortality assumption, an accumulated salary function for the wage assumption, a scaled (to mortality) disability assumption and a pre-defined turnover rate for the termination assumption. For the economic assumption, we use a binomial tree method with the estimated discount rate as its average movement. In accordance with PSAK-24, the Projected Unit Credit method has been adapted to determine the present value of the obligation (the actuarial liability), so we use this method with a modification in its discount function.
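
    A greatly simplified, single-employee sketch of a projected-unit-credit style present value is shown below; the benefit formula, decrement, and rate values are placeholder assumptions, not PSAK-24 prescriptions or the paper's tables.

    # Highly simplified projected-unit-credit style present value for one
    # employee: the projected retirement benefit is pro-rated by service
    # accrued to date and discounted with a combined survival/turnover
    # decrement. All inputs are illustrative placeholders.
    def puc_present_value(current_salary, salary_growth, years_served,
                          years_to_retirement, benefit_multiple,
                          discount_rate, annual_decrement):
        projected_salary = current_salary * (1 + salary_growth) ** years_to_retirement
        projected_benefit = benefit_multiple * projected_salary
        total_service = years_served + years_to_retirement
        accrued = projected_benefit * years_served / total_service
        survival = (1 - annual_decrement) ** years_to_retirement   # stays until retirement
        discount = (1 + discount_rate) ** (-years_to_retirement)
        return accrued * survival * discount

    print(round(puc_present_value(120_000_000, 0.08, 10, 15, 2.0, 0.085, 0.03), 0))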

  4. A simple approach to nonlinear estimation of physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then take account of a wide range of applications. Under the above assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. ?? 1988.

  5. Assessing the Impact of Agricultural Pressures on N and P Loads and Potential Eutrophication Risk at Regional Scales

    NASA Astrophysics Data System (ADS)

    Dupas, R.; Gascuel-odoux, C.; Delmas, M.; Moatar, F.

    2014-12-01

    Excessive nutrient loading of freshwater bodies results in increased eutrophication risk worldwide. The processes controlling N/P transfer in agricultural landscapes are well documented through scientific studies conducted in intensively monitored catchments. However, managers need tools to assess water quality and evaluate the contribution of agriculture to eutrophication at regional scales, including unmonitored or poorly monitored areas. To this end, we present an assessment framework which includes: i) a mass-balance model to estimate diffuse N/P transfer and retention and ii) indicators based on N:P:Si molar ratios to assess potential eutrophication risk from external loads. The model, called Nutting (Dupas et al., 2013), integrates variables for both detailed description of agricultural pressures (N surplus, soil P content) and characterisation of physical attributes of catchments (including spatial attributes). It was calibrated on 160 catchments, and applied to 2210 unmonitored headwater bodies in France (Dupas et al., under review). N and P retention represented 53% and 95% of soil N and P surplus, respectively, and was mainly controlled by runoff and an index characterising infiltration/runoff properties. According to our estimates, diffuse agricultural sources represented a mean of 97% of N loads and N exceeded Si in 93% of the catchments, whilst they represented 46% of P loads and P exceeded Si in 26-65% of the catchments. Estimated eutrophication risk was highly sensitive to assumptions about P bioavailability, hence the range of headwaters potentially at risk spanned 26-63% of the catchments, depending on assumptions. To reduce this uncertainty, we recommend introducing P bioavailability tests in water monitoring programs, especially in sensitive areas. Dupas R et al. Assessing N emissions in surface water at the national level: comparison of country-wide vs. regionalized models. Sci Total Environ 2013; 443: 152-62. Dupas R et al. Assessing the impact of agricultural pressures on N and P loads and eutrophication risk (under review).

  6. Hankin and Reeves' approach to estimating fish abundance in small streams: Limitations and alternatives

    USGS Publications Warehouse

    Thompson, W.L.

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves' (1988) study in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimates-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates exceed at least 85% of the true number of fish within sampled units and unless correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is the inclusion of environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative approach is to use snorkeling combined with line transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating bias and precision of estimators.
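
    The basic expansion idea behind the approach can be sketched as a ratio estimator that calibrates snorkel counts against removal estimates from a subset of units; the numbers below are invented, and the published method additionally handles stratification and variance estimation.

    # Hedged sketch of a ratio-expansion step: removal estimates from a
    # calibration subset of stream units are related to snorkel counts,
    # and the ratio expands snorkel counts from all surveyed units.
    import numpy as np

    removal_est = np.array([42.0, 55.0, 30.0, 61.0])           # calibration units
    snorkel_cal = np.array([30, 41, 22, 44])                    # counts in the same units
    snorkel_all = np.array([30, 41, 22, 44, 35, 28, 50, 19])    # all surveyed units

    ratio = removal_est.sum() / snorkel_cal.sum()               # calibration ratio
    total_estimate = ratio * snorkel_all.sum()
    print(f"expansion ratio: {ratio:.2f}, estimated total abundance: {total_estimate:.0f}")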

  7. Propensity score estimation: machine learning and classification methods as alternatives to logistic regression

    PubMed Central

    Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson

    2010-01-01

    Objective Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this Review was to assess machine learning alternatives to logistic regression which may accomplish the same goals but with fewer assumptions or greater accuracy. Study Design and Setting We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. Results We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (CART), and meta-classifiers (in particular, boosting). Conclusion While the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and to a lesser extent decision trees (particularly CART) appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. PMID:20630332
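
    A minimal sketch contrasting two of the reviewed estimators, logistic regression and boosted trees, is given below using simulated data; in practice the estimated scores would feed into matching, weighting, or stratification.

    # Sketch of propensity-score estimation with logistic regression versus
    # a boosted-tree classifier on simulated data with a non-linear
    # treatment-assignment mechanism.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 2000
    X = rng.normal(size=(n, 3))
    # treatment assignment depends non-linearly on the covariates
    logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]
    treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    ps_logit = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    ps_boost = GradientBoostingClassifier().fit(X, treat).predict_proba(X)[:, 1]

    print("logistic PS range:", ps_logit.min().round(3), ps_logit.max().round(3))
    print("boosted  PS range:", ps_boost.min().round(3), ps_boost.max().round(3))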

  8. Evolution of Requirements and Assumptions for Future Exploration Missions

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways, and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station, or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explanations of the driving scenarios, constraints, or other issues that drive them.

  9. Designing occupancy studies when false-positive detections occur

    USGS Publications Warehouse

    Clement, Matthew

    2016-01-01

    1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.

  10. Assessing Gaussian Assumption of PMU Measurement Error Using Field Data

    DOE PAGES

    Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...

    2017-10-13

    Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory mode monitoring, and voltage stability analysis, to cite a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
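
    One simple way to exploit redundancy, not necessarily the letter's exact procedure, is to difference two redundant measurements so the true signal cancels and then test the residual for normality, as sketched below with simulated data.

    # Illustrative check of the Gaussian-error assumption: if two redundant
    # measurements share the same true signal with independent Gaussian
    # errors, their difference should itself be Gaussian.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    true_signal = 1.0 + 0.01 * np.sin(np.linspace(0, 20, 3000))
    pmu_a = true_signal + rng.laplace(scale=0.002, size=3000)   # heavy-tailed errors
    pmu_b = true_signal + rng.laplace(scale=0.002, size=3000)

    diff = pmu_a - pmu_b          # the true signal cancels, leaving only error
    stat, p_value = stats.normaltest(diff)
    print(f"D'Agostino-Pearson p-value: {p_value:.2e} (small => Gaussian assumption questionable)")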

  11. Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2013-01-01

    Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…

  12. Partially Identified Treatment Effects for Generalizability

    ERIC Educational Resources Information Center

    Chan, Wendy

    2017-01-01

    Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…

  13. Advance market commitments for vaccines against neglected diseases: estimating costs and effectiveness.

    PubMed

    Berndt, Ernst R; Glennerster, Rachel; Kremer, Michael R; Lee, Jean; Levine, Ruth; Weizsäcker, Georg; Williams, Heidi

    2007-05-01

    The G8 is considering committing to purchase vaccines against diseases concentrated in low-income countries (if and when desirable vaccines are developed) as a way to spur research and development on vaccines for these diseases. Under such an 'advance market commitment,' one or more sponsors would commit to a minimum price to be paid per person immunized for an eligible product, up to a certain number of individuals immunized. For additional purchases, the price would eventually drop to close to marginal cost. If no suitable product were developed, no payments would be made. We estimate the offer size which would make revenues similar to the revenues realized from investments in typical existing commercial pharmaceutical products, as well as the degree to which various model contracts and assumptions would affect the cost-effectiveness of such a commitment. We make adjustments for lower marketing costs under an advance market commitment and the risk that a developer may have to share the market with subsequent developers. We also show how this second risk could be reduced, and money saved, by introducing a superiority clause to a commitment. Under conservative assumptions, we document that a commitment comparable in value to sales earned by the average of a sample of recently launched commercial products (adjusted for lower marketing costs) would be a highly cost-effective way to address HIV/AIDS, malaria, and tuberculosis. Sensitivity analyses suggest most characteristics of a hypothetical vaccine would have little effect on the cost-effectiveness, but that the duration of protection conferred by a vaccine strongly affects potential cost-effectiveness. Readers can conduct their own sensitivity analyses employing a web-based spreadsheet tool. Copyright (c) 2006 John Wiley & Sons, Ltd.

  14. Strategies for defining traits when calculating economic values for livestock breeding: a review.

    PubMed

    Wolfová, M; Wolf, J

    2013-09-01

    The objective of the present review was (i) to survey different approaches for choosing the complex of traits for which economic values (EVs) are calculated, (ii) to call attention to the proper definition of traits and (iii) to discuss the manner and extent to which relationships among traits have been considered in the calculation of EVs. For this purpose, papers dealing with the estimation of EVs of traits in livestock were reviewed. The most important reasons for incompatibility of EVs for similar traits estimated in different countries and by different authors were found to be inconsistencies in trait definitions and in assumptions being made about relationships among traits. An important problem identified was how to choose the most appropriate criterion to characterise production or functional ability for a particular class of animals. Accordingly, the review covered the following three topics: (i) which trait(s) would best characterise the growth ability of an animal; (ii) how to define traits expressed repeatedly in subsequent reproductive cycles of breeding females and (iii) how to deal with traits that differ in average value between sexes or among animal groups. Various approaches that have been used to solve these problems were discussed. Furthermore, the manner in which diverse authors chose one or more traits from a group of alternatives for describing a specific biological potential were reviewed and commented on. The consequences of including or excluding relationships among economically important traits when estimating the EV for a specific trait were also examined. An important conclusion of the review is that, for a better comparability and interpretability of estimated EVs in the literature, it is desirable that clear and unique definitions of the traits, complete information on assumptions used in analytical models and details on inter-relationships between traits are documented. Furthermore, the method and the model used for the genetic evaluation of specific traits in a certain breeding organisation are important for the exact definition of traits, for which the economic values will be calculated, and for the inclusion or exclusion of relationships among traits in the calculation of the EVs in livestock breeding.

  15. Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics

    NASA Astrophysics Data System (ADS)

    García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team

    2016-06-01

    We propose a simple approach to homogeneously estimate kinematic parameters for a broad variety of galaxies (ellipticals, spirals, irregulars or interacting systems). This methodology avoids the use of any kinematical model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, which are directly measured from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.

  16. Is herpes zoster vaccination likely to be cost-effective in Canada?

    PubMed

    Peden, Alexander D; Strobel, Stephenson B; Forget, Evelyn L

    2014-05-30

    To synthesize the current literature detailing the cost-effectiveness of the herpes zoster (HZ) vaccine, and to provide Canadian policy-makers with cost-effectiveness measurements in a Canadian context. This article builds on an existing systematic review of the HZ vaccine that offers a quality assessment of 11 recent articles. We first replicated this study, and then two assessors reviewed the articles and extracted information on vaccine effectiveness, cost of HZ, other modelling assumptions and QALY estimates. Then we transformed the results into a format useful for Canadian policy decisions. Results expressed in different currencies from different years were converted into 2012 Canadian dollars using Bank of Canada exchange rates and a Consumer Price Index deflator. Modelling assumptions that varied between studies were synthesized. We tabled the results for comparability. The Szucs systematic review presented a thorough methodological assessment of the relevant literature. However, the various studies presented results in a variety of currencies, and based their analyses on disparate methodological assumptions. Most of the current literature uses Markov chain models to estimate HZ prevalence. Cost assumptions, discount rate assumptions, assumptions about vaccine efficacy and waning and epidemiological assumptions drove variation in the outcomes. This article transforms the results into a table easily understood by policy-makers. The majority of the current literature shows that HZ vaccination is cost-effective at the price of $100,000 per QALY. Few studies showed that vaccination cost-effectiveness was higher than this threshold, and only under conservative assumptions. Cost-effectiveness was sensitive to vaccine price and discount rate.

  17. Data on fossil fuel availability for Shared Socioeconomic Pathways.

    PubMed

    Bauer, Nico; Hilaire, Jérôme; Brecha, Robert J; Edmonds, Jae; Jiang, Kejun; Kriegler, Elmar; Rogner, Hans-Holger; Sferra, Fabio

    2017-02-01

    The data files contain the assumptions and results for the construction of cumulative availability curves for coal, oil and gas for the five Shared Socioeconomic Pathways. The files include the maximum availability (also known as cumulative extraction cost curves) and the assumptions that are applied to construct the SSPs. The data is differentiated into twenty regions. The resulting cumulative availability curves are plotted and the aggregate data as well as cumulative availability curves are compared across SSPs. The methodology, the data sources and the assumptions are documented in a related article (N. Bauer, J. Hilaire, R.J. Brecha, J. Edmonds, K. Jiang, E. Kriegler, H.-H. Rogner, F. Sferra, 2016) [1] under DOI: http://dx.doi.org/10.1016/j.energy.2016.05.088.

  18. Robust estimation approach for blind denoising.

    PubMed

    Rabie, Tamer

    2005-11-01

    This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treated as an outlier random variable. A denoised image is estimated by fitting a spatially coherent stationary image model to the available noisy data using a robust estimator-based regression method within an optimal-size adaptive window. The robust formulation aims at eliminating the noise outliers while preserving the edge structures in the restored image. Several examples demonstrating the effectiveness of this robust denoising technique are reported and a comparison with other standard denoising filters is presented.
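
    As a greatly simplified stand-in for the robust-regression denoiser described above, the sketch below fits a locally constant model with a robust location estimator (the median) in a fixed window, which rejects outlier pixels while preserving edges; the adaptive window sizing and full robust regression of the original method are omitted.

    # Simplified robust denoising sketch: a sliding-window median as a
    # robust local location estimate, applied to an image with outlier pixels.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(6)
    image = np.tile(np.linspace(0, 1, 64), (64, 1))        # smooth gradient
    image[:, 32:] += 0.5                                    # plus a sharp edge
    noisy = image.copy()
    salt = rng.random(image.shape) < 0.05                   # 5% outlier pixels
    noisy[salt] = rng.choice([0.0, 2.0], size=salt.sum())

    denoised = ndimage.median_filter(noisy, size=5)
    print("RMSE noisy:   ", np.sqrt(np.mean((noisy - image) ** 2)).round(4))
    print("RMSE denoised:", np.sqrt(np.mean((denoised - image) ** 2)).round(4))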

  19. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  20. Connecticut Highlands Technical Report - Documentation of the Regional Rainfall-Runoff Model

    USGS Publications Warehouse

    Ahearn, Elizabeth A.; Bjerklie, David M.

    2010-01-01

    This report provides the supporting data and describes the data sources, methodologies, and assumptions used in the assessment of existing and potential water resources of the Highlands of Connecticut and Pennsylvania (referred to herein as the “Highlands”). Included in this report are Highlands groundwater and surface-water use data and the methods of data compilation. Annual mean streamflow and annual mean base-flow estimates from selected U.S. Geological Survey (USGS) gaging stations were computed using data for the period of record through water year 2005. The methods of watershed modeling are discussed and regional and sub-regional water budgets are provided. Information on Highlands surface-water-quality trends is presented. USGS web sites are provided as sources for additional information on groundwater levels, streamflow records, and ground- and surface-water-quality data. Interpretation of these data and the findings are summarized in the Highlands study report.

  1. Water content of latent fingerprints - Dispelling the myth.

    PubMed

    Kent, Terry

    2016-09-01

    Changing procedures in the handling of rare and precious documents in museums and elsewhere, based on assumptions about constituents of latent fingerprints, have led the author to an examination of available data. These changes appear to have been triggered by one paper using general biological data regarding eccrine sweat production to infer that deposited fingerprints are mostly water. Searching the fingerprint literature has revealed a number of reference works similarly quoting figures for average water content of deposited fingerprints of 98% or more. Whilst accurate estimation is difficult there is no evidence that the residue on fingers could be anything like 98% water, even if there were no contamination from sebaceous glands. Consideration of published analytical data of real fingerprints, and several theoretical considerations regarding evaporation and replenishment rates, indicates a probable initial average water content of a fingerprint, soon after deposition, of 20% or less. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Balancing Authority Cooperation Concepts - Intra-Hour Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunsaker, Matthew; Samaan, Nader; Milligan, Michael

    2013-03-29

    The overall objective of this study was to understand, on an Interconnection-wide basis, the effects of intra-hour scheduling compared to hourly scheduling. Moreover, the study sought to understand how the benefits of intra-hour scheduling would change by altering the input assumptions in different scenarios. This report describes the results of three separate scenarios with differing key assumptions, comparing the production costs between hourly scheduling and 10-minute scheduling. The different scenarios were chosen to provide insight into how the estimated benefits might change by altering input assumptions. Several key assumptions were different in the three scenarios; however, most assumptions were similar and/or unchanged among the scenarios.

  3. Efficient Estimation of the Standardized Value

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    2009-01-01

    We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)

  4. Analyzing the Effects of the Weapon Systems Acquisition Reform Act

    DTIC Science & Technology

    2014-05-28

    an ICD is developed. The ICD is the first key document that JCIDS contributes to the acquisition system. This document feeds into the MSA and the...make an assumption that the WSARA is the initiator of bottom-line change, if not the catalyst for changes that occur. The bottom line in the...force sustainment, petroleum and water, sets, kits, outfits and tools, test measurement and diagnostic equipment, and contingency basing infrastructure

  5. WalkThrough Example Procedures for MAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggiero, Christy E.; Gaschen, Brian Keith; Bloch, Jeffrey Joseph

    This documentation is a growing set of walk-through examples of analyses using the MAMA V2.0 software. It does not cover all the features or possibilities of the MAMA software, but addresses the use of many of the basic analysis tools to quantify particle size and shape in an image. This document will continue to evolve as additional procedures and examples are added. The starting assumption is that the MAMA software has been successfully installed.

  6. The cost-effectiveness of rotavirus vaccination: Comparative analyses for five European countries and transferability in Europe.

    PubMed

    Jit, Mark; Bilcke, Joke; Mangen, Marie-Josée J; Salo, Heini; Melliez, Hugues; Edmunds, W John; Yazdan, Yazdanpanah; Beutels, Philippe

    2009-10-19

    Cost-effectiveness analyses are usually not directly comparable between countries because of differences in analytical and modelling assumptions. We investigated the cost-effectiveness of rotavirus vaccination in five European Union countries (Belgium, England and Wales, Finland, France and the Netherlands) using a single model, burden of disease estimates supplied by national public health agencies and a subset of common assumptions. Under base case assumptions (vaccination with Rotarix, 3% discount rate, health care provider perspective, no herd immunity and quality of life of one caregiver affected by a rotavirus episode) and a cost-effectiveness threshold of €30,000, vaccination is likely to be cost effective in Finland only. However, single changes to assumptions may make it cost effective in Belgium and the Netherlands. The estimated threshold price per dose for Rotarix (excluding administration costs) to be cost effective was €41 in Belgium, €28 in England and Wales, €51 in Finland, €36 in France and €46 in the Netherlands.
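
    As a rough illustration of the threshold-price logic described above (not the authors' model; all figures below are hypothetical), a vaccine programme is cost effective when its net cost per QALY gained stays at or below the willingness-to-pay threshold, which can be inverted to give a maximum price per dose:

      def icer(delta_cost, delta_qaly):
          """Incremental cost-effectiveness ratio of vaccination versus no vaccination."""
          return delta_cost / delta_qaly

      def threshold_price_per_dose(wtp_threshold, qalys_gained, treatment_costs_averted, doses):
          """Largest vaccine price per dose at which the ICER stays at or below the
          willingness-to-pay threshold (illustrative accounting only)."""
          max_programme_cost = wtp_threshold * qalys_gained + treatment_costs_averted
          return max_programme_cost / doses

      # Hypothetical cohort: 1,000 QALYs gained, EUR 50M of treatment costs averted,
      # 2 doses for each of 1,000,000 infants, EUR 30,000 per QALY threshold
      print(icer(10_000_000, 1_000))                                          # cost per QALY gained
      print(threshold_price_per_dose(30_000, 1_000, 50_000_000, 2_000_000))   # -> 40.0 per dose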

  7. Overlooked Threats to Respondent Driven Sampling Estimators: Peer Recruitment Reality, Degree Measures, and Random Selection Assumption.

    PubMed

    Li, Jianghong; Valente, Thomas W; Shin, Hee-Sung; Weeks, Margaret; Zelenev, Alexei; Moothi, Gayatri; Mosher, Heather; Heimer, Robert; Robles, Eduardo; Palmer, Greg; Obidoa, Chinekwu

    2017-06-28

    Intensive sociometric network data were collected from a typical respondent driven sample (RDS) of 528 people who inject drugs residing in Hartford, Connecticut in 2012-2013. This rich dataset enabled us to analyze a large number of unobserved network nodes and ties for the purpose of assessing common assumptions underlying RDS estimators. Results show that several assumptions central to RDS estimators, such as random selection, enrollment probability proportional to degree, and recruitment occurring over recruiters' network ties, were violated. These problems stem from an overly simplistic conceptualization of peer recruitment processes and dynamics. We found nearly half of participants were recruited via coupon redistribution on the street. Non-uniform patterns occurred in multiple recruitment stages related to both recruiter behavior (choosing and reaching alters, passing coupons, etc.) and recruit behavior (accepting/rejecting coupons, failing to enter the study, passing coupons to others). Some factors associated with these patterns were also associated with HIV risk.

  8. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model building process residual variances are often disregarded and simplifying assumptions made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512

  9. Analysis and design of second-order sliding-mode algorithms for quadrotor roll and pitch estimation.

    PubMed

    Chang, Jing; Cieslak, Jérôme; Dávila, Jorge; Zolghadri, Ali; Zhou, Jun

    2017-11-01

    The problem addressed in this paper is that of quadrotor roll and pitch estimation without any assumption about the knowledge of perturbation bounds when Inertial Measurement Unit (IMU) data or position measurements are available. A Smooth Sliding Mode (SSM) algorithm is first designed to provide reliable estimation under a smooth disturbance assumption. This assumption is then relaxed with the second proposed Adaptive Sliding Mode (ASM) algorithm, which deals with disturbances of unknown bounds. In addition, the analysis of the observers is extended to the case where measurements are corrupted by bias and noise. The gains of the proposed algorithms are deduced from a Lyapunov function. Furthermore, some useful guidelines are provided for the selection of the observer tuning parameters. The performance of these two approaches is evaluated using a nonlinear simulation model and considering either accelerometer or position measurements. The simulation results demonstrate the benefits of the proposed solutions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Adjusting survival estimates for premature transmitter failure: A case study from the Sacramento-San Joaquin Delta

    USGS Publications Warehouse

    Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.

    2013-01-01

    In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
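
    The core of the adjustment is that an apparent survival estimate confounds fish death with tag failure, so it can be corrected by the probability that a tag is still operating when fish pass a detection site. The sketch below uses hypothetical numbers and is a simplification of the mark-recapture model actually used in the study: it estimates that probability from tag-life data and a travel-time distribution, then divides apparent survival by it.

      import numpy as np

      def tag_survival_prob(tag_failure_times, travel_times):
          """Probability a tag is still operating when fish pass a site, estimated by
          averaging the empirical tag-life survival function over the travel-time
          distribution (hypothetical inputs)."""
          tag_failure_times = np.asarray(tag_failure_times, float)
          surv = lambda t: np.mean(tag_failure_times > t)   # empirical tag-life survival S(t)
          return float(np.mean([surv(t) for t in travel_times]))

      def adjusted_survival(apparent_survival, p_tag_alive):
          """Divide apparent (joint fish-and-tag) survival by the tag-survival probability."""
          return apparent_survival / p_tag_alive

      tag_life_days = [18, 21, 22, 25, 27, 30, 31, 33]   # from a tag-life study
      travel_days = [12, 15, 20, 24, 28]                 # observed or historical travel times
      p_tag = tag_survival_prob(tag_life_days, travel_days)
      print(p_tag, adjusted_survival(0.10, p_tag))       # e.g. 0.75 and an adjusted survival of ~0.13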

  11. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Aviation System Analysis Capability (ASAC) Quick Response System (QRS) Test Report

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Villani, James A.; Ritter, Paul

    1997-01-01

    This document is the Aviation System Analysis Capability (ASAC) Quick Response System (QRS) Test Report. The purpose of this document is to present the results of the QRS unit and system tests in support of the ASAC QRS development effort. This document contains an overview of the project background and scope, defines the QRS system and presents the additions made to the QRS this year, explains the assumptions, constraints, and approach used to conduct QRS Unit and System Testing, and presents the schedule used to perform QRS Testing. The document also presents an overview of the Logistics Management Institute (LMI) Test Facility and testing environment and summarizes the QRS Unit and System Test effort and results.

  13. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1996-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique.

  14. Surplus Highly Enriched Uranium Disposition Program plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-10-01

    The purpose of this document is to provide upper level guidance for the program that will downblend surplus highly enriched uranium for use as commercial nuclear reactor fuel or low-level radioactive waste. The intent of this document is to outline the overall mission and program objectives. The document is also intended to provide a general basis for integration of disposition efforts among all applicable sites. This plan provides background information, establishes the scope of disposition activities, provides an approach to the mission and objectives, identifies programmatic assumptions, defines major roles, provides summary level schedules and milestones, and addresses budget requirements.

  15. Improved arrival-date estimates of Arctic-breeding Dunlin (Calidris alpina arcticola)

    USGS Publications Warehouse

    Doll, Andrew C.; Lanctot, Richard B.; Stricker, Craig A.; Yezerinac, Stephen M.; Wunder, Michael B.

    2015-01-01

    The use of stable isotopes in animal ecology depends on accurate descriptions of isotope dynamics within individuals. The prevailing assumption that laboratory-derived isotopic parameters apply to free-living animals is largely untested. We used stable carbon isotopes (δ13C) in whole blood from migratory Dunlin (Calidris alpina arcticola) to estimate an in situ turnover rate and individual diet-switch dates. Our in situ results indicated that turnover rates were higher in free-living birds, in comparison to the results of an experimental study on captive Dunlin and estimates derived from a theoretical allometric model. Diet-switch dates from all 3 methods were then used to estimate arrival dates to the Arctic; arrival dates calculated with the in situ turnover rate were later than those with the other turnover-rate estimates, substantially so in some cases. These later arrival dates matched dates when local snow conditions would have allowed Dunlin to settle, and agreed with anticipated arrival dates of Dunlin tracked with light-level geolocators. Our study presents a novel method for accurately estimating arrival dates for individuals of migratory species in which return dates are difficult to document. This may be particularly appropriate for species in which extrinsic tracking devices cannot easily be employed because of cost, body size, or behavioral constraints, and in habitats that do not allow individuals to be detected easily upon first arrival. Thus, this isotopic method offers an exciting alternative approach to better understand how species may be altering their arrival dates in response to changing climatic conditions.

  16. 49 CFR 236.1009 - Procedural requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fraud; (ii) Potentially invalidated assumptions determined as a result of in-service experience or one... inspect processes, procedures, facilities, documents, records, design and testing materials, artifacts, training materials and programs, and any other information used in the design, development, manufacture...

  17. Making the Case for School-to-Careers and Vocational Education. A Practical Guide to Demonstrating the Value of School-to-Careers Preparation for All Students and Debunking Outdated Stereotypes and False Assumptions...Plus Step-by-Step Instructions for Planning Promotional Campaigns and Events To Create Public Support in Your Community.

    ERIC Educational Resources Information Center

    American Vocational Association, Alexandria, VA.

    This document is a practical guide to demonstrating the value of school-to-careers preparation for all students and to debunking outdated stereotypes and false assumptions surrounding school-to-careers and vocational education programs. Part 1 explains the importance of political and policy advocacy in public education and outlines strategies for…

  18. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298

  19. Principal Score Methods: Assumptions, Extensions, and Practical Considerations

    ERIC Educational Resources Information Center

    Feller, Avi; Mealli, Fabrizia; Miratrix, Luke

    2017-01-01

    Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the "principal score," the conditional probability of belonging…

  20. Stimulus-specific variability in color working memory with delayed estimation.

    PubMed

    Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Wilson, Colin; Flombaum, Jonathan I

    2014-04-08

    Working memory for color has been the central focus in an ongoing debate concerning the structure and limits of visual working memory. Within this area, the delayed estimation task has played a key role. An implicit assumption in color working memory research generally, and delayed estimation in particular, is that the fidelity of memory does not depend on color value (and, relatedly, that experimental colors have been sampled homogeneously with respect to discriminability). This assumption is reflected in the common practice of collapsing across trials with different target colors when estimating memory precision and other model parameters. Here we investigated whether or not this assumption is secure. To do so, we conducted delayed estimation experiments following standard practice with a memory load of one. We discovered that different target colors evoked response distributions that differed widely in dispersion and that these stimulus-specific response properties were correlated across observers. Subsequent experiments demonstrated that stimulus-specific responses persist under higher memory loads and that at least part of the specificity arises in perception and is eventually propagated to working memory. Posthoc stimulus measurement revealed that rendered stimuli differed from nominal stimuli in both chromaticity and luminance. We discuss the implications of these deviations for both our results and those from other working memory studies.

  1. Bias and variance reduction in estimating the proportion of true-null hypotheses

    PubMed Central

    Cheng, Yebin; Gao, Dexiang; Tong, Tiejun

    2015-01-01

    When testing a large number of hypotheses, estimating the proportion of true nulls, denoted by π0, becomes increasingly important. This quantity has many applications in practice. For instance, a reliable estimate of π0 can eliminate the conservative bias of the Benjamini–Hochberg procedure on controlling the false discovery rate. It is known that most methods in the literature for estimating π0 are conservative. Recently, some attempts have been made to reduce such estimation bias. Nevertheless, they are either over bias-corrected or suffer from an unacceptably large estimation variance. In this paper, we propose a new method for estimating π0 that aims to reduce the bias and variance of the estimation simultaneously. To achieve this, we first utilize the probability density functions of false-null p-values and then propose a novel algorithm to estimate the quantity π0. The statistical behavior of the proposed estimator is also investigated. Finally, we carry out extensive simulation studies and several real data analyses to evaluate the performance of the proposed estimator. Both simulated and real data demonstrate that the proposed method may improve the existing literature significantly. PMID:24963010
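
    For orientation, a widely used baseline for this quantity (not the estimator proposed in the paper) is a Storey-type estimate, which exploits the fact that p-values from true nulls are uniform; a minimal sketch with simulated p-values:

      import numpy as np

      def storey_pi0(pvalues, lam=0.5):
          """Storey-type estimate of the proportion of true null hypotheses.
          Counts p-values above the tuning threshold lam; under the null, p-values
          are uniform, so the excess above lam estimates pi0."""
          p = np.asarray(pvalues)
          return min(1.0, np.mean(p > lam) / (1.0 - lam))

      # Example: 80% true nulls (uniform p-values), 20% alternatives (small p-values)
      rng = np.random.default_rng(0)
      p = np.concatenate([rng.uniform(size=800), rng.beta(0.5, 20, size=200)])
      print(storey_pi0(p))   # typically close to, or slightly above, 0.8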

  2. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a generalization of the composite estimator. The univariate composite estimator combines two prior estimates of a population parameter with a weighted average in which the scalar weight is inversely proportional to the variances. The composite estimator is a minimum-variance estimator that requires no distributional assumptions other than estimates of the...
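
    A minimal numerical illustration of the univariate composite (inverse-variance-weighted) estimator described above, with hypothetical values for the two prior estimates:

      import numpy as np

      def composite_estimate(x1, var1, x2, var2):
          """Minimum-variance combination of two estimates of the same parameter.
          Weights are inversely proportional to the variances, as in the univariate
          composite estimator that the Kalman filter generalizes."""
          w = var2 / (var1 + var2)                 # weight on x1
          est = w * x1 + (1.0 - w) * x2
          var = (var1 * var2) / (var1 + var2)      # variance of the combined estimate
          return est, var

      # e.g., combine a model projection (52.0, variance 9.0) with a survey estimate (48.0, variance 4.0)
      print(composite_estimate(52.0, 9.0, 48.0, 4.0))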

  3. Poisson sampling - The adjusted and unadjusted estimator revisited

    Treesearch

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
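
    Under the usual textbook definitions (an assumption here, since the abstract is truncated), the unadjusted estimator is the Horvitz-Thompson sum and the adjusted estimator rescales it by the ratio of expected to realized sample size; a small simulation sketch:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 1000
      y = rng.gamma(shape=2.0, scale=10.0, size=N)      # population values
      x = y + rng.normal(0, 5, size=N)                  # auxiliary size variable
      pi = np.clip(0.1 * x / x.mean(), 0.01, 1.0)       # inclusion probabilities

      s = rng.uniform(size=N) < pi                      # Poisson sample: independent Bernoulli draws
      y_s, pi_s = y[s], pi[s]

      Y_hat_u = np.sum(y_s / pi_s)                      # unadjusted Horvitz-Thompson estimator
      Y_hat_a = (pi.sum() / s.sum()) * Y_hat_u          # adjusted: scale by expected / realized sample size
      print(y.sum(), Y_hat_u, Y_hat_a)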

  4. Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.

    2011-01-01

    This Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept document was developed as a first step in developing the Component-Level Electronic-Assembly Repair (CLEAR) System Architecture (NASA/TM-2011-216956). The CLEAR operational concept defines how the system will be used by the Constellation Program and what needs it meets. The document creates scenarios for major elements of the CLEAR architecture. These scenarios are generic enough to apply to near-Earth, Moon, and Mars missions. The CLEAR operational concept involves basic assumptions about the overall program architecture and interactions with the CLEAR system architecture. The assumptions include spacecraft and operational constraints for near-Earth orbit, Moon, and Mars missions. This document addresses an incremental development strategy where capabilities evolve over time, but it is structured to prevent obsolescence. The approach minimizes flight hardware by exploiting Internet-like telecommunications that enables CLEAR capabilities to remain on Earth and to be uplinked as needed. To minimize crew time and operational cost, CLEAR exploits offline development and validation to support online teleoperations. Operational concept scenarios are developed for diagnostics, repair, and functional test operations. Many of the supporting functions defined in these operational scenarios are further defined as technologies in NASA/TM-2011-216956.

  5. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
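
    For context, the deterministic moisture-maximization step that the paper generalizes scales each observed storm depth by the ratio of maximum to observed precipitable water; a minimal sketch with hypothetical storm values:

      def moisture_maximized_precip(p_storm_mm, w_storm_mm, w_max_mm):
          """Classic deterministic moisture maximization: scale an observed storm
          depth by the ratio of maximum to observed precipitable water."""
          return p_storm_mm * (w_max_mm / w_storm_mm)

      storms = [(180.0, 45.0), (220.0, 60.0), (140.0, 30.0)]   # (depth, precipitable water), mm
      w_max = 75.0                                             # climatological maximum precipitable water, mm
      pmp = max(moisture_maximized_precip(p, w, w_max) for p, w in storms)
      print(pmp)   # a single deterministic value -- the limitation the paper addresses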

  6. An introduction to multidimensional measurement using Rasch models.

    PubMed

    Briggs, Derek C; Wilson, Mark

    2003-01-01

    The act of constructing a measure requires a number of important assumptions. Principal among these assumptions is that the construct is unidimensional. In practice there are many instances when the assumption of unidimensionality does not hold, and where the application of a multidimensional measurement model is both technically appropriate and substantively advantageous. In this paper we illustrate the usefulness of a multidimensional approach to measurement with the Multidimensional Random Coefficient Multinomial Logit (MRCML) model, an extension of the unidimensional Rasch model. An empirical example is taken from a collection of embedded assessments administered to 541 students enrolled in middle school science classes with a hands-on science curriculum. Student achievement on these assessments is multidimensional in nature, but can also be treated as consecutive unidimensional estimates or, as is most common, as a composite unidimensional estimate. Structural parameters are estimated for each model using ConQuest, and model fit is compared. Student achievement in science is also compared across models. The multidimensional approach has the best fit to the data, and provides more reliable estimates of student achievement than the consecutive unidimensional approach. Finally, at an interpretational level, the multidimensional approach may well provide richer information to the classroom teacher about the nature of student achievement.
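
    As a point of reference for the unidimensional case that the MRCML model extends, the dichotomous Rasch model gives the probability of a correct response from a person ability and an item difficulty; the sketch below (hypothetical responses and difficulties) computes a crude grid-search ability estimate:

      import numpy as np

      def rasch_prob(theta, b):
          """Unidimensional Rasch model: probability of a correct response given
          person ability theta and item difficulty b."""
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      responses = np.array([1, 1, 0, 1, 0])                 # one person's item responses
      difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # item difficulties

      def person_loglik(theta):
          p = rasch_prob(theta, difficulties)
          return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

      # Crude grid-search ML estimate of ability (a multidimensional model such as
      # MRCML generalizes theta to a vector with one dimension per construct)
      grid = np.linspace(-4, 4, 801)
      theta_hat = grid[np.argmax([person_loglik(t) for t in grid])]
      print(theta_hat)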

  7. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated with two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  8. Propensity score estimation: neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression.

    PubMed

    Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson

    2010-08-01

    Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this review was to assess machine learning alternatives to logistic regression, which may accomplish the same goals but with fewer assumptions or greater accuracy. We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (classification and regression trees [CART]), and meta-classifiers (in particular, boosting). Although the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and, to a lesser extent, decision trees (particularly CART), appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
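
    A minimal sketch contrasting the conventional logistic-regression propensity score with one of the reviewed alternatives (boosting), using simulated covariates and treatment assignment; variable names and parameters are illustrative only:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(2)
      n = 2000
      X = rng.normal(size=(n, 4))                              # hypothetical covariates
      treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))

      # Conventional approach: logistic regression
      ps_logit = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

      # One alternative discussed in the review: boosting (a meta-classifier)
      ps_boost = GradientBoostingClassifier().fit(X, treat).predict_proba(X)[:, 1]

      # Inverse-probability-of-treatment weights from either score
      w_logit = np.where(treat == 1, 1 / ps_logit, 1 / (1 - ps_logit))
      print(w_logit[:5], ps_boost[:5])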

  9. Procedures for estimating the frequency of commercial airline flights encountering high cabin ozone levels

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1979-01-01

    Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: namely, estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provides a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example case calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.

  10. Robustness of survival estimates for radio-marked animals

    USGS Publications Warehouse

    Bunck, C.M.; Chen, C.-L.

    1992-01-01

    Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and an estimator based on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-location probability of less than one are described and evaluated. Generally the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The power of the two-sample tests was similar.
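
    For reference, the Kaplan-Meier estimator evaluated in the study can be computed directly from failure and censoring times; a self-contained sketch with hypothetical telemetry data (note it still embeds the assumption that every animal with a functioning transmitter is re-located):

      import numpy as np

      def kaplan_meier(times, events):
          """Kaplan-Meier survival curve; events=1 for observed deaths,
          0 for censored records (e.g., lost signals or end of study)."""
          times, events = np.asarray(times, float), np.asarray(events, int)
          order = np.argsort(times)
          times, events = times[order], events[order]
          n_at_risk, surv, curve = len(times), 1.0, []
          for t in np.unique(times):
              deaths = np.sum((times == t) & (events == 1))
              if deaths:
                  surv *= 1.0 - deaths / n_at_risk
              curve.append((t, surv))
              n_at_risk -= np.sum(times == t)
          return curve

      # Weeks until death (event=1) or censoring (event=0) for 8 tagged animals
      print(kaplan_meier([3, 5, 5, 7, 8, 8, 10, 12], [1, 1, 0, 1, 0, 1, 0, 0]))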

  11. Hybrid Approaches and Industrial Applications of Pattern Recognition,

    DTIC Science & Technology

    1980-10-01

    emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will...sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the

  12. Estimating Causal Effects in Mediation Analysis Using Propensity Scores

    ERIC Educational Resources Information Center

    Coffman, Donna L.

    2011-01-01

    Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, "M", and the outcome, "Y". This assumption holds if individuals are randomly…

  13. Causal Mediation Analysis: Warning! Assumptions Ahead

    ERIC Educational Resources Information Center

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  14. Testing a pollen-parent fecundity distribution model on seed-parent fecundity distributions in bee-pollinated forage legume polycrosses

    USDA-ARS?s Scientific Manuscript database

    Random mating (i.e., panmixis) is a fundamental assumption in quantitative genetics. In outcrossing bee-pollinated perennial forage legume polycrosses, mating is assumed by default to follow theoretical random mating. This assumption informs breeders of expected inbreeding estimates based on polycro...

  15. Timber value—a matter of choice: a study of how end use assumptions affect timber values.

    Treesearch

    John H. Beuter

    1971-01-01

    The relationship between estimated timber values and actual timber prices is discussed. Timber values are related to how, where, and when the timber is used. An analysis demonstrates the relative values of a typical Douglas-fir stand under assumptions about timber use.

  16. Link-topic model for biomedical abbreviation disambiguation.

    PubMed

    Kim, Seonho; Yoon, Juntae

    2015-02-01

    The ambiguity of biomedical abbreviations is one of the challenges in biomedical text mining systems. In particular, the handling of term variants and abbreviations without nearby definitions is a critical issue. In this study, we adopt the concepts of topic of document and word link to disambiguate biomedical abbreviations. We newly suggest the link topic model inspired by the latent Dirichlet allocation model, in which each document is perceived as a random mixture of topics, where each topic is characterized by a distribution over words. Thus, the most probable expansions with respect to abbreviations of a given abstract are determined by word-topic, document-topic, and word-link distributions estimated from a document collection through the link topic model. The model allows two distinct modes of word generation to incorporate semantic dependencies among words, particularly long form words of abbreviations and their sentential co-occurring words; a word can be generated either dependently on the long form of the abbreviation or independently. The semantic dependency between two words is defined as a link and a new random parameter for the link is assigned to each word as well as a topic parameter. Because the link status indicates whether the word constitutes a link with a given specific long form, it has the effect of determining whether a word forms a unigram or a skipping/consecutive bigram with respect to the long form. Furthermore, we place a constraint on the model so that a word has the same topic as a specific long form if it is generated in reference to the long form. Consequently, documents are generated from the two hidden parameters, i.e. topic and link, and the most probable expansion of a specific abbreviation is estimated from the parameters. Our model relaxes the bag-of-words assumption of the standard topic model in which the word order is neglected, and it captures a richer structure of text than does the standard topic model by considering unigrams and semantically associated bigrams simultaneously. The addition of semantic links improves the disambiguation accuracy without removing irrelevant contextual words and reduces the parameter space of massive skipping or consecutive bigrams. The link topic model achieves 98.42% disambiguation accuracy on 73,505 MEDLINE abstracts with respect to 21 three letter abbreviations and their 139 distinct long forms. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. An analysis of U.S. Army Corps of Engineers documents supporting the channelization of the Rio Piedras. Acta Cientifica. 27(1-3).

    Treesearch

    Ariel Lugo; C.J. Nytch; M. Ramsey

    2013-01-01

    This paper has three objectives: 1) to provide a synopsis of the content of the reports, documents, data, and arguments used by the Corps to justify the channelization of the Rio Piedras; 2) to evaluate the accuracy of predictions and assumptions used by the Corps to reach conclusions that justify the channelization of the Rio Piedras; and 3) to make a case for the...

  18. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
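
    Setting the Bayesian priors aside, the underlying observation model is a non-negative superposition of library sounds; a minimal sketch (hypothetical dictionary and mixture) recovers the unknown weights for one spectral frame with non-negative least squares rather than the prior-based estimators discussed in the paper:

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(3)
      n_features, n_sounds = 257, 12          # e.g., magnitude-spectrum bins x library size
      D = np.abs(rng.normal(size=(n_features, n_sounds)))   # hypothetical sound library (dictionary)

      w_true = np.zeros(n_sounds)
      w_true[[2, 7]] = [0.8, 0.3]             # two active notes
      y = D @ w_true + 0.01 * np.abs(rng.normal(size=n_features))  # observed frame

      w_hat, residual = nnls(D, y)            # non-negative weights on the known sounds
      print(np.round(w_hat, 2))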

  19. Airport Landside. Volume II. The Airport Landside Simulation Model (ALSIM) Description and Users Guide.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume provides a general description of the Airport Landside Simulation Model. A summary of simulated passenger and vehicular processing through the landside is presented. Program operating characteristics and assumptions are documented and a c...

  20. Vehicle information exchange needs for mobility applications : version 3.0.

    DOT National Transportation Integrated Search

    1996-06-01

    The Evaluatory Design Document provides a unifying set of assumptions for other evaluations to utilize. Many of the evaluation activities require the definition of an actual implementation in order to be performed. For example, to cost the elements o...

  1. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  2. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
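
    A generic Wald SPRT skeleton (not the closed form derived in the paper) applies cumulative log-likelihood-ratio increments to Wald's two thresholds; the error rates and increments below are hypothetical:

      import numpy as np

      def sprt_decision(log_lr_increments, alpha=0.001, beta=0.05):
          """Wald Sequential Probability Ratio Test on a stream of log-likelihood
          ratio increments (H1 vs H0); alpha and beta are the target error rates."""
          upper = np.log((1.0 - beta) / alpha)     # accept H1 (e.g., unacceptable collision risk)
          lower = np.log(beta / (1.0 - alpha))     # accept H0
          total = 0.0
          for k, inc in enumerate(log_lr_increments, start=1):
              total += inc
              if total >= upper:
                  return "accept H1", k
              if total <= lower:
                  return "accept H0", k
          return "continue sampling", len(log_lr_increments)

      # Illustrative stream of increments favouring H0
      print(sprt_decision([-0.4, -0.2, -0.6, -0.8, -0.5, -0.7]))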

  3. 2007 Wholesale Power Rate Case Initial Proposal : Wholesale Power Rate Development Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    United States. Bonneville Power Administration.

    The Wholesale Power Rate Development Study (WPRDS) calculates BPA proposed rates based on information either developed in the WPRDS or supplied by the other studies that comprise the BPA rate proposal. All of these studies, and accompanying documentation, provide the details of computations and assumptions. In general, information about loads and resources is provided by the Load Resource Study (LRS), WP-07-E-BPA-01, and the LRS Documentation, WP-07-E-BPA-01A. Revenue requirements information, as well as the Planned Net Revenues for Risk (PNNR), is provided in the Revenue Requirement Study, WP-07-E-BPA-02, and its accompanying Revenue Requirement Study Documentation, WP-07-E-BPA-02A and WP-07-E-BPA-02B. The Market Price Forecast Study (MPFS), WP-07-E-BPA-03, and the MPFS Documentation, WP-07-E-BPA-03A, provide the WPRDS with information regarding seasonal and diurnal differentiation of energy rates, as well as information regarding monthly market prices for Demand Rates. In addition, this study provides information for the pricing of unbundled power products. The Risk Analysis Study, WP-07-E-BPA-04, and the Risk Analysis Study Documentation, WP-07-E-BPA-04A, provide short-term balancing purchases as well as secondary energy sales and revenue. The Section 7(b)(2) Rate Test Study, WP-07-E-BPA-06, and the Section 7(b)(2) Rate Test Study Documentation, WP-07-E-BPA-06A, implement Section 7(b)(2) of the Northwest Power Act to ensure that BPA preference customers' firm power rates applied to their general requirements are no higher than rates calculated using specific assumptions in the Northwest Power Act.

  4. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which we had previously proposed, improved on the original method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, namely that the average of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach we propose in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used for adding appropriate colors to the original image and low-chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
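
    For context, the original gray-world estimate that both the authors' methods improve on is simply the per-channel image mean; a minimal sketch (hypothetical image) that also applies a von Kries-style correction, and which fails in exactly the dominant-color situations the abstract describes:

      import numpy as np

      def gray_world_illuminant(image):
          """Estimate the illuminant colour as the per-channel mean of an RGB image
          (H x W x 3, floats in [0, 1]), relying on the gray-world assumption that
          the average scene colour is achromatic."""
          illum = image.reshape(-1, 3).mean(axis=0)
          return illum / np.linalg.norm(illum)

      def white_balance(image):
          """Divide each channel by the estimated illuminant, scaled so that a
          neutral illuminant leaves the image unchanged."""
          illum = gray_world_illuminant(image)
          return np.clip(image / (illum * np.sqrt(3.0)), 0.0, 1.0)

      # Hypothetical image with a reddish cast
      rng = np.random.default_rng(4)
      img = np.clip(rng.uniform(0, 1, size=(64, 64, 3)) * np.array([1.0, 0.8, 0.6]), 0, 1)
      print(gray_world_illuminant(img))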

  5. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
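
    For contrast with the assumption-free recurrent-network filter described above, a single predict/update cycle of the standard linear Kalman filter (which does assume a linear model and zero-mean Gaussian white noise) can be sketched as follows; the matrices and measurements are hypothetical:

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of the standard linear Kalman filter."""
          x_pred = F @ x                               # predict state
          P_pred = F @ P @ F.T + Q                     # predict covariance
          S = H @ P_pred @ H.T + R                     # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)        # update with measurement z
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Constant-velocity example: state [position, velocity], noisy position measurements
      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q = 0.01 * np.eye(2)
      R = np.array([[0.25]])
      x, P = np.zeros(2), np.eye(2)
      for z in [1.1, 2.0, 2.9, 4.2]:
          x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
      print(x)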

  6. Understanding unmet need: history, theory, and measurement.

    PubMed

    Bradley, Sarah E K; Casterline, John B

    2014-06-01

    During the past two decades, estimates of unmet need have become an influential measure for assessing population policies and programs. This article recounts the evolution of the concept of unmet need, describes how demographic survey data have been used to generate estimates of its prevalence, and tests the sensitivity of these estimates to various assumptions in the unmet need algorithm. The algorithm uses a complex set of assumptions to identify women: who are sexually active, who are infecund, whose most recent pregnancy was unwanted, who wish to postpone their next birth, and who are postpartum amenorrheic. The sensitivity tests suggest that defensible alternative criteria for identifying four out of five of these subgroups of women would increase the estimated prevalence of unmet need. The exception is identification of married women who are sexually active; more accurate measurement of this subgroup would reduce the estimated prevalence of unmet need in most settings. © 2013 The Population Council, Inc.

  7. Evaluation of Potential Exposure to Metals in Laundered Shop Towels

    PubMed Central

    Greenberg, Grace; Beck, Barbara D.

    2013-01-01

    We reported in 2003 that exposure to metals on laundered shop towels (LSTs) could exceed toxicity criteria. New data from LSTs used by workers in North America document the continued presence of metals in freshly laundered towels. We assessed potential exposure to metals based on concentrations of metals on the LSTs, estimates of LST usage by employees, and the transfer of metals from LST-to-hand, hand-to-mouth, and LST-to-lip, under average- or high-exposure scenarios. Exposure estimates were compared to toxicity criteria. Under an average-exposure scenario (excluding metals' data outliers), exceedances of the California Environmental Protection Agency, U.S. Environmental Protection Agency, and the Agency for Toxic Substances and Disease Registry toxicity criteria may occur for aluminum, cadmium, cobalt, copper, iron, and lead. Calculated intakes for these metals were up to more than 400-fold higher (lead) than their respective toxicity criterion. For the high-exposure scenario, additional exceedances may occur, and high-exposure intakes were up to 1,170-fold higher (lead) than their respective toxicity criterion. A sensitivity analysis indicated that alternate plausible assumptions could increase or decrease the magnitude of exceedances, but were unlikely to eliminate certain exceedances, particularly for lead. PMID:24453472

  8. Using GIS to Estimate Lake Volume from Limited Data

    EPA Science Inventory

    Estimates of lake volume are necessary for estimating residence time or modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of ...

  9. A Methodology for Developing Army Acquisition Strategies for an Uncertain Future

    DTIC Science & Technology

    2007-01-01

    manuscript for publication. Acronyms ABP Assumption-Based Planning ACEIT Automated Cost Estimating Integrated Tool ACR Armored Cavalry Regiment ACTD...decisions. For example, they employ the Automated Cost Estimating Integrated Tools (ACEIT) to simplify life cycle cost estimates; other tools are

  10. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
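
    A compact sketch of the basic Poisson N-mixture likelihood evaluated in the study, fit to simulated repeat counts by marginalizing over the latent abundances (all values, and the truncation point n_max, are hypothetical):

      import numpy as np
      from scipy.stats import poisson, binom
      from scipy.optimize import minimize

      def nmix_negloglik(params, counts, n_max=100):
          """Negative log-likelihood of the basic Poisson N-mixture model:
          latent abundance N_i ~ Poisson(lam), counts y_ij ~ Binomial(N_i, p)."""
          lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))   # force lam > 0, 0 < p < 1
          N = np.arange(n_max + 1)
          prior = poisson.pmf(N, lam)
          ll = 0.0
          for y in counts:                 # y = vector of repeat counts at one site
              like_given_N = np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
              ll += np.log(np.sum(like_given_N * prior) + 1e-300)
          return -ll

      # Simulate 50 sites x 3 visits with lam = 5, p = 0.4 and recover the parameters
      rng = np.random.default_rng(5)
      N_true = rng.poisson(5, size=50)
      counts = rng.binomial(N_true[:, None], 0.4, size=(50, 3))
      fit = minimize(nmix_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
      print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))   # estimates of lam and p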

  11. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions are shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two phase isentropic expansion coefficient which results in an error of only 2 percent for the high pressure case.

  12. Instrumental variable specifications and assumptions for longitudinal analysis of mental health cost offsets.

    PubMed

    O'Malley, A James

    2012-12-01

    Instrumental variables (IVs) enable causal estimates in observational studies to be obtained in the presence of unmeasured confounders. In practice, a diverse range of models and IV specifications can be brought to bear on a problem, particularly with longitudinal data where treatment effects can be estimated for various functions of current and past treatment. However, the empirical consequences of different assumptions are seldom examined, despite the fact that IV analyses make strong assumptions that cannot be conclusively tested by the data. In this paper, we consider several longitudinal models and specifications of IVs. Methods are applied to data from a 7-year study of mental health costs of atypical and conventional antipsychotics, whose purpose was to evaluate whether the newer and more expensive atypical antipsychotic medications lead to a reduction in overall mental health costs.
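
    As a generic illustration of the instrumental-variable idea discussed above (not the longitudinal specifications compared in the paper), the following sketch computes a two-stage least-squares estimate for a single endogenous treatment; the simulated confounder, instrument strength, and effect size are assumed values.

      # Two-stage least squares (2SLS) sketch with a simulated unmeasured confounder.
      # All numeric values are illustrative assumptions, not estimates from the cited study.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 5000
      u = rng.normal(size=n)                 # unmeasured confounder
      z = rng.normal(size=n)                 # instrument: affects treatment, not outcome directly
      treat = 0.8 * z + 0.9 * u + rng.normal(size=n)
      cost = 2.0 * treat + 1.5 * u + rng.normal(size=n)   # true treatment effect = 2.0

      X = np.column_stack([np.ones(n), treat])
      Z = np.column_stack([np.ones(n), z])

      # Naive OLS is confounded by u; 2SLS uses the instrument to recover the causal effect.
      beta_ols = np.linalg.lstsq(X, cost, rcond=None)[0]
      treat_hat = Z @ np.linalg.lstsq(Z, treat, rcond=None)[0]   # first stage
      X2 = np.column_stack([np.ones(n), treat_hat])
      beta_2sls = np.linalg.lstsq(X2, cost, rcond=None)[0]       # second stage

      print("OLS effect:", beta_ols[1], " 2SLS effect:", beta_2sls[1])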

  13. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10¹⁶ Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10¹⁷ Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.

  14. How Much Does it Cost to Expand a Protected Area System? Some Critical Determining Factors and Ranges of Costs for Queensland

    PubMed Central

    Adams, Vanessa M.; Segan, Daniel B.; Pressey, Robert L.

    2011-01-01

    Many governments have recently gone on record promising large-scale expansions of protected areas to meet global commitments such as the Convention on Biological Diversity. As systems of protected areas are expanded to be more comprehensive, they are more likely to be implemented if planners have realistic budget estimates so that appropriate funding can be requested. Estimating financial budgets a priori must acknowledge the inherent uncertainties and assumptions associated with key parameters, so planners should recognize these uncertainties by estimating ranges of potential costs. We explore the challenge of budgeting a priori for protected area expansion in the face of uncertainty, specifically considering the future expansion of protected areas in Queensland, Australia. The government has committed to adding ∼12 million ha to the reserve system, bringing the total area protected to 20 million ha by 2020. We used Marxan to estimate the costs of potential reserve designs with data on actual land value, market value, transaction costs, and land tenure. With scenarios, we explored three sources of budget variability: size of biodiversity objectives; subdivision of properties; and legal acquisition routes varying with tenure. Depending on the assumptions made, our budget estimates ranged from $214 million to $2.9 billion. Estimates were most sensitive to assumptions made about legal acquisition routes for leasehold land. Unexpected costs (costs encountered by planners when real-world costs deviate from assumed costs) responded non-linearly to inability to subdivide and percentage purchase of private land. A financially conservative approach - one that safeguards against large cost increases while allowing for potential financial windfalls - would involve less optimistic assumptions about acquisition and subdivision to allow Marxan to avoid expensive properties where possible while meeting conservation objectives. We demonstrate how a rigorous analysis can inform discussions about the expansion of systems of protected areas, including the identification of factors that influence budget variability. PMID:21980459

  15. Estimation of bias with the single-zone assumption in measurement of residential air exchange using the perfluorocarbon tracer gas method.

    PubMed

    Van Ryswyk, K; Wallace, L; Fugler, D; MacNeill, M; Héroux, M È; Gibson, M D; Guernsey, J R; Kindzierski, W; Wheeler, A J

    2015-12-01

    Residential air exchange rates (AERs) are vital in understanding the temporal and spatial drivers of indoor air quality (IAQ). Several methods to quantify AERs have been used in IAQ research, often with the assumption that the home is a single, well-mixed air zone. Since 2005, Health Canada has conducted IAQ studies across Canada in which AERs were measured using the perfluorocarbon tracer (PFT) gas method. Emitters and detectors of a single PFT gas were placed on the main floor to estimate a single-zone AER (AER(1z)). In three of these studies, a second set of emitters and detectors were deployed in the basement or second floor in approximately 10% of homes for a two-zone AER estimate (AER(2z)). In total, 287 daily pairs of AER(2z) and AER(1z) estimates were made from 35 homes across three cities. In 87% of the cases, AER(2z) was higher than AER(1z). Overall, the AER(1z) estimates underestimated AER(2z) by approximately 16% (IQR: 5-32%). This underestimate occurred in all cities and seasons and varied in magnitude seasonally, between homes, and daily, indicating that when measuring residential air exchange using a single PFT gas, the assumption of a single well-mixed air zone very likely results in an underprediction of the AER. The results of this study suggest that the long-standing assumption that a home represents a single well-mixed air zone may result in a substantial negative bias in air exchange estimates. Indoor air quality professionals should take this finding into consideration when developing study designs or making decisions related to the recommendation and installation of residential ventilation systems. © 2014 Her Majesty the Queen in Right of Canada. Indoor Air published by John Wiley & Sons Ltd. Reproduced with the permission of the Minister of Health Canada.

  16. How much does it cost to expand a protected area system? Some critical determining factors and ranges of costs for Queensland.

    PubMed

    Adams, Vanessa M; Segan, Daniel B; Pressey, Robert L

    2011-01-01

    Many governments have recently gone on record promising large-scale expansions of protected areas to meet global commitments such as the Convention on Biological Diversity. As systems of protected areas are expanded to be more comprehensive, they are more likely to be implemented if planners have realistic budget estimates so that appropriate funding can be requested. Estimating financial budgets a priori must acknowledge the inherent uncertainties and assumptions associated with key parameters, so planners should recognize these uncertainties by estimating ranges of potential costs. We explore the challenge of budgeting a priori for protected area expansion in the face of uncertainty, specifically considering the future expansion of protected areas in Queensland, Australia. The government has committed to adding ∼12 million ha to the reserve system, bringing the total area protected to 20 million ha by 2020. We used Marxan to estimate the costs of potential reserve designs with data on actual land value, market value, transaction costs, and land tenure. With scenarios, we explored three sources of budget variability: size of biodiversity objectives; subdivision of properties; and legal acquisition routes varying with tenure. Depending on the assumptions made, our budget estimates ranged from $214 million to $2.9 billion. Estimates were most sensitive to assumptions made about legal acquisition routes for leasehold land. Unexpected costs (costs encountered by planners when real-world costs deviate from assumed costs) responded non-linearly to inability to subdivide and percentage purchase of private land. A financially conservative approach--one that safeguards against large cost increases while allowing for potential financial windfalls--would involve less optimistic assumptions about acquisition and subdivision to allow Marxan to avoid expensive properties where possible while meeting conservation objectives. We demonstrate how a rigorous analysis can inform discussions about the expansion of systems of protected areas, including the identification of factors that influence budget variability.

  17. Implications of scaled δ15N fractionation for community predator-prey body mass ratio estimates in size-structured food webs.

    PubMed

    Reum, Jonathan C P; Jennings, Simon; Hunsicker, Mary E

    2015-11-01

    Nitrogen stable isotope ratios (δ15N) may be used to estimate community-level relationships between trophic level (TL) and body size in size-structured food webs and hence the mean predator-to-prey body mass ratio (PPMR). In turn, PPMR is used to estimate mean food chain length, trophic transfer efficiency and rates of change in abundance with body mass (usually reported as slopes of size spectra) and to calibrate and validate food web models. When estimating TL, researchers had assumed that fractionation of δ15N (Δδ15N) did not change with TL. However, a recent meta-analysis indicated that this assumption was not as well supported by data as the assumption that Δδ15N scales negatively with the δ15N of prey. We collated existing fish community δ15N-body size data for the Northeast Atlantic and tropical Western Arabian Sea with new data from the Northeast Pacific. These data were used to estimate TL-body mass relationships and PPMR under constant and scaled Δδ15N assumptions, and to assess how the scaled Δδ15N assumption affects our understanding of the structure of these food webs. Adoption of the scaled Δδ15N approach markedly reduces the previously reported differences in TL at body mass among fish communities from different regions. With scaled Δδ15N, TL-body mass relationships became more positive and PPMR fell. Results implied that realized prey sizes in these size-structured fish communities are less variable than previously assumed and that food chains are potentially longer. The adoption of generic PPMR estimates for calibration and validation of size-based fish community models is better supported than hitherto assumed, but predicted slopes of community size spectra are more sensitive to a given change or error in realized PPMR when PPMR is small. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.

  18. High Tech Educators Network Evaluation.

    ERIC Educational Resources Information Center

    O'Shea, Dan

    A process evaluation was conducted to assess the High Tech Educators Network's (HTEN's) activities. Four basic components of the evaluation approach were documentation review, a program logic model, a written survey, and participant interviews. The model mapped the basic goals and objectives, assumptions, activities, outcome expectations, and…

  19. 29 CFR 1607.9 - No assumption of validity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accounts of selection practices or selection outcomes. B. Encouragement of professional supervision. Professional supervision of selection activities is encouraged but is not a substitute for documented evidence... was conducted and that careful development and use of a selection procedure in accordance with...

  20. Base Case v.5.15 Documentation Supplement to Support the Clean Power Plan

    EPA Pesticide Factsheets

    Learn about several modeling assumptions used as part of EPA's analysis of the Clean Power Plan (Carbon Pollution Guidelines for Existing Electric Generating Units) using the EPA Base Case v.5.15 of the Integrated Planning Model (IPM).

  1. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
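
    One building block of the distance-sampling side of MRDS-type models is the detection function. The sketch below (a generic illustration under assumed values, not the MRDS-Nmix implementation) fits a half-normal detection function to simulated perpendicular distances by maximum likelihood.

      # Half-normal detection function sketch for line-transect distance sampling,
      # one component of MRDS-type models (generic illustration, not the MRDS-Nmix code).
      # The truncation distance w and the true sigma used for simulation are assumptions.
      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      sigma_true, w = 40.0, 120.0                       # metres; illustrative
      x = np.abs(rng.normal(0.0, sigma_true, 400))      # perpendicular detection distances
      x = x[x <= w]

      def neg_log_lik(log_sigma):
          s = np.exp(log_sigma)
          # f(x) = g(x) / integral_0^w g(u) du, with half-normal g(x) = exp(-x^2 / (2 s^2))
          g = np.exp(-x**2 / (2 * s**2))
          denom = np.sqrt(2 * np.pi) * s * (norm.cdf(w / s) - 0.5)
          return -np.sum(np.log(g / denom))

      fit = minimize_scalar(neg_log_lik, bounds=(0.0, 8.0), method="bounded")
      sigma_hat = np.exp(fit.x)
      print("sigma_hat =", sigma_hat, "mean detection prob =",
            np.sqrt(2 * np.pi) * sigma_hat * (norm.cdf(w / sigma_hat) - 0.5) / w)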

  2. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  3. Model documentation renewable fuels module of the National Energy Modeling System

    NASA Astrophysics Data System (ADS)

    1995-06-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1995 Annual Energy Outlook (AEO95) forecasts. The report catalogs and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. The RFM also reads in hydroelectric facility capacities and capacity factors from a data file for use by the NEMS Electricity Market Module (EMM). The purpose of the RFM is to define the technological, cost, and resource size characteristics of renewable energy technologies. These characteristics are used to compute a levelized cost to be competed against other similarly derived costs from other energy sources and technologies. The competition of these energy sources over the NEMS time horizon determines the market penetration of these renewable energy technologies. The characteristics include available energy capacity, capital costs, fixed operating costs, variable operating costs, capacity factor, heat rate, construction lead time, and fuel product price.
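
    The levelized-cost competition described above can be illustrated with a generic levelized-cost calculation; the capital recovery factor is the standard annuity formula, and every numeric input below is an assumed placeholder rather than an RFM value.

      # Generic levelized cost sketch (illustrative inputs, not RFM assumptions).
      def levelized_cost(capital_per_kw, fixed_om_per_kw_yr, var_om_per_mwh,
                         heat_rate_btu_per_kwh, fuel_per_mmbtu,
                         capacity_factor, discount_rate, lifetime_yr):
          # Capital recovery factor spreads the overnight capital cost over the plant life.
          crf = discount_rate * (1 + discount_rate) ** lifetime_yr / \
                ((1 + discount_rate) ** lifetime_yr - 1)
          mwh_per_kw_yr = 8760 * capacity_factor / 1000.0          # annual MWh per kW
          fixed = (capital_per_kw * crf + fixed_om_per_kw_yr) / mwh_per_kw_yr
          fuel = heat_rate_btu_per_kwh * fuel_per_mmbtu / 1000.0    # $/MWh
          return fixed + var_om_per_mwh + fuel                      # $/MWh

      # Hypothetical wind plant: no fuel, assumed 35% capacity factor.
      print(levelized_cost(1600.0, 40.0, 0.0, 0.0, 0.0, 0.35, 0.07, 25))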

  4. Documentation of spreadsheets for the analysis of aquifer-test and slug-test data

    USGS Publications Warehouse

    Halford, Keith J.; Kuniansky, Eve L.

    2002-01-01

    Several spreadsheets have been developed for the analysis of aquifer-test and slug-test data. Each spreadsheet incorporates analytical solution(s) of the partial differential equation for ground-water flow to a well for a specific type of condition or aquifer. The derivations of the analytical solutions were previously published. Thus, this report abbreviates the theoretical discussion, but includes practical information about each method and the important assumptions for the applications of each method. These spreadsheets were written in Microsoft Excel 9.0 (use of trade names does not constitute endorsement by the USGS). Storage properties should not be estimated with many of the spreadsheets because most are for analyzing single-well tests. Estimation of storage properties from single-well tests is generally discouraged because single-well tests are affected by wellbore storage and by well construction. These non-ideal effects frequently cause estimates of storage to be erroneous by orders of magnitude. Additionally, single-well tests are not sensitive to aquifer-storage properties. Single-well tests include all slug tests (Bouwer and Rice Method, Cooper-Bredehoeft-Papadopulos Method, and van der Kamp Method), the Cooper-Jacob straight-line method, Theis recovery-data analysis, the Jacob-Lohman method for flowing wells in a confined aquifer, and the step-drawdown test. Multi-well test spreadsheets included in this report are the Hantush-Jacob Leaky Aquifer Method and Distance-Drawdown Methods. The distance-drawdown method is an equilibrium or steady-state method; thus, storage cannot be estimated.
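
    As an example of the kind of calculation such spreadsheets automate, the sketch below applies the Cooper-Jacob straight-line approximation to synthetic drawdown data; the pumping rate, observation distance, and drawdowns are made-up values.

      # Cooper-Jacob straight-line method sketch (synthetic data, illustrative only).
      # T = 2.3 Q / (4 pi * ds) per log cycle;  S = 2.25 T t0 / r^2 (observation-well form).
      import numpy as np

      Q = 500.0                                  # pumping rate, m^3/day (assumed)
      r = 30.0                                   # distance to observation well, m (assumed)
      t = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])      # days
      s = np.array([0.30, 0.45, 0.64, 0.79, 0.94, 1.13, 1.28])  # drawdown, m (synthetic)

      slope, intercept = np.polyfit(np.log10(t), s, 1)   # drawdown per log cycle
      T = 2.3 * Q / (4 * np.pi * slope)                  # transmissivity, m^2/day
      t0 = 10 ** (-intercept / slope)                    # time of zero drawdown, days
      S = 2.25 * T * t0 / r**2                           # storativity (dimensionless)
      print(f"T = {T:.1f} m^2/day, S = {S:.2e}")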

  5. A catastrophic flood caused by drainage of a caldera lake at Aniakchak Volcano, Alaska, and implications for volcanic hazards assessment

    USGS Publications Warehouse

    Waythomas, C.F.; Walder, J.S.; McGimsey, R.G.; Neal, C.A.

    1996-01-01

    Aniakchak caldera, located on the Alaska Peninsula of southwest Alaska, formerly contained a large lake (estimated volume 3.7 × 10⁹ m³) that rapidly drained as a result of failure of the caldera rim sometime after ca. 3400 yr B.P. The peak discharge of the resulting flood was estimated using three methods: (1) flow-competence equations, (2) step-backwater modeling, and (3) a dam-break model. The results of the dam-break model indicate that the peak discharge at the breach in the caldera rim was at least 7.7 × 10⁴ m³ s⁻¹, and the maximum possible discharge was ∼1.1 × 10⁶ m³ s⁻¹. Flow-competence estimates of discharge, based on the largest boulders transported by the flood, indicate that peak discharges a few kilometers downstream of the breach ranged from 6.4 × 10⁵ to 4.8 × 10⁶ m³ s⁻¹. Similar but less variable results were obtained by step-backwater modeling. Finally, discharge estimates based on regression equations relating peak discharge to the volume and depth of the impounded water, although limited by constraining assumptions, provide results within the range of values determined by the other methods. The discovery and documentation of a flood, caused by the failure of the caldera rim at Aniakchak caldera, underscore the significance and associated hydrologic hazards of potential large floods at other lake-filled calderas.

  6. Estimating the Organizational Cost of Sexual Assault in the U.S. Military

    DTIC Science & Technology

    2013-12-01

    [Report front matter and table-of-contents fragment; recoverable headings: Assumptions and Data Analysis; Overall Project Assumptions; Overall Project Calculations; Calculations Using Data from FY 2012 WGRA Tabular…]

  7. Causal Models with Unmeasured Variables: An Introduction to LISREL.

    ERIC Educational Resources Information Center

    Wolfle, Lee M.

    Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…

  8. An identifiable model for informative censoring

    USGS Publications Warehouse

    Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.

    1988-01-01

    The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.
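
    A small simulation (ours, not the authors') makes the point concrete: when a shared frailty drives both the lifetime and the censoring time, the Kaplan-Meier estimate computed under the usual independence assumption is biased. The frailty mechanism and all numeric values below are illustrative assumptions.

      # Kaplan-Meier under informative censoring: a toy simulation.
      # A shared frailty 'u' makes both the lifetime and the censoring time short for the same
      # subject, so censoring is informative; KM then overstates survival at later times.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 20000
      u = rng.gamma(2.0, 1.0, n)                       # shared frailty (assumed mechanism)
      lifetime = rng.exponential(1.0 / u)              # hazard proportional to u
      cens = rng.exponential(2.0 / u)                  # censoring also driven by u -> informative
      time = np.minimum(lifetime, cens)
      event = (lifetime <= cens).astype(int)

      def km(time, event, grid):
          """Plain Kaplan-Meier survival estimate evaluated on a grid of times."""
          order = np.argsort(time)
          t, d = time[order], event[order]
          at_risk = len(t) - np.arange(len(t))
          step = np.concatenate(([1.0], np.cumprod(1.0 - d / at_risk)))
          return step[np.searchsorted(t, grid, side="right")]

      grid = np.linspace(0.0, 3.0, 7)
      print("true S(t):", np.round(np.mean(lifetime[:, None] > grid[None, :], axis=0), 3))
      print("KM   S(t):", np.round(km(time, event, grid), 3))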

  9. Social-Psychological Factors Influencing Recreation Demand: Evidence from Two Recreational Rivers

    ERIC Educational Resources Information Center

    Smith, Jordan W.; Moore, Roger L.

    2013-01-01

    Traditional methods of estimating demand for recreation areas involve making inferences about individuals' preferences. Frequently, the assumption is made that recreationists' cost of traveling to a site is a reliable measure of the value they place on that resource and the recreation opportunities it provides. This assumption may ignore other…

  10. Links between causal effects and causal association for surrogacy evaluation in a Gaussian setting.

    PubMed

    Conlon, Anna; Taylor, Jeremy; Li, Yun; Diaz-Ordaz, Karla; Elliott, Michael

    2017-11-30

    Two paradigms for the evaluation of surrogate markers in randomized clinical trials have been proposed: the causal effects paradigm and the causal association paradigm. Each of these paradigms relies on assumptions that must be made to proceed with estimation and to validate a candidate surrogate marker (S) for the true outcome of interest (T). We consider the setting in which S and T are Gaussian and are generated from structural models that include an unobserved confounder. Under the assumed structural models, we relate the quantities used to evaluate surrogacy within both the causal effects and causal association frameworks. We review some of the common assumptions made to aid in estimating these quantities and show that assumptions made within one framework can imply strong assumptions within the alternative framework. We demonstrate that there is a similarity, but not an exact correspondence, between the quantities used to evaluate surrogacy within each framework, and show that the conditions for identifiability of the surrogacy parameters are different from the conditions that lead to a correspondence of these quantities. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Procedures used to estimate hardwood lumber consumption from 1963 to 2002

    Treesearch

    William Luppold; Matthew Bumgardner

    2008-01-01

    This paper presents an explanation for and procedures used to estimate hardwood lumber consumption by secondary hardwood processing industries from 1963 to 2002. This includes: classification of industry and industry groups, development of proxy prices used to estimate lumber consumption, assumptions used to convert dimension purchases to lumber consumption, estimation...

  12. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  13. Estimating Mutual Information for High-to-Low Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michaud, Isaac James; Williams, Brian J.; Weaver, Brian Phillip

    The presentation shows that KSG 2 is superior to KSG 1 because it scales locally automatically; that KSG estimators are limited to a maximum MI due to sample size; that LNC extends the capability of KSG without onerous assumptions; and that iLNC allows LNC to estimate information gain.
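
    For context, KSG-type estimators compute mutual information from k-nearest-neighbour statistics. The sketch below is a bare-bones implementation of the first Kraskov-Stögbauer-Grassberger estimator (KSG 1) applied to simulated correlated Gaussians; it is our illustration, not the presentation's code, and the sample size and choice of k are arbitrary.

      # Bare-bones KSG-1 mutual information estimator (illustrative, not the cited code).
      # I(X;Y) ~= psi(k) + psi(N) - mean(psi(n_x + 1) + psi(n_y + 1)), Chebyshev (max) norm.
      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.special import digamma

      def ksg1_mi(x, y, k=5):
          x, y = x.reshape(-1, 1), y.reshape(-1, 1)
          n = len(x)
          xy = np.hstack([x, y])
          # distance to the k-th neighbour in the joint space (max norm), excluding the point itself
          d, _ = cKDTree(xy).query(xy, k=k + 1, p=np.inf)
          eps = d[:, -1]
          # count marginal neighbours strictly within eps (subtract 1 to drop the point itself)
          nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
          ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1
          return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

      rng = np.random.default_rng(0)
      rho = 0.8
      z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=4000)
      print("KSG-1 estimate:", ksg1_mi(z[:, 0], z[:, 1]))
      print("true Gaussian MI:", -0.5 * np.log(1 - rho**2))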

  14. Estimating Alarm Thresholds for Process Monitoring Data under Different Assumptions about the Data Generating Mechanism

    DOE PAGES

    Burr, Tom; Hamada, Michael S.; Howell, John; ...

    2013-01-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
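
    As a simple illustration of setting a threshold for a target false alarm rate (not one of the model-selection options considered in the paper), the sketch below contrasts a Gaussian-assumption threshold with an empirical quantile on simulated residuals drawn from a deliberately non-Gaussian mixture; all distributional choices are assumptions.

      # Alarm threshold sketch for a 0.1% per-period false alarm rate.
      # Residuals come from a two-component mixture, so the Gaussian threshold is too low.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      alpha = 0.001
      # Illustrative data-generating mechanism: mostly tight residuals, occasional noisy periods.
      resid = np.where(rng.random(200000) < 0.9,
                       rng.normal(0.0, 1.0, 200000),
                       rng.normal(0.0, 3.0, 200000))

      gauss_threshold = resid.mean() + norm.ppf(1 - alpha) * resid.std()
      empirical_threshold = np.quantile(resid, 1 - alpha)

      print("Gaussian-assumption threshold:", round(gauss_threshold, 2))
      print("Empirical quantile threshold :", round(empirical_threshold, 2))
      print("Realized false alarm rate under Gaussian threshold:",
            np.mean(resid > gauss_threshold))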

  15. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly affected the estimated class sizes and had the potential to substantially distort the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  16. Data and methods to characterize the role of sex work and to inform sex work programs in generalized HIV epidemics: evidence to challenge assumptions.

    PubMed

    Mishra, Sharmistha; Boily, Marie-Claude; Schwartz, Sheree; Beyrer, Chris; Blanchard, James F; Moses, Stephen; Castor, Delivette; Phaswana-Mafuya, Nancy; Vickerman, Peter; Drame, Fatou; Alary, Michel; Baral, Stefan D

    2016-08-01

    In the context of generalized human immunodeficiency virus (HIV) epidemics, there has been limited recent investment in HIV surveillance and prevention programming for key populations, including female sex workers. Often implicit in the decision to limit investment in these epidemic settings are assumptions, including that commercial sex is not significant to the sustained transmission of HIV and that HIV interventions designed to reach "all segments of society" will reach female sex workers and clients. Emerging empiric and model-based evidence is challenging these assumptions. This article highlights the frameworks and estimates used to characterize the role of sex work in HIV epidemics, as well as the relevant empiric data landscape on sex work in generalized HIV epidemics and its strengths and limitations. Traditional approaches to estimate the contribution of sex work to HIV epidemics do not capture the potential for upstream and downstream sexual and vertical HIV transmission. Emerging approaches such as the transmission population attributable fraction from dynamic mathematical models can address this gap. To move forward, the HIV scientific community must begin by replacing assumptions about the epidemiology of generalized HIV epidemics with data and more appropriate methods of estimating the contribution of unprotected sex in the context of sex work. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Pregnancy intentions-a complex construct and call for new measures.

    PubMed

    Mumford, Sunni L; Sapra, Katherine J; King, Rosalind B; Louis, Jean Fredo; Buck Louis, Germaine M

    2016-11-01

    Objective: To estimate the prevalence of unintended pregnancies under relaxed assumptions regarding birth control use compared with a traditional constructed measure. Design: Cross-sectional survey. Setting: Not applicable. Patient(s): Nationally representative sample of U.S. women aged 15-44 years. Intervention(s): None. Main Outcome Measure(s): Prevalence of intended and unintended pregnancies as estimated by [1] a traditional constructed measure from the National Survey of Family Growth (NSFG), and [2] a constructed measure relaxing assumptions regarding birth control use, reasons for nonuse, and pregnancy timing. Result(s): The prevalence of unintended pregnancies was 6% higher using the traditional constructed measure as compared with the approach with relaxed assumptions (NSFG: 44%, 95% confidence interval [CI] 41, 46; new construct 38%, 95% CI, 36, 41). Using the NSFG approach, only 92% of women who stopped birth control to become pregnant and no women who were not using contraceptives at the time of the pregnancy and reported that they did not mind getting pregnant were classified as having intended pregnancies, compared with 100% using the new construct. Conclusion(s): Current measures of pregnancy intention may overestimate rates of unintended pregnancy, with over 340,000 pregnancies in the United States misclassified as unintended using the current approach, corresponding to an estimated savings of $678 million in public health-care expenditures. Current constructs make assumptions that may not reflect contemporary reproductive practices, so improved measures are needed. Published by Elsevier Inc.

  18. Impact of viral load and the duration of primary infection on HIV transmission: systematic review and meta-analysis

    PubMed Central

    BLASER, Nello; WETTSTEIN, Celina; ESTILL, Janne; VIZCAYA, Luisa SALAZAR; WANDELER, Gilles; EGGER, Matthias; KEISER, Olivia

    2014-01-01

    Objectives HIV ‘treatment as prevention’ (TasP) describes early treatment of HIV-infected patients intended to reduce viral load (VL) and transmission. Crucial assumptions for estimating TasP's effectiveness are the underlying estimates of transmission risk. We aimed to determine the transmission risk during primary infection and the relationship of HIV transmission risk to VL. Design Systematic review and meta-analysis. Methods We searched PubMed and Embase databases for studies that established a relationship between VL and transmission risk, or primary infection and transmission risk, in serodiscordant couples. We analyzed assumptions about the relationship between VL and transmission risk, and between duration of primary infection and transmission risk. Results We found 36 eligible articles, based on six different study populations. Studies consistently found that larger VLs lead to higher HIV transmission rates, but assumptions about the shape of this increase varied from exponential increase to saturation. The assumed duration of primary infection ranged from 1.5 to 12 months; for each additional month, the log10 transmission rate ratio between primary and asymptomatic infection decreased by 0.40. Conclusions Assumptions and estimates of the relationship between VL and transmission risk, and the relationship between primary infection and transmission risk, vary substantially and predictions of TasP's effectiveness should take this uncertainty into account. PMID:24691205

  19. Estimation of bias with the single-zone assumption in measurement of residential air exchange using the perfluorocarbon tracer gas method

    PubMed Central

    Van Ryswyk, K; Wallace, L; Fugler, D; MacNeill, M; Héroux, M È; Gibson, M D; Guernsey, J R; Kindzierski, W; Wheeler, A J

    2015-01-01

    Residential air exchange rates (AERs) are vital in understanding the temporal and spatial drivers of indoor air quality (IAQ). Several methods to quantify AERs have been used in IAQ research, often with the assumption that the home is a single, well-mixed air zone. Since 2005, Health Canada has conducted IAQ studies across Canada in which AERs were measured using the perfluorocarbon tracer (PFT) gas method. Emitters and detectors of a single PFT gas were placed on the main floor to estimate a single-zone AER (AER1z). In three of these studies, a second set of emitters and detectors were deployed in the basement or second floor in approximately 10% of homes for a two-zone AER estimate (AER2z). In total, 287 daily pairs of AER2z and AER1z estimates were made from 35 homes across three cities. In 87% of the cases, AER2z was higher than AER1z. Overall, the AER1z estimates underestimated AER2z by approximately 16% (IQR: 5–32%). This underestimate occurred in all cities and seasons and varied in magnitude seasonally, between homes, and daily, indicating that when measuring residential air exchange using a single PFT gas, the assumption of a single well-mixed air zone very likely results in an underprediction of the AER. PMID:25399878

  20. Asymmetrical effects of mesophyll conductance on fundamental photosynthetic parameters and their relationships estimated from leaf gas exchange measurements.

    PubMed

    Sun, Ying; Gu, Lianhong; Dickinson, Robert E; Pallardy, Stephen G; Baker, John; Cao, Yonghui; DaMatta, Fábio Murilo; Dong, Xuejun; Ellsworth, David; Van Goethem, Davina; Jensen, Anna M; Law, Beverly E; Loos, Rodolfo; Martins, Samuel C Vitor; Norby, Richard J; Warren, Jeffrey; Weston, David; Winter, Klaus

    2014-04-01

    Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analysed in conjunction with model simulations to determine the effects of mesophyll conductance (gm) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite gm results in up to 75% underestimation for maximum carboxylation rate Vcmax, 60% for maximum electron transport rate Jmax, and 40% for triose phosphate utilization rate Tu. Vcmax is most sensitive, Jmax is less sensitive, and Tu has the least sensitivity to the variation of gm. Because of this asymmetrical effect of gm, the ratios of Jmax to Vcmax, Tu to Vcmax and Tu to Jmax are all overestimated. An infinite gm assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying gm for understanding in situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A non-linear function with gm as input is developed to convert the parameters estimated under an assumption of infinite gm to proper values. This function will facilitate gm representation in global carbon cycle models. © 2013 John Wiley & Sons Ltd.

  1. Improving the evidence base for better comparative effectiveness research.

    PubMed

    Brophy, James M

    2015-09-01

    The last 20 years have documented that the evidence base for informed clinical decision-making is often suboptimal. It is hoped that high-quality comparative effectiveness research may fill these knowledge gaps. Implicit in these changing paradigms is the underlying assumption that the published evidence, when available, is valid. It is posited here that this assumption is sometimes questionable. However, several recent methods that may improve the design and analysis of comparative effectiveness research have appeared and are discussed here. Examples from the cardiology literature are provided, but it is believed that the highlighted principles are applicable to other branches of medicine.

  2. Documentation of volume 3 of the 1978 Energy Information Administration annual report to congress

    NASA Astrophysics Data System (ADS)

    1980-02-01

    In a preliminary overview of the projection process, the relationship between energy prices, supply, and demand is addressed. Topics treated in detail include a description of energy economic interactions, assumptions regarding world oil prices, and energy modeling in the long term beyond 1995. Subsequent sections present the general approach and methodology underlying the forecasts, and define and describe the alternative projection series and their associated assumptions. Short term forecasting, midterm forecasting, long term forecasting of petroleum, coal, and gas supplies are included. The role of nuclear power as an energy source is also discussed.

  3. Assessment of dietary exposure in the French population to 13 selected food colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners.

    PubMed

    Bemrah, Nawel; Leblanc, Jean-Charles; Volatier, Jean-Luc

    2008-01-01

    The results of French intake estimates for 13 food additives prioritized by the methods proposed in the 2001 Report from the European Commission on Dietary Food Additive Intake in the European Union are reported. These 13 additives were selected using the first and second tiers of the three-tier approach. The first tier was based on theoretical food consumption data and the maximum permitted level of additives. The second tier used real individual food consumption data and the maximum permitted level of additives for the substances which exceeded the acceptable daily intakes (ADI) in the first tier. In the third tier reported in this study, intake estimates were calculated for the 13 additives (colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners) according to two modelling assumptions corresponding to two different food habit scenarios (assumption 1: consumers consume foods that may or may not contain food additives, and assumption 2: consumers always consume foods that contain additives) when possible. In this approach, real individual food consumption data and the occurrence/use-level of food additives reported by the food industry were used. Overall, the results of the intake estimates are reassuring for the majority of additives studied since the risk of exceeding the ADI was low, except for nitrites, sulfites and annatto, whose ADIs were exceeded by either children or adult consumers or by both populations under one and/or two modelling assumptions. Under the first assumption, the ADI is exceeded for high consumers among adults for nitrites and sulfites (155 and 118.4%, respectively) and among children for nitrites (275%). Under the second assumption, the average nitrites dietary exposure in children exceeds the ADI (146.7%). For high consumers, adults exceed the nitrite and sulfite ADIs (223 and 156.4%, respectively) and children exceed the nitrite, annatto and sulfite ADIs (416.7, 124.6 and 130.6%, respectively).
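
    To make the tier-style calculation concrete, here is a toy screening computation comparing an estimated daily intake with an ADI; the foods, consumption amounts, use levels, body weight, and ADI are entirely hypothetical and are not values from the French survey.

      # Toy additive-intake screening calculation (all numbers hypothetical).
      # Estimated daily intake (mg/kg bw/day) =
      #   sum(consumption_g * use_level_mg_per_kg / 1000) / body_weight_kg
      foods_g_per_day = {"cured_meat": 50.0, "processed_cheese": 30.0}        # hypothetical consumer
      use_level_mg_per_kg = {"cured_meat": 120.0, "processed_cheese": 80.0}   # hypothetical use levels
      body_weight_kg = 60.0
      adi_mg_per_kg_bw = 0.07        # hypothetical ADI for the additive

      intake = sum(g * use_level_mg_per_kg[f] / 1000.0
                   for f, g in foods_g_per_day.items()) / body_weight_kg
      print(f"intake = {intake:.3f} mg/kg bw/day = {100 * intake / adi_mg_per_kg_bw:.0f}% of ADI")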

  4. Cabrillo College Master Plan, 2001-2004.

    ERIC Educational Resources Information Center

    Cabrillo Coll., Aptos, CA. Office of Institutional Research.

    This document presents Cabrillo College's (California) master plan for 2001 to 2004. Major steps in compiling the master plan included: conducting environmental scanning, making planning assumptions, establishing goals, identifying objectives, developing strategies, and evaluating the plan. The plan is based on six broad goals: (1) to enable…

  5. International Education in Australian Universities: Concepts and Definitions.

    ERIC Educational Resources Information Center

    Clyne, Fiona; Marginson, Simon; Woock, Roger

    This paper grew out of the research study "Mapping the Internationalization of Higher Education," a 1998-2000 Australian Research Council-funded project. The project's objectives included: documenting the practices of international education in Australian universities; analyzing the cultural, political, and economic assumptions on which…

  6. 40 CFR 68.39 - Documentation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...

  7. 40 CFR 68.39 - Documentation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...

  8. 40 CFR 68.39 - Documentation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...

  9. 40 CFR 68.39 - Documentation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...

  10. 40 CFR 68.39 - Documentation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...

  11. Painting, Poetry, and Science.

    ERIC Educational Resources Information Center

    Seaman, David W.

    The first half of this document examines the relationships between painting, poetry, and science, describing in particular their increasing focus on letters and typography. The poets and painters of mid-nineteenth century France, the Futurist school, and the Lettrist movement are discussed. Their common assumption, fundamental to modern…

  12. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and alternatives

    Treesearch

    William L. Thompson

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...

  13. “Hybrid Topics” -- Facilitating the Interpretation of Topics Through the Addition of MeSH Descriptors to Bags of Words

    PubMed Central

    Yu, Zhiguo; Nguyen, Thang; Dhombres, Ferdinand; Johnson, Todd; Bodenreider, Olivier

    2018-01-01

    Extracting and understanding information, themes and relationships from large collections of documents is an important task for biomedical researchers. Latent Dirichlet Allocation is an unsupervised topic modeling technique using the bag-of-words assumption that has been applied extensively to unveil hidden thematic information within large sets of documents. In this paper, we added MeSH descriptors to the bag-of-words assumption to generate ‘hybrid topics’, which are mixed vectors of words and descriptors. We evaluated this approach on the quality and interpretability of topics in both a general corpus and a specialized corpus. Our results demonstrated that the coherence of ‘hybrid topics’ is higher than that of regular bag-of-words topics in the specialized corpus. We also found that the proportion of topics that are not associated with MeSH descriptors is higher in the specialized corpus than in the general corpus. PMID:29295179
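
    A minimal way to reproduce the 'hybrid topics' idea, assuming one simply appends MeSH descriptors as extra tokens to each document's bag of words, is sketched below with scikit-learn; the toy documents and descriptor tags are invented for illustration and this is not the authors' pipeline.

      # 'Hybrid topics' sketch: append MeSH descriptors (underscore-joined tokens) to each bag of
      # words, then run ordinary LDA. Toy corpus and descriptors are invented, not the paper's data.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = [
          "insulin resistance and glucose metabolism in diabetic patients",
          "beta cell function and insulin secretion",
          "topic modeling of clinical documents with latent dirichlet allocation",
          "unsupervised text mining of biomedical literature",
      ]
      mesh = [["MeSH_Diabetes_Mellitus", "MeSH_Insulin"],
              ["MeSH_Insulin", "MeSH_Islets_of_Langerhans"],
              ["MeSH_Natural_Language_Processing"],
              ["MeSH_Data_Mining", "MeSH_Natural_Language_Processing"]]

      hybrid_docs = [d + " " + " ".join(tags) for d, tags in zip(docs, mesh)]

      vec = CountVectorizer(token_pattern=r"[A-Za-z_]+")   # keep descriptor tokens intact
      X = vec.fit_transform(hybrid_docs)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

      terms = vec.get_feature_names_out()
      for k, comp in enumerate(lda.components_):
          top = [terms[i] for i in comp.argsort()[::-1][:6]]
          print(f"hybrid topic {k}: {top}")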

  14. Advanced Platform Systems Technology study. Volume 2: Trade study and technology selection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Three primary tasks were identified: Task 1, trade studies; Task 2, trade study comparison and technology selection; and Task 3, technology definition. Task 1 general objectives were to identify candidate technology trade areas, determine which areas have the highest potential payoff, define specific trades within the high payoff areas, and perform the trade studies. In order to satisfy these objectives, a structured, organized approach was employed. Candidate technology areas and specific trades were screened using consistent selection criteria and considering possible interrelationships. A data base comprising both manned and unmanned space platform documentation was used as a source of system and subsystem requirements. When requirements were not stated in the data base documentation, assumptions were made and recorded where necessary to characterize a particular spacecraft system. The requirements and assumptions were used together with the selection criteria to establish technology advancement goals and select trade studies. While both manned and unmanned platform data were used, the study was focused on the concept of an early manned space station.

  15. Development, Production and Validation of the NOAA Solar Irradiance Climate Data Record

    NASA Astrophysics Data System (ADS)

    Coddington, O.; Lean, J.; Pilewskie, P.; Snow, M. A.; Lindholm, D. M.

    2015-12-01

    A new climate data record of Total Solar Irradiance (TSI) and Solar Spectral Irradiance (SSI), including source code and supporting documentation, is now publicly available as part of the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Information (NCEI) Climate Data Record (CDR) Program. Daily and monthly averaged values of TSI and SSI, with associated time- and wavelength-dependent uncertainties, are estimated from 1882 to the present with yearly averaged values since 1610, updated quarterly for the foreseeable future. The new Solar Irradiance Climate Data Record, jointly developed by the University of Colorado at Boulder's Laboratory for Atmospheric and Space Physics (LASP) and the Naval Research Laboratory (NRL), is constructed from solar irradiance models that determine the changes from quiet Sun conditions when bright faculae and dark sunspots are present on the solar disk. The magnitudes of the irradiance changes that these features produce are determined from linear regression of the proxy Mg II index and sunspot area indices against the approximately decade-long solar irradiance measurements made by instruments on the SOlar Radiation and Climate Experiment (SORCE) spacecraft. We describe the model formulation, uncertainty estimates, operational implementation and validation approach. Future efforts to improve the uncertainty estimates of the Solar Irradiance CDR arising from model assumptions, and augmentation of the solar irradiance reconstructions with direct measurements from the Total and Spectral Solar Irradiance Sensor (TSIS: launch date, July 2017) are also discussed.
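
    The regression step described above can be illustrated generically: regress observed irradiance on a facular proxy and a sunspot-darkening proxy, then reuse the fitted coefficients for reconstruction. The synthetic proxies and sensitivities below are placeholders, not the operational model's values.

      # Generic two-proxy irradiance regression sketch (synthetic data; not the operational model).
      # delta_TSI ~ a * facular_proxy + b * sunspot_proxy, fitted by ordinary least squares.
      import numpy as np

      rng = np.random.default_rng(11)
      n_days = 3000
      mg2 = np.abs(rng.normal(0.15, 0.01, n_days))       # stand-in facular (Mg II-like) index
      ssa = rng.gamma(1.5, 300.0, n_days)                # stand-in sunspot area index (ppm of disk)

      a_true, b_true, quiet = 60.0, -2.0e-4, 1360.5      # assumed "true" sensitivities, W/m^2
      tsi = quiet + a_true * (mg2 - mg2.mean()) + b_true * ssa + rng.normal(0, 0.05, n_days)

      X = np.column_stack([np.ones(n_days), mg2 - mg2.mean(), ssa])
      coef, *_ = np.linalg.lstsq(X, tsi, rcond=None)
      print("fitted [quiet, facular, sunspot] terms:", np.round(coef, 5))

      # Reconstruction for a new day with given proxy values:
      print("reconstructed TSI:", coef[0] + coef[1] * (0.162 - mg2.mean()) + coef[2] * 500.0)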

  16. Estimating Bacterial Diversity for Ecological Studies: Methods, Metrics, and Assumptions

    PubMed Central

    Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake

    2015-01-01

    Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering and when the same analysis was used on a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA Fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
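
    For reference, two of the standard metrics mentioned above can be computed directly from an OTU count table; the tiny table below is fabricated purely to show the arithmetic.

      # Richness and Shannon diversity from a toy OTU count table (rows = samples, columns = OTUs).
      import numpy as np

      counts = np.array([[120, 30, 5, 0, 1],     # fabricated sample 1
                         [60, 60, 20, 10, 0]])   # fabricated sample 2

      richness = (counts > 0).sum(axis=1)                  # observed number of OTUs per sample
      p = counts / counts.sum(axis=1, keepdims=True)       # relative abundances

      plogp = np.zeros_like(p)
      mask = p > 0
      plogp[mask] = p[mask] * np.log(p[mask])
      shannon = -plogp.sum(axis=1)                         # Shannon index H'

      print("richness:", richness)
      print("Shannon H':", np.round(shannon, 3))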

  17. Estimating site occupancy and detection probability parameters for meso- and large mammals in a coastal ecosystem

    USGS Publications Warehouse

    O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.

    2006-01-01

    Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.
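
    The separation of occupancy from detectability rests on a simple repeated-visit likelihood. Below is a minimal single-season sketch under assumed parameter values (our illustration; the study itself used established occupancy-modeling methods).

      # Single-season occupancy model sketch: a site is occupied with probability psi and,
      # if occupied, the species is detected on each of K visits with probability p.
      # L_i = psi * p^{y_i} (1-p)^{K-y_i} + (1-psi) * I(y_i = 0), y_i = detections at site i.
      import numpy as np
      from scipy.stats import binom
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      n_sites, K, psi_true, p_true = 150, 5, 0.6, 0.3        # assumed values
      z = rng.random(n_sites) < psi_true                      # latent occupancy state
      y = rng.binomial(K, p_true * z)                         # detections (0 if unoccupied)

      def neg_log_lik(theta):
          psi = 1 / (1 + np.exp(-theta[0]))
          p = 1 / (1 + np.exp(-theta[1]))
          lik = psi * binom.pmf(y, K, p) + (1 - psi) * (y == 0)
          return -np.sum(np.log(lik))

      fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
      psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
      print("psi_hat =", round(psi_hat, 3), "p_hat =", round(p_hat, 3))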

  18. Documentation of the Ecological Risk Assessment Computer Model ECORSK.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony F. Gallegos; Gilbert J. Gonzales

    1999-06-01

    The FORTRAN77 ecological risk computer model--ECORSK.5--has been used to estimate the potential toxicity of surficial deposits of radioactive and non-radioactive contaminants to several threatened and endangered (T and E) species at the Los Alamos National Laboratory (LANL). These analyses to date include preliminary toxicity estimates for the Mexican spotted owl, the American peregrine falcon, the bald eagle, and the southwestern willow flycatcher. This work has been performed as required for the Record of Decision for the construction of the Dual Axis Radiographic Hydrodynamic Test (DARHT) Facility at LANL as part of the Environmental Impact Statement. The model is dependent on the use of the geographic information system and associated software--ARC/INFO--and has been used in conjunction with LANL's Facility for Information Management and Display (FIMAD) contaminant database. The integration of FIMAD data and ARC/INFO using ECORSK.5 allows the generation of spatial information from a gridded area of potential exposure called an Ecological Exposure Unit. ECORSK.5 was used to simulate exposures using a modified Environmental Protection Agency Quotient Method. The model can handle a large number of contaminants within the home range of T and E species. This integration results in the production of hazard indices which, when compared to risk evaluation criteria, estimate the potential for impact from consumption of contaminants in food and ingestion of soil. The assessment is considered a Tier-2 type of analysis. This report summarizes and documents the ECORSK.5 code, the mathematical models used in the development of ECORSK.5, and the input and other requirements for its operation. Other auxiliary FORTRAN 77 codes used for processing and graphing output from ECORSK.5 are also discussed. The reader may refer to reports cited in the introduction to obtain greater detail on past applications of ECORSK.5 and assumptions used in deriving model parameters.
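
    In its simplest form, the quotient method mentioned above divides an estimated exposure dose by a toxicity reference value and sums the quotients across contaminants; the doses and reference values below are hypothetical and are not FIMAD data or ECORSK.5 output.

      # Simplified EPA-style quotient method sketch (hypothetical inputs, not ECORSK.5 output).
      # HQ_c = estimated daily dose_c / toxicity reference value_c ; HI = sum over contaminants.
      daily_dose_mg_per_kg_day = {"contaminant_A": 0.004, "contaminant_B": 0.010}   # hypothetical
      trv_mg_per_kg_day = {"contaminant_A": 0.020, "contaminant_B": 0.015}          # hypothetical

      hq = {c: daily_dose_mg_per_kg_day[c] / trv_mg_per_kg_day[c]
            for c in daily_dose_mg_per_kg_day}
      hazard_index = sum(hq.values())
      print("hazard quotients:", hq)
      print("hazard index:", round(hazard_index, 2), "->",
            "potential concern" if hazard_index > 1.0 else "below screening level")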

  19. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Volume 2, Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure: Appendices, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5--7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.

  20. Revised analyses of decommissioning for the reference pressurized Water Reactor Power Station. Effects of current regulatory and other considerations on the financial assurance requirements of the decommissioning rule and on estimates of occupational radiation exposure, Volume 1, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konzek, G.J.; Smith, R.I.; Bierschbach, M.C.

    1995-11-01

    With the issuance of the final Decommissioning Rule (July 27, 1988), owners and operators of licensed nuclear power plants are required to prepare, and submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. The NRC staff is in need of bases documentation that will assist them in assessing the adequacy of the licensee submittals, from the viewpoint of both the planned actions, including occupational radiation exposure, and the probable costs. The purpose of this reevaluation study is to provide some of the needed bases documentation. This report contains the results of a review and reevaluation of the 1978 PNL decommissioning study of the Trojan nuclear power plant (NUREG/CR-0130), including all identifiable factors and cost assumptions which contribute significantly to the total cost of decommissioning the nuclear power plant for the DECON, SAFSTOR, and ENTOMB decommissioning alternatives. These alternatives now include an initial 5--7 year period during which time the spent fuel is stored in the spent fuel pool, prior to beginning major disassembly or extended safe storage of the plant. Included for information (but not presently part of the license termination cost) is an estimate of the cost to demolish the decontaminated and clean structures on the site and to restore the site to a "green field" condition. This report also includes consideration of the NRC requirement that decontamination and decommissioning activities leading to termination of the nuclear license be completed within 60 years of final reactor shutdown, consideration of packaging and disposal requirements for materials whose radionuclide concentrations exceed the limits for Class C low-level waste (i.e., Greater-Than-Class C), and reflects 1993 costs for labor, materials, transport, and disposal activities.

  1. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
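
    As a hedged illustration of the penetration-rate concept summarized above (not the paper's derivation), the sketch below computes the expected fraction of a hypothesis database examined when bins are searched in descending order of matching probability; the bin sizes and probabilities are invented for the example.

        # Sketch: expected penetration rate for a prioritized bin search.
        # Bins are visited in descending order of matching probability; the
        # expected penetration rate is the mean fraction of the database
        # examined before the correct hypothesis is reached.
        # All numbers below are illustrative, not taken from the paper.

        bin_sizes = [100, 300, 600]          # hypotheses per bin (hypothetical)
        bin_match_prob = [0.6, 0.3, 0.1]     # P(correct hypothesis lies in each bin)

        total = sum(bin_sizes)
        # Search order: bins ranked by matching probability, highest first.
        order = sorted(range(len(bin_sizes)), key=lambda i: -bin_match_prob[i])

        searched = 0
        expected_penetration = 0.0
        for i in order:
            # If the match lies in this bin, on average half of the bin is
            # scanned in addition to all previously searched bins.
            expected_penetration += bin_match_prob[i] * (searched + bin_sizes[i] / 2) / total
            searched += bin_sizes[i]

        print(f"Expected penetration rate: {expected_penetration:.3f}")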

  2. Estimating trends in the global mean temperature record

    NASA Astrophysics Data System (ADS)

    Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.

    2017-06-01

    Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
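
    To make the contrast concrete, the following minimal sketch (synthetic data, with an assumed AR(1) noise process and an assumed climate sensitivity; not the authors' code) compares a naive linear-in-time trend fit with a fit that regresses temperature on radiative forcing, the physically motivated covariate discussed above.

        import numpy as np

        # Synthetic example only: compare a time-based trend with a
        # forcing-based trend. All parameters are illustrative assumptions.
        rng = np.random.default_rng(0)
        years = np.arange(1900, 2001)
        forcing = 0.01 * (years - 1900) ** 1.5 / 10   # stylized, nonlinear forcing history
        true_sensitivity = 0.5                        # K per (W m^-2), assumed

        # AR(1) internal variability (assumed autocorrelation and innovation sd)
        noise = np.zeros(years.size)
        for t in range(1, years.size):
            noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.1)
        temp = true_sensitivity * forcing + noise

        # Naive linear-in-time trend
        X_time = np.column_stack([np.ones_like(years, dtype=float), years - years.mean()])
        beta_time, *_ = np.linalg.lstsq(X_time, temp, rcond=None)

        # Physically motivated fit: temperature regressed on forcing
        X_forc = np.column_stack([np.ones_like(forcing), forcing])
        beta_forc, *_ = np.linalg.lstsq(X_forc, temp, rcond=None)

        print("linear-in-time trend (K/yr):", round(beta_time[1], 4))
        print("estimated sensitivity (K per W m^-2):", round(beta_forc[1], 3))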

  3. Computed Tomography Screening for Lung Cancer in the National Lung Screening Trial

    PubMed Central

    Black, William C.

    2016-01-01

    The National Lung Screening Trial (NLST) demonstrated that screening with low-dose CT versus chest radiography reduced lung cancer mortality by 16% to 20%. More recently, a cost-effectiveness analysis (CEA) of CT screening for lung cancer versus no screening in the NLST was performed. The CEA conformed to the reference-case recommendations of the US Panel on Cost-Effectiveness in Health and Medicine, including the use of the societal perspective and an annual discount rate of 3%. The CEA was based on several important assumptions. In this paper, I review the methods and assumptions used to obtain the base case estimate of $81,000 per quality-adjusted life-year gained. In addition, I show how this estimate varied widely among different subsets and when some of the base case assumptions were changed and speculate on the cost-effectiveness of CT screening for lung cancer outside the NLST. PMID:25635704

  4. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
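
    For context, here is a minimal sketch of the conventional (slow) likelihood route that MLE-BD is compared against: mutant-count probabilities under the Luria-Delbrück (Lea-Coulson) model computed with the Ma-Sandri-Sarkar recursion, with the expected number of mutations estimated by a crude grid search. This is not the MLE-BD estimator itself, and the mutant counts are hypothetical.

        import numpy as np

        def ld_pmf(m, n_max):
            """Ma-Sandri-Sarkar recursion for the Luria-Delbruck mutant-count
            distribution with expected number of mutations m."""
            p = np.zeros(n_max + 1)
            p[0] = np.exp(-m)
            for n in range(1, n_max + 1):
                p[n] = (m / n) * sum(p[i] / (n - i + 1) for i in range(n))
            return p

        def loglik(m, counts):
            p = ld_pmf(m, max(counts))
            return np.sum(np.log(p[counts]))

        # Hypothetical mutant counts from parallel cultures.
        counts = np.array([0, 1, 0, 3, 12, 2, 0, 5, 1, 0, 27, 2])

        # Crude grid-search MLE for m; the recursion above is the slow step the
        # abstract refers to, and finer optimization would normally be used.
        grid = np.linspace(0.1, 5.0, 200)
        m_hat = grid[np.argmax([loglik(m, counts) for m in grid])]
        print("MLE of expected mutations per culture:", round(m_hat, 2))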

  5. Estimating Scale Economies and the Optimal Size of School Districts: A Flexible Form Approach

    ERIC Educational Resources Information Center

    Schiltz, Fritz; De Witte, Kristof

    2017-01-01

    This paper investigates estimation methods to model the relationship between school district size, costs per student and the organisation of school districts. We show that the assumptions on the functional form strongly affect the estimated scale economies and offer two possible solutions to allow for more flexibility in the estimation method.…

  6. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…

  7. Residual Wage Differences by Gender: Bounding the Estimates.

    ERIC Educational Resources Information Center

    Sakellariou, Chris N.; Patrinos, Harry A.

    1996-01-01

    Uses data from the 1986 Canadian labor market activity survey file to derive estimates of residual gender wage gap differences. Investigates these estimates' dependence on experimental design and on assumptions about discrimination-free wage structures. Residual differences persist, even after restricting the sample to a group of highly motivated,…

  8. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  9. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  10. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  11. Career Development.

    ERIC Educational Resources Information Center

    1996

    This document consists of four papers presented during a symposium on career development moderated by David Bjorkquist at the 1996 conference of the Academy of Human Resource Development (AHRD). "A Mentoring Model for Career Development" (Mary Finnegan) describes a study that created a model based on the assumption that mentoring is an essential…

  12. Interactivism: Change, Sensory-Emotional Intelligence, and Intentionality in Being and Learning.

    ERIC Educational Resources Information Center

    Bichelmeyer, Barbara A.

    This paper documents the theoretical framework of interactivism; articulates the pedagogical theory which frames its assumptions regarding effective educational practice; positions the pedagogy of interactivism against traditional pedagogical practice; and argues for the educational importance of the interactivist view. Interactivism is the term…

  13. Chapter 3: Design of the Saber-Tooth Project.

    ERIC Educational Resources Information Center

    Ward, Phillip

    1999-01-01

    Used data from interviews, surveys, and document analysis to describe the methods and reform processes of the Saber Tooth Project, examining selection of sites; demographics (school sites, teachers, data sources, and project assumptions); and project phases (development, planning, implementation, and support). The project's method of reform was…

  14. 78 FR 12308 - Proposed Information Collection Request; Comment Request: Procedures for Implementing the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ... Discharge Elimination System (NPDES) permits under section 402 of the Clean Water Act, certain research and...: (202) 564-0072; email address: [email protected] . SUPPLEMENTARY INFORMATION: Supporting documents... information, including the validity of the methodology and assumptions used; (iii) enhance the quality...

  15. Comprehensive Evaluation of Attitude and Orbit Estimation Using Actual Earth Magnetic Field Data

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie K.; Bar-Itzhack, Itzhack Y.

    2000-01-01

    A single, augmented Extended Kalman Filter (EKF), which simultaneously and autonomously estimates spacecraft attitude and orbit has been developed and successfully tested with real magnetometer and gyro data only. Because the earth magnetic field is a function of time and position, and because time is known quite precisely, the differences between the computed and measured magnetic field components, as measured by the magnetometers throughout the entire spacecraft orbit, are a function of both orbit and attitude errors. Thus, conceivably these differences could be used to estimate both orbit and attitude; an observability study validated this assumption. The results of testing the EKF with actual magnetometer and gyro data, from four satellites supported by the NASA Goddard Space Flight Center (GSFC) Guidance, Navigation, and Control Center, are presented and evaluated. They confirm the assumption that a single EKF can estimate both attitude and orbit when using gyros and magnetometers only.

  16. Bias correction of nutritional status estimates when reported age is used for calculating WHO indicators in children under five years of age.

    PubMed

    Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia

    2016-06-01

    To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
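
    As a hedged sketch only: a midpoint-type correction is consistent with the symmetry assumption described above, although the paper's exact procedure may differ. Reported age in completed months is mapped to an assumed exact age before the WHO indicators are computed.

        # Midpoint age correction under the within-month symmetry assumption.
        # This is an illustrative sketch, not necessarily the paper's algorithm.

        AVG_DAYS_PER_MONTH = 365.25 / 12

        def corrected_age_days(completed_months):
            """Assume ages are symmetrically distributed within the reported
            month, so the expected exact age is the month midpoint."""
            return (completed_months + 0.5) * AVG_DAYS_PER_MONTH

        # Example: a child reported as 23 completed months
        print(round(corrected_age_days(23), 1), "days")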

  17. Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships.

    PubMed

    Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian

    2009-12-01

    The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.

  18. Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships

    PubMed Central

    Rassen, Jeremy A.; Brookhart, M. Alan; Glynn, Robert J.; Mittleman, Murray A.; Schneeweiss, Sebastian

    2010-01-01

    The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of “exchangeability” between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects. PMID:19356901
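
    As a worked illustration of the IV logic summarized in the two records above (not code from the article), the simple Wald/ratio form of the IV estimator is cov(z, y) / cov(z, x); the data below are simulated with an unmeasured confounder and an assumed true treatment effect.

        import numpy as np

        # Simulated example of the instrumental-variable idea; all parameters
        # are assumptions chosen for illustration.
        rng = np.random.default_rng(1)
        n = 50_000
        u = rng.normal(size=n)                        # unmeasured confounder
        z = rng.binomial(1, 0.5, size=n)              # instrument (e.g., a preference-like variable)
        x = 0.8 * z + 0.9 * u + rng.normal(size=n)    # treatment, affected by z and u
        y = 2.0 * x + 1.5 * u + rng.normal(size=n)    # outcome; true effect of x is 2.0

        naive = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]   # confounded regression slope
        iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]      # Wald/ratio IV estimate

        print("naive estimate:", round(naive, 2), " IV estimate:", round(iv, 2))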

  19. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
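
    A minimal sketch of the kind of Monte Carlo check described above, with assumed survival and correlation values rather than the authors' simulation settings: paired individuals are given correlated survival outcomes, and the empirical variance of the estimated survival rate is compared with the binomial variance expected under independence.

        import numpy as np

        # Sketch of the dependency effect described above; all values assumed.
        rng = np.random.default_rng(2)
        n_pairs, s, n_rep = 100, 0.8, 5_000   # pairs, true annual survival, replicates

        def simulate(rho):
            """Simulate paired survival with within-pair dependence rho and
            return the estimated survival rate for the combined sample."""
            shared = rng.binomial(1, s, size=n_pairs)
            a = np.where(rng.random(n_pairs) < rho, shared, rng.binomial(1, s, n_pairs))
            b = np.where(rng.random(n_pairs) < rho, shared, rng.binomial(1, s, n_pairs))
            return np.concatenate([a, b]).mean()

        for rho in (0.0, 0.8):
            est = np.array([simulate(rho) for _ in range(n_rep)])
            binom_var = s * (1 - s) / (2 * n_pairs)   # variance under independence
            print(f"rho={rho}: empirical var {est.var():.5f} vs independent var {binom_var:.5f}")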

  20. Change-in-ratio estimators for populations with more than two subclasses

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1991-01-01

    Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
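
    For orientation, the classical two-sample, two-subclass change-in-ratio estimator (the equal-sampling-probability special case that the models above generalize) has a closed form; the harvest numbers below are hypothetical.

        # Classical two-sample, two-subclass change-in-ratio estimator;
        # hypothetical numbers, equal sampling probabilities assumed,
        # unlike the more general models described above.

        p1 = 0.40      # proportion of subclass x (e.g., males) in sample 1
        p2 = 0.25      # proportion of subclass x in sample 2, after removals
        r_x = 300      # known removals of subclass x between the samples
        r_total = 500  # total known removals

        # N1: population size at the time of the first sample
        n1_hat = (r_x - r_total * p2) / (p1 - p2)
        x1_hat = p1 * n1_hat   # subclass-x abundance at time 1

        print(f"Estimated initial population size: {n1_hat:.0f}")
        print(f"Estimated initial subclass-x abundance: {x1_hat:.0f}")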

  1. Final Radiological Assessment of External Exposure for CLEAR-Line Americium Recovery Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Adam C.; Belooussova, Olga N.; Hetrick, Lucas Duane

    2014-11-12

    Los Alamos National Laboratory is currently planning to implement an americium recovery program. The americium, ordinarily isotopically pure 241Am, would be extracted from existing Pu materials, converted to an oxide and shipped to support fabrication of americium oxide-beryllium neutron sources. These operations would occur in the currently proposed Chloride Extraction and Actinide Recovery (CLEAR) line of glove boxes. This glove box line would be collocated with the currently operational Experimental Chloride Extraction Line (EXCEL). The focus of this document is to provide an in-depth assessment of the currently planned radiation protection measures and to determine whether or not further design work is required to satisfy design-goal and ALARA requirements. Further, this document presents a history of americium recovery operations in the Department of Energy and high-level descriptions of the CLEAR line operations to provide a basis of comparison. Under the working assumptions adopted by this study, it was found that the evaluated design appears to mitigate doses to a level that satisfies the ALARA-in-design requirements of 10 CFR 835 as implemented by the Los Alamos National Laboratory procedure P121. The analyses indicate that extremity doses would also meet design requirements. Dose-rate calculations were performed using the radiation transport code MCNP5, and doses were estimated using a time-motion study developed in concert with the subject matter expert. A copy of this report and all supporting documentation are located on the Radiological Engineering server at Y:\\Rad Engineering\\2013 PROJECTS\\TA-55 Clear Line.

  2. Cost-benefit analysis of riparian protection in an eastern Canadian watershed.

    PubMed

    Trenholm, Ryan; Lantz, Van; Martínez-Espiñeira, Roberto; Little, Shawn

    2013-02-15

    Forested riparian buffers have proved to be an effective management practice that helps maintain ecological goods and services in watersheds. In this study, we assessed the non-market benefits and opportunity costs associated with implementing these buffers in an eastern Canadian watershed using contingent valuation and wood supply modeling methods, respectively. A number of buffer scenarios were considered, including 30 and 60 m buffers on woodlots and on all land (including woodlots, agricultural, and residential lands) in the watershed. Household annual willingness-to-pay (WTP) estimates ranged from -$6.80 to $42.85, and total present value benefits ranged from -$11.7 to $121.7 million (CDN 2007), depending on the buffer scenario, affected population, time horizon, and econometric modeling assumptions considered. Opportunity cost estimates ranged from $1.3 to $10.4 million in present value terms, depending on silvicultural and agriculture land rental rate assumptions. Overall, we found that the net present value of riparian buffers was positive for the majority of scenarios and assumptions. Some exceptions were found under more conservative benefit, and higher unit cost, assumptions. These results provide decision makers with data on stated benefits and opportunity costs of riparian buffers, as well as insight into the importance of modeling assumptions when using this framework of analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
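
    A hedged sketch of the present-value arithmetic behind figures of this kind; every input below is a placeholder rather than a value from the study.

        # Stylized net-present-value calculation for a buffer scenario.
        # Every number here is a placeholder, not a value from the study.

        wtp_per_household = 20.0          # $/household/year (hypothetical)
        households = 30_000               # affected population (hypothetical)
        horizon_years = 25
        discount_rate = 0.04
        opportunity_cost_pv = 5_000_000   # $ present value (hypothetical)

        annual_benefit = wtp_per_household * households
        pv_benefits = sum(annual_benefit / (1 + discount_rate) ** t
                          for t in range(1, horizon_years + 1))
        npv = pv_benefits - opportunity_cost_pv

        print(f"PV of benefits: ${pv_benefits:,.0f}")
        print(f"Net present value: ${npv:,.0f}")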

  3. An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.

    PubMed

    Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L

    2017-08-01

    The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.

  4. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, sample size estimation by a bootstrap procedure for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, which shows that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula was far larger than that estimated by bootstrap for each specific per-group sample size. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the beginning, and to use in each bootstrap sample the same statistical method that will be used in the subsequent statistical analysis, provided historical data are available that are well representative of the population to which the proposed trial will extrapolate.
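
    A minimal sketch of the bootstrap power step described above, assuming a small hypothetical pilot dataset per arm and using the Wilcoxon rank-sum (Mann-Whitney) test as the planned analysis; the candidate sample sizes are also hypothetical.

        import numpy as np
        from scipy.stats import mannwhitneyu

        # Sketch of bootstrap power estimation for candidate per-arm sample
        # sizes. Pilot data and sample sizes are hypothetical.
        rng = np.random.default_rng(3)
        pilot_a = np.array([5.1, 6.0, 4.8, 5.5, 6.2, 5.9, 5.0, 6.4])
        pilot_b = np.array([6.3, 7.1, 6.8, 6.0, 7.4, 6.6, 7.0, 6.9])

        def bootstrap_power(n_per_arm, n_boot=2000, alpha=0.05):
            hits = 0
            for _ in range(n_boot):
                a = rng.choice(pilot_a, size=n_per_arm, replace=True)
                b = rng.choice(pilot_b, size=n_per_arm, replace=True)
                # Same nonparametric test planned for the final analysis.
                if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
                    hits += 1
            return hits / n_boot

        for n in (10, 20, 30):
            print(f"n per arm = {n}: estimated power = {bootstrap_power(n):.2f}")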

  5. Population and Activity of On road Vehicles in MOVES201X

    EPA Science Inventory

    This report documents changes to assumptions about the US national highway vehicle fleet population and activity data for the next version of the MOVES model. Fleet population and activity data is used to convert emission rates into emission inventory values and then is used to ...

  6. Towards Understanding the DO-178C / ED-12C Assurance Case

    NASA Technical Reports Server (NTRS)

    Holloway, C M.

    2012-01-01

    This paper describes initial work towards building an explicit assurance case for DO-178C / ED-12C. Two specific questions are explored: (1) What are some of the assumptions upon which the guidance in the document relies, and (2) What claims are made concerning test coverage analysis?

  7. Education for Discipleship: A Curriculum Orientation for Christian Educators

    ERIC Educational Resources Information Center

    Hull, John E.

    2009-01-01

    This article investigates the long-held assumption that Christian educators need their own curriculum orientation. Seminal documents published by Philip Jackson and Harro Van Brummelen in the nineties are analyzed against the background of a brief history of the field of curriculum theory. The author accepts Jackson's conclusion that curriculum…

  8. 75 FR 9572 - Submission for OMB Review; Comment Request: Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-03

    ... Quality Control Reviews'' OMB control number 0584-0074. The document contained incorrect burden hours. The... methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to... information unless the collection of information displays a currently valid OMB control number and the agency...

  9. Constructivism and Pedagogical Reform in China: Issues and Challenges

    ERIC Educational Resources Information Center

    Tan, Charlene

    2017-01-01

    This article critically discusses the constructivist ideas, assumptions and practices that undergird the current pedagogical reform in China. The pedagogical reform is part of a comprehensive curriculum reform that has been introduced across schools in Mainland China. Although the official documents did not specify the underpinning theories for…

  10. New Jersey State Library Technology Plan, 1999-2001.

    ERIC Educational Resources Information Center

    Breedlove, Elizabeth A., Ed.

    This document represents the New Jersey State Library Technology Plan for 1999-2001. Contents include: the mission statement; technology planning process of the Technology Committee (convened by the State Library); specific goals of the Technology Plan 1999-2001; technology assumptions for the operational library and statewide library services;…

  11. A Curriculum on Obedience to Authority.

    ERIC Educational Resources Information Center

    Bushman, Brad J.

    This document is a curriculum guide on obedience to authority based on the assumption that informed, educated, thoughtful individuals are more likely to make intelligent decisions regarding obedience or disobedience to authority figures' requests than are uninformed individuals. The intent of this curriculum is to expose students to a small number…

  12. Practical Assessment, Research and Evaluation, 2002-2003.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M., Ed.; Schaefer, William D., Ed.

    2000-01-01

    This document consists of the first 10 articles of volume 8 of the electronic journal "Practical Assessment, Research & Evaluation" published in 2002-2003: (1) "Using Electronic Surveys: Advice from Survey Professionals" (David M. Shannon, Todd E. Johnson, Shelby Searcy, and Alan Lott); (2) "Four Assumptions of Multiple Regression That Researchers…

  13. The Music of Puerto Rico; A Classroom Music Handbook.

    ERIC Educational Resources Information Center

    Schmidt, Lloyd; Toro, Leonor

    With the assumption that the teacher of students with identifiable ethnic or cultural background must be prepared to deal with each student's heritage in a meaningful way, the document provides resource materials for Connecticut classroom teachers and/or music specialists with responsibilities for teaching children of Puerto Rican heritage. The…

  14. Document-Oriented E-Learning Components

    ERIC Educational Resources Information Center

    Piotrowski, Michael

    2009-01-01

    This dissertation questions the common assumption that e-learning requires a "learning management system" (LMS) such as Moodle or Blackboard. Based on an analysis of the current state of the art in LMSs, we come to the conclusion that the functionality of conventional e-learning platforms consists of basic content management and…

  15. Genders, Mathematics, and Feminisms.

    ERIC Educational Resources Information Center

    Damarin, Suzanne

    Historical studies reveal that mathematics has been claimed as a private domain by men, while studies of the popular press document that women and girls are considered incompetent in that field. The study of gender and mathematics as viewed through feminism can create a new reading which exposes hidden assumptions, unwarranted conclusions, and…

  16. FARSITE: Fire Area Simulator-model development and evaluation

    Treesearch

    Mark A. Finney

    1998-01-01

    A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.

  17. Analysis of Senate Amendment 2028, the Climate Stewardship Act of 2003

    EIA Publications

    2004-01-01

    On May 11, 2004, Senator Landrieu asked the Energy Information Administration (EIA) to evaluate SA 2028. This paper responds to that request, relying on the modeling methodology, data sources, and assumptions used to analyze the original bill, as extensively documented in EIA's June 2003 report.

  18. NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.

    ERIC Educational Resources Information Center

    Zhou, Lina; Zhang, Dongsong

    2003-01-01

    Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…

  19. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    PubMed

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  20. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model

    PubMed Central

    Austin, Peter C.

    2017-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694

  1. Bayesian Estimation of Panel Data Fractional Response Models with Endogeneity: An Application to Standardized Test Rates

    ERIC Educational Resources Information Center

    Kessler, Lawrence M.

    2013-01-01

    In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…

  2. Rasch Model Parameter Estimation in the Presence of a Nonnormal Latent Trait Using a Nonparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Finch, Holmes; Edwards, Julianne M.

    2016-01-01

    Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…

  3. Comparing Fit and Reliability Estimates of a Psychological Instrument Using Second-Order CFA, Bifactor, and Essentially Tau-Equivalent (Coefficient Alpha) Models via AMOS 22

    ERIC Educational Resources Information Center

    Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey

    2015-01-01

    Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…

  4. Pregnancy intentions – a complex construct and call for new measures

    PubMed Central

    Mumford, Sunni L.; Sapra, Katherine J.; King, Rosalind B.; Louis, Jean Fredo; Buck Louis, Germaine M.

    2016-01-01

    Objective: To estimate the prevalence of unintended pregnancies under relaxed assumptions regarding birth control use compared with a traditional constructed measure. Design: Cross-sectional survey. Setting: Not applicable. Patients: Nationally representative sample of U.S. females aged 15–44 years. Intervention(s): None. Main Outcome Measure(s): The prevalence of intended and unintended pregnancies as estimated by 1) a traditional constructed measure from the National Survey of Family Growth (NSFG), and 2) a constructed measure relaxing assumptions regarding birth control use, reasons for non-use, and pregnancy timing. Results: The prevalence of unintended pregnancies was 6% higher using the traditional constructed measure as compared to the approach with relaxed assumptions (NSFG: 44%, 95% confidence interval [CI] 41, 46; new construct: 38%, 95% CI 36, 41). Using the NSFG approach, only 92% of women who stopped birth control to become pregnant and 0% of women who were not using contraceptives at the time of the pregnancy and reported that they did not mind getting pregnant were classified as having intended pregnancies, compared to 100% using the new construct. Conclusion: Current measures of pregnancy intention may overestimate rates of unintended pregnancy, with over 340,000 pregnancies in the United States misclassified as unintended using the current approach, corresponding to an estimated savings of $678 million in public health care expenditures. Current constructs make assumptions that may not reflect contemporary reproductive practices, and improved measures are needed. PMID:27490044

  5. Testing assumptions for unbiased estimation of survival of radiomarked harlequin ducks

    USGS Publications Warehouse

    Esler, Daniel N.; Mulcahy, Daniel M.; Jarvis, Robert L.

    2000-01-01

    Unbiased estimates of survival based on individuals outfitted with radiotransmitters require meeting the assumptions that radios do not affect survival, and animals for which the radio signal is lost have the same survival probability as those for which fate is known. In most survival studies, researchers have made these assumptions without testing their validity. We tested these assumptions by comparing interannual recapture rates (and, by inference, survival) between radioed and unradioed adult female harlequin ducks (Histrionicus histrionicus), and for radioed females, between right-censored birds (i.e., those for which the radio signal was lost during the telemetry monitoring period) and birds with known fates. We found that recapture rates of birds equipped with implanted radiotransmitters (21.6 ± 3.0%; x̄ ± SE) were similar to unradioed birds (21.7 ± 8.6%), suggesting that radios did not affect survival. Recapture rates also were similar between right-censored (20.6 ± 5.1%) and known-fate individuals (22.1 ± 3.8%), suggesting that missing birds were not subject to differential mortality. We also determined that capture and handling resulted in short-term loss of body mass for both radioed and unradioed females and that this effect was more pronounced for radioed birds (the difference between groups was 15.4 ± 7.1 g). However, no difference existed in body mass after recapture 1 year later. Our study suggests that implanted radios are an unbiased method for estimating survival of harlequin ducks and likely other species under similar circumstances.
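
    As a hedged illustration of the kind of recapture-rate comparison reported above (not the authors' analysis), a two-by-two chi-square test can contrast radioed and unradioed birds; the counts below are hypothetical.

        from scipy.stats import chi2_contingency

        # Hypothetical recapture counts (radioed vs. unradioed) illustrating
        # the comparison described above; these are not the study's data.
        table = [[30, 120],   # radioed: recaptured, not recaptured
                 [8, 42]]     # unradioed: recaptured, not recaptured

        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")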

  6. Handwritten document age classification based on handwriting styles

    NASA Astrophysics Data System (ADS)

    Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu

    2012-01-01

    Handwriting styles are constantly changing over time. We approach the novel problem of estimating the approximate age of historical handwritten documents using handwriting styles. This system will have many applications in handwritten document processing engines where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier over the learned weights in order to estimate the approximate age of the documents. We present a comparison of different distance metrics, such as Euclidean distance and Hellinger distance, within this application.
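
    Since the abstract compares Euclidean and Hellinger distances between topic-weight vectors, here is a small sketch of both metrics on hypothetical topic distributions (not the paper's learned features).

        import numpy as np

        # Distances between topic-weight vectors, as compared in the abstract.
        # The two distributions below are hypothetical.
        p = np.array([0.50, 0.30, 0.15, 0.05])   # topic weights, document A
        q = np.array([0.35, 0.35, 0.20, 0.10])   # topic weights, document B

        euclidean = np.linalg.norm(p - q)
        hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

        print(f"Euclidean: {euclidean:.3f}, Hellinger: {hellinger:.3f}")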

  7. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  8. Empirical Benchmarks of Hidden Bias in Educational Research: Implication for Assessing How well Propensity Score Methods Approximate Experiments and Conducting Sensitivity Analysis

    ERIC Educational Resources Information Center

    Dong, Nianbo; Lipsey, Mark

    2014-01-01

    When randomized control trials (RCT) are not feasible, researchers seek other methods to make causal inference, e.g., propensity score methods. One of the underlined assumptions for the propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…

  9. Projections of the Population of the United States, by Age, Sex, and Race: 1983 to 2080.

    ERIC Educational Resources Information Center

    Spencer, Gregory

    1984-01-01

    Based on assumptions about fertility, mortality, and net immigration trends, statistical tables depict the future U.S. population by age, sex, and race. Figures are based on the July 1, 1982, population estimates and race definitions and are projected using the cohort-component method with alternative assumptions for future fertility, mortality,…

  10. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  11. Extrapolating Survival from Randomized Trials Using External Data: A Review of Methods

    PubMed Central

    Jackson, Christopher; Stevens, John; Ren, Shijie; Latimer, Nick; Bojke, Laura; Manca, Andrea; Sharples, Linda

    2016-01-01

    This article describes methods used to estimate parameters governing long-term survival, or times to other events, for health economic models. Specifically, the focus is on methods that combine shorter-term individual-level survival data from randomized trials with longer-term external data, thus using the longer-term data to aid extrapolation of the short-term data. This requires assumptions about how trends in survival for each treatment arm will continue after the follow-up period of the trial. Furthermore, using external data requires assumptions about how survival differs between the populations represented by the trial and external data. Study reports from a national health technology assessment program in the United Kingdom were searched, and the findings were combined with “pearl-growing” searches of the academic literature. We categorized the methods that have been used according to the assumptions they made about how the hazards of death vary between the external and internal data and through time, and we discuss the appropriateness of the assumptions in different circumstances. Modeling choices, parameter estimation, and characterization of uncertainty are discussed, and some suggestions for future research priorities in this area are given. PMID:27005519

  12. Implementation of Instrumental Variable Bounds for Data Missing Not at Random.

    PubMed

    Marden, Jessica R; Wang, Linbo; Tchetgen, Eric J Tchetgen; Walter, Stefan; Glymour, M Maria; Wirth, Kathleen E

    2018-05-01

    Instrumental variables are routinely used to recover a consistent estimator of an exposure causal effect in the presence of unmeasured confounding. Instrumental variable approaches to account for nonignorable missing data also exist but are less familiar to epidemiologists. Like instrumental variables for exposure causal effects, instrumental variables for missing data rely on exclusion restriction and instrumental variable relevance assumptions. Yet these two conditions alone are insufficient for point identification. For estimation, researchers have invoked a third assumption, typically involving fairly restrictive parametric constraints. Inferences can be sensitive to these parametric assumptions, which are typically not empirically testable. The purpose of our article is to discuss another approach for leveraging a valid instrumental variable. Although the approach is insufficient for nonparametric identification, it can nonetheless provide informative inferences about the presence, direction, and magnitude of selection bias, without invoking a third untestable parametric assumption. An important contribution of this article is an Excel spreadsheet tool that can be used to obtain empirical evidence of selection bias and calculate bounds and corresponding Bayesian 95% credible intervals for a nonidentifiable population proportion. For illustrative purposes, we used the spreadsheet tool to analyze HIV prevalence data collected by the 2007 Zambia Demographic and Health Survey (DHS).
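
    The article's spreadsheet implements instrumental-variable bounds; as a simpler point of reference, and explicitly not the article's method, worst-case (Manski) bounds for a prevalence under nonignorable missingness can be computed directly from hypothetical counts.

        # Worst-case (Manski) bounds for a population proportion when outcomes
        # are missing not at random. This is a simpler benchmark than the
        # instrumental-variable bounds discussed above, not the article's
        # method. All counts are hypothetical.

        n_tested = 7_000      # respondents with an observed test result
        n_positive = 980      # observed positives
        n_missing = 3_000     # respondents who refused testing

        p_obs = n_positive / n_tested
        m = n_missing / (n_tested + n_missing)   # missingness proportion

        lower = p_obs * (1 - m)                  # if all missing are negative
        upper = p_obs * (1 - m) + m              # if all missing are positive
        print(f"Observed prevalence: {p_obs:.3f}")
        print(f"Worst-case bounds: [{lower:.3f}, {upper:.3f}]")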

  13. A child's garden of curses: a gender, historical, and age-related evaluation of the taboo lexicon.

    PubMed

    Jay, Kristin L; Jay, Timothy B

    2013-01-01

    Child swearing is a largely unexplored topic among language researchers, although assumptions about what children know about taboo language form the basis for language standards in many settings. The purpose of the studies presented here is to provide descriptive data about the emergence of adultlike swearing in children; specifically, we aim to document what words children of different ages know and use. Study 1 presents observational data from adults and children (ages 1-12). Study 2 compares perceptions of the inappropriateness of taboo words between adults and older (ages 9-12) and younger (ages 6-8) children. Collectively these data indicate that by the time children enter school they have the rudiments of adult swearing, although children and adults differ in their assessments of the inappropriateness of mild taboo words. Comparisons of these data with estimates obtained in the 1980s allow us to comment on whether swearing habits are changing over the years. Child swearing data can be applied to contemporary social problems and academic issues.

  14. Algae Production from Wastewater Resources: An Engineering and Cost Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenung, Susan; Efroymson, Rebecca Ann

    Co-locating algae cultivation ponds near municipal wastewater (MWW) facilities provides the opportunity to make use of the nitrogen and phosphorus compounds in the wastewater as nutrient sources for the algae. This use benefits MWW facilities, the algae biomass and biofuel or bioproduct industry, and the users of streams where treated or untreated waste would be discharged. Nutrient compounds can lead to eutrophication, hypoxia, and adverse effects to some organisms if released downstream. This analysis presents an estimate of the cost savings made possible to cultivation facilities by using the nutrients from wastewater for algae growth rather than purchase of the nutrients. The analysis takes into consideration the cost of pipe transport from the wastewater facility to the algae ponds, a cost factor that has not been publicly documented in the past. The results show that the savings in nutrient costs can support a wastewater transport distance up to 10 miles for a 1000-acre-pond facility, with potential adjustments for different operating assumptions.

  15. Hanford Site Composite Analysis Technical Approach Description: Groundwater Pathway Dose Calculation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgans, D. L.; Lindberg, S. L.

    The purpose of this technical approach document (TAD) is to document the assumptions, equations, and methods used to perform the groundwater pathway radiological dose calculations for the revised Hanford Site Composite Analysis (CA). DOE M 435.1-1 states: “The composite analysis results shall be used for planning, radiation protection activities, and future use commitments to minimize the likelihood that current low-level waste disposal activities will result in the need for future corrective or remedial actions to adequately protect the public and the environment.”

  16. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    NASA Astrophysics Data System (ADS)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

    The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). Estimation for unplanned domains (sub-districts and villages) is limited in its ability to provide reliable direct estimates. One possible solution to this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models require strong distributional assumptions and do not easily allow for outlier-robust estimation. An alternative approach for this purpose is M-quantile regression for small area estimation, based on modeling area-specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It provides outlier-robust estimation through an M-estimator-type influence function and does not require strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor District. We also compare the results with direct estimates. The results suggest the framework may be preferable when the direct estimate shows no incidence of poverty at all in a small area.

  17. Maximum sustainable yield estimates of Ladypees, Sillago sihama (Forsskål), fishery in Pakistan using the ASPIC and CEDA packages

    NASA Astrophysics Data System (ADS)

    Panhwar, Sher Khan; Liu, Qun; Khan, Fozia; Siddiqui, Pirzada J. A.

    2012-03-01

    Using the surplus production model packages ASPIC (a stock-production model incorporating covariates) and CEDA (catch-effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters MSY (maximum sustainable yield), Fmsy (fishing mortality at MSY), q (catchability coefficient), K (carrying capacity or unexploited biomass) and B1/K (ratio of initial biomass to carrying capacity). The estimated non-bootstrapped MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, which shows that the Fox model estimate is more conservative than the logistic model estimate. The R² with the logistic model (0.702) is larger than that with the Fox model (0.541), which indicates a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, apart from one larger value of 88.87 and one smaller value of 0.173. In contrast to the ASPIC results, the R² with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are MSY, K, q, and r (intrinsic growth rate), and the three error assumptions used in the models are normal, log-normal and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar; the MSY estimates from these two models were 398 t, 549 t and 398 t for normal, log-normal and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t and 366 t for the same three error assumptions, respectively. The Fox model estimates were smaller than those from the Schaefer and Pella-Tomlinson models. In light of the MSY estimates of 415 t from ASPIC and 381 t from CEDA, both for the Fox model, MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we suggest the fishery be kept at the current level. The production models used here depend on the assumption that the CPUE (catch per unit effort) data can reliably quantify temporal variability in population abundance, so the modeling results would be wrong if this assumption is not met. Because the reliability of these CPUE data in indexing fish population abundance is unknown, the derived population and management parameters should be interpreted and used with caution.
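
    For readers who want a concrete picture of the surplus-production framework behind these MSY figures, the sketch below implements the Schaefer (logistic) form under illustrative parameter values; it is not the ASPIC or CEDA code, and the function names and inputs are our own assumptions.

    ```python
    # Hedged sketch (not ASPIC/CEDA): a minimal Schaefer surplus-production model.
    # All parameter values are illustrative, not the S. sihama estimates.
    import numpy as np

    def schaefer_project(B0, r, K, catches):
        """Project biomass forward: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
        B = [B0]
        for C in catches:
            B_next = B[-1] + r * B[-1] * (1.0 - B[-1] / K) - C
            B.append(max(B_next, 1e-6))  # keep biomass positive
        return np.array(B)

    def schaefer_msy(r, K):
        """Maximum sustainable yield for the Schaefer (logistic) model: r*K/4."""
        return r * K / 4.0

    r, K = 0.4, 4000.0            # intrinsic growth rate, carrying capacity (t)
    catches = [350.0] * 10        # constant annual catch (t)
    B = schaefer_project(B0=0.8 * K, r=r, K=K, catches=catches)
    print("MSY =", schaefer_msy(r, K), "t; final biomass =", round(B[-1], 1), "t")
    ```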

  18. A Laboratory Study on the Reliability Estimations of the Mini-CEX

    ERIC Educational Resources Information Center

    de Lima, Alberto Alves; Conde, Diego; Costabel, Juan; Corso, Juan; Van der Vleuten, Cees

    2013-01-01

    Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Estimations are based on the assumption of local independence: the object of the measurement should not be influenced by the measurement itself and samples should be completely independent. This is difficult to achieve. Furthermore, the…

  19. Generalizations and Extensions of the Probability of Superiority Effect Size Estimator

    ERIC Educational Resources Information Center

    Ruscio, John; Gera, Benjamin Lee

    2013-01-01

    Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…

  20. Are Structural Estimates of Auction Models Reasonable? Evidence from Experimental Data

    ERIC Educational Resources Information Center

    Bajari, Patrick; Hortacsu, Ali

    2005-01-01

    Recently, economists have developed methods for structural estimation of auction models. Many researchers object to these methods because they find the strict rationality assumptions to be implausible. Using bid data from first-price auction experiments, we estimate four alternative structural models: (1) risk-neutral Bayes-Nash, (2) risk-averse…

  1. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
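
    As a quick illustration of the simple moving average estimator discussed above, the sketch below averages the most recent annual panel means; the five-panel window and the values are illustrative assumptions, not Forest Service data.

    ```python
    # Hedged sketch: a simple moving-average estimate over annual inventory panels,
    # assuming (illustratively) that one panel of plots is measured each year.
    import numpy as np

    def moving_average_estimate(panel_means, window=5):
        """Average the most recent `window` panel means to form the current-year estimate."""
        return float(np.mean(panel_means[-window:]))

    # Illustrative panel means (e.g., volume per acre) for the last five years
    print(moving_average_estimate([102.3, 98.7, 105.1, 101.0, 99.4]))
    ```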

  2. A Nonparametric Approach to Estimate Classification Accuracy and Consistency

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2014-01-01

    When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…

  3. Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites

    ERIC Educational Resources Information Center

    Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep

    2016-01-01

    Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…

  4. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
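
    To make the stress-accumulation quantity concrete, the sketch below applies a simple linear accumulation rule along a single pathline, with optional repeated passages; the rule, the function name, and the sample values are illustrative assumptions rather than the paper's implementation.

    ```python
    # Hedged sketch: linear stress accumulation along one Lagrangian pathline.
    # Values are illustrative; real models may use nonlinear (power-law) rules.
    import numpy as np

    def stress_accumulation(stresses, dt, n_passages=1):
        """Sum scalar stress * exposure time along a pathline, optionally
        repeated over several passages through the device."""
        single_pass = float(np.sum(np.asarray(stresses) * dt))
        return n_passages * single_pass

    tau = [12.0, 35.0, 80.0, 22.0]   # scalar stress samples along a pathline (Pa)
    print(stress_accumulation(tau, dt=1e-3, n_passages=10), "Pa*s")
    ```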

  5. Homicide Rates of Transgender Individuals in the United States: 2010-2014.

    PubMed

    Dinno, Alexis

    2017-09-01

    To estimate homicide rates of transgender US residents and relative risks (RRs) of homicide with respect to cisgender comparators intersected with age, gender, and race/ethnicity. I estimated homicide rates for transgender residents and transfeminine, Black, Latin@, and young (aged 15-34 years) subpopulations during the period 2010 to 2014 using Transgender Day of Remembrance and National Coalition of Anti-Violence Programs transgender homicide data. I used estimated transgender prevalences to estimate RRs using cisgender comparators. I performed a sensitivity analysis to situate all results within assumptions about underreporting of transgender homicides and assumptions about the prevalence of transgender residents. The overall homicide rate of transgender individuals was likely to be less than that of cisgender individuals, with 8 of 12 RR estimates below 1.0. However, the homicide rates of young transfeminine Black and Latina residents were almost certainly higher than were those of cisfeminine comparators, with all RR estimates above 1.0 for Blacks and all above 1.0 for Latinas. Antiviolence public health programs should identify young and Black or Latina transfeminine women as an especially vulnerable population.

  6. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  7. Measuring and managing risk improves strategic financial planning.

    PubMed

    Kleinmuntz, D N; Kleinmuntz, C E; Stephen, R G; Nordlund, D S

    1999-06-01

    Strategic financial risk assessment is a practical technique that can enable healthcare strategic decision makers to perform quantitative analyses of the financial risks associated with a given strategic initiative. The technique comprises six steps: (1) list risk factors that might significantly influence the outcomes, (2) establish best-guess estimates for assumptions regarding how each risk factor will affect its financial outcomes, (3) identify risk factors that are likely to have the greatest impact, (4) assign probabilities to assumptions, (5) determine potential scenarios associated with combined assumptions, and (6) determine the probability-weighted average of the potential scenarios.
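
    A minimal sketch of step 6, the probability-weighted average of the potential scenarios, is given below; the scenario values and probabilities are invented for illustration.

    ```python
    # Hedged sketch of step 6: probability-weighted average of scenario outcomes.
    # Scenario NPVs (in $M) and probabilities are illustrative, not from the article.
    scenarios = {
        "best case": (12.0, 0.25),
        "base case": (5.0, 0.50),
        "worst case": (-4.0, 0.25),
    }
    expected_value = sum(value * prob for value, prob in scenarios.values())
    print(f"Probability-weighted NPV: {expected_value:.2f} $M")  # 4.50 $M
    ```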

  8. Identification and characterization of conservative organic tracers for use as hydrologic tracers for the Yucca Mountain Site characterization study; Progress report, April 1, 1993--June 30, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dombrowski, T.; Stetzenbach, K.

    1993-08-01

    This report is in two parts: one for the fluorinated benzoic acids and one for the fluorinated aliphatic acids. The assumptions made in the report regarding the amount of tracer that will be used, the dilution of the tracer during the test, and the length of exposure (if any) to individuals drinking the water were made by the authors. These assumptions must ultimately come from the USGS hydrologists in charge of the C-well tracer testing program. Accurate estimates of dilution of the tracer during the test are also important because of the solubility limitations of some of the tracers. Three of the difluorobenzoic acids have relatively low solubilities and may not be usable if the dilution estimates are large. The toxicologist who reviewed the document agreed with our conclusion that the fluorinated benzoic and toluic acids do not represent a health hazard if used under the conditions outlined in the report. We are currently testing 15 of these compounds, and even if the three difluorobenzoic acids cannot be used because of solubility limitations, we will still have 12 tracers. The toxicologist felt that the aliphatic fluorinated acids potentially present more of a health risk than the aromatic acids. This assessment was based on a known allergic response to the halothane anesthetic. This risk, although minimal, is known, and he felt that was enough reason to recommend against their use. The authors feel that the toxicologist's interpretation of this risk was overly conservative; however, we will not go against his recommendation at this time for the following reasons. First, without the aliphatic compounds we still have 12 to 15 fluorinated aromatic acids, which should be enough for the C-well tests. Second, obtaining a permit to use the aliphatic compounds would undoubtedly require a hearing, which could be quite lengthy.

  9. Transportation Sector Model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-01-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. The NEMS Transportation Model comprises a series of semi-independent models which address different aspects of the transportation sector. The primary purpose of this model is to provide mid-term forecasts of transportation energy demand by fuel type including, but not limited to, motor gasoline, distillate, jet fuel, and alternative fuels (such as CNG) not commonly associated with transportation. The current NEMS forecast horizon extends to the year 2010 and uses 1990 as the base year. Forecasts are generated through the separate consideration of energy consumption within the various modes of transport, including: private and fleet light-duty vehicles; aircraft; marine, rail, and truck freight; and various modes with minor overall impacts, such as mass transit and recreational boating. This approach is useful in assessing the impacts of policy initiatives, legislative mandates which affect individual modes of travel, and technological developments. The model also provides forecasts of selected intermediate values which are generated in order to determine energy consumption. These elements include estimates of passenger travel demand by automobile, air, or mass transit; estimates of the efficiency with which that demand is met; projections of vehicle stocks and the penetration of new technologies; and estimates of the demand for freight transport which are linked to forecasts of industrial output. Following the estimation of energy demand, TRAN produces forecasts of vehicular emissions of the following pollutants by source: oxides of sulfur, oxides of nitrogen, total carbon, carbon dioxide, carbon monoxide, and volatile organic compounds.

  10. Cure models for estimating hospital-based breast cancer survival.

    PubMed

    Rama, Ranganathan; Swaminathan, Rajaraman; Venkatesan, Perumal

    2010-01-01

    Research on cancer survival is enriched by the development and application of innovative analytical approaches alongside standard methods. The aim of the present paper is to document the utility of a mixture model to estimate the cure fraction and to compare it with other approaches. The data were for 1,107 patients with locally advanced breast cancer who completed the neo-adjuvant treatment protocol during 1990-99 at the Cancer Institute (WIA), Chennai, India. Tumour stage, post-operative pathological node (PN) status and tumour residue (TR) status were studied. Event-free survival probability was estimated using the Kaplan-Meier method. Cure models under proportional and non-proportional hazard assumptions, with a log-normal distribution for survival time, were used to estimate both the cure fraction and the survival function for the uncured. Event-free survival at 5 and 10 years was 64.2% and 52.6%, respectively, and the cure fraction was 47.5% for all cases together. Follow-up ranged from 0 to 15 years, and survival probabilities showed minimal change after 7 years of follow-up. TR and PN emerged as independent prognostic factors using Cox and proportional hazard (PH) cure models. The proportionality condition was violated when tumour stage was considered, and tumour stage was statistically significant only under the PH, not the non-PH, cure model. However, TR and PN continued to be independent prognostic factors after adjusting for tumour stage using the non-PH cure model. A consistent ordering of cure fractions with respect to PN and TR was obtained across tumour stages using the PH and non-PH cure models, but perceptible differences in survival were observed between the two. If PH conditions are violated, analysis using a non-PH model is advocated, and mixture cure models are useful for estimating the cure fraction and constructing survival curves for the uncured.
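
    For reference, the standard mixture cure model the abstract refers to can be written as below, here with a log-normal survival time for the uncured; the symbols are the usual textbook ones rather than the paper's notation.

    ```latex
    % Hedged sketch: mixture cure model with cured fraction \pi and log-normal
    % survival for the uncured (\mu, \sigma are illustrative symbols).
    \[
      S(t) \;=\; \pi \;+\; (1-\pi)\, S_u(t),
      \qquad
      S_u(t) \;=\; 1 - \Phi\!\left(\frac{\ln t - \mu}{\sigma}\right).
    \]
    ```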

  11. Use of anthropogenic radioisotopes to estimate rates of soil redistribution by wind II: The potential for future use of 239+240Pu

    NASA Astrophysics Data System (ADS)

    Van Pelt, R. Scott; Ketterer, Michael E.

    2013-06-01

    In the previous paper, the use of soilborne 137Cs from atmospheric fallout to estimate rates of soil redistribution, particularly by wind, was reviewed. This method relies on the assumption that the source of 137Cs in the soil profile is from atmospheric fallout following the period of atmospheric weapons testing so that the temporal and, to a certain extent, the spatial patterns of 137Cs deposition are known. One of the major limitations occurs when local or regional sources of 137Cs contamination mask the pulse from global fallout, making temporal estimates of redistribution difficult or impossible. Like 137Cs, Pu exhibits strong affinity for binding to soil particle surfaces, and therefore, re-distribution of Pu inventory indicates inferred soil re-distribution. Compared to 137Cs, 239Pu and 240Pu offer several important advantages: (a) the two major Pu isotopes have much longer half-lives than 137Cs and (b) the ratio 240Pu/239Pu is used to examine whether the Pu is from stratospheric fallout. In this paper, we review the literature concerning Pu in soil and of current attempts to use this tracer to estimate rates of soil redistribution. We also present preliminary, unpublished data from a pilot study designed to test whether or not 239+240Pu can be used to estimate rates of soil redistribution by wind. Based on similarities of profile distribution and relative inventories between 137Cs measurements and 239+240Pu measurements of split samples from a series of fields with documented wind erosion histories, we conclude that 239+240Pu may well be the anthropogenic radioisotope of choice for future soil redistribution investigations.

  12. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

    Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated because the time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates were sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77 to 0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16 to 47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous computer algebra methods to identify the parameters that are estimable in principle. As life-states are a prominent feature of plant life cycles, multistate capture-recapture models are a natural framework for analysing the population dynamics of plants with dormancy.

  13. Two Birds With One Stone: Estimating Population Vaccination Coverage From a Test-negative Vaccine Effectiveness Case-control Study.

    PubMed

    Doll, Margaret K; Morrison, Kathryn T; Buckeridge, David L; Quach, Caroline

    2016-10-15

    Vaccination program evaluation includes assessment of vaccine uptake and of direct vaccine effectiveness (VE). Although these are often examined separately, we propose a design to estimate rotavirus vaccination coverage using controls from a rotavirus VE test-negative case-control study, and we examine coverage following implementation of the Quebec, Canada, rotavirus vaccination program. We present our assumptions for using these data as a proxy for coverage in the general population, explore the effects of diagnostic accuracy on coverage estimates via simulations, and validate estimates against an external source. We found 79.0% (95% confidence interval, 74.3%, 83.0%) ≥2-dose rotavirus coverage among participants eligible for publicly funded vaccination. No differences were detected between the study and external coverage estimates. Simulations revealed minimal bias in estimates when diagnostic sensitivity and specificity were high. We conclude that controls from a VE case-control study may be a valuable source of coverage information when reasonable assumptions can be made about the generalizability of the estimates; the high rotavirus coverage demonstrates the success of the Quebec program. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
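
    The basic coverage calculation from test-negative controls amounts to a proportion with a binomial confidence interval, as in the sketch below; the counts are illustrative and assume the controls represent the source population.

    ```python
    # Hedged sketch: coverage among test-negative controls with a normal-approximation CI.
    # The counts are illustrative, not the study's data.
    import math

    def coverage_with_ci(vaccinated, total, z=1.96):
        """Proportion vaccinated among controls and an approximate 95% CI."""
        p = vaccinated / total
        se = math.sqrt(p * (1.0 - p) / total)
        return p, (p - z * se, p + z * se)

    p, (lo, hi) = coverage_with_ci(vaccinated=158, total=200)
    print(f"Estimated >=2-dose coverage: {p:.1%} (95% CI {lo:.1%}, {hi:.1%})")
    ```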

  14. The millennium development goals and household energy requirements in Nigeria.

    PubMed

    Ibitoye, Francis I

    2013-01-01

    Access to clean and affordable energy is critical for the realization of the United Nations' Millennium Development Goals, or MDGs. In many developing countries, a large proportion of household energy requirements is met by non-commercial fuels such as wood, animal dung and crop residues, and the associated health and environmental hazards are well documented. In this work, a scenario analysis of energy requirements in Nigeria's households is carried out to compare estimates between 2005 and 2020 under a reference scenario with estimates under the assumption that Nigeria will meet the millennium goals. Energy requirements under the MDG scenario are measured by the impacts on energy use of halving, by 2015, (a) the number of households without access to electricity for basic services, (b) the number of households without access to modern energy carriers for cooking, and (c) the number of families living in one-room households in Nigeria's overcrowded urban slums. For these goals to be achieved, household electricity consumption would increase by about 41% over the study period, while the use of modern fuels would more than double. This migration to modern fuels for cooking results in a reduction in overall fuelwood consumption, from 5 GJ/capita in 2005 to 2.9 GJ/capita in 2015.

  15. Health impact assessment of active transportation: A systematic review.

    PubMed

    Mueller, Natalie; Rojas-Rueda, David; Cole-Hunter, Tom; de Nazelle, Audrey; Dons, Evi; Gerike, Regine; Götschi, Thomas; Int Panis, Luc; Kahlmeier, Sonja; Nieuwenhuijsen, Mark

    2015-07-01

    Walking and cycling for transportation (i.e., active transportation, AT) provide substantial health benefits from increased physical activity (PA). However, there are risks of injury from exposure to motorized traffic and its emissions (i.e., air pollution). The objective was to systematically review studies conducting health impact assessment (HIA) of a mode shift to AT, weighing the associated health benefits and risks. Systematic database searches of MEDLINE, Web of Science and Transportation Research International Documentation were performed by two independent researchers, augmented by bibliographic review, internet searches and expert consultation, to identify peer-reviewed studies from inception to December 2014. Thirty studies were included, originating predominantly from Europe, but also from the United States, Australia and New Zealand. They comprised mostly HIA approaches of comparative risk assessment and cost-benefit analysis. Estimated health benefit-risk or benefit-cost ratios of a mode shift to AT ranged between -2 and 360 (median = 9). The effects of increased PA contributed the most to estimated health benefits, strongly outweighing the detrimental effects of traffic incidents and air pollution exposure on health. Although different HIA methodologies were applied with distinctive assumptions on key parameters, AT can provide substantial net health benefits, irrespective of geographical context. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Cost Estimating Cases: Educational Tools for Cost Analysts

    DTIC Science & Technology

    1993-09-01

    only appropriate documentation should be provided. In other words, students should not submit all of the documentation possible using ACEIT, only that... case was their lack of understanding of the ACEIT software used to conduct the estimate. Specifically, many students misinterpreted the cost estimating relationships (CERs) embedded in the software. Additionally, few of the students were able to properly organize the ACEIT documentation output

  17. The Estimation Theory Framework of Data Assimilation

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Atlas, Robert (Technical Monitor)

    2002-01-01

    Lecture 1. The Estimation Theory Framework of Data Assimilation: 1. The basic framework: dynamical and observation models; 2. Assumptions and approximations; 3. The filtering, smoothing, and prediction problems; 4. Discrete Kalman filter and smoother algorithms; and 5. Example: A retrospective data assimilation system
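
    For orientation, item 4 of the outline refers to the discrete Kalman filter, whose standard forecast-analysis recursion is sketched below in the usual textbook notation (not necessarily the notation used in the lecture itself).

    ```latex
    % Hedged sketch: discrete Kalman filter recursion in standard notation.
    \begin{align*}
      x^f_k &= M_k\, x^a_{k-1}, \qquad P^f_k = M_k P^a_{k-1} M_k^{\mathsf T} + Q_k \\
      K_k   &= P^f_k H_k^{\mathsf T}\left( H_k P^f_k H_k^{\mathsf T} + R_k \right)^{-1} \\
      x^a_k &= x^f_k + K_k\left( y_k - H_k x^f_k \right), \qquad P^a_k = (I - K_k H_k)\, P^f_k
    \end{align*}
    ```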

  18. FIA Estimation in the New Millennium

    Treesearch

    Francis A. Roesch

    2001-01-01

    In the new millennium, Forest Inventory and Analysis (FIA) will deliver most of its database information directly to the users over the Internet. This assumption indicates the need for a GIS-based estimation system to support the information delivery system. Presumably, as the data set evolves, it will free FIA and the users from exclusive estimation within political...

  19. Estimating canopy bulk density and canopy base height for interior western US conifer stands

    Treesearch

    Seth A. Ex; Frederick W. Smith; Tara L. Keyser; Stephanie A. Rebain

    2016-01-01

    Crown fire hazard is often quantified using effective canopy bulk density (CBD) and canopy base height (CBH). When CBD and CBH are estimated using nonlocal crown fuel biomass allometries and uniform crown fuel distribution assumptions, as is common practice, values may differ from estimates made using local allometries and nonuniform...

  20. Estimating Marginal Returns to Education. NBER Working Paper No. 16474

    ERIC Educational Resources Information Center

    Carneiro, Pedro; Heckman, James J.; Vytlacil, Edward J.

    2010-01-01

    This paper estimates the marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and…

  1. Examining the Foundations of Methods That Assess Treatment Effect Heterogeneity across Intermediate Outcomes

    ERIC Educational Resources Information Center

    Feller, Avi; Miratrix, Luke

    2015-01-01

    The goal of this study is to better understand how methods for estimating treatment effects of latent groups operate. In particular, the authors identify where violations of assumptions can lead to biased estimates, and explore how covariates can be critical in the estimation process. For each set of approaches, the authors first review the…

  2. Comparing Characteristics of Sporadic and Outbreak-Associated Foodborne Illnesses, United States, 2004–2011

    PubMed Central

    Ebel, Eric D.; Cole, Dana; Travis, Curtis C.; Klontz, Karl C.; Golden, Neal J.; Hoekstra, Robert M.

    2016-01-01

    Outbreak data have been used to estimate the proportion of illnesses attributable to different foods. Applying outbreak-based attribution estimates to nonoutbreak foodborne illnesses requires an assumption of similar exposure pathways for outbreak and sporadic illnesses. This assumption cannot be tested, but other comparisons can assess its veracity. Our study compares demographic, clinical, temporal, and geographic characteristics of outbreak and sporadic illnesses from Campylobacter, Escherichia coli O157, Listeria, and Salmonella bacteria ascertained by the Foodborne Diseases Active Surveillance Network (FoodNet). Differences among FoodNet sites in outbreak and sporadic illnesses might reflect differences in surveillance practices. For Campylobacter, Listeria, and Escherichia coli O157, outbreak and sporadic illnesses are similar for severity, sex, and age. For Salmonella, outbreak and sporadic illnesses are similar for severity and sex. Nevertheless, the percentage of outbreak illnesses in the youngest age category was lower. Therefore, we do not reject the assumption that outbreak and sporadic illnesses are similar. PMID:27314510

  3. Dimensions of the Feminist Research Methodology Debate: Impetus, Definitions, Dilemmas & Stances.

    ERIC Educational Resources Information Center

    Reinharz, Shulamit

    For various well-documented reasons, the feminist social movement has been critical of academia as a worksetting and of the social sciences as a set of disciplines. For these reasons, feminists claim that the assumptions underlying several research designs and procedures are sexist. They have developed a feminist methodology to examine these…

  4. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...

  5. 78 FR 52996 - 60-Day Notice of Proposed Information Collection: Voluntary Disclosures.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... System (FDMS) to comment on this notice by going to www.regulations.gov. You may search for the document by entering ``Public Notice '' in the search bar. If necessary, use the ``narrow by agency'' filter... collection, including the validity of the methodology and assumptions used. Enhance the quality, utility, and...

  6. Readability and Reading Ability.

    ERIC Educational Resources Information Center

    Wright, Benjamin D.; Stenner, A. Jackson

    This document discusses the measurement of reading ability and the readability of books by application of the Lexile framework. It begins by stating the importance of uniform measures. It then discusses the history of reading ability testing, based on the assumption that no researcher has been able to measure more than one kind of reading ability.…

  7. Extracurricular Involvement among Affluent Youth: A Scapegoat for "Ubiquitous Achievement Pressures"?

    ERIC Educational Resources Information Center

    Luthar, Suniya S.; Shoum, Karen A.; Brown, Pamela J.

    2006-01-01

    It has been suggested that overscheduling of upper-class youth might underlie the high distress and substance use documented among them. This assumption was tested by considering suburban 8th graders' involvement in different activities along with their perceptions of parental attitudes toward achievement. Results indicated negligible evidence for…

  8. Greening the Global Village: The Administrative Imperative To Educate Students for Global Awareness.

    ERIC Educational Resources Information Center

    Chaniot, Janet

    The first of the three chapters of this document on teaching global education to elementary and secondary school students begins with a literature review of perspectives on global studies and continues with a comparison of definitions, assumptions, goals, and objectives for global education programs. The obstacles to teaching this global…

  9. Teaching and Assessing the Nature of Science

    ERIC Educational Resources Information Center

    Clough, Michael P.

    2011-01-01

    Understanding the nature of science (NOS)--what science is and how it works, the assumptions that underlie scientific knowledge, how scientists function as a social group, and how society impacts and reacts to science--is prominent in science education reform documents (Rutherford and Ahlgren 1990; AAAS 1993; McComas and Olson 1998; NRC 1996; AAAS…

  10. Does Adult Educator Professional Development Make a Difference? Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    An evidence-based connection between adult educator professional development (PD) and learner outcomes is difficult to document, yet there is an intuitive assumption that professional development is linked to better teaching and learning outcomes. The field appears to be shifting away from one-shot PD to practitioner engagement in sustained,…

  11. Classifying University Employability Strategies: Three Case Studies and Implications for Practice and Research

    ERIC Educational Resources Information Center

    Farenga, Stéphane A.; Quinlan, Kathleen M.

    2016-01-01

    This qualitative study documents three main strategic models used by Russell Group Careers Services to support students' preparation for graduate careers. It is framed against the backdrop of a challenging graduate labour market, discussions of employability in the literature and the policy assumption that universities are responsible for…

  12. 77 FR 38804 - Wireline Competition Bureau Seeks Comment on Model Design and Data Inputs for Phase II of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ...In this document, the Wireline Competition Bureau (the Bureau) seeks comment on a number of threshold decisions regarding the design of and data inputs to the forward looking cost model, and on other assumptions in the cost models currently in the record.

  13. Consumer Education: A Position on the State of the Art.

    ERIC Educational Resources Information Center

    Richardson, Lee; And Others

    Including the introduction, this document is a collection of seven short papers that discuss facets of consumer education (CE). The Introduction defines CE and lists five assumptions used throughout the report (e.g., CE is generally understood, but not precisely defined enough for the people implementing it to have a uniform understanding; schools…

  14. Analysis of partially observed clustered data using generalized estimating equations and multiple imputation

    PubMed Central

    Aloisio, Kathryn M.; Swanson, Sonja A.; Micali, Nadia; Field, Alison; Horton, Nicholas J.

    2015-01-01

    Clustered data arise in many settings, particularly within the social and biomedical sciences. As an example, multiple-source reports are commonly collected in child and adolescent psychiatric epidemiologic studies, where researchers use various informants (e.g. parent and adolescent) to provide a holistic view of a subject's symptomatology. Fitzmaurice et al. (1995) described estimation of multiple-source models using a standard generalized estimating equation (GEE) framework. However, these studies often have missing data due to the additional stages of consent and assent required. The usual GEE is unbiased when missingness is Missing Completely at Random (MCAR) in the sense of Little and Rubin (2002). This is a strong assumption that may not be tenable. Other options, such as weighted generalized estimating equations (WEEs), are computationally challenging when missingness is non-monotone. Multiple imputation is an attractive method for fitting incomplete-data models while only requiring the less restrictive Missing at Random (MAR) assumption. Previously, estimation for partially observed clustered data was computationally challenging; however, recent developments in Stata have facilitated its use in practice. We demonstrate how to use multiple imputation in conjunction with a GEE to investigate the prevalence of disordered eating symptoms in adolescents, as reported by parents and adolescents, as well as factors associated with concordance and prevalence. The methods are motivated by the Avon Longitudinal Study of Parents and their Children (ALSPAC), a cohort study that enrolled more than 14,000 pregnant mothers in 1991-92 and has followed the health and development of their children at regular intervals. While point estimates were fairly similar to those of the GEE under MCAR, the MAR model had smaller standard errors while requiring less stringent assumptions regarding missingness. PMID:25642154
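
    A rough sketch of the multiple-imputation-plus-GEE workflow described above is given below, using Python's statsmodels rather than Stata; the column names, formula, number of imputations, and the use of MICEData are illustrative assumptions, and Rubin's rules are applied by hand to pool the fits.

    ```python
    # Hedged sketch: multiple imputation (MICE) followed by GEE fits, pooled with
    # Rubin's rules. Column names and data are synthetic and purely illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.imputation.mice import MICEData

    def mi_gee(df, n_imputations=5):
        params, variances = [], []
        for _ in range(n_imputations):
            imp = MICEData(df)
            imp.update_all(10)                      # imputation cycles
            fit = smf.gee("y ~ x1 + x2", groups="family_id", data=imp.data,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
            params.append(np.asarray(fit.params))
            variances.append(np.diag(np.asarray(fit.cov_params())))
        params, variances = np.array(params), np.array(variances)
        qbar = params.mean(axis=0)                  # pooled estimates
        within = variances.mean(axis=0)             # within-imputation variance
        between = params.var(axis=0, ddof=1)        # between-imputation variance
        total = within + (1 + 1 / n_imputations) * between   # Rubin's rules
        return qbar, np.sqrt(total)

    # Two reports (e.g., parent and adolescent) per family, with some outcomes missing
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"family_id": np.repeat(np.arange(100), 2),
                       "x1": rng.normal(size=200),
                       "x2": rng.integers(0, 2, size=200).astype(float),
                       "y": rng.integers(0, 2, size=200).astype(float)})
    df.loc[rng.choice(200, 30, replace=False), "y"] = np.nan
    print(mi_gee(df))
    ```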

  15. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms that require little data appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory, that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIAs) to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test the assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality that cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
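
    For concreteness, a minimal sketch of the PBR calculation in its common form (one half of the maximum recruitment rate, times a minimum abundance estimate, times a recovery factor) is given below; the input values are invented and are not the seabird case study's.

    ```python
    # Hedged sketch of the Potential Biological Removal (PBR) rule the abstract critiques.
    # Input values are illustrative only.
    def potential_biological_removal(n_min, r_max, recovery_factor):
        """PBR = N_min * (1/2) * R_max * F_r: additional mortalities the population is
        assumed to sustain under the algorithm's implicit assumptions."""
        return n_min * 0.5 * r_max * recovery_factor

    print(potential_biological_removal(n_min=10_000, r_max=0.12, recovery_factor=0.5))  # 300.0
    ```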

  16. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10-17 m2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  17. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    Summary This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed. PMID:26679833

  18. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on an experimental within design with 32 cells and 33 participants.
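
    The naive percentile bootstrap mentioned above can be sketched in a few lines, as below; the data, sample size, and statistic are illustrative, not the task-switch experiment's.

    ```python
    # Hedged sketch: naive percentile bootstrap confidence interval for a statistic.
    # Data are synthetic and illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def percentile_bootstrap_ci(x, stat=np.mean, n_boot=2000, alpha=0.05):
        """Resample with replacement and take empirical quantiles of the statistic."""
        boots = np.array([stat(rng.choice(x, size=len(x), replace=True))
                          for _ in range(n_boot)])
        return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

    cell_means = rng.normal(loc=520.0, scale=60.0, size=33)  # e.g., per-participant means
    print(percentile_bootstrap_ci(cell_means))
    ```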

  19. Upscaling gas permeability in tight-gas sandstones

    NASA Astrophysics Data System (ADS)

    Ghanbarian, B.; Torres-Verdin, C.; Lake, L. W.; Marder, M. P.

    2017-12-01

    Klinkenberg-corrected gas permeability (k) estimation in tight-gas sandstones is essential for gas exploration and production in low-permeability porous rocks. Most models for estimating k are a function of porosity (ϕ), tortuosity (τ), a pore shape factor (s) and a characteristic length scale (lc). Estimation of the latter, however, has been the subject of debate in the literature. Here we invoke two different upscaling approaches from statistical physics: (1) the effective-medium approximation (EMA) and (2) critical path analysis (CPA) to estimate lc from the pore throat-size distribution derived from the mercury intrusion capillary pressure (MICP) curve. τ is approximated from (1) concepts of percolation theory and (2) formation resistivity factor measurements (F = τ/ϕ). We then estimate k for eighteen tight-gas sandstones from lc, τ, and ϕ by assuming two different pore shapes: cylindrical and slit-shaped. Comparison with Klinkenberg-corrected k measurements showed that τ was estimated more accurately from F measurements than from percolation theory. Generally speaking, our results imply that the EMA estimates k within a factor of two of the measurements and more precisely than CPA. We further found that the assumption of cylindrical pores yielded more accurate k estimates than the assumption of slit-shaped pores when τ was estimated from concepts of percolation theory, whereas the EMA with slit-shaped pores estimated k more precisely than with cylindrical pores when τ was estimated from F measurements.
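
    The generic functional form alluded to above can be written as below; the proportionality (rather than an equality with a specific prefactor) is deliberate, since the prefactor depends on which upscaling (EMA or CPA) and which pore shape are assumed.

    ```latex
    % Hedged sketch: permeability as a function of porosity \phi, tortuosity \tau,
    % pore shape factor s, and characteristic length l_c (prefactor model-dependent).
    \[
      k \;\propto\; \frac{\phi\, l_c^{2}}{s\, \tau}
    \]
    ```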

  20. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

    We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe the ascertainment assumptions needed to consistently estimate the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate of beta(RE) and a consistent estimate of its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate of beta(M), and we give a consistent estimator of its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.

  1. The fossil record and taphonomy of butterflies and moths (Insecta, Lepidoptera): implications for evolutionary diversity and divergence-time estimates.

    PubMed

    Sohn, Jae-Cheon; Labandeira, Conrad C; Davis, Donald R

    2015-02-04

    It is conventionally accepted that the lepidopteran fossil record is significantly incomplete when compared to the fossil records of other, very diverse, extant insect orders. Such an assumption, however, has been based on cumulative diversity data rather than using alternative statistical approaches from actual specimen counts. We reviewed documented specimens of the lepidopteran fossil record, currently consisting of 4,593 known specimens that are comprised of 4,262 body fossils and 331 trace fossils. The temporal distribution of the lepidopteran fossil record shows significant bias towards the late Paleocene to middle Eocene time interval. Lepidopteran fossils also record major shifts in preservational style and number of represented localities at the Mesozoic stage and Cenozoic epoch level of temporal resolution. Only 985 of the total known fossil specimens (21.4%) were assigned to 23 of the 40 extant lepidopteran superfamilies. Absolute numbers and proportions of preservation types for identified fossils varied significantly across superfamilies. The secular increase of lepidopteran family-level diversity through geologic time significantly deviates from the general pattern of other hyperdiverse, ordinal-level lineages. Our statistical analyses of the lepidopteran fossil record show extreme biases in preservation type, age, and taxonomic composition. We highlight the scarcity of identified lepidopteran fossils and provide a correspondence between the latest lepidopteran divergence-time estimates and relevant fossil occurrences at the superfamily level. These findings provide caution in interpreting the lepidopteran fossil record through the modeling of evolutionary diversification and in determination of divergence time estimates.

  2. Issues in the economic evaluation of influenza vaccination by injection of healthy working adults in the US: a review and decision analysis of ten published studies.

    PubMed

    Hogan, Thomas J

    2012-05-01

    The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and conclude on their collective findings. The literature was searched for economic evaluations of influenza vaccination by injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas in which to improve economic evaluations were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, representing a statistically significant result compared with the proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. The evaluations shared a basic structure but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given the stated misgivings, the methods of estimating the influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm that they are accurate, reliable and appropriate for economic evaluation purposes.

  3. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments

    PubMed Central

    2014-01-01

    Background Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. Results We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Conclusions Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality. PMID:24980787

  4. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments.

    PubMed

    Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi

    2014-06-30

    Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.

  5. EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology

    NASA Astrophysics Data System (ADS)

    Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt

    2017-04-01

    The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics and the state itself. Especially model structural errors in the description of the dynamics are difficult to represent and can lead to an inconsistent estimation of the other components. We address the challenge of a consistent aggregation of information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents during a time period of less than 2 months. We assess the uncertainties for this situation and detect initial condition, soil hydraulic parameters, small-scale heterogeneity, upper boundary condition, and (during rain events) the local equilibrium assumption by the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of estimated parameters. By introducing a closed-eye period - during which we do not estimate parameters, but only guide the state based on measurements - we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the believed true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
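    As an illustration of the closed-eye idea, here is a minimal sketch of how an augmented-state EnKF analysis step can freeze the parameter part of the ensemble while still guiding the state. It assumes a single scalar observation of the first state variable and a perturbed-observation update; it is not the authors' iterative filter, and all names and shapes are illustrative.

        import numpy as np

        def enkf_update(ensemble, y_obs, obs_std, n_state, closed_eye=False):
            # One analysis step on an augmented ensemble stacked as [state; parameters],
            # ensemble shape (n_state + n_param, n_ens). During a closed-eye period only
            # the state block is corrected; the parameter rows of the gain are zeroed.
            rng = np.random.default_rng(0)
            n_ens = ensemble.shape[1]
            H = np.zeros((1, ensemble.shape[0]))
            H[0, 0] = 1.0                                    # observe the first state variable
            Y = H @ ensemble                                 # predicted observations, (1, n_ens)
            A = ensemble - ensemble.mean(axis=1, keepdims=True)
            Yp = Y - Y.mean(axis=1, keepdims=True)
            P_xy = A @ Yp.T / (n_ens - 1)                    # cross covariance, (n_aug, 1)
            P_yy = Yp @ Yp.T / (n_ens - 1) + obs_std**2      # innovation variance, (1, 1)
            K = P_xy / P_yy                                  # Kalman gain
            if closed_eye:
                K[n_state:] = 0.0                            # do not update parameters
            perturbed = y_obs + obs_std * rng.standard_normal(n_ens)
            return ensemble + K @ (perturbed - Y)

    Outside the closed-eye period the same update also corrects the appended soil hydraulic parameters.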

  6. Estimating Lake Volume from Limited Data: A Simple GIS Approach

    EPA Science Inventory

    Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
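    For reference, the frustum approximation mentioned above treats the lake as a truncated cone between an upper area A_1 and a lower area A_2 separated by depth h,

        V \approx \frac{h}{3}\left(A_1 + A_2 + \sqrt{A_1 A_2}\right),

    whereas GIS-based methods replace this single-shape assumption with volumes integrated over a bathymetric surface.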

  7. Quantifying Cancer Risk from Radiation.

    PubMed

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  8. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.

  9. Detection of sea otters in boat-based surveys of Prince William Sound, Alaska. Marine mammal study 6-19. Exxon Valdez oil spill state/federal natural resource damage assessment final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udevitz, M.S.; Bodkin, J.L.; Costa, D.P.

    1995-05-01

    Boat-based surveys were used to monitor the Prince William Sound sea otter population before and after the Exxon Valdez oil spill. Population and loss estimates could be obtained from these surveys by direct expansion from the counts in the surveyed transects under the assumption that all otters in those transects were observed. The authors conducted a pilot study using ground-based observers in conjunction with the August 1990 survey of marine mammals and birds to investigate the validity of this assumption. The proportion of otters detected by boat crews was estimated by comparing boat and ground-based observations on 22 segments of shoreline transects. Overall, the authors estimated that only 70% of the otters in surveyed shoreline transects were detected by the boat crews. These results suggest that unadjusted expansions of boat survey transect counts will underestimate sea otter population size and that loss estimates based on comparisons of unadjusted population estimates will be biased.
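    The implied correction is simple: if a fraction \hat{p} of otters in a transect is detected, an unadjusted count C expands to

        \hat{N} = \frac{C}{\hat{p}},

    so with the estimated \hat{p} = 0.70 reported above, the adjusted estimate is roughly 1.4 times the raw count, and treating \hat{p} as 1 understates abundance by about 30%. (This back-of-the-envelope adjustment ignores the sampling variance of \hat{p}.)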

  10. Age ratios as estimators of productivity: testing assumptions on a threatened seabird, the marbled murrelet (Brachyramphus marmoratus)

    Treesearch

    M. Zachariah Peery; Benjamin H. Becker; Steven R. Beissinger

    2007-01-01

    The ratio of hatch-year (HY) to after-hatch-year (AHY) individuals (HY:AHY ratio) can be a valuable metric for estimating avian productivity because it does not require monitoring individual breeding sites and can often be estimated across large geographic and temporal scales. However, rigorous estimation of age ratios requires that both young and adult age classes are...

  11. Latent class instrumental variables: A clinical and biostatistical perspective

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S.

    2015-01-01

    In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on the treatment that would be received in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. PMID:26239275
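    In its simplest form (binary instrument Z, binary treatment received D, outcome Y), the exclusion restriction and monotonicity assumptions described above identify the complier average causal effect through the familiar instrumental-variable ratio

        \mathrm{CACE} = \frac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[D \mid Z=1] - E[D \mid Z=0]},

    i.e., the intention-to-treat effect on the outcome divided by the effect of assignment (or availability) on treatment received.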

  12. Estimating avian population size using Bowden's estimator

    USGS Publications Warehouse

    Diefenbach, D.R.

    2009-01-01

    Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N < 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates.
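    The logic behind mark-resight estimators of this type can be sketched as follows. This is a schematic of the mean-resight idea only; it is not a transcription of Bowden's published formula or its variance estimator, and the function and argument names are illustrative: if the M marked birds are sighted f_1, ..., f_M times, their mean sighting frequency estimates how often an average bird is seen, and dividing the total number of sightings of all birds by that frequency yields an abundance estimate.

        import numpy as np

        def mean_resight_estimate(marked_sightings, unmarked_total):
            # marked_sightings: per-individual sighting counts for the M marked birds
            # (zeros are allowed, and repeat sightings across surveys count).
            # unmarked_total: total number of sightings of unmarked birds.
            f = np.asarray(marked_sightings, dtype=float)
            f_bar = f.mean()                        # mean sightings per marked bird
            total_sightings = f.sum() + unmarked_total
            return total_sightings / f_bar          # abundance estimate N_hat

        # Example: 40 marked birds resighted 120 times in total plus 300 sightings of
        # unmarked birds gives f_bar = 3.0 and N_hat = (120 + 300) / 3.0 = 140.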

  13. Space transfer concepts and analysis for exploration missions. Implementation plan and element description document (draft final). Volume 4: Solar electric propulsion vehicle

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This document presents the solar electric propulsion (SEP) concept design developed as part of the Space Transfer Concepts and Analysis for Exploration Missions (STCAEM) study. The evolution of the SEP concept is described along with the requirements, guidelines and assumptions for the design. Operating modes and options are defined and a systems description of the vehicle is presented. Artificial gravity configuration options and space and ground support systems are discussed. Finally, an implementation plan is presented which addresses technology needs, schedules, facilities, and costs.

  14. Additional nuclear criticality safety calculations for small-diameter containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hone, M.J.

    This report documents additional criticality safety analysis calculations for small diameter containers, which were originally documented in Reference 1. The results in Reference 1 indicated that some of the small diameter containers did not meet the criteria established for criticality safety at the Portsmouth facility (K_eff + 2σ < 0.95) when modeled under various contingency assumptions of reflection and moderation. The calculations performed in this report reexamine those cases which did not meet the criticality safety criteria. In some cases, unnecessary conservatism is removed, and in other cases mass or assay limits are established for use with the respective containers.

  15. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  16. Space transfer concepts and analysis for exploration missions. Implementation plan and element description document (draft final). Volume 3: Nuclear thermal rocket vehicle

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This document presents the nuclear thermal rocket (NTR) concept design developed in support of the Space Transfer Concepts and Analysis for Exploration Missions (STCAEM) study. The evolution of the NTR concept is described along with the requirements, guidelines and assumptions for the design. Operating modes and options are defined and a systems description of the vehicle is presented. Artificial gravity configuration options and space and ground support systems are discussed. Finally, an implementation plan is presented which addresses technology needs, schedules, facilities and costs.

  17. The Number of Scholarly Documents on the Public Web

    PubMed Central

    Khabsa, Madian; Giles, C. Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403

  18. The number of scholarly documents on the public web.

    PubMed

    Khabsa, Madian; Giles, C Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.
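    The two-source capture/recapture logic used here is, at its simplest, the Lincoln-Petersen estimator (the study may apply a refinement of it): if one search engine indexes n_G documents, the other n_M, and m documents appear in both, the total is estimated as

        \hat{N} = \frac{n_G \, n_M}{m},

    which assumes the two "captures" are independent and that every document has the same chance of being indexed by a given engine.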

  19. RESULTS OF COMPUTATIONS MADE FOR DASA-USNRDL FALLOUT SYMPOSIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Read, R.; Wagner, L.; Moorehead, E.

    1962-11-01

    The regression techniques introduced by the Civil Defense Research Project for estimating fallout particle deposition coordinates, their standard ellipses, and isointensity contours have been applied to some of the homework problems assigned for the DASA-USNRDL Fallout Symposium. The results are reported and the estimates are contrasted with estimates based on the assumption that winds are invariant with time. (auth).

  20. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  1. Reliability Estimation When a Test Is Split into Two Parts of Unknown Effective Length.

    ERIC Educational Resources Information Center

    Feldt, Leonard S.

    2002-01-01

    Considers the situation in which content or administrative considerations limit the way in which a test can be partitioned to estimate the internal consistency reliability of the total test score. Demonstrates that a single-valued estimate of the total score reliability is possible only if an assumption is made about the comparative size of the…

  2. Estimation of Two-Parameter Logistic Item Response Curves. Research Report 83-1. Mathematical Sciences Technical Report No. 130.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.

    This paper presents a method for estimating certain characteristics of test items which are designed to measure ability, or knowledge, in a particular area. Under the assumption that ability parameters are sampled from a normal distribution, the EM algorithm is used to derive maximum likelihood estimates to item parameters of the two-parameter…

  3. Sparse PCA with Oracle Property.

    PubMed

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k -dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank- k , and attains a [Formula: see text] statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.

  4. Sparse PCA with Oracle Property

    PubMed Central

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    2014-01-01

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
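    As a simple, runnable illustration of sparse principal subspace estimation, the sketch below uses a greedy truncated power iteration, which is a different and much simpler heuristic than the regularized semidefinite relaxation analysed in the paper; the sparsity level s and iteration count are illustrative choices.

        import numpy as np

        def sparse_leading_eigvec(Sigma, s, n_iter=200, seed=0):
            # Power iteration with hard truncation to the s largest-magnitude entries,
            # returning an s-sparse unit vector approximating the leading eigenvector.
            rng = np.random.default_rng(seed)
            p = Sigma.shape[0]
            v = rng.standard_normal(p)
            v /= np.linalg.norm(v)
            for _ in range(n_iter):
                v = Sigma @ v
                keep = np.argsort(np.abs(v))[-s:]     # indices of the s largest entries
                mask = np.zeros(p, dtype=bool)
                mask[keep] = True
                v[~mask] = 0.0
                v /= np.linalg.norm(v)
            return v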

  5. Confounding by season in ecologic studies of seasonal exposures and outcomes: examples from estimates of mortality due to influenza.

    PubMed

    Jackson, Michael L

    2009-10-01

    Many health outcomes exhibit seasonal variation in incidence, including accidents, suicides, and infections. For seasonal outcomes it can be difficult to distinguish the causal roles played by factors that also vary seasonally, such as weather, air pollution, and pathogen circulation. Various approaches to estimating the association between a seasonal exposure and a seasonal outcome in ecologic studies are reviewed, using studies of influenza-related mortality as an example. Because mortality rates vary seasonally and circulation of other respiratory viruses peaks during influenza season, it is a challenge to estimate which winter deaths were caused by influenza. Results of studies that estimated the contribution of influenza to all-cause mortality using different methods on the same data are compared. Methods for estimating associations between season exposures and outcomes vary greatly in their advantages, disadvantages, and assumptions. Even when applied to identical data, different methods can give greatly different results for the expected contribution of influenza to all-cause mortality. When the association between exposures and outcomes that vary seasonally is estimated, models must be selected carefully, keeping in mind the assumptions inherent in each model.

  6. An Empirical Analysis of the Impact of Recruitment Patterns on RDS Estimates among a Socially Ordered Population of Female Sex Workers in China

    PubMed Central

    Yamanis, Thespina J.; Merli, M. Giovanna; Neely, William Whipple; Tian, Felicia Feng; Moody, James; Tu, Xiaowen; Gao, Ersheng

    2013-01-01

    Respondent-driven sampling (RDS) is a method for recruiting “hidden” populations through a network-based, chain and peer referral process. RDS recruits hidden populations more effectively than other sampling methods and promises to generate unbiased estimates of their characteristics. RDS’s faithful representation of hidden populations relies on the validity of core assumptions regarding the unobserved referral process. With empirical recruitment data from an RDS study of female sex workers (FSWs) in Shanghai, we assess the RDS assumption that participants recruit nonpreferentially from among their network alters. We also present a bootstrap method for constructing the confidence intervals around RDS estimates. This approach uniquely incorporates real-world features of the population under study (e.g., the sample’s observed branching structure). We then extend this approach to approximate the distribution of RDS estimates under various peer recruitment scenarios consistent with the data as a means to quantify the impact of recruitment bias and of rejection bias on the RDS estimates. We find that the hierarchical social organization of FSWs leads to recruitment biases by constraining RDS recruitment across social classes and introducing bias in the RDS estimates. PMID:24288418

  7. On the accuracy of palaeopole estimations from magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Vervelidou, F.; Lesur, V.; Morschhauser, A.; Grott, M.; Thomas, P.

    2017-12-01

    Various techniques have been proposed for palaeopole position estimation based on magnetic field measurements. Such estimates can offer insights into the rotational dynamics and the dynamo history of moons and terrestrial planets carrying a crustal magnetic field. Motivated by discrepancies in the estimated palaeopole positions among various studies regarding the Moon and Mars, we examine the limitations of magnetic field measurements as source of information for palaeopole position studies. It is already known that magnetic field measurements cannot constrain the null space of the magnetization nor its full spectral content. However, the extent to which these limitations affect palaeopole estimates has not been previously investigated in a systematic way. In this study, by means of the vector Spherical Harmonics formalism, we show that inferring palaeopole positions from magnetic field measurements necessarily introduces, explicitly or implicitly, assumptions about both the null space and the full spectral content of the magnetization. Moreover, we demonstrate through synthetic tests that if these assumptions are inaccurate, then the resulting palaeopole position estimates are wrong. Based on this finding, we make suggestions that can allow future palaeopole studies to be conducted in a more constructive way.

  8. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  9. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  10. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  11. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  12. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  13. Using GIS to Estimate Lake Volume from Limited Data (Lake and Reservoir Management)

    EPA Science Inventory

    Estimates of lake volume are necessary for calculating residence time and modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of...

  14. Helicopter Toy and Lift Estimation

    ERIC Educational Resources Information Center

    Shakerin, Said

    2013-01-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
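    A worked example with the standard lift equation (all numbers below are illustrative assumptions, not measurements of the toy): with air density \rho = 1.2\ \mathrm{kg\,m^{-3}}, rotor area A = 0.01\ \mathrm{m^2}, a representative airspeed over the blades v = 5\ \mathrm{m\,s^{-1}}, and lift coefficient C_L = 1,

        L = \tfrac{1}{2}\rho v^2 A C_L = 0.5 \times 1.2 \times 25 \times 0.01 \times 1 \approx 0.15\ \mathrm{N},

    enough to support a mass of roughly L/g \approx 15\ \mathrm{g}, the order of magnitude of a small plastic toy.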

  15. Mediation misgivings: ambiguous clinical and public health interpretations of natural direct and indirect effects.

    PubMed

    Naimi, Ashley I; Kaufman, Jay S; MacLehose, Richard F

    2014-10-01

    Recent methodological innovation is giving rise to an increasing number of applied papers in medical and epidemiological journals in which natural direct and indirect effects are estimated. However, there is a longstanding debate on whether such effects are relevant targets of inference in population health. In light of the repeated calls for a more pragmatic and consequential epidemiology, we review three issues often raised in this debate: (i) the use of composite cross-world counterfactuals and the need for cross-world independence assumptions; (ii) interventional vs non-interventional identifiability; and (iii) the interpretational ambiguity of natural direct and indirect effect estimates. We use potential outcomes notation and directed acyclic graphs to explain 'cross-world' assumptions, illustrate implications of this assumption via regression models and discuss ensuing issues of interpretation. We argue that the debate on the relevance of natural direct and indirect effects rests on whether one takes as a target of inference the mathematical object per se, or the change in the world that the mathematical object represents. We further note that public health questions may be better served by estimating controlled direct effects. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
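    For reference, with potential outcomes Y(a, m) and mediator values M(a) under exposure a, the estimands under discussion are usually written as

        \mathrm{NDE} = E[Y(1, M(0))] - E[Y(0, M(0))], \qquad \mathrm{NIE} = E[Y(1, M(1))] - E[Y(1, M(0))],

    so that the total effect decomposes as NDE + NIE, while the controlled direct effect fixes the mediator at a chosen value m: \mathrm{CDE}(m) = E[Y(1, m)] - E[Y(0, m)]. The "cross-world" quantity Y(1, M(0)) is never observable for any individual, which is what motivates the cross-world independence assumptions and the interpretational debate reviewed above.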

  16. Quantifying Adventitious Error in a Covariance Structure as a Random Effect

    PubMed Central

    Wu, Hao; Browne, Michael W.

    2017-01-01

    We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the dispersion parameter of this distribution to be estimated gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463

  17. Multiple Imputation For Combined-Survey Estimation With Incomplete Regressors In One But Not Both Surveys

    PubMed Central

    Rendall, Michael S.; Ghosh-Dastidar, Bonnie; Weden, Margaret M.; Baker, Elizabeth H.; Nazarov, Zafar

    2013-01-01

    Within-survey multiple imputation (MI) methods are adapted to pooled-survey regression estimation where one survey has more regressors, but typically fewer observations, than the other. This adaptation is achieved through: (1) larger numbers of imputations to compensate for the higher fraction of missing values; (2) model-fit statistics to check the assumption that the two surveys sample from a common universe; and (3) specifying the analysis model completely from variables present in the survey with the larger set of regressors, thereby excluding variables never jointly observed. In contrast to the typical within-survey MI context, cross-survey missingness is monotonic and easily satisfies the Missing At Random (MAR) assumption needed for unbiased MI. Large efficiency gains and substantial reduction in omitted variable bias are demonstrated in an application to sociodemographic differences in the risk of child obesity estimated from two nationally-representative cohort surveys. PMID:24223447

  18. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis.

    PubMed

    Chiba, Tomoaki; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group's sales beat GM's sales, which is a reasonable scenario.

  19. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is the network. When estimating the network structure from measured signals, typically several assumptions such as stationarity are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.

  20. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis

    PubMed Central

    Chiba, Tomoaki; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group’s sales beat GM’s sales, which is a reasonable scenario. PMID:28076383
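    A simplified, runnable version of the underlying estimation problem is sketched below: a single time-constant transition matrix fitted by projected-gradient least squares, rather than the paper's time-varying estimator. The learning-rate rule and iteration count are arbitrary choices.

        import numpy as np

        def project_row_to_simplex(v):
            # Euclidean projection of a vector onto the probability simplex.
            u = np.sort(v)[::-1]
            css = np.cumsum(u)
            rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
            theta = (1.0 - css[rho]) / (rho + 1)
            return np.maximum(v + theta, 0.0)

        def fit_transition_matrix(S, n_iter=2000):
            # Fit a row-stochastic P with s_{t+1} ~ s_t P by projected-gradient least squares.
            # S: array of shape (T, k), each row a share vector summing to one.
            X, Y = S[:-1], S[1:]
            k = S.shape[1]
            lr = 1.0 / (np.linalg.norm(X, ord=2) ** 2 + 1e-12)   # step size from the Lipschitz constant
            P = np.full((k, k), 1.0 / k)
            for _ in range(n_iter):
                grad = X.T @ (X @ P - Y)
                P = np.apply_along_axis(project_row_to_simplex, 1, P - lr * grad)
            return P

        # Example: with k = 3 products and T monthly share vectors stacked in S,
        # fit_transition_matrix(S) returns a 3x3 row-stochastic matrix whose entry
        # P[i, j] is read as the share flowing from product i to product j.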

  1. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  2. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.

  3. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699
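    The resampling comparison can be sketched as follows. This is a generic percentile-bootstrap example assuming a paired crossover design and within-subject log differences; it is not the SAS procedure used in the study, and the function and variable names are illustrative.

        import numpy as np

        def bootstrap_gmr_ci(log_test, log_ref, n_boot=1000, seed=0):
            # Percentile bootstrap of the 90% CI for the test/reference geometric mean
            # ratio of a log-transformed PK metric (AUC or Cmax).
            rng = np.random.default_rng(seed)
            diffs = np.asarray(log_test) - np.asarray(log_ref)   # within-subject log differences
            ratios = []
            for _ in range(n_boot):
                sample = rng.choice(diffs, size=diffs.size, replace=True)
                ratios.append(np.exp(sample.mean()))             # back-transform to a ratio
            lo, hi = np.percentile(ratios, [5, 95])
            passes = (lo >= 0.80) and (hi <= 1.25)               # 80-125% acceptance rule
            return lo, hi, passes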

  4. Spillover effects in epidemiology: parameters, study designs and methodological considerations

    PubMed Central

    Benjamin-Chung, Jade; Arnold, Benjamin F; Berger, David; Luby, Stephen P; Miguel, Edward; Colford Jr, John M; Hubbard, Alan E

    2018-01-01

    Abstract Many public health interventions provide benefits that extend beyond their direct recipients and impact people in close physical or social proximity who did not directly receive the intervention themselves. A classic example of this phenomenon is the herd protection provided by many vaccines. If these ‘spillover effects’ (i.e. ‘herd effects’) are present in the same direction as the effects on the intended recipients, studies that only estimate direct effects on recipients will likely underestimate the full public health benefits of the intervention. Causal inference assumptions for spillover parameters have been articulated in the vaccine literature, but many studies measuring spillovers of other types of public health interventions have not drawn upon that literature. In conjunction with a systematic review we conducted of spillovers of public health interventions delivered in low- and middle-income countries, we classified the most widely used spillover parameters reported in the empirical literature into a standard notation. General classes of spillover parameters include: cluster-level spillovers; spillovers conditional on treatment or outcome density, distance or the number of treated social network links; and vaccine efficacy parameters related to spillovers. We draw on high quality empirical examples to illustrate each of these parameters. We describe study designs to estimate spillovers and assumptions required to make causal inferences about spillovers. We aim to advance and encourage methods for spillover estimation and reporting by standardizing spillover parameter nomenclature and articulating the causal inference assumptions required to estimate spillovers. PMID:29106568

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonen, E.P.; Johnson, K.I.; Simonen, F.A.

    The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position and flaw inspection were sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation and added error in estimating irradiated properties caused by the trend curve correlation error.

  6. On the validity of the incremental approach to estimate the impact of cities on air quality

    NASA Astrophysics Data System (ADS)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.

  7. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
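    One common way to write such a model, shown here as a generic binomial N-mixture with an availability term that may differ in detail from the parameterization fitted in the study, is

        N_i \sim \mathrm{Poisson}(\lambda_i), \qquad y_{it} \mid N_i \sim \mathrm{Binomial}(N_i,\ \phi_{it}\, p_{it}),

        \log \lambda_i = \mathbf{x}_i^{\top}\boldsymbol{\beta}, \qquad \mathrm{logit}(p_{it}) = \mathbf{w}_{it}^{\top}\boldsymbol{\gamma},

    where N_i is the abundance at site i, y_{it} the count on depletion pass t, p_{it} the capture probability (here modeled with water velocity and fine-sediment cover), and \phi_{it} the probability that fish are present and available for capture, which is the term that relaxes the closure assumption.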

  8. The problem of the second wind turbine - a note on a common but flawed wind power estimation method

    NASA Astrophysics Data System (ADS)

    Gans, F.; Miller, L. M.; Kleidon, A.

    2012-06-01

    Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than detailed engineering specifications of wind turbine design and placement.
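    The momentum-balance argument can be illustrated with a toy calculation: a hedged sketch under the prescribed-momentum-flux assumption favored in the abstract, not the authors' boundary-layer model or GCM, with all parameter values chosen purely for illustration.

        import numpy as np

        # Steady-state budget per unit land area: the momentum supplied from above (tau) is
        # balanced by surface friction plus turbine thrust, which lowers the boundary-layer
        # wind speed as turbine density grows.
        rho = 1.2          # air density, kg m^-3
        tau = 0.2          # prescribed downward momentum flux, N m^-2
        Cd  = 0.002        # bulk surface drag coefficient (tau_surface = rho * Cd * v**2)
        Ct  = 0.75         # turbine thrust coefficient
        A   = 5000.0       # rotor area per turbine, m^2
        for n in [0.0, 1e-7, 1e-6, 1e-5]:              # turbines per m^2 of land
            # tau = rho*Cd*v^2 + n*0.5*rho*Ct*A*v^2  ->  solve for v
            v = np.sqrt(tau / (rho * Cd + n * 0.5 * rho * Ct * A))
            p_ext = n * 0.5 * rho * Ct * A * v**3      # power removed from the flow, W m^-2
            print(f"n={n:.0e} turbines/m^2: v={v:.2f} m/s, extracted {p_ext:.3f} W/m^2")

    In this toy budget the power removed from the flow (an upper bound on electricity generation) first rises and then falls as turbine density increases, because the added drag slows the wind; estimates that hold the observed wind speed fixed miss exactly this feedback.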

  9. Examination of the reliability of the crash modification factors using empirical Bayes method with resampling technique.

    PubMed

    Wang, Jung-Han; Abdel-Aty, Mohamed; Wang, Ling

    2017-07-01

    There have been plenty of studies intended to use different methods, for example, empirical Bayes before-after methods, to obtain accurate estimates of CMFs. All of them make different assumptions about the crash count that would have occurred had there been no treatment. Additionally, another major assumption is that multiple sites share the same true CMF. Under this assumption, the CMF at an individual intersection is randomly drawn from a normally distributed population of CMFs at all intersections. Since CMFs are non-zero values, the population of all CMFs might not follow a normal distribution, and even if it does, the true mean of CMFs at some intersections may be different from that at others. Therefore, a bootstrap method based on before-after empirical Bayes theory was proposed to estimate CMFs, but it did not make distributional assumptions. This bootstrap procedure has the added benefit of producing a measure of CMF stability. Furthermore, based on the bootstrapped CMF, a new CMF precision rating method was proposed to evaluate the reliability of CMFs. This study chose 29 urban four-legged intersections as treated sites, whose traffic control was converted from stop control to signal control. Meanwhile, 124 urban four-legged stop-controlled intersections were selected as reference sites. At first, different safety performance functions (SPFs) were applied to five crash categories, and it was found that each crash category had a different optimal SPF form. Then, the CMFs of these five crash categories were estimated using the bootstrap empirical Bayes method. The results of the bootstrapped method showed that signalization significantly decreased Angle+Left-Turn crashes, and its CMF had the highest precision. In contrast, the CMF for Rear-End crashes was unreliable. For KABCO, KABC, and KAB crashes, their CMFs were proved to be reliable for the majority of intersections, but the estimated effect of signalization may not be accurate at some sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
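    For orientation, the empirical Bayes step underlying before-after CMF estimation can be written, for a negative-binomial SPF with mean \mu and overdispersion parameter \phi (so that \mathrm{Var} = \mu + \mu^2/\phi), as

        \hat{N}_{\mathrm{EB}} = w\,\mu + (1 - w)\,y_{\mathrm{before}}, \qquad w = \frac{\phi}{\phi + \mu},

    after which the CMF is the ratio of observed after-period crashes to the EB-expected after-period crashes (the EB before estimate scaled to after-period conditions); the bootstrap in this study resamples sites and repeats this calculation to measure the stability of that ratio. This is the textbook form of the EB step; the paper's exact notation may differ.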

  10. Hepatitis C bio-behavioural surveys in people who inject drugs-a systematic review of sensitivity to the theoretical assumptions of respondent driven sampling.

    PubMed

    Buchanan, Ryan; Khakoo, Salim I; Coad, Jonathan; Grellier, Leonie; Parkes, Julie

    2017-07-11

    New, more effective and better-tolerated therapies for hepatitis C (HCV) have made the elimination of HCV a feasible objective. However, for this to be achieved, it is necessary to have a detailed understanding of HCV epidemiology in people who inject drugs (PWID). Respondent-driven sampling (RDS) can provide prevalence estimates in hidden populations such as PWID. The aims of this systematic review are to identify published studies that use RDS in PWID to measure the prevalence of HCV, and compare each study against the STROBE-RDS checklist to assess their sensitivity to the theoretical assumptions underlying RDS. Searches were undertaken in accordance with PRISMA systematic review guidelines. Included studies were English language publications in peer-reviewed journals, which reported the use of RDS to recruit PWID to an HCV bio-behavioural survey. Data was extracted under three headings: (1) survey overview, (2) survey outcomes, and (3) reporting against selected STROBE-RDS criteria. Thirty-one studies met the inclusion criteria. They varied in scale (range 1-15 survey sites) and the sample sizes achieved (range 81-1000 per survey site) but were consistent in describing the use of standard RDS methods including: seeds, coupons and recruitment incentives. Twenty-seven studies (87%) either calculated or reported the intention to calculate population prevalence estimates for HCV and two used RDS data to calculate the total population size of PWID. Detailed operational and analytical procedures and reporting against selected criteria from the STROBE-RDS checklist varied between studies. There were widespread indications that sampling did not meet the assumptions underlying RDS, which led to two studies being unable to report an estimated HCV population prevalence in at least one survey location. RDS can be used to estimate a population prevalence of HCV in PWID and estimate the PWID population size. Accordingly, as a single instrument, it is a useful tool for guiding HCV elimination. However, future studies should report the operational conduct of each survey in accordance with the STROBE-RDS checklist to indicate sensitivity to the theoretical assumptions underlying the method. PROSPERO CRD42015019245.

  11. Restoring 2D content from distorted documents.

    PubMed

    Brown, Michael S; Sun, Mingxuan; Yang, Ruigang; Yun, Lin; Seales, W Brent

    2007-11-01

    This paper presents a framework to restore the 2D content printed on documents in the presence of geometric distortion and non-uniform illumination. Compared with text-based document imaging approaches that correct distortion to a level necessary to obtain sufficiently readable text or to facilitate optical character recognition (OCR), our work targets nontextual documents where the original printed content is desired. To achieve this goal, our framework acquires a 3D scan of the document's surface together with a high-resolution image. Conformal mapping is used to rectify geometric distortion by mapping the 3D surface back to a plane while minimizing angular distortion. This conformal "deskewing" assumes no parametric model of the document's surface and is suitable for arbitrary distortions. Illumination correction is performed by using the 3D shape to distinguish content gradient edges from illumination gradient edges in the high-resolution image. Integration is performed using only the content edges to obtain a reflectance image with significantly fewer illumination artifacts. This approach makes no assumptions about light sources and their positions. The results from the geometric and photometric correction are combined to produce the final output.

  12. Sociocultural Theory, the L2 Writing Process, and Google Drive: Strange Bedfellows?

    ERIC Educational Resources Information Center

    Slavkov, Nikolay

    2015-01-01

    Familiar and widely used elements of second language pedagogy can be leveraged in interesting new ways through the use of digital technology. The focus is on a set of affordances offered by Google Drive, a popular online storage and document-sharing technology. On the assumption that dynamic collaboration with peers, teacher feedback, and…

  13. Lessons Learned about Inclusion While Starting a New College

    ERIC Educational Resources Information Center

    Jones, Michelle D.

    2018-01-01

    Starting a college from scratch presents a unique opportunity to think about how to build an inclusive learning environment from the beginning by selecting people and strategies that do not carry the weight of the traditional academic model and its prejudices and assumptions about who belongs and who does not. This article documents the lessons…

  14. 77 FR 76996 - Lead; Renovation, Repair, and Painting Program for Public and Commercial Buildings; Request for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-31

    ... practice requirements as directed by the Toxic Substances Control Act (TSCA). This document opens a comment period to allow for additional data and other information to be submitted by the public and interested.... Describe any assumptions and provide any technical information and/or data that you used. v. If you...

  15. 14 CFR Appendix B of Part 415 - Safety Review Document Outline

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Performed by Certified Personnel 4.0 Flight Safety (§ 415.115) 4.1 Initial Flight Safety Analysis 4.1.1 Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2 Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2 Radionuclide Data (where applicable) 4.3 Flight Safety...

  16. 14 CFR Appendix B of Part 415 - Safety Review Document Outline

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Performed by Certified Personnel 4.0 Flight Safety (§ 415.115) 4.1 Initial Flight Safety Analysis 4.1.1 Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2 Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2 Radionuclide Data (where applicable) 4.3 Flight Safety...

  17. 14 CFR Appendix B of Part 415 - Safety Review Document Outline

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Performed by Certified Personnel 4.0 Flight Safety (§ 415.115) 4.1 Initial Flight Safety Analysis 4.1.1 Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2 Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2 Radionuclide Data (where applicable) 4.3 Flight Safety...

  18. Evaluation of Student Performance in the Health Professions.

    ERIC Educational Resources Information Center

    Hullinger, Ronald L.; And Others

    The proposal in this document is based on the assumption that learning to learn is the most important outcome of professional and all formal education. It is suggested that a primary task of instruction is to assist the student to move from external motivation to internal motivation for learning and continuing to learn. It is proposed that these…

  19. Reds, Greens, Yellows Ease the Spelling Blues.

    ERIC Educational Resources Information Center

    Irwin, Virginia

    1971-01-01

    This document reports on a color-coding innovation designed to improve the spelling ability of high school seniors. This color-coded system is based on two assumptions: that color will appeal to the students and that there are three principal reasons for misspelling. Two groups were chosen for the experiments. A basic list of spelling demons was…

  20. 75 FR 8411 - Office of New Reactors: Interim Staff Guidance on Assessing the Consequences of an Accidental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-24

    ... the public will be better served by being able to review and comment on both documents at this time... Construction Inspection and Operational Programs, Office of New Reactors, U.S. Nuclear Regulatory Commission... conditions for such releases and define acceptable assumptions to describe exposure scenarios and pathways to...

  1. AFC-Enabled Vertical Tail System Integration Study

    NASA Technical Reports Server (NTRS)

    Mooney, Helen P.; Brandt, John B.; Lacy, Douglas S.; Whalen, Edward A.

    2014-01-01

    This document serves as the final report for the SMAAART AFC-Enabled Vertical Tail System Integration Study. Included are the ground-rule assumptions that went into the study, layouts of the baseline and AFC-enabled configurations, critical sizing information, system requirements and architectures, and assumed system properties that feed into a net present value (NPV) assessment of the two candidate AFC technologies.
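
    A purely hypothetical sketch of the kind of net present value comparison such a study reports is shown below; the discount rate, service life, and cash-flow figures are invented for illustration and bear no relation to the study's results.

      # Hypothetical NPV comparison of two candidate systems: upfront
      # integration cost, then annual fuel-burn savings net of maintenance.
      def npv(rate, cash_flows):
          """cash_flows[t] is the net cash flow in year t (year 0 = upfront cost)."""
          return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

      rate = 0.07                                  # hypothetical discount rate
      life_years = 20
      cand_a = [-1.5e6] + [180e3 - 40e3] * life_years   # costlier, bigger benefit
      cand_b = [-0.9e6] + [110e3 - 25e3] * life_years   # cheaper, smaller benefit

      print(f"NPV candidate A: {npv(rate, cand_a):,.0f}")
      print(f"NPV candidate B: {npv(rate, cand_b):,.0f}")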

  2. 77 FR 75251 - 60-Day Notice of Proposed Information Collection: ECA Exchange Student Surveys

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-19

    ... notice by going to www.Regulations.gov . You can search for the document by entering ``Public Notice '' in the Search bar. If necessary, use the Narrow by Agency filter option on the Results page. Email... collection, including the validity of the methodology and assumptions used. Enhance the quality, utility, and...

  3. Reflective Practice on Leadership Committed to Social Justice: Counter Story of an African American Superintendent

    ERIC Educational Resources Information Center

    Dailey, Ardella

    2015-01-01

    The central assumption of this paper is that the use of autoethnography is the best approach to obtain a deeper understanding of the political context, organizational culture, and complex dynamics of a person's lived experience in a leadership position. The central narrative follows the accounts documented through systematic journaling, and…

  4. Making Practice Visible: A Collaborative Self-Study of Tiered Teaching in Teacher Education

    ERIC Educational Resources Information Center

    Garbett, Dawn; Heap, Rena

    2011-01-01

    In this article we document the impact of tiered teaching on making the complexity of pedagogy transparent when teaching science education to pre-service primary teachers. Teaching science methods classes together and researching our teaching has enabled us to reframe our assumptions and move beyond the simplistic and misleading idea that teacher…

  5. Both Sides Now: Visualizing and Drawing with the Right and Left Hemispheres of the Brain

    ERIC Educational Resources Information Center

    Schiferl, E. I.

    2008-01-01

    Neuroscience research provides new models for understanding vision that challenge Betty Edwards' (1979, 1989, 1999) assumptions about right brain vision and common conventions of "realistic" drawing. Enlisting PET and fMRI technology, neuroscience documents how the brains of normal adults respond to images of recognizable objects and scenes.…

  6. Methods of Evaluation To Determine the Preservation Needs in Libraries and Archives: A RAMP Study with Guidelines.

    ERIC Educational Resources Information Center

    Cunha, George M.

    This Records and Archives Management Programme (RAMP) study is intended to assist in the development of basic training programs and courses in document preservation and restoration, and to promote harmonization of such training both within the archival profession and within the broader information field. Based on the assumption that conservation…

  7. Educating European Citizenship: Elucidating Assumptions about Teaching Civic Competence

    ERIC Educational Resources Information Center

    Bengtsson, Anki

    2015-01-01

    In recent years, the idea of the contribution of education to citizenship has been reinitiated. The purpose of this paper is to investigate constructions of citizenship as they are articulated in European policy documents on teacher education. It is indicated that the normative form of active citizenship is put into play through the individual and…

  8. Is the European (Active) Citizenship Ideal Fostering Inclusion within the Union? A Critical Review

    ERIC Educational Resources Information Center

    Milana, Marcella

    2008-01-01

    This article reviews: (1) the establishment and functioning of EU citizenship; (2) the resulting perception of education for European active citizenship; and (3) the question of its adequacy for enhancing democratic values and practices within the Union. Key policy documents produced by the EU help to unfold the basic assumptions on which…

  9. Improving Child Management Practices of Parents and Teachers. Maxi I Practicum. Final Report.

    ERIC Educational Resources Information Center

    Adreani, Arnold J.; McCaffrey, Robert

    The practicum design reported in this document was based on one basic assumption, that the adult perceptions of children influence adult behavior toward children which in turn influences the child's behavior. Therefore, behavior changes by children could best be effected by changing the adult perception of, and behavior toward, the child.…

  10. Evidence-Based Practice in Mental Health Care to Ethnic Minority Communities: Has Its Practice Fallen Short of Its Evidence?

    ERIC Educational Resources Information Center

    Aisenberg, Eugene

    2008-01-01

    Evidence-based practice (EBP) has contributed substantially to the advancement of knowledge in the treatment and prevention of adult mental health disorders. A fundamental assumption, based on documented evidence of effectiveness with certain populations, is that EBP is equally effective and applicable to all populations. However, small sample…

  11. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    NASA Astrophysics Data System (ADS)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and are likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  12. A comparison of estimators from self-controlled case series, case-crossover design, and sequence symmetry analysis for pharmacoepidemiological studies.

    PubMed

    Takeuchi, Yoshinori; Shinozaki, Tomohiro; Matsuyama, Yutaka

    2018-01-08

    Despite the frequent use of self-controlled methods in pharmacoepidemiological studies, the factors that may bias the estimates from these methods have not been adequately compared in real-world settings. Here, we comparatively examined the impact of a time-varying confounder and its interactions with time-invariant confounders, time trends in exposures and events, restrictions, and misspecification of risk period durations on the estimators from three self-controlled methods. This study analyzed the self-controlled case series (SCCS), case-crossover (CCO) design, and sequence symmetry analysis (SSA) using simulated and actual electronic medical records datasets. We evaluated the performance of the three self-controlled methods in simulated cohorts for the following scenarios: 1) time-invariant confounding with interactions between the confounders, 2) time-invariant and time-varying confounding without interactions, 3) time-invariant and time-varying confounding with interactions among the confounders, 4) time trends in exposures and events, 5) restricted follow-up time based on event occurrence, and 6) patient restriction based on event history. The sensitivity of the estimators to misspecified risk period durations was also evaluated. As a case study, we applied these methods to evaluate the risk of macrolides on liver injury using electronic medical records. In the simulation analysis, time-varying confounding produced bias in the SCCS and CCO design estimates, which was aggravated in the presence of interactions between the time-invariant and time-varying confounders. The SCCS estimates were biased by time trends in both exposures and events. Erroneously short risk periods introduced bias to the CCO design estimate, whereas erroneously long risk periods introduced bias to the estimates of all three methods. Restricting the follow-up time led to severe bias in the SSA estimates. The SCCS estimates were sensitive to patient restriction. The case study showed that although macrolide use was significantly associated with increased liver injury occurrence in all methods, the values of the estimates varied. The estimates from the three self-controlled methods depended on various underlying assumptions, and violation of these assumptions may cause non-negligible bias in the resulting estimates. Pharmacoepidemiologists should select the appropriate self-controlled method based on how well the relevant key assumptions are satisfied with respect to the available data.
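
    As a concrete illustration of the simplest of the three designs, the sketch below computes a crude sequence ratio for sequence symmetry analysis from hypothetical first-exposure and first-event dates. It omits the null-effect (time-trend) correction that a full SSA applies, and none of the data relate to the study above.

      # Sketch: crude sequence ratio for sequence symmetry analysis (SSA).
      # For each patient with both a first prescription and a first event,
      # count whether exposure preceded the event or vice versa.
      import datetime as dt

      patients = [                          # hypothetical (exposure, event) dates
          (dt.date(2020, 1, 10), dt.date(2020, 3, 2)),
          (dt.date(2020, 5, 1),  dt.date(2020, 2, 14)),
          (dt.date(2020, 6, 20), dt.date(2020, 9, 1)),
          (dt.date(2020, 4, 3),  dt.date(2020, 4, 30)),
          (dt.date(2020, 8, 15), dt.date(2020, 7, 1)),
      ]

      exposure_first = sum(1 for exp, ev in patients if exp < ev)
      event_first = sum(1 for exp, ev in patients if ev < exp)

      # Values above 1 suggest the event occurs more often after exposure than
      # before it, prior to any adjustment for prescribing or event time trends.
      crude_sr = exposure_first / event_first
      print(f"Crude sequence ratio: {crude_sr:.2f}")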

  13. Comparing least-squares and quantile regression approaches to analyzing median hospital charges.

    PubMed

    Olsen, Cody S; Clark, Amy E; Thomas, Andrea M; Cook, Lawrence J

    2012-07-01

    Emergency department (ED) and hospital charges obtained from administrative data sets are useful descriptors of injury severity and the burden to EDs and the health care system. However, charges are typically positively skewed due to costly procedures, long hospital stays, and complicated or prolonged treatment for few patients. The median is not affected by extreme observations and is useful in describing and comparing distributions of hospital charges. A least-squares analysis employing a log transformation is one approach for estimating median hospital charges, corresponding confidence intervals (CIs), and differences between groups; however, this method requires certain distributional properties. An alternate method is quantile regression, which allows estimation and inference related to the median without making distributional assumptions. The objective was to compare the log-transformation least-squares method to the quantile regression approach for estimating median hospital charges, differences in median charges between groups, and associated CIs. The authors performed simulations using repeated sampling of observed statewide ED and hospital charges and charges randomly generated from a hypothetical lognormal distribution. The median and 95% CI and the multiplicative difference between the median charges of two groups were estimated using both least-squares and quantile regression methods. Performance of the two methods was evaluated. In contrast to least squares, quantile regression produced estimates that were unbiased and had smaller mean square errors in simulations of observed ED and hospital charges. Both methods performed well in simulations of hypothetical charges that met least-squares method assumptions. When the data did not follow the assumed distribution, least-squares estimates were often biased, and the associated CIs had lower than expected coverage as sample size increased. Quantile regression analyses of hospital charges provide unbiased estimates even when lognormal and equal variance assumptions are violated. These methods may be particularly useful in describing and analyzing hospital charges from administrative data sets. © 2012 by the Society for Academic Emergency Medicine.
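
    A minimal sketch of the two estimation strategies being compared, using simulated lognormal charges for two hypothetical groups; the sample sizes, parameters, and choice of statsmodels are illustrative only.

      # Sketch: estimate the multiplicative difference in median charges between
      # two groups via (a) OLS on log charges and (b) median quantile regression.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      group = np.repeat([0, 1], n).astype(float)
      # Hypothetical lognormal charges; group 1's median is ~50% higher.
      charges = np.exp(rng.normal(loc=8.0 + 0.4 * group, scale=1.0))
      X = sm.add_constant(group)

      # (a) Least squares on the log scale: exp(slope) estimates the ratio of
      #     medians only if charges are lognormal with equal variances.
      ols = sm.OLS(np.log(charges), X).fit()
      ratio_ols = np.exp(ols.params[1])

      # (b) Median (q = 0.5) regression on the raw scale, no distributional
      #     assumption; ratio of fitted group medians:
      qr = sm.QuantReg(charges, X).fit(q=0.5)
      ratio_qr = (qr.params[0] + qr.params[1]) / qr.params[0]

      print(f"Median ratio, log-OLS:       {ratio_ols:.2f}")
      print(f"Median ratio, quantile reg.: {ratio_qr:.2f}")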

  14. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
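
    As a point-treatment illustration of the weighting step (the study itself deals with time-varying treatment and uses cross-validation for model selection), the sketch below fits a logistic propensity model, forms stabilized inverse-probability-of-treatment weights, and fits a weighted outcome regression as the marginal structural model. All data and variable names are hypothetical.

      # Sketch: stabilized IPTW for a single (point) treatment, followed by a
      # weighted outcome regression as the marginal structural model.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 2000
      severity = rng.normal(size=n)                          # confounder
      p_treat = 1 / (1 + np.exp(-(-0.5 + 1.2 * severity)))   # treatment depends on it
      treated = rng.binomial(1, p_treat).astype(float)
      # Outcome worsens with severity, improves with treatment (true effect = +5).
      outcome = 50 - 8 * severity + 5 * treated + rng.normal(scale=4, size=n)

      # Propensity score model: P(treatment | confounder).
      ps = sm.Logit(treated, sm.add_constant(severity)).fit(disp=0) \
             .predict(sm.add_constant(severity))

      # Stabilized weights: marginal treatment probability over conditional one.
      p_marg = treated.mean()
      weights = np.where(treated == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

      naive = sm.OLS(outcome, sm.add_constant(treated)).fit()
      msm = sm.WLS(outcome, sm.add_constant(treated), weights=weights).fit()
      print(f"Unweighted (confounded) estimate: {naive.params[1]:.2f}")
      print(f"IPTW / MSM estimate:              {msm.params[1]:.2f}")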

  15. A New Formulation of Equivalent Effective Stratospheric Chlorine (EESC)

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Daniel, J. S.; Waugh, D. W.; Nash, E. R.

    2007-01-01

    Equivalent effective stratospheric chlorine (EESC) is a convenient parameter to quantify the effects of halogens (chlorine and bromine) on ozone depletion in the stratosphere. We show and discuss a new formulation of EESC that now includes the effects of age-of-air dependent fractional release values and an age-of-air spectrum. This new formulation provides quantitative estimates of EESC that can be directly related to inorganic chlorine and bromine throughout the stratosphere. Using this EESC formulation, we estimate that EESC from human-produced ozone-depleting substances will return to 1980 levels in 2041 in the midlatitudes, and in 2067 over Antarctica. These recovery dates are based upon the assumption that the international agreements for regulating ozone-depleting substances are adhered to. In addition to recovery dates, we also estimate the uncertainties in the estimated time of recovery. The midlatitude recovery of 2041 has a 95% confidence uncertainty from 2028 to 2049, while the 2067 Antarctic recovery has a 95% confidence uncertainty from 2056 to 2078. The principal uncertainties are from the estimated mean age-of-air and the assumption that the mean age-of-air and fractional release values are time independent. Using other model estimates of age decrease due to climate change, we estimate that midlatitude recovery may be accelerated from 2041 to 2031.
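
    The general structure of such a calculation can be sketched as follows: surface mixing ratios are lagged and smoothed with an age-of-air spectrum, weighted by the halogen atom count and a fractional release value, and bromine is scaled by its greater per-atom efficiency for ozone destruction. The species, spectrum shape, release values, and bromine factor below are illustrative assumptions, not values from the paper.

      # Sketch of an EESC-style calculation: convolve surface mixing ratios with
      # an age-of-air spectrum, then weight by halogen content and fractional release.
      import numpy as np

      years = np.arange(1960, 2101)

      def age_spectrum(mean_age=5.5, width=2.75, tmax=20):
          """Hypothetical inverse-Gaussian transit-time distribution (years)."""
          t = np.arange(1, tmax + 1, dtype=float)
          g = np.sqrt(mean_age**3 / (4 * np.pi * width**2 * t**3)) * \
              np.exp(-mean_age * (t - mean_age) ** 2 / (4 * width**2 * t))
          return t.astype(int), g / g.sum()

      def lagged(surface, spectrum):
          """Age-spectrum-weighted (lagged and smoothed) mixing ratio."""
          lags, g = spectrum
          out = np.zeros_like(surface)
          for lag, weight in zip(lags, g):
              out += weight * np.roll(surface, lag)
          out[: lags.max()] = np.nan          # undefined before enough history
          return out

      # Hypothetical surface mixing ratios (ppt): growth, then post-regulation decline.
      cfc11 = np.interp(years, [1960, 1995, 2100], [10.0, 270.0, 60.0])
      halon1301 = np.interp(years, [1960, 2000, 2100], [0.0, 3.0, 0.8])

      spec = age_spectrum()
      alpha_br = 60.0                         # assumed bromine efficiency factor
      f_cl, f_br = 0.47, 0.62                 # hypothetical fractional release values
      eesc = 3 * f_cl * lagged(cfc11, spec) + alpha_br * 1 * f_br * lagged(halon1301, spec)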

  16. PROTOCOL - A COMPUTERIZED SOLID WASTE QUANTITY AND COMPOSITION ESTIMATION SYSTEM: OPERATIONAL MANUAL

    EPA Science Inventory

    The assumptions of traditional sampling theory often do not fit the circumstances when estimating the quantity and composition of solid waste arriving at a given location, such as a landfill site, or at a specific point in an industrial or commercial process. The investigator oft...

  17. Comparing process-based breach models for earthen embankments subjected to internal erosion

    USDA-ARS?s Scientific Manuscript database

    Predicting the potential flooding from a dam site requires prediction of outflow resulting from breach. Conservative estimates from the assumption of instantaneous breach or from an upper envelope of historical cases are readily computed, but these estimates do not reflect the properties of a speci...

  18. IRT-Estimated Reliability for Tests Containing Mixed Item Formats

    ERIC Educational Resources Information Center

    Shu, Lianghua; Schwarz, Richard D.

    2014-01-01

    As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…

  19. Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data

    ERIC Educational Resources Information Center

    Keller, Bryan; Chen, Jianshen

    2016-01-01

    Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…

  20. Estimation of dose-response models for discrete and continuous data in weed science

    USDA-ARS?s Scientific Manuscript database

    Dose-response analysis is widely used in biological sciences and has application to a variety of risk assessment, bioassay, and calibration problems. In weed science, dose-response methodologies have typically relied on least squares estimation under an assumption of normality. Advances in computati...
